<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Transformation of the higher education ecosystem in the context of artificial intelligence integration</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Liudmyla Maliuta</string-name>
          <email>maliuta@tntu.edu.ua</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Vitalii Rudan</string-name>
          <email>vitaliyrudan@tntu.edu.ua</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Olha Vladymyr</string-name>
          <email>olhavlada@tntu.edu.ua</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Halyna Humeniuk</string-name>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Viktor Zhukovskyy</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>National University of Water and Environmental Engineering</institution>
          ,
          <addr-line>11 Soborna St., Rivne, 33028</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Ternopil Ivan Puluj National Technical University</institution>
          ,
          <addr-line>Ruska str.56, Ternopil, 46001</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>Ternopil Volodymyr Hnatiuk National Pedagogical University</institution>
          ,
          <addr-line>M. Kryvonosa str. 2, Ternopil, 46001</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2026</year>
      </pub-date>
      <abstract>
        <p>The integration of generative artificial intelligence (AI) into higher education has triggered a profound transformation of the educational ecosystem. This paper presents the results of an empirical study conducted among 114 students and 104 university instructors in Ukraine, revealing both the widespread adoption of AI tools and a series of critical risks, notably the erosion of critical thinking, breaches of academic integrity, and the decline of students' cognitive autonomy. In response to the identified challenges, the paper substantiates three interrelated directions of pedagogical transformation: a session-based model of course design, an updated format of distance learning based on prompt strategies and interpretive reflection, and a concept of AI literacy in teacher training. The proposed approaches aim to preserve intellectual complexity, ethical sensitivity, and learner agency in the era of generative AI.</p>
      </abstract>
      <kwd-group>
        <kwd>generative AI</kwd>
        <kwd>cognitive autonomy</kwd>
        <kwd>higher education</kwd>
        <kwd>prompt engineering</kwd>
        <kwd>AI literacy</kwd>
        <kwd>academic integrity</kwd>
        <kwd>digital pedagogy</kwd>
        <kwd>distance learning</kwd>
        <kwd>assessment transformation</kwd>
        <kwd>instructor's role</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>The rapid advancement of generative artificial intelligence (AI) in recent years marks the beginning
of a new era of cognitive interaction in which the university environment can no longer remain on
the sidelines. Models such as ChatGPT, Claude, Copilot, and Gemini have ceased to be merely
auxiliary digital tools — they have become everyday cognitive partners for students in learning,
problem-solving, content creation, and argument construction.</p>
      <p>At the same time, most pedagogical practices remain rooted in the transmission model of
education, which took shape prior to the digital revolution. This model is predominantly based on
linear knowledge transfer, fragmented questioning, and standardized forms of control. Such
institutional inertia is increasingly misaligned with the cognitive profiles of the new student
generations, particularly Generation Z and Generation Alpha, who exhibit habits of fragmented
information processing, hyperfast attention switching, and constant digital presence.</p>
      <p>The result of this mismatch is not only a decline in engagement with the learning process but
also a deeper phenomenon — the delegation of thinking to algorithms without critical verification,
ethical sensitivity, or conscious cognitive effort. The educational discourse no longer revolves
around the question of whether to allow or prohibit AI, as prohibition in an era of open access is
both technically and pedagogically utopian. The real challenge lies in developing students’ capacity
to think in collaboration with AI: to formulate questions, construct hypotheses, verify facts, engage
in self-reflection, and practice reasoned doubt about “ready-made” answers.</p>
      <p>ORCID: 0000-0002-7569-9982 (L. Maliuta); 0000-0002-1357-9643 (V. Rudan); 0000-0002-1244-101X (O. Vladymyr);
0000-0002-7423-9968 (H. Humeniuk); 0000-0002-7088-6930 (V. Zhukovskyy)</p>
      <p>The aim of this paper is to explore the transformation of the higher education ecosystem under
the conditions of intensive AI integration by analyzing educational practices, rethinking the
instructor’s role, and substantiating new approaches to organizing the learning process.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Related work</title>
      <p>The integration of artificial intelligence into higher education has become the focus of a growing
body of research encompassing the transformation of pedagogical approaches, students’ cognitive
behavior, ethical risks, and institutional readiness for digital change. Existing studies provide a
multidimensional perspective on the impact of generative AI on the educational ecosystem — with
particular emphasis on rethinking assessment, the role of instructors, and the cognitive profiles of
Generation Z and Generation Alpha.</p>
      <p>
        A foundational theoretical framework is offered by the review study of Dwivedi et al. [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], which
systematizes contemporary approaches to the “responsible” use of AI in education. The authors
emphasize the need to align technological implementation with ethical criteria and educational
outcomes, while also pointing out the lack of empirical models linking AI to the development of
metacognitive skills — an aspect we advance in this work.
      </p>
      <p>
        In the context of rethinking assessment, Balducci [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] justifies the need to shift toward a human-centered
model in which evaluation serves as a tool for fostering autonomy and critical thinking.
The ideas are further developed by Perkins et al. [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ], who proposed the AI Assessment Scale
(AIAS) as an ethical instrument for evaluating the integration of generative AI into academic
control systems. Both approaches highlight the relevance of moving from assessment of
“correctness” to assessment of “depth of understanding and reflection.”
      </p>
      <p>
        The cognitive consequences of using AI in education are thoroughly analyzed by Skulmowski
and Xu [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ], who, based on cognitive load theory, argue that without proper pedagogical design,
generative models can increase extraneous load and reduce the depth of knowledge acquisition. In
turn, Mallik and Gangopadhyay [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ] differentiate between proactive and reactive uses of AI in
education, emphasizing the importance of strategically aligning tools with learning objectives.
      </p>
      <p>
        The preparation of instructors for the new digital era is examined in the review study by Viberg
et al. [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ], which asserts that the effective integration of AI is linked to the level of instructors’ AI
literacy: the ability to formulate prompts, understand model functioning, and ethically manage
interactions with technology. This resonates with the conclusions of Knoth et al. [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ], who view
prompt engineering as a cognitive-linguistic strategy capable of activating students’ analytical
thinking.
      </p>
      <p>
        The risks associated with loss of agency and students’ dependency on AI are explored by Han et
al. [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] and Yan et al. [
        <xref ref-type="bibr" rid="ref19">19</xref>
        ]. Their research highlights student concerns about the displacement of
their own cognitive activity by algorithms, particularly in the absence of facilitation, critical
verification, and pedagogical support. Institutional responses to these risks are already emerging,
as reflected in the AI Policy 2024–2025 of the Harvard Graduate School of Education [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ], which sets
explicit standards for responsible AI use in academic contexts.
      </p>
      <p>
        Special attention is drawn to studies focusing on general and embodied AI in education. Latif et
al. [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] investigate the prospects of artificial general intelligence (AGI), particularly the potential of
hybrid cognitive architectures. Memarian and Doleck [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] emphasize the importance of a
multisensory and context-sensitive approach to AI in educational environments, proposing a model
in which embodiment, environment, and cognition function as a unified cognitive system.
      </p>
      <p>
        The review is concluded by Kamalov and Gurrib [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ], who introduce the concept of a
“multivector revolution” in education under the influence of AI. The authors propose a typology of
changes — automation, augmentation, and transformation — which provides a conceptual
foundation for rethinking the architecture of the educational process.
      </p>
      <p>
        As highlighted by Owoc et al. [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ], AI adoption in education is characterized by a dual nature —
significant benefits for personalization and efficiency, but also major challenges related to
implementation strategies and pedagogical adaptation.
      </p>
      <p>
        Overall, contemporary literature establishes a solid conceptual basis for the transformation of
higher education in the era of artificial intelligence. In particular, similar problems were mentioned
in the paper [
        <xref ref-type="bibr" rid="ref20">20</xref>
        ]. However, questions regarding the practical linkage between empirical models of
AI use by students and instructors and updated pedagogical formats remain insufficiently explored.
Our article aims to address this gap by combining quantitative data with an original architecture of
the learning session, an updated distance-learning model, and new approaches to instructor
retraining — with a focus on cognitive ethics and the preservation of student agency.
      </p>
    </sec>
    <sec id="sec-3">
      <title>3. Methodology</title>
      <p>The aim of this study was to empirically assess the degree of integration of generative artificial
intelligence (AI) into teaching and learning practices at higher education institutions (HEIs) in
Ukraine, to identify key risks, and to justify pedagogical responses to the emerging challenges.</p>
      <p>The study was conducted in two stages. At the first stage, a standardized survey was carried out
among two key respondent groups: HEI instructors (N = 104) and students (N = 114). The survey
was distributed via email to 145 instructors and 150 students, respectively. The response rates were
71.7% for instructors and 76.0% for students. The sample included representatives from 12
universities across different regions of Ukraine — Ternopil, Lviv, Khmelnytskyi,
Kamianets-Podilskyi, Ivano-Frankivsk, Kyiv, and Odesa — ensuring territorial representativeness and
interdisciplinarity.</p>
      <p>The respondent selection method was a convenience sampling approach, which allowed the
inclusion of diverse academic profiles but limits the generalizability of the results to the entire
Ukrainian higher education system. This limitation is explicitly stated within the methodological
framework, and the conclusions are formulated with consideration of the sample’s characteristics.</p>
      <p>Additional empirical verification was conducted during an experimental session in a computer
laboratory, where students interacted with the ChatGPT-4o model while completing cognitively
demanding tasks. To record behavioral parameters of prompt-based interaction, the AI Prompt
Logger plugin was employed, followed by event data import into Google BigQuery via Apps Script.
Data processing was performed using Python with the pandas, tiktoken, and numpy libraries, while
data visualization was generated using matplotlib. This technical toolkit enabled a detailed analysis
of prompt structure, interaction iterativity, and session duration, providing a foundation for
formulating new IT-based criteria for the quality assessment of educational analytics.</p>
      <p>Two separate questionnaires were developed:
• for instructors — focusing on AI usage practices in teaching, risk assessment, and the need
to adapt educational processes;
• for students — emphasizing areas of AI application in learning, levels of awareness, and
anticipated benefits and risks.</p>
      <p>All questions were closed-ended, ensuring response standardization and enabling quantitative
analysis. The survey was conducted online in March 2025, aligning with the conditions of remote
access under wartime restrictions.</p>
      <p>For data processing, the following analytical methods were used:
• descriptive statistics on the prevalence of AI use among respondents;
• ranking of key risks and challenges;
• identification of major trends in awareness, attitudes toward AI, and needs for developing
AI-related competencies.</p>
      <p>The proposed models are based both on the results of empirical analysis and on a systematic
understanding of global trends in the transformation of higher education under digitalization,
enabling the formation of an integrated vision for its renewal in the era of generative AI.</p>
      <p>To evaluate the effectiveness of AI-assisted learning, we introduce two accuracy indicators: (i)
High-Relevance Response Rate (HRR≥4), defined as the proportion of model responses rated 4 or
higher on a 0–5 scale; and (ii) Iterative Prompting Effect Size (Cohen’s d), measuring the
standardized difference in response relevance between iterative and one-shot interactions.</p>
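      <p>As a concrete illustration, both indicators can be computed from rated response logs in a few lines. The Python sketch below uses synthetic ratings rather than the study’s data, and assumes the pooled-standard-deviation form of Cohen’s d:</p>

```python
import numpy as np

def hrr_at_4(scores):
    """High-Relevance Response Rate: share of responses rated >= 4 on the 0-5 scale."""
    scores = np.asarray(scores, dtype=float)
    return float((scores >= 4).mean())

def cohens_d(iterative, one_shot):
    """Standardized mean difference between two rating groups (pooled SD)."""
    a = np.asarray(iterative, dtype=float)
    b = np.asarray(one_shot, dtype=float)
    pooled = np.sqrt(((len(a) - 1) * a.var(ddof=1) + (len(b) - 1) * b.var(ddof=1))
                     / (len(a) + len(b) - 2))
    return float((a.mean() - b.mean()) / pooled)

# Illustrative (synthetic) relevance ratings, not the study's data
iterative = [4, 5, 4, 4, 3, 5, 4]
one_shot = [3, 2, 4, 3, 3, 2, 4]
print(f"HRR>=4 (iterative): {hrr_at_4(iterative):.2f}")
print(f"Cohen's d: {cohens_d(iterative, one_shot):.2f}")
```

      <p>A positive d indicates that iterative interactions received higher relevance ratings than one-shot queries; the pooled-SD variant keeps the estimate symmetric across groups of unequal size.</p>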
    </sec>
    <sec id="sec-4">
      <title>4. Results and discussion</title>
      <p>The empirical data collected through the survey enable an analysis of the actual scale and specific
features of the integration of generative artificial intelligence into higher education practices, using
the example of students and instructors from 12 Ukrainian universities. Although the sample was
formed using a convenience sampling method, its structure encompasses a wide range of regions,
educational levels, and disciplines, allowing the identification of several representative trends in AI
use within the academic environment.</p>
      <p>The analysis of the obtained data revealed a high level of AI engagement in everyday
educational activities on the part of both students and instructors. At the same time, significant
differences were identified in the nature of use, levels of awareness, and perceived risks. The
systematized results are presented in Table 1.</p>
      <p>The empirical data obtained allow for several key conclusions regarding the current state of
artificial intelligence technology integration into Ukrainian higher education, as well as the
identification of major challenges and directions for future change.</p>
      <p>
        First, the high level of AI use among both students (83.3%) and instructors (74.1%) indicates that
generative AI technologies have already become an organic part of the educational environment.
This level of penetration aligns with global trends: according to the Stanford HAI report [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ],
ChatGPT is known to 63% of respondents in an international survey, with about half using it
weekly. While the report does not provide separate data specifically for U.S. students, the overall
figures demonstrate the rapid spread of generative AI among education users. However, the mere
fact of technology use is not a sufficient indicator of successful integration; more important are the
ways in which it is applied and the depth of its impact on students’ cognitive processes.
      </p>
      <p>
        Second, the survey results revealed a critical gap between the intensity of AI use and the level of
risk awareness. Approximately 68% of students and 68.5% of instructors rate their awareness level
as medium or high; at the same time, both groups clearly identify significant threats: the loss of
critical thinking skills, violations of academic integrity, and decreased motivation for independent
work. These findings correspond to the conclusions of the OECD report “AI and the Future of
Skills” [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ], which notes that without pedagogical guidance, AI integration can promote passive
learning and superficial knowledge processing, thereby requiring a rethinking of approaches to
educational process organization.
      </p>
      <p>
        Third, the analysis of AI usage areas shows a predominance of auxiliary functions (such as
preparing presentations, spellchecking, and idea generation; see Figure 1) over analytical or
creative information processing. This confirms the assumption made by UNESCO [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ] that, in the
absence of specific pedagogical strategies, students primarily use AI to automate memorization and
reproduction of information, which does not contribute to the development of higher-order
cognitive skills.
      </p>
      <p>At the same time, the structure of AI use among instructors reveals different priorities: the focus
is on preparing teaching materials (70.8%) and generating educational content (41.7%). The use of
adaptive learning (31.3%) and automatic knowledge assessment (22.9%) is significantly less
common, which may indicate cautious implementation of AI in the critical elements of the
educational process. The least applied are chatbots for student support (10.4%), likely due to
insufficient technical support or doubts about their pedagogical relevance (see Figure 2).</p>
      <p>Unlike students, who actively use AI in individual cognitive activities, instructors
predominantly integrate it as a tool to enhance didactic practices, highlighting a distinct functional
focus — knowledge consumption versus knowledge transmission — and necessitating a
differentiated approach to the development of educational content and instructor training
programs.</p>
      <p>Particularly noteworthy is the identified distribution of perceived risks among instructors. They
consider the most critical threats to be violations of academic integrity by students (70.4%) and
decreased motivation for deep learning (68.5%). Only 18.5% of instructors express concern about
breaches of data confidentiality, reflecting an underestimated risk in the context of working with
personalized educational systems. It is important to emphasize that contemporary academic
research [11; 15] draws attention to personal data confidentiality as one of the key ethical
challenges in the use of AI in education, underscoring the necessity of strict adherence to
information protection standards.</p>
      <p>Student risks are primarily concentrated around the loss of independent thinking skills (66.7%)
and dependence on technology (47.2%). This indicates a partial student awareness of the potential
negative consequences of technological reliance but simultaneously reflects a lack of practical
strategies in the learning process to mitigate such risks.</p>
      <p>Analyzing these results overall, it can be argued that higher education in Ukraine faces a dual
challenge:
• on the one hand, it is necessary to maximally integrate AI technologies as a tool for
enhancing educational potential;
• on the other hand, conditions must be created under which AI use strengthens critical
thinking, fosters reflection, and promotes the development of autonomous intellectual
activity among students.</p>
      <p>
        Of particular importance is the reconsideration of the instructor’s role in the AI era. The
replacement of instructors by technology is seen as a real threat by 38.9% of respondents. This
aligns with the conclusions of the World Economic Forum [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ], which notes that in the future, the
instructor’s function will increasingly shift from knowledge transmission to moderating the
learning process, fostering critical analysis skills, and cultivating students’ ethical responsibility.
      </p>
      <p>Thus, the research results not only confirm the relevance of global trends but also highlight
several specific challenges faced by the Ukrainian higher education system in the context of AI
integration. Preserving students’ cognitive autonomy, adapting the content of educational
programs and assessment systems, transforming the instructor’s role, and creating a safe
environment for AI use are key factors for the successful modernization of the educational
ecosystem.</p>
      <p>Given the results of the empirical study and the identified challenges associated with the spread
of artificial intelligence technologies in higher education, there arises a need to rethink the
traditional structure of the learning session. Classical approaches that rely on linear knowledge
transmission, checking understanding through direct questioning, testing, and discussing isolated
issues are increasingly proving insufficient in the context of digital hyperreality.</p>
      <p>A key complicating factor is the radical change in the cognitive behavior of Generation Z and
Alpha students. Under the influence of dynamic formats of digital culture (such as short videos on
TikTok, YouTube Shorts, Twitter/X, and Telegram channels offering texts of just 1–3 paragraphs),
a phenomenon has emerged that can be conditionally termed the “clickable thinking syndrome” —
a cognitive predisposition toward consuming vivid, compressed, and fragmented information that
does not require deep reflection. This leads to a decline in the ability to engage in consistent
analytical work with texts, concepts, and sources.</p>
      <p>
        Under such conditions, the traditional linear structure of 80–90-minute sessions, built on a
passive perception model, loses its effectiveness even among academically motivated students. This
is especially pronounced in the context of multiscreening, parallel access to AI services,
notifications, and social media, which create a constant backdrop of digital stimuli. According to
the study by Dwivedi et al. [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], the average cognitive endurance of students in digital environments
has decreased by 25–30% compared to levels in the 2010s. This requires a rethinking of the learning
session format as a modular, dynamically structured system that alternates between instrumental,
reflective, and interpretive stages, taking into account the altered cognitive profiles of learners.
      </p>
      <p>
        The sessional construction of the class, proposed as a response to the challenges of the AI era,
allows variation in pace, depth, and type of cognitive activity, ensuring a balance between
technological support and the student’s thinking autonomy. The initial session is based on
Gregersen’s Question Burst methodology — a tool for stimulating productive curiosity, which
involves generating numerous questions without immediately seeking answers. This approach has
proven effective in the context of fostering innovative thinking in business, education, and R&amp;D
environments [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ], and its adaptation to the academic setting creates the necessary cognitive
tension from the very first minutes of the session.
      </p>
      <p>The subsequent sessions involve the structured use of AI tools (including ChatGPT, Claude,
Copilot, Perplexity) not as sources of ready-made answers but as partners in the analytical process.
This approach aligns with recent studies emphasizing that only critically guided interaction with
artificial intelligence fosters the growth of cognitive independence, rather than diminishing it [2;
4]. Solution-seeking, fact-checking, and analytical comparison of AI-generated responses with
other sources become not only means of task-solving but also methods for developing
metacognitive thinking.</p>
      <p>To verify the effectiveness of this approach, a targeted experiment was conducted in a computer
laboratory, where students interacted with ChatGPT-4o while performing cognitively demanding
tasks. During a laboratory session, 17 undergraduate students majoring in Management at the
Ternopil Ivan Puluj National Technical University (TNTU) engaged with the ChatGPT-4o model to
solve three types of analytical tasks:
• Case-based scenario: analysis of a business problem with justification of a managerial
decision;
• Fact-checking: verification of claims using open-source data;
• Reflective essay: critical reflection on a given topic with formulation of an individual
position.</p>
      <p>Interaction with the AI model was carried out individually during an 80-minute session in a
dedicated computer lab.</p>
      <p>To capture students’ digital behavior, an engineering data pipeline was implemented:
• AI Prompt Logger (a browser-based open-source plugin) automatically recorded the
parameters: timestamp, model, prompt, tokens, and latency_ms in JSON format;
• Event logs were streamed in real time to Google Sheets, and subsequently imported into
BigQuery via Apps Script using the onFormSubmit trigger;
• Data processing was conducted in Python 3.12 using the libraries pandas, tiktoken, and
numpy. The relevance of AI-generated responses was assessed by expert raters on a 0–5
scale.</p>
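      <p>To make the processing step concrete, the sketch below replays a few synthetic log events in the JSON shape just described and derives per-student interaction metadata with pandas; the student identifiers and prompt texts are illustrative assumptions, not experiment data:</p>

```python
import io
import pandas as pd

# Synthetic events in the logged JSON shape (timestamp, model, prompt, tokens,
# latency_ms); student IDs and prompt texts are illustrative, not experiment data.
raw = io.StringIO(
    '{"student": "s01", "timestamp": "2025-03-10T10:00:05", "model": "gpt-4o", '
    '"prompt": "Outline the case problem", "tokens": 6, "latency_ms": 820}\n'
    '{"student": "s01", "timestamp": "2025-03-10T10:03:40", "model": "gpt-4o", '
    '"prompt": "Refine: add financial constraints", "tokens": 9, "latency_ms": 910}\n'
    '{"student": "s02", "timestamp": "2025-03-10T10:01:12", "model": "gpt-4o", '
    '"prompt": "Verify the export claim", "tokens": 7, "latency_ms": 700}\n'
)

df = pd.read_json(raw, lines=True)
df["timestamp"] = pd.to_datetime(df["timestamp"])

# Per-student interaction metadata: iteration count, mean prompt size, and
# session span in minutes -- the raw inputs for prompt-quality analytics.
summary = df.groupby("student").agg(
    iterations=("prompt", "size"),
    mean_tokens=("tokens", "mean"),
    session_min=("timestamp", lambda t: (t.max() - t.min()).total_seconds() / 60),
)
print(summary)
```

      <p>The same aggregation can be expressed as a scheduled BigQuery query; the pandas form is shown here because it matches the Python 3.12 toolchain used in the experiment.</p>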
      <p>Table 2 presents the results of the laboratory experiment.</p>
      <p>The analysis of AI–student interactions demonstrates that iterative prompting consistently
yields higher relevance compared to one-shot queries. This is reflected in both the HRR≥4 metric
and the Cohen’s d effect size, confirming the pedagogical significance of structured, multi-step
prompting.</p>
      <p>Figure 3 presents a heatmap visualizing the average relevance score of AI responses depending
on prompt category and length.</p>
      <p>The visualization was generated using the seaborn and matplotlib libraries.</p>
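      <p>A heatmap of this kind reduces to an imshow over a category-by-length pivot table. The minimal sketch below uses matplotlib only and synthetic mean relevance scores (the category names, length bands, and values are illustrative assumptions, not the experiment’s ratings):</p>

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so the figure renders without a display
import matplotlib.pyplot as plt
import pandas as pd

# Synthetic mean relevance scores (0-5) by prompt category and length band;
# values are illustrative assumptions, not the experiment's ratings.
records = [
    ("case", "short", 3.1), ("case", "medium", 3.8), ("case", "long", 4.2),
    ("fact-check", "short", 2.9), ("fact-check", "medium", 3.6), ("fact-check", "long", 4.0),
    ("essay", "short", 3.0), ("essay", "medium", 3.7), ("essay", "long", 4.1),
]
df = pd.DataFrame(records, columns=["category", "length", "relevance"])
pivot = df.pivot(index="category", columns="length", values="relevance")[["short", "medium", "long"]]

fig, ax = plt.subplots(figsize=(5, 3))
im = ax.imshow(pivot.values, cmap="viridis", vmin=0, vmax=5)
ax.set_xticks(range(len(pivot.columns)))
ax.set_xticklabels(pivot.columns)
ax.set_yticks(range(len(pivot.index)))
ax.set_yticklabels(pivot.index)
fig.colorbar(im, ax=ax, label="mean relevance (0-5)")
fig.tight_layout()
fig.savefig("relevance_heatmap.png", dpi=150)
```
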
      <p>Despite the increased cognitive complexity of the tasks, 62% of student–AI interactions were
one-shot prompts without further clarification or refinement, indicating the predominance of an
impulsive prompting strategy. Conversely, the highest relevance scores (4.2 out of 5) were observed
in cases involving detailed and iterative prompts, empirically confirming the significance of
thoughtful prompt design.</p>
      <p>These findings support the introduction of a micro-module on prompt engineering, the
development of evaluation criteria based not only on final outputs but also on interaction metadata
(such as prompt length, number of iterations, and degree of revision), and the use of automated
logging tools as instruments for learning analytics in digital educational environments.</p>
      <table-wrap id="table-session">
        <caption>
          <p>Model structure of a learning session under AI integration: sessions, duration, purpose, and tools.</p>
        </caption>
        <table>
          <thead>
            <tr>
              <th>Session</th>
              <th>Duration</th>
              <th>Purpose</th>
              <th>Tools</th>
            </tr>
          </thead>
          <tbody>
            <tr>
              <td>1. Question Formulation (Gregersen’s Question Burst)</td>
              <td>10 min</td>
              <td>Activating curiosity, problem identification</td>
              <td>Small group work, instructor facilitation</td>
            </tr>
            <tr>
              <td>2. Solution Seeking with AI</td>
              <td>20 min</td>
              <td>Information processing, hypothesis generation</td>
              <td>ChatGPT, Claude, Perplexity, Copilot</td>
            </tr>
            <tr>
              <td>3. Analysis and Verification of AI Results</td>
              <td>20 min</td>
              <td>Developing critical thinking, assessing credibility</td>
              <td>Sources, fact-checking, comparison</td>
            </tr>
            <tr>
              <td>4. Reflection without AI</td>
              <td>20 min</td>
              <td>Fostering autonomous thinking, self-assessment</td>
              <td>Written reflection, “Thought Cards” method, “Empty Chair”</td>
            </tr>
            <tr>
              <td>5. Final Session</td>
              <td>10 min</td>
              <td>Summarization, setting follow-up tasks</td>
              <td>Whiteboard, Jamboard, polling, cards</td>
            </tr>
          </tbody>
        </table>
      </table-wrap>
      <sec id="sec-4-20">
        <p>The central component of the session is a phase of reflection without AI use, which ensures the
preservation of the student’s cognitive agency, the development of emotional self-regulation, and
interpretive skills. This component is critically important in light of contemporary research
findings [13; 19], which document the loss of depth in thinking when learning tasks are automated
without integrating a reflective component.</p>
        <p>Thus, the sessional model of the learning class reflects an adaptive response to the
transformation of the cognitive environment and shifts in student learning behavior. It combines
elements of digital flexibility with tools for developing critical and autonomous thinking and is
therefore regarded as a promising form of organizing a content-rich educational process in the
context of generative AI dominance. The model structure of a learning session under AI integration
is presented in the table above.</p>
        <p>After implementing the proposed sessional structure, not only does the content of the class
change, but so does the functional role of the instructor. Within this format, the educator no longer
serves solely as a source of knowledge or evaluator but is transformed into a manager of the
educational process, a facilitator of cognitive interaction, and a moderator of students’ intellectual
activity. The instructor’s task becomes one of strategically guiding students’ thinking — from
question formulation to hypothesizing, verification, analytical interpretation, and deep
self-reflection.</p>
        <p>The effective and thoughtful integration of AI in the second and third sessions does not replace
students’ thinking; on the contrary, it creates situations of cognitive tension where generative
models function as intellectual tools rather than substitutes for student activity. At the same time,
the planned session of reflection without gadgets allows the instructor to maintain a balance
between technological support and the preservation of students’ autonomous intellectual
engagement. This approach creates the conditions for developing critical-thinking, self-aware,
and responsible learners, the key agents in the era of AI.</p>
        <p>In such an instructional architecture, not only does the functional role of the instructor change,
but so does the approach to organizing learning interaction itself. Instead of controlling knowledge
acquisition, the educator transforms into a mentor of cognitive action, guiding students’ thinking
through question formulation, managing the informational environment, and supporting
autonomous judgment.</p>
        <p>Yet, to fully realize this model, particularly in wartime conditions, limited classroom access, or
predominantly remote learning, it becomes necessary to reconsider the logic underlying distance
education.</p>
        <p>Today, in many universities, distance learning continues to operate under a largely
formalistic scheme: standard Moodle tests, essay or short-answer uploads, and limited feedback
mechanisms. Such a model is increasingly out of step with both the actual cognitive effort of
students and the extent of AI use in their everyday lives. In an environment of unrestricted access
to generative models, learning tasks are often perceived as mere technical actions: querying,
copying, submitting. The student frequently acts not as a subject but as a transmitter of
AI-generated results, with minimal internal engagement.</p>
        <p>In response to these challenges, a conceptual update of the distance learning format is proposed
— shifting the focus from “answering questions” to a “prompt strategy.” The task is not to generate
a text but to construct an optimal prompt that produces a result surpassing the instructor’s sample
in logic, structure, and substance. The student must not only formulate the query but also explain
their approach, assess the relevance of the AI response, and interpret its content.</p>
        <p>For example:</p>
        <p>Task: Formulate a prompt for ChatGPT to build a comparative table of the economic models of
Ukraine and Poland, taking into account GDP, export structure, and tax policy.</p>
        <p>Criterion: The result must be deeper and more precise than the instructor’s example, with
justification for the chosen prompt structure and the selected indicators.</p>
        <p>This approach shapes a new cognitive profile for the student — not merely as an AI user, but as
a strategist and critic of the interaction process. The next stage involves analyzing the outcome:
what worked well, what requires improvement, and which aspects could be enhanced. The task
concludes with a brief oral reflection (up to 5 minutes) delivered via Zoom or as a video recording,
in which the student publicly evaluates their problem-solving pathway.</p>
        <p>Within this logic, the assessment system also transforms. A proposed model focuses on the
quality of AI use as a cognitive tool, with the following weighting:
• 50% — quality and complexity of the formulated prompt (depth, structure, relevance);
• 30% — analytical evaluation of the AI result (fact-checking, comparison, interpretation);
• 20% — oral or written reflection (argumentation, logical consistency, ability to draw
conclusions).</p>
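        <p>As a minimal sketch, the weighting above can be expressed as a single grading function; the component keys and the 0–100 score scale are illustrative assumptions, not part of the proposed model itself:</p>
        <preformat>
```python
# Sketch of the proposed 50/30/20 weighting. Component scores are assumed
# to be normalized to a 0-100 scale; the key names are illustrative only.
WEIGHTS = {
    "prompt_quality": 0.50,         # depth, structure, relevance of the prompt
    "analytical_evaluation": 0.30,  # fact-checking, comparison, interpretation
    "reflection": 0.20,             # argumentation, consistency, conclusions
}

def session_grade(scores: dict) -> float:
    """Combine the three component scores into a single 0-100 grade."""
    return sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)

# Example: strong prompt, good analysis, weaker reflection
print(session_grade({"prompt_quality": 90,
                     "analytical_evaluation": 80,
                     "reflection": 60}))  # -> 81.0
```
        </preformat>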
        <p>
          Such a system moves away from binary assessment (“right/wrong”) and fosters a
multidimensional view of student work — as a process of critical construction rather than
mechanical execution. This aligns with the ideas of Balducci [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ], who emphasizes the need to
develop cognitive autonomy in the context of AI, as well as with the recommendations of Perkins
et al. [
          <xref ref-type="bibr" rid="ref13">13</xref>
          ] regarding the creation of ethical evaluation systems (AI Assessment Scale, AIAS) aimed
at integrating technologies without displacing the human subject.
        </p>
        <p>Accordingly, the instructor’s role undergoes a second transformation — in the distance-learning
format, they act as a curator of thinking rather than an administrator of the system. Their function
is not to check for the correctness of the answer but to pose a task that provokes intellectual action
and to accompany the student in the process of formulating a high-quality inquiry strategy. Under
conditions of widespread AI use, this is the only path to preserving cognitive agency and shaping a
competent learner in the era of artificial intelligence.</p>
        <p>Institutional modernization of higher education under conditions of intensive digitalization is
impossible without the redefinition of the instructor’s role. In a context where students are already
actively and pervasively integrating AI into their educational practices, technological passivity on
the part of instructors leads to several critical consequences: the loss of pedagogical authority, the
decline in the relevance of learning content, and a disconnect between the substance of
assignments and students’ actual cognitive practices.</p>
        <p>
          The problem does not lie in the mere fact of AI use but in the lack of conscious pedagogical
guidance. Without proper facilitation, the learning process is reduced to the automated execution
of instructions, where generative models effectively displace the need for deep analysis, the
formulation of independent judgments, and the evaluation of information. As Balducci [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ] argues,
it is precisely the absence of accompanying reflective thinking in interactions with AI that poses
the main threat to academic integrity and students’ cognitive development. In this context,
instructor retraining becomes a decisive condition for shaping a technologically mature educational
ecosystem.
        </p>
        <p>
          This primarily concerns the inclusion of AI literacy modules in professional development
programs for instructors, covering the following components:
• Understanding the architecture and limitations of generative models, including GPT-4,
Claude, Gemini, and Copilot: principles of design, types of training, vulnerabilities, and
constraints [
          <xref ref-type="bibr" rid="ref17">17</xref>
          ];
• Ethical risks associated with their use in education, focusing on issues of authorship, data
privacy, and result manipulation [13; 19];
• Methods for constructing effective prompts, adapting assignments to the logic of
generative models, and designing scenarios oriented toward developing critical thinking
rather than mere information reproduction [
          <xref ref-type="bibr" rid="ref7">7</xref>
          ].
        </p>
        <p>Particular importance is given to training instructors in designing assignments that cannot be
solved through simple copying from AI. Such tasks should require students to reconstruct the logic
of the response, justify their choice of prompt, compare outcomes with alternative sources, and
formulate combined queries for multi-step analytical scenarios. For example:
• “Explain the logic behind the AI’s response when comparing tax models. What did it
overlook? How would you modify the prompt?”
• “Construct three prompts with the same goal but using different strategic approaches —
and compare the results.”</p>
        <p>
          In this context, the instructor no longer serves as a traditional source of knowledge. Instead,
they are transformed into an architect of thinking, who models situations of cognitive choice,
guides students’ interpretive strategies, and facilitates analytical engagement with technological
tools. According to Viberg et al. [
          <xref ref-type="bibr" rid="ref17">17</xref>
          ], instructors proficient in prompt engineering and
knowledgeable about the logic of generative systems demonstrate higher effectiveness in
developing students’ metacognitive skills.
        </p>
        <p>Moreover, educational institutions that systematically support instructors in this area —
through regular training, pilot programs, and experience-sharing — reduce the risks of AI misuse
and foster an ethically resilient academic environment capable of self-regulation in conditions of
technological uncertainty [7; 13].</p>
        <p>Thus, rethinking instructor preparation is not merely a technical adaptation task, but a strategic
step toward building a new educational culture in which technologies are not tools of
simplification but instruments for cultivating intellectual complexity.</p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Conclusions</title>
      <p>The integration of generative artificial intelligence (AI) into higher education in Ukraine is already
a reality, as confirmed by the results of an empirical study involving students and academic staff
from 12 universities. The survey revealed:
• a high level of penetration of AI tools into everyday educational practices (over 80% of
students and about three-quarters of instructors reported usage);
• a predominance of instrumental applications such as essay writing, exam preparation,
and material generation, with limited use in analytical or reflective tasks;
• awareness of critical risks, particularly the erosion of critical thinking, academic
integrity violations, and reduced student motivation.</p>
      <p>These empirical findings point to a systemic gap: while AI is actively used, pedagogical practices
and institutional frameworks remain insufficiently adapted. Most universities still rely on
transmissive models of teaching and standardized assessment formats that do not foster
higher-order skills such as reflection, analytical flexibility, or learner autonomy.</p>
      <p>To address this mismatch, the paper substantiates a pedagogical framework consisting of four
key elements:
1. Course architecture – session-based models that alternate between question
formulation, AI-supported analysis, and reflection without AI.
2. Distance learning – updated logic of interaction built on prompt strategies rather than
one-way answer submission.
3. Assessment system – criteria that evaluate not only learning outcomes but also
interaction processes (prompt quality, iterations, revisions).
4. Instructor’s role – transformation into a cognitive architect and ethical mediator,
requiring systematic retraining and AI literacy development.</p>
      <p>The validity of the proposed model is supported by two indicators: the High-Relevance
Response Rate (HRR≥4) and the Iterative Prompting Effect Size (Cohen’s d). These metrics
demonstrate that iterative prompting enhances the relevance and depth of AI-generated outputs
compared to one-shot interactions, thereby supporting the educational significance of structured
prompting strategies.</p>
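      <p>A brief sketch of how these two indicators could be computed from rating data; the 1–5 relevance scale and the sample values below are assumptions for illustration, not the study’s data:</p>
      <preformat>
```python
from statistics import mean

def high_relevance_rate(scores):
    """HRR: share of responses rated >= 4 on an assumed 1-5 relevance scale."""
    return sum(s >= 4 for s in scores) / len(scores)

def cohens_d(group_a, group_b):
    """Cohen's d with pooled standard deviation (unbiased sample variances)."""
    n1, n2 = len(group_a), len(group_b)
    v1 = sum((x - mean(group_a)) ** 2 for x in group_a) / (n1 - 1)
    v2 = sum((x - mean(group_b)) ** 2 for x in group_b) / (n2 - 1)
    pooled_sd = (((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)) ** 0.5
    return (mean(group_a) - mean(group_b)) / pooled_sd

# Hypothetical relevance ratings for iterative vs. one-shot prompting
iterative = [4, 5, 4, 5, 3, 4]
one_shot = [3, 2, 4, 3, 3, 2]
print(round(high_relevance_rate(iterative), 2))  # -> 0.83
print(round(cohens_d(iterative, one_shot), 2))   # -> 1.77
```
      </preformat>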
      <p>The experimental “Prompt-Lab-80” session confirmed this assumption: while most students
defaulted to one-shot prompting, the highest relevance scores were achieved in iterative tasks with
critical revisions. This highlights both the risks of superficial AI use and the potential of targeted
pedagogical interventions such as micro-modules on prompt engineering.</p>
      <p>Overall, the study contributes an original theoretical and practical framework for integrating
generative AI into higher education. It not only illustrates the risks of unstructured adoption but
also offers concrete mechanisms for preserving cognitive autonomy, enhancing critical thinking,
and redefining the role of educators.</p>
      <p>Future research should focus on:
• large-scale validation of the session-based model using experimental or
quasi-experimental designs;
• cross-country comparisons to identify adaptive mechanisms suitable for Ukraine;
• development of instruments to measure AI literacy among instructors and students;
• longitudinal analysis of the cognitive impact of different prompting strategies.</p>
      <p>By combining empirical evidence with conceptual innovations, this work lays a foundation for a
new educational paradigm in which generative AI functions not as a substitute for thinking but as
a structured catalyst for intellectual development.</p>
    </sec>
    <sec id="sec-6">
      <title>Declaration on Generative AI</title>
      <p>During the preparation of this work, the authors used GPT-4o solely for grammar and spelling
checks. All content was independently reviewed, verified, and edited by the author(s), who take full
responsibility for the final version of the publication.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <surname>Balducci</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          <article-title>AI and student assessment in human-centered education</article-title>
          .
          <source>Front. Educ</source>
          .
          <volume>9</volume>
          (
          <year>2024</year>
          ):
          <fpage>1383148</fpage>
          . https://doi.org/10.3389/feduc.2024.1383148.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <surname>Dwivedi</surname>
            ,
            <given-names>Y.K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hughes</surname>
            ,
            <given-names>D.L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ismagilova</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Aarts</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Coombs</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Crick</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          , et al.
          <article-title>Responsible AI for teaching and learning: A systematic literature review and research agenda</article-title>
          .
          <source>J. Bus. Res</source>
          .
          <volume>153</volume>
          (
          <year>2022</year>
          ):
          <fpage>1</fpage>
          -
          <lpage>15</lpage>
          . https://doi.org/10.1016/j.jbusres.2022.08.010.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <surname>Gregersen</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          <article-title>Questions Are the Answer: A Breakthrough Approach to Your Most Vexing Problems at Work and in Life</article-title>
          . Harper Business (
          <year>2018</year>
          ). https://halgregersen.com/questions-are-the-answer.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <surname>Han</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Coghlan</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Buchanan</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>McKay</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          <article-title>Who is Helping Whom? Student Concerns about AI-Teacher Collaboration in Higher Education Classrooms</article-title>
          . arXiv preprint,
          <source>arXiv:2412.14469</source>
          (
          <year>2024</year>
          ). https://arxiv.org/abs/2412.14469.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5] Harvard Graduate School of Education.
          <article-title>Artificial Intelligence Policy 2024-2025</article-title>
          . Harvard GSE (
          <year>2024</year>
          ). https://registrar.gse.harvard.edu/AI-policy.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <surname>Kamalov</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gurrib</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          <article-title>A New Era of Artificial Intelligence in Education: A Multifaceted Revolution</article-title>
          . CoRR, abs/2305.18303 (
          <year>2023</year>
          ). https://arxiv.org/abs/2305.18303.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <surname>Knoth</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tolzin</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Janson</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Leimeister</surname>
            ,
            <given-names>J.M.</given-names>
          </string-name>
          <article-title>AI literacy and its implications for prompt engineering strategies</article-title>
          .
          <source>Comput. Educ.: Artif. Intell</source>
          .
          <volume>6</volume>
          (
          <year>2024</year>
          ):
          <fpage>100225</fpage>
          . https://doi.org/10.1016/j.caeai.2024.100225.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <surname>Latif</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Mai</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Nyaaba</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wu</surname>
            ,
            <given-names>X.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Liu</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lu</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          , et al.
          <article-title>Artificial General Intelligence (AGI) for Education</article-title>
          . CoRR, abs/2304.12479 (
          <year>2023</year>
          ). https://arxiv.org/abs/2304.12479.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <surname>Mallik</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gangopadhyay</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          <article-title>Proactive and Reactive Engagement of Artificial Intelligence Methods for Education: A Review</article-title>
          .
          <source>Front. Artif. Intell</source>
          .
          <volume>6</volume>
          (
          <year>2023</year>
          ). https://doi.org/10.3389/frai.2023.1132363.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <surname>Memarian</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Doleck</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          <article-title>Embodied AI in Education: A Review on the Body, Environment, and Mind</article-title>
          .
          <source>Educ. Inf. Technol.</source>
          <volume>29</volume>
          (
          <issue>1</issue>
          ) (
          <year>2024</year>
          ):
          <fpage>895</fpage>
          -
          <lpage>916</lpage>
          . https://doi.org/10.1007/s10639-023-11880-y.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          Organisation for Economic Co-operation and Development.
          <article-title>Artificial intelligence and the future of skills, Volume 2</article-title>
          . OECD Publishing (
          <year>2023</year>
          ). https://www.oecd.org/en/publications/ai-and-the-future-of-skills-volume-2_a9fe53cb-en.html.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <surname>Owoc</surname>
            ,
            <given-names>M.L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sawicka</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Weichbroth</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          <article-title>Artificial Intelligence Technologies in Education: Benefits, Challenges and Strategies of Implementation</article-title>
          . CoRR, abs/2102.09365 (
          <year>2021</year>
          ). https://arxiv.org/abs/2102.09365.
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <surname>Perkins</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Furze</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Roe</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>MacVaugh</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          <article-title>The AI Assessment Scale (AIAS): A Framework for Ethical Integration of Generative AI in Educational Assessment</article-title>
          . arXiv preprint,
          <source>arXiv:2312.07086</source>
          (
          <year>2023</year>
          ). https://arxiv.org/abs/2312.07086.
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <surname>Skulmowski</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Xu</surname>
            ,
            <given-names>K.M.</given-names>
          </string-name>
          <article-title>Understanding Cognitive Load in Digital and Online Learning: A New Perspective on Extraneous Cognitive Load</article-title>
          .
          <source>Educ. Psychol. Rev</source>
          .
          <volume>34</volume>
          (
          <issue>1</issue>
          ) (
          <year>2022</year>
          ):
          <fpage>171</fpage>
          -
          <lpage>196</lpage>
          . https://doi.org/10.1007/s10648-021-09624-7.
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15] Stanford Institute for Human-Centered Artificial Intelligence.
          <source>AI Index Report 2024</source>
          . Stanford University (
          <year>2024</year>
          ). https://hai.stanford.edu/ai-index/2024-ai-index-report.
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          United Nations Educational, Scientific and Cultural Organization.
          <article-title>Artificial intelligence technologies in education: Prospects and consequences</article-title>
          .
          <source>UNESCO</source>
          (
          <year>2024</year>
          ). https://unesdoc.unesco.org/ark:/48223/pf0000382446.
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <surname>Viberg</surname>
            ,
            <given-names>O.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hatakka</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Mavroudi</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Khalil</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          <article-title>Teacher perspectives on using AI tools for instruction and assessment in higher education: A scoping review</article-title>
          .
          <source>Br. J. Educ. Technol</source>
          .
          <volume>54</volume>
          (
          <issue>6</issue>
          ) (
          <year>2023</year>
          ):
          <fpage>1355</fpage>
          -
          <lpage>1374</lpage>
          . https://doi.org/10.1111/bjet.13390.
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18] World Economic Forum.
          <source>The future of jobs report 2024</source>
          . World Economic Forum (
          <year>2024</year>
          ). https://www.weforum.org/reports/the-future-of-jobs-report-2024/.
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <surname>Yan</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sha</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zhao</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Li</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Martinez-Maldonado</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Chen</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          , et al.
          <article-title>Practical and Ethical Challenges of Large Language Models in Education: A Systematic Scoping Review</article-title>
          . arXiv preprint,
          <source>arXiv:2303.13379</source>
          (
          <year>2023</year>
          ). https://arxiv.org/abs/2303.13379.
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name><surname>Duda</surname>, <given-names>O.</given-names></string-name>
          ,
          <string-name><surname>Kochan</surname>, <given-names>V.</given-names></string-name>
          ,
          <string-name><surname>Kunanets</surname>, <given-names>N.</given-names></string-name>
          ,
          <string-name><surname>Matsiuk</surname>, <given-names>O.</given-names></string-name>
          ,
          <string-name><surname>Pasichnyk</surname>, <given-names>V.</given-names></string-name>
          ,
          <string-name><surname>Sachenko</surname>, <given-names>A.</given-names></string-name>
          ,
          <string-name><surname>Pytlenko</surname>, <given-names>T.</given-names></string-name>
          <article-title>Data Processing in IoT for Smart City Systems</article-title>
          .
          <source>The 10th IEEE International Conference on Intelligent Data Acquisition and Advanced Computing Systems: Technology and Applications (IDAACS'2019)</source>
          , 18-21 September
          <year>2019</year>
          , Metz, France, vol.
          <volume>1</volume>
          , pp.
          <fpage>96</fpage>
          -
          <lpage>99</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>