<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <article-meta>
      <title-group>
        <article-title>Does Personalised Learning Influence Students' Self-Evaluation of Learning in Digital Learning Environments?</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Stefanie Vanbecelaere</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Ine Windey</string-name>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Tian He</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Sameh Said-Metwaly</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Centre for Instructional Psychology and Technology, KU Leuven</institution>
          ,
          <addr-line>Leuven</addr-line>
          ,
          <country country="BE">Belgium</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Faculty of Psychology and Educational Sciences, KU Leuven</institution>
          ,
          <addr-line>Kortrijk</addr-line>
          ,
          <country country="BE">Belgium</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>This study examines the impact of digital personalised learning (DPL) environments on students' self-evaluation of learning. The DPL environment in this study adapts to individual characteristics such as prior knowledge, metacognition, and motivation. While previous research on DPL has produced mixed results, few studies have explored its influence on self-evaluation of learning, which is associated with higher motivation and improved learning outcomes. This study aims to examine the following: (1) the difference in students' self-evaluation between adaptive and non-adaptive learning environments, (2) the moderating effect of the subject area on this difference, and (3) the impact of the amount of adaptivity on self-evaluation of learning. Data from 2,634 secondary school students using the DPL tool in a large-scale innovation project were analysed through multilevel linear regression. Results showed that students in adaptive learning tracks reported significantly higher self-evaluation scores compared to those in non-adaptive environments. While the subject area did not significantly moderate this effect, students reported higher self-evaluations in Science &amp; Technology than in Social Sciences. Finally, no significant association was observed between the amount of adaptivity and students' self-evaluations. Our study highlights that adaptive learning environments positively influence self-evaluation, though this influence does not differ by subject area. Furthermore, the extent to which the learning environment is personalised was not associated with self-evaluation of learning. This study demonstrates the benefits of personalised learning in real-world settings, despite the challenges of controlling variables in such environments.</p>
      </abstract>
      <kwd-group>
        <kwd>personalised learning</kwd>
        <kwd>digital learning technologies</kwd>
        <kwd>self-evaluation of learning</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <sec id="sec-1-1">
        <title>1.1. Digital Personalised Learning</title>
        <p>
          Students vary widely in prior knowledge, experiences, motivation, language proficiency,
socio-economic background, and more. Addressing these differences in traditional learning environments
is a significant challenge for educators. However, advances in digital technology and data analytics
now allow the creation of personalised learning environments tailored to individual learner
characteristics. Such environments hold promise in contexts such as
K-12 education and the corporate sector, where learners receive less guidance from an educator who
accounts for individual differences and are instead required to learn autonomously. Digital personalised
learning (DPL) is defined as “enabling and supporting learning based upon particular characteristics
of relevance or importance to students through technology, potentially adapting to students’ needs
by teaching at the right level” [1, p. 10]. Personalised learning supports individual needs and goals
by tailoring the learning process to each student's unique characteristics, such as their prior
knowledge and motivation [
          <xref ref-type="bibr" rid="ref2">2</xref>
          ]. Adaptivity is a concept often used in research on how personalised learning
can be facilitated through digital learning technologies. It can be defined as
“the ability of a learning system to diagnose a range of learner variables and to accommodate a
learner's specific needs by making appropriate adjustments to the learner's experience with the goal
of enhancing learning outcomes” [23, p. 276].
        </p>
      </sec>
      <sec id="sec-1-2">
        <title>1.2. Effectiveness of DPL</title>
        <p>
          It is hypothesized that if we adapt the learning environment to students’ characteristics, they will
learn more effectively. However, empirical findings on its effectiveness have been mixed. For
example, a meta-analysis by [
          <xref ref-type="bibr" rid="ref3">3</xref>
          ] showed that adaptivity was not a significant moderator of the
effectiveness of digital technologies to train reading. Furthermore, a systematic review of analytics
for adaptivity revealed mixed results, with only about half of the studies showing positive effects of
adaptation on learning outcomes [
          <xref ref-type="bibr" rid="ref4">4</xref>
          ]. A recent meta-analysis by [
          <xref ref-type="bibr" rid="ref5">5</xref>
          ] showed that personalised
learning has an overall medium positive effect on learning achievement. [
          <xref ref-type="bibr" rid="ref6">6</xref>
          ] also investigated the
effect of DPL on learning perceptions including students’ motivation and attitudes towards learning
and found a small effect size. The effectiveness of DPL is typically assessed using pre- and post-test
interventions that compare experimental and control conditions. However, these studies often fail
to detect significant effects (e.g., [
          <xref ref-type="bibr" rid="ref7">7</xref>
          ]). This could be attributed to the short duration of the interventions, which may yield only
small differences between adaptive and non-adaptive conditions.
        </p>
        <p>
          Analyzing log data presents an opportunity for a more nuanced understanding of student
behavior and outcomes in DPL environments [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ]. Log data offers precise and detailed insights into
students’ behavior and performance throughout the learning process. Furthermore, compared to
traditional pre- and post-tests, analyzing log data is non-intrusive and does not require shifting tasks
from the learning environment to a (standardized) testing format [
          <xref ref-type="bibr" rid="ref7">7</xref>
          ]. For instance, [
          <xref ref-type="bibr" rid="ref9">9</xref>
          ] investigated
learning effectiveness by analyzing the participants' usage data, such as the number of attempts at
testing themselves, and found that those who made more attempts gained higher post-test scores.
[
          <xref ref-type="bibr" rid="ref10">10</xref>
          ] measured learning efficiency by dividing the recorded total number of completed tasks by the
number of correctly answered problems. Additionally, they used Bayesian knowledge tracing to
generate moment-by-moment learning curves, and they concluded that the curves accurately
depicted how well pupils could regulate their learning across time. [
          <xref ref-type="bibr" rid="ref7">7</xref>
          ] applied a modeling approach
to assess learning efficiency through the extracted log data where the student's average learning rate
within a session and over sessions is modeled. The findings revealed that adaptive digital educational
games could promote learning efficiency across game-playing sessions compared to
non-adaptive games. Although log data show the potential to better capture students' behavioral
and learning patterns in DPL environments, such data remain underexplored [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ].
        </p>
      </sec>
      <sec id="sec-1-3">
        <title>1.3. The Effect of DPL on Self-Evaluation of Learning</title>
        <p>
          The effects of DPL on learning have mainly been studied through self-developed or standardized
knowledge tests or self-report questionnaires measuring students’ attitudes or motivation [
          <xref ref-type="bibr" rid="ref5 ref6">5, 6</xref>
          ].
However, limited attention has been given to how DPL environments influence students'
self-evaluation of learning [
          <xref ref-type="bibr" rid="ref11">11</xref>
          ]. Self-evaluating abilities can be defined as "a personal, unguided reflection
on performance for the purpose of generating an individually derived summary of one's own level
of knowledge, skill, and understanding in a particular area" [12, p.15]. This concept encompasses
both quantity estimates (e.g., "How many task requirements have I met?") and quality estimates (e.g.,
"How well have I done?") [
          <xref ref-type="bibr" rid="ref13">13</xref>
          ]. Building on this, [
          <xref ref-type="bibr" rid="ref14">14</xref>
          ] further distinguishes between formative
self-assessment, aimed at fostering learning during the process, and summative self-assessment, which
involves post-task evaluations of learning based on performance. Self-evaluation plays a crucial role
in students' academic achievement and self-regulated learning, where they set goals, monitor
progress, and adjust strategies to achieve those goals [
          <xref ref-type="bibr" rid="ref14 ref15">14, 15</xref>
          ]. These skills are important not only
within K-12 education but also in corporate settings, where employees often need to train
autonomously through digital learning platforms. Self-regulation skills allow learners to
estimate their mastery of certain topics and make decisions about the next best actions within the
learning environment.
        </p>
        <p>
          In digital learning environments, self-evaluation gains particular significance. Online
self-evaluation methods offer several advantages over traditional pen-and-paper approaches, such as
time-efficient scoring, immediate feedback, flexible assessment formats, and enhanced opportunities
for students to reflect on their learning [
          <xref ref-type="bibr" rid="ref16">16</xref>
          ]. By leveraging these digital features, DPL environments
have the potential to foster more accurate and meaningful self-evaluations, enabling students to
better understand their progress and adapt their learning strategies. Exploring the relationship
between DPL and students’ self-evaluation is essential to understand how DPL environments
influence not only cognitive outcomes but also students' metacognition.
        </p>
      </sec>
      <sec id="sec-1-4">
        <title>1.4. Moderator Variables of the Effectiveness of DPL</title>
        <p>
          The effectiveness of DPL is likely influenced by multiple factors, including individual learner
characteristics, subject area, and the design of the adaptive system. However, research findings on
the moderating effects of these variables remain inconsistent [
          <xref ref-type="bibr" rid="ref5 ref6">5, 6</xref>
          ]. Subject area appears to play a
role in the variability of DPL outcomes. A recent meta-analysis found that the impact of DPL differs
across disciplines, showing small effects in subjects like languages, math, and science; medium effects
in psychology; and larger effects in technology-based subjects [
          <xref ref-type="bibr" rid="ref5">5</xref>
          ]. In contrast, [
          <xref ref-type="bibr" rid="ref6">6</xref>
          ] found subject
domain was not a significant moderator of the effect of DPL. Moreover, self-evaluation scores may
also vary by subject area. For example, students in well-defined subjects like mathematics or science
may find it easier to estimate their abilities compared to students in more interpretive or subjective
areas like history or literature [
          <xref ref-type="bibr" rid="ref17">17</xref>
          ].
        </p>
        <p>
          The design of the adaptive system itself is another potential moderator. Some DPL environments
rely on simple, rule-based adaptations, while others employ more advanced systems that
continuously assess students’ abilities and dynamically adjust tasks [
          <xref ref-type="bibr" rid="ref18">18, 23</xref>
          ]. It is hypothesized that
more flexible and sophisticated adaptive systems may yield greater improvements in learning
outcomes. Despite these theoretical considerations, empirical evidence remains limited. Existing
research has yet to draw definitive conclusions about how these variables interact to influence the
effectiveness of DPL, underscoring the need for further research in this area.
        </p>
      </sec>
      <sec id="sec-1-5">
        <title>1.5. Research Questions</title>
        <p>Previous research on DPL has primarily been conducted through controlled experiments, often with
fixed instructions, to facilitate comparisons between adaptive and non-adaptive learning
environments. While these "efficacy" studies [20] are critical for establishing baseline effectiveness,
it is equally important to explore the potential of DPL in real-world settings. Such studies can include
larger participant groups, cover a broader range of subjects, and account for the variability inherent
in authentic educational contexts. Assessing the impact of DPL through self-evaluation of learning
offers a valuable approach. Self-evaluation is less intrusive for students, making it easier to integrate
into authentic learning settings. The following research questions were investigated: (1) Is there a
difference in students’ self-evaluation of learning between adaptive and non-adaptive learning
environments? (2) Is the difference in students’ self-evaluation of learning between adaptive and
non-adaptive learning tracks moderated by the subject area? (3) Is students’ self-evaluation of
learning influenced by the amount of adaptivity in the learning tracks?</p>
      </sec>
    </sec>
    <sec id="sec-2">
      <title>2. Methodology</title>
      <p>This study is part of a wider evaluation of an innovation project in which over 550 schools
participated. The aim of this project was to foster DPL in Flemish secondary schools (Belgium). The
DPL tool enables teachers to design personalised learning pathways for their students. This
personalisation is driven by key moments that include cognitive (e.g., "What is 2 x 2?"), motivational
(e.g., "Would you like to learn more about this topic by watching a video or reading a text?"), and
metacognitive (e.g., "Do you think you have mastered this topic?") questions (see Figure 1 for an
example learning track). Based on how students respond to these key moments, the system assigns
them a customized pathway within the learning track. An example of the design of a learning track
for computational thinking developed with the DPL tool can be found in [24]. Regarding the students’
self-evaluation in the DPL tool, students are shown the expected learning content and goals at the
beginning of each learning track. At the end, they are asked to assess their progress by moving a
slider from 'Completely not' to 'Definitely yes,' which corresponds to a numerical score from 1 to 10,
reflecting their perceived achievement of the learning goals.</p>
      <p>This study used log data collected from December 2021 to April 2024 from 2,511 students from
secondary schools actively participating in the DPL project. Of these, 683 students participated in
the adaptive learning environments (including at least one key moment) and 1,828 students in the
non-adaptive learning tracks. Subject areas were categorized as Science and Technology (e.g.,
Mathematics, Chemistry, and Biology) and Social Sciences (e.g., English, History, and Politics). The
distribution of students in each category was as follows: Adaptive Science and Technology (n = 21),
Non-adaptive Science and Technology (n = 238), Adaptive Social Sciences (n = 73), and Non-adaptive
Social Sciences (n = 208). Teachers used the DPL tool for multiple purposes, including the
instruction of new concepts, exercising learning content, formative assessment, and addressing
individual learner needs. The DPL tool was mostly used in the classroom, and to a limited extent at
home.</p>
      <p>A multilevel linear regression analysis was conducted including three levels because the data
structure was hierarchical: measurements were nested within students, and students were nested
within teachers. The proportion of variance explained at each level was quantified by the Intraclass
Correlation Coefficient (ICC). The analyses were conducted using the R package lmerTest [21].</p>
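      <p>The role of the ICC in this design can be illustrated with a minimal sketch: the ICC at a given level is that level's share of the total variance across all levels of the model. The variance components below are hypothetical values chosen only to mirror the ICCs reported in Section 3.1; they are not the model's actual estimates.</p>

```python
def icc(level: str, variance_components: dict) -> float:
    """Intraclass correlation for one level of a multilevel model:
    the proportion of total variance located at that level."""
    total = sum(variance_components.values())
    return variance_components[level] / total

# Hypothetical variance components for a three-level model
# (measurement, student, teacher); illustration only.
components = {"measurement": 0.24, "student": 0.51, "teacher": 0.25}
print(round(icc("student", components), 2))  # 0.51
print(round(icc("teacher", components), 2))  # 0.25
```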
    </sec>
    <sec id="sec-3">
      <title>3. Results</title>
      <sec id="sec-3-1">
        <title>3.1. Adaptive Versus Non-Adaptive Learning Tracks</title>
        <p>To answer RQ1, a three-level random intercept regression model was tested. The levels considered
were measurements (level 1), students (level 2), and teachers (level 3). The majority of the variance
in the student self-evaluation of learning was accounted for at the student level (ICC = 0.51), followed
by the teacher level (ICC = 0.25). This suggests that the individual differences among students play
a more significant role in explaining the variability in students’ self-evaluation of learning than the
differences between teachers. The results of this model (see Table 1 and Figure 2) showed
that self-evaluation of learning was significantly higher in adaptive learning tracks (M = 7.36, SE =
0.12) than in non-adaptive learning tracks (M = 7.14, SE = 0.12), although the effect size of the
difference was small (Cohen's d = 0.07).</p>
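        <p>For reference, Cohen's d for a two-group comparison is the mean difference divided by the pooled standard deviation. The sketch below uses the group means from this section and the group sizes from Section 2, but the standard deviation of 3.1 is a hypothetical placeholder (the text reports standard errors, not SDs), so the result is illustrative only.</p>

```python
import math

def cohens_d(m1: float, sd1: float, n1: int,
             m2: float, sd2: float, n2: int) -> float:
    """Cohen's d using the pooled standard deviation of two groups."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2)
                          / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

# Means 7.36 vs 7.14 and group sizes 683 vs 1,828 come from the text;
# the SD of 3.1 is an assumed placeholder.
d = cohens_d(7.36, 3.1, 683, 7.14, 3.1, 1828)
print(round(d, 2))  # 0.07
```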
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Influence of Subject Area</title>
        <p>Regarding RQ2, to test the moderating effects of subject area, a model including an interaction
between adaptivity and the subject area was tested (Model 2). Note that these analyses
are based on a smaller dataset because some of the learning tracks lacked metadata on
subject area. As shown in Table 1 and Figure 3, the moderating effect was not significant (Cohen's d
= 0.03, small). Yet, there was a significant main effect of ‘Subject Area’, with Social Science subjects
(M = 6.88, SE = 0.31) linked to lower self-evaluation scores compared to Science &amp; Technology
subjects (M = 7.58, SE = 0.28), with a small effect size (Cohen's d = 0.28).</p>
      </sec>
      <sec id="sec-3-3">
        <title>3.3. Influence of Amount of Adaptivity</title>
        <p>A three-level random intercept regression model was tested. The levels considered were
measurements (level 1), students (level 2), and teachers (level 3). The amount of adaptivity was
operationalized as the total number of key moments in a learning track. It is assumed that the higher
the total number of key moments in a learning track, the more adaptive the learning track is. The
results of this model (see Table 2, Figure 4) showed that there was no association between the amount
of adaptivity and students’ self-evaluation of learning (Cohen's d = 0.10, small).</p>
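        <p>The operationalization above amounts to a simple count over a track's log records. The record layout below (the "key_moment" flag) is an assumed schema, not the DPL tool's actual data format; it only illustrates the counting.</p>

```python
# Hypothetical log records for one learning track; the "key_moment" field
# is an assumed flag, not the DPL tool's actual schema.
track = [
    {"activity": "video", "key_moment": False},
    {"activity": "question", "key_moment": True},   # cognitive key moment
    {"activity": "exercise", "key_moment": False},
    {"activity": "question", "key_moment": True},   # metacognitive key moment
]

# Amount of adaptivity = total number of key moments in the track.
amount_of_adaptivity = sum(1 for record in track if record["key_moment"])
print(amount_of_adaptivity)  # 2
```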
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Discussion</title>
      <p>This study investigated the effectiveness of DPL environments on students’ self-evaluation of
learning. Specifically, we examined: (1) whether students’ self-evaluation of learning differs between
adaptive and non-adaptive environments, (2) the moderating effect of subject area on this
relationship, and (3) whether the degree of adaptivity influences students’ self-evaluation of learning.
Using multilevel regression analysis, we analyzed students' log data with a three-level model
encompassing measurement, student, and teacher levels.</p>
      <p>
        Our findings underscore the influence of learning environment structure on students’
self-evaluation of learning. Students in more personalised environments generally rated their learning
experiences more positively, aligning with prior research that highlights the benefits of DPL
environments on learning outcomes (e.g., [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ], [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ]). However, the dependent variable in this study—
self-evaluation of learning—has received limited attention in previous research, making direct
comparisons challenging.
      </p>
      <p>
        Interestingly, the subject area did not moderate the effectiveness of DPL environments on
students’ self-evaluations. This suggests that DPL environments can support positive self-evaluation
across diverse subjects, corroborating findings from [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ], who observed no significant differences in
effect sizes among learning domains. However, a main effect of subject area was noted, with students
rating themselves higher in science and technology subjects compared to social sciences.
      </p>
      <p>Contrary to our expectations, higher levels of adaptivity did not significantly enhance students’
self-evaluation of learning. We hypothesized that greater personalization would lead to better
outcomes, but the limited variation in adaptivity levels (ranging from 1 to 3) may have been
insufficient to reveal significant differences. This finding suggests the need for further exploration
of different operationalizations of adaptivity and their impact on learning.</p>
      <p>
        Several limitations should be considered when interpreting these findings. First, the validity of
self-evaluation scores could not be assessed due to the absence of an objective criterion.
Self-evaluation is prone to errors such as overconfidence, which may inflate students’ self-assessments
relative to their actual performance [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ]. Second, teachers had the flexibility to choose, adapt, or
create learning tracks, which required them to label activities and key moments accurately. Regular
checks were conducted; however, given the real-life nature of the dataset and the freedom given to
teachers, it is possible that a small portion of the dataset was labelled incorrectly (e.g., a cognitive
key moment wrongly labelled as a motivational key moment). Nevertheless, given the size of our dataset,
these inaccuracies are unlikely to have significantly impacted the results.
      </p>
      <p>
        To enhance the robustness of future studies, we recommend incorporating objective performance
measures, such as knowledge tests, to complement self-evaluation data. Research indicates that the
validity of self-evaluation improves with students’ experience in self-assessment and can be further
refined by questioning approaches (e.g., absolute vs. relative to peers), anonymity, and alignment
with objective benchmarks [22]. Additionally, this study employed a rule-based personalization
system designed by teachers. Exploring more advanced, AI-driven adaptive systems could provide
insights into the effectiveness of different adaptivity operationalizations [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ].
      </p>
      <p>This study examined the effectiveness of a DPL environment within a realistic,
curriculum-integrated setting across various subjects. While our findings primarily pertain to secondary
education, they have broader implications for lifelong learning—a key educational goal in the 21st
century. As digital learning environments increasingly emphasize autonomy and personalization,
fostering students' self-assessment skills will be crucial. Effective tools that support reflection and
self-monitoring can empower students, paving the way for autonomous, lifelong learning.</p>
    </sec>
    <sec id="sec-5">
      <title>5. Acknowledgements</title>
      <p>The Research Foundation - Flanders has financed this research. The research has taken place within
the following project: 1244424N: Fostering personalized learning in primary schools through the use
of learning analytics dashboards.</p>
    </sec>
    <sec id="sec-6">
      <title>6. References</title>
      <p>[20] C. Shawn Green, D. Bavelier, A. F. Kramer, S. Vinogradov, U. Ansorge, K. K. Ball, U. Bingel,
J. M. Chein, L. S. Colzato, J. D. Edwards, et al., Improving Methodological Standards in Behavioral
Interventions for Cognitive Enhancement, J. Cogn. Enhanc. 3.1 (2019) 2-29.
doi:10.1007/s41465-018-0115-y.</p>
      <p>[21] A. Kuznetsova, P. B. Brockhoff, R. H. B. Christensen, lmerTest Package: Tests in Linear Mixed
Effects Models, J. Stat. Softw. 82.13 (2017). doi:10.18637/jss.v082.i13.</p>
      <p>[22] P. A. Mabe, S. G. West, Validity of self-evaluation of ability: A review and meta-analysis, J.
Appl. Psychol. 67.3 (1982) 280-296. doi:10.1037/0021-9010.67.3.280.</p>
      <p>[23] J. L. Plass, S. Pawar, Toward a taxonomy of adaptivity for learning, J. Res. Technol. Educ. 52.3
(2020) 275-300. doi:10.1080/15391523.2020.1719943.</p>
      <p>[24] R. Van Schoors, S. M. Bhatt, J. Elen, A. Raes, W. Van den Noortgate, F. Depaepe, Design
and Development of a Digital Personalized Learning Track: Bridging the Gap between Textual
and Visual Programming, International Journal of Designs for Learning 15.1 (2024) 74-95.
doi:10.14434/ijdl.v15i1.35224.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>L.</given-names>
            <surname>Major</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G. A.</given-names>
            <surname>Francis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Tsapali</surname>
          </string-name>
          ,
          <article-title>The effectiveness of technology‐supported personalised learning in low‐ and middle‐income countries: A meta‐analysis</article-title>
          ,
          <source>Br. J. Educ. Technol. 52.5</source>
          (
          <year>2021</year>
          )
          <fpage>1935</fpage>
          -
          <lpage>1964</lpage>
          . doi:10.1111/bjet.13116.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>D.</given-names>
            <surname>Sampson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Karagiannidis</surname>
          </string-name>
          ,
          <article-title>Personalised learning: educational, technological and standardisation perspective</article-title>
          ,
          <source>Digit. Educ. Rev. No. (4)</source>
          (
          <year>2002</year>
          )
          <fpage>24</fpage>
          -
          <lpage>39</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>S.</given-names>
            <surname>Vanbecelaere</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Cornillie</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Sasanguie</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Reynvoet</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Depaepe</surname>
          </string-name>
          ,
          <article-title>The effectiveness of an adaptive digital educational game for the training of early numerical abilities in terms of cognitive, non-cognitive and efficiency outcomes</article-title>
          ,
          <source>Br. J. Educ. Technol.</source>
          (
          <year>2020</year>
          ). doi:10.1111/bjet.12957.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>M.</given-names>
            <surname>Ninaus</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Nebel</surname>
          </string-name>
          ,
          <article-title>A Systematic Literature Review of Analytics for Adaptivity Within Educational Video Games</article-title>
          ,
          <source>Front. Educ.</source>
          <volume>5</volume>
          (
          <year>2021</year>
          ). doi:10.3389/feduc.2020.611072.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name><given-names>A.</given-names> <surname>Tlili</surname></string-name>,
          <string-name><given-names>S.</given-names> <surname>Salha</surname></string-name>,
          <string-name><given-names>H.</given-names> <surname>Wang</surname></string-name>,
          <string-name><given-names>R.</given-names> <surname>Huang</surname></string-name>,
          <string-name><given-names>J.</given-names> <surname>Rudolph</surname></string-name>,
          <string-name><given-names>R.</given-names> <surname>Weidong</surname></string-name>,
          <article-title>Does personalization really help in improving learning achievement? A meta-analysis</article-title>,
          <source>in: 2024 IEEE International Conference on Advanced Learning Technologies (ICALT)</source>,
          IEEE, <year>2024</year>, pp.
          <fpage>13</fpage>-<lpage>17</lpage>. doi:10.1109/icalt61570.2024.00011.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name><given-names>L.</given-names> <surname>Zheng</surname></string-name>,
          <string-name><given-names>M.</given-names> <surname>Long</surname></string-name>,
          <string-name><given-names>L.</given-names> <surname>Zhong</surname></string-name>,
          <string-name><given-names>J. F.</given-names> <surname>Gyasi</surname></string-name>,
          <article-title>The effectiveness of technology-facilitated personalized learning on learning achievements and learning perceptions: a meta-analysis</article-title>,
          <source>Educ. Inf. Technol.</source>
          (<year>2022</year>). doi:10.1007/s10639-022-11092-7.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name><given-names>D.</given-names> <surname>Debeer</surname></string-name>,
          <string-name><given-names>S.</given-names> <surname>Vanbecelaere</surname></string-name>,
          <string-name><given-names>W.</given-names> <surname>Van Den Noortgate</surname></string-name>,
          <string-name><given-names>B.</given-names> <surname>Reynvoet</surname></string-name>,
          <string-name><given-names>F.</given-names> <surname>Depaepe</surname></string-name>,
          <article-title>The effect of adaptivity in digital learning technologies. Modelling learning efficiency using data from an educational game</article-title>,
          <source>Br. J. Educ. Technol.</source>
          <volume>52.5</volume>
          (<year>2021</year>)
          <fpage>1881</fpage>-<lpage>1897</lpage>. doi:10.1111/bjet.13103.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name><given-names>R.</given-names> <surname>Van Schoors</surname></string-name>,
          <string-name><given-names>J.</given-names> <surname>Elen</surname></string-name>,
          <string-name><given-names>A.</given-names> <surname>Raes</surname></string-name>,
          <string-name><given-names>F.</given-names> <surname>Depaepe</surname></string-name>,
          <article-title>An overview of 25 years of research on digital personalised learning in primary and secondary education: A systematic review of conceptual and methodological trends</article-title>,
          <source>Br. J. Educ. Technol.</source>
          <volume>52.5</volume>
          (<year>2021</year>)
          <fpage>1798</fpage>-<lpage>1822</lpage>. doi:10.1111/bjet.13148.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name><given-names>M.</given-names> <surname>Liu</surname></string-name>,
          <string-name><given-names>E.</given-names> <surname>McKelroy</surname></string-name>,
          <string-name><given-names>S. B.</given-names> <surname>Corliss</surname></string-name>,
          <string-name><given-names>J.</given-names> <surname>Carrigan</surname></string-name>,
          <article-title>Investigating the effect of an adaptive learning intervention on students' learning</article-title>,
          <source>Educ. Technol. Res. Dev.</source>
          <volume>65.6</volume>
          (<year>2017</year>)
          <fpage>1605</fpage>-<lpage>1625</lpage>. doi:10.1007/s11423-017-9542-1.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name><given-names>I.</given-names> <surname>Molenaar</surname></string-name>,
          <string-name><given-names>A.</given-names> <surname>Horvers</surname></string-name>,
          <string-name><given-names>R. S.</given-names> <surname>Baker</surname></string-name>,
          <article-title>What can moment-by-moment learning curves tell about students' self-regulated learning?</article-title>,
          <source>Learn. Instr.</source>
          (<year>2019</year>)
          <fpage>101206</fpage>. doi:10.1016/j.learninstruc.2019.05.003.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name><given-names>H.</given-names> <surname>Xie</surname></string-name>,
          <string-name><given-names>H.-C.</given-names> <surname>Chu</surname></string-name>,
          <string-name><given-names>G.-J.</given-names> <surname>Hwang</surname></string-name>,
          <string-name><given-names>C.-C.</given-names> <surname>Wang</surname></string-name>,
          <article-title>Trends and development in technology-enhanced adaptive/personalized learning: A systematic review of journal publications from 2007 to 2017</article-title>,
          <source>Comput. &amp; Educ.</source>
          <volume>140</volume>
          (<year>2019</year>)
          <fpage>103599</fpage>. doi:10.1016/j.compedu.2019.103599.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name><given-names>K. W.</given-names> <surname>Eva</surname></string-name>,
          <string-name><given-names>G.</given-names> <surname>Regehr</surname></string-name>,
          <article-title>"Iʼll never play professional football" and other fallacies of self-assessment</article-title>,
          <source>J. Contin. Educ. Health Prof.</source>
          <volume>28.1</volume>
          (<year>2008</year>)
          <fpage>14</fpage>-<lpage>19</lpage>. doi:10.1002/chp.150.
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          <source>[13] SAGE Handbook of Research on Classroom Assessment</source>
          , SAGE Publications, Inc., 2455 Teller Road,
          <source>Thousand Oaks California 91320 United States</source>
          ,
          <year>2013</year>
          . doi:
          <volume>10</volume>
          .4135/9781452218649.
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>H. L.</given-names>
            <surname>Andrade</surname>
          </string-name>
          ,
          <article-title>A Critical Review of Research on Student Self-Assessment, Front</article-title>
          . Educ.
          <volume>4</volume>
          (
          <year>2019</year>
          ). doi:
          <volume>10</volume>
          .3389/feduc.
          <year>2019</year>
          .
          <volume>00087</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>K.</given-names>
            <surname>North</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Maier</surname>
          </string-name>
          ,
          <string-name>
            <surname>O.</surname>
          </string-name>
          Haas (Eds.),
          <source>Knowledge Management in Digital Change</source>
          , Springer International Publishing, Cham,
          <year>2018</year>
          . doi:
          <volume>10</volume>
          .1007/978-3-
          <fpage>319</fpage>
          -73546-7.
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <surname>C</surname>
          </string-name>
          .
          <article-title>-h.</article-title>
          <string-name>
            <surname>Chen</surname>
          </string-name>
          ,
          <article-title>The implementation and evaluation of a mobile self- and peer-assessment system</article-title>
          ,
          <source>Comput. &amp; Educ. 55.1</source>
          (
          <year>2010</year>
          )
          <fpage>229</fpage>
          -
          <lpage>236</lpage>
          . doi:
          <volume>10</volume>
          .1016/j.compedu.
          <year>2010</year>
          .
          <volume>01</volume>
          .008.
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>D.</given-names>
            <surname>Dunning</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Heath</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. M.</given-names>
            <surname>Suls</surname>
          </string-name>
          , Flawed Self-Assessment,
          <source>Psychol. Sci. Public Interest 5.3</source>
          (
          <year>2004</year>
          )
          <fpage>69</fpage>
          -
          <lpage>106</lpage>
          . doi:
          <volume>10</volume>
          .1111/j.1529-
          <fpage>1006</fpage>
          .
          <year>2004</year>
          .
          <volume>00018</volume>
          .x.
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>M.</given-names>
            <surname>Sailer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Ninaus</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. E.</given-names>
            <surname>Huber</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Bauer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Greiff</surname>
          </string-name>
          ,
          <article-title>The End is the Beginning is the End: The closed-loop learning analytics framework</article-title>
          ,
          <source>Comput. Hum. Behav</source>
          . (
          <year>2024</year>
          )
          <article-title>108305</article-title>
          . doi:
          <volume>10</volume>
          .1016/j.chb.
          <year>2024</year>
          .
          <volume>108305</volume>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>