<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>WOLFRAM in Action: Teaching and Learning (Pseudo)Random Generation with Cellular Automata in Higher Education Settings</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Zach Anthis</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Lefteris Zacharioudakis</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Department of Computer Science, Neapolis University Pafos</institution>
          ,
          <addr-line>Danaes Avenue, Pafos, 8042</addr-line>
          ,
          <country country="CY">Cyprus</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Igor Sikorsky Kyiv Polytechnic Institute, National Technical University of Ukraine</institution>
          ,
          <addr-line>Kyiv, 03056</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>UCL Knowledge Lab, University College London</institution>
          ,
          <addr-line>Gower Street, London, WC1E 6BT</addr-line>
          ,
          <country country="UK">United Kingdom</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>This article presents ongoing work on WOLFRAM, an interactive EdTech tool designed to teach random generation by visualizing unidimensional Cellular Automata (CA). The web-based prototype integrates a series of gamified tasks with a Learning Analytics (LA) dashboard to provide students with hands-on experience in elementary CA mechanics whilst delivering detailed insights to instructors in real time. The backend tracks user progress through key performance metrics, including response times, task accuracy, and engagement levels. Preliminary results from a quasi-experimental study demonstrate substantial learning gains across two distinct cohorts: BSc Computer Science (CS) students in a Cybersecurity module and BSc Artificial Intelligence (AI) students in a Machine Learning module. Both cohorts reported high usability and motivation via quantitative Likert scale assessments, with ANOVA showing no significant differences in these areas. Yet, AI students exhibited notably higher improvements in learning clarity, likely due to stronger curricular alignment with CA concepts. In fact, regression analysis confirmed that being in the AI group significantly predicted greater clarity in general, even after controlling for other factors. Next steps involve the integration of adaptive learning features to dynamically adjust content difficulty based on recorded student performance, alongside additional predictive and prescriptive components that provide automated feedback (in the form of AI-driven hints) on an as-needed basis. Future research will focus on expanding the tool's scalability across various (adjoining) academic disciplines and investigating its impact on long-term retention of more advanced concepts such as fractal geometry, entropy estimation, algorithmic complexity, pattern formation, or self-organization.</p>
      </abstract>
      <kwd-group>
        <kwd>Cellular Automata (CA)</kwd>
        <kwd>Learning Analytics (LA)</kwd>
        <kwd>Computer Science (CS)</kwd>
        <kwd>Artificial Intelligence (AI)</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Recent advances in Artificial Intelligence (AI) and Cybersecurity are jointly reshaping modern
technology,
with
applications ranging from
everyday conveniences to complex
multi-tier
architectures bound to safeguard critical information. As these innovations proliferate, there is a
growing demand for individuals who possess a deep understanding of key concepts pertinent to their
inherent stochasticity. After all, from reactive machines to limited memory and self-awareness
ecosystems, predictive modeling increasingly relies on foundational principles of randomness, while
navigating statistical and epistemic uncertainties—and for good reason. Controlled randomization
has become crucial in several aspects of applied Machine Learning (ML); notably, data shuffling or
augmentation, initialization, error bounding, and
model training/testing by use of iterative
(hyper)parameter optimization [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. Moreover, as cyberthreats evolve, randomization plays a pivotal
role in mitigating risks and maintaining the overall integrity of secure communications, enhancing
the robustness of encryption algorithms which prevent malicious actors from easily decoding
sensitive data. However, it is crucial to distinguish between pseudo-random number generators
(PRNGs) and true random number generators (TRNGs) in this context. While PRNGs use
deterministic algorithms to produce sequences that appear random, they can be vulnerable if the
initial seed or algorithm becomes known to an attacker [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. This predictability poses significant risks
in hands-on cryptographic applications, but may also introduce selection, confirmation, or
algorithmic biases across the MLOps pipeline. In contrast, TRNGs derive randomness from
inherently unpredictable physical phenomena, such as electronic noise, radioactive decay, or
quantum effects [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. These sources provide true entropy, resulting in non-deterministic outputs that
cannot be predicted or replicated without direct access to the source. Their use is essential in
generating cryptographic keys that are truly random, to prevent attackers from being able to guess
or calculate them, thus maintaining the integrity and confidentiality of secure communications.
Similarly, TRNGs can add to the reliability of ML models (establishing convexity, stability, and
generalizability) while minimizing the risks of discernible patterns in scenarios where
objectivity/impartiality, transparency, fairness, security, and privacy are of the essence (e.g.,
adversarial training for fraud detection). On the other hand, the growing accessibility of
once-disruptive technologies, such as Internet-of-Things (IoT) [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] and Generative AI (genAI) [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ], has
fueled offensive strategies that could potentially exploit system vulnerabilities and has driven
demand for effective counter-measures grounded in quantum computing [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] or decentralization [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ].
Sure enough, this in turn has led to a standalone surge in ML adoption for managing complexity and
allocating resources on the receiving end [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. All in all, in both fields alike, these intricate decisions
are furnished by AI one way or another, and that usually means testing out contrastive estimation
tactics, iteratively sampling rows (instances) or columns (features), conducting randomized
reduction or recovery, and performing random repeats or restarts. In light of these developments,
the ability to understand, apply, and critically evaluate random generation has become a highly
sought-after skill [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ], underscoring the importance of teaching it to students in Computer Science
(CS) and its interdisciplinary domains (e.g., AI, data science, or computational engineering).
      </p>
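      <p>As a minimal illustration of the PRNG predictability discussed above (not code from WOLFRAM), a linear congruential generator shows how a leaked seed lets an attacker replay a generator's entire output stream; the multiplier and increment below are the widely used Numerical Recipes constants:</p>

```python
# A minimal sketch: a linear congruential generator (LCG) is fully
# determined by its seed, so anyone who learns the seed can reproduce
# every "random" value it will ever emit.

def lcg(seed, n, a=1664525, c=1013904223, m=2**32):
    """Generate n pseudo-random integers from an LCG with the given seed."""
    state = seed
    out = []
    for _ in range(n):
        state = (a * state + c) % m  # deterministic update rule
        out.append(state)
    return out

# The same seed always yields the same sequence, which is exactly the
# vulnerability noted for PRNGs in cryptographic settings:
victim_stream = lcg(42, 5)
attacker_replay = lcg(42, 5)   # attacker who knows seed=42 replays it
print(attacker_replay == victim_stream)  # True
```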
    </sec>
    <sec id="sec-2">
      <title>2. Background</title>
      <p>
        Despite the ubiquity of randomization in CS, teaching this core concept presents unique challenges.
If experience has taught us anything, it’s that traditional classroom methods often struggle to convey
the complexity and importance of random processes, leaving students without the intuitive grasp
necessary for effective application. Indeed, even when a strong conceptual understanding is
achieved, transferability to diverse AI and Cybersecurity contexts requires adaptive expertise, a
challenge often highlighted in both educational and cognitive psychology research [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ]. The
ontological basis of this study asserts that optimal learning occurs through interactive,
contextualized experiences that foster deeper exploration of underlying concepts (e.g., emergent
behaviors arising from local interactions, governed by simple, deterministic rules). Epistemologically
speaking, this aligns well with constructivist theories [
        <xref ref-type="bibr" rid="ref11 ref12">11, 12</xref>
        ], highlighting that the knowledge in
question is best built through active engagement and meaningful social contexts, which enable
learners to integrate new information with prior knowledge for deeper cognitive processing. Thus,
the need for educational tools that can transform abstract mathematical theory into tangible,
stimulating learning interactions is more pressing than ever.
      </p>
      <p>
        One promising approach lies in Cellular Automata (CA), a simple yet powerful computational
model, particularly suited for teaching complex patterns and behaviors that can arise from simple
rules, mirroring the unpredictability of randomization in dynamic systems [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ]. Its flexibility,
ease of use, and suitability for optimizing state-space exploration make it outperform many alternative models
in terms of interpretability, scalability, and cognitive alignment. All practicalities in probabilistic
model simulation (generating diversity in state transitions) aside, its capacity for interactive
experimentation and openness to visualization correspondingly reflect established test preconditions
(controllability) and tractable means for empirical verification (observability). This makes CA a
fitting representation framework for illustrating both randomization and complexity in a variety of
contexts, including AI and Cybersecurity.
      </p>
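      <p>For concreteness, the kind of unidimensional CA the tool visualizes can be sketched in a few lines. The snippet below is an illustration, not WOLFRAM's implementation: it evolves elementary Rule 30 from a single live cell and collects the centre-column bits, the classic CA-derived pseudo-random stream:</p>

```python
# Elementary Rule 30 on a circular row of 0/1 cells. For Rule 30 the new
# cell value is: left XOR (centre OR right).

def step_rule30(cells):
    """One synchronous Rule 30 update over a circular row."""
    n = len(cells)
    return [cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n])
            for i in range(n)]

def rule30_bits(width=65, steps=16):
    """Evolve from a single live centre cell; collect the centre column."""
    row = [0] * width
    row[width // 2] = 1
    bits = []
    for _ in range(steps):
        bits.append(row[width // 2])
        row = step_rule30(row)
    return bits

# An irregular-looking 0/1 stream produced by a simple deterministic rule
print(rule30_bits())  # starts 1, 1, 0, 1, ...
```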
      <p>
        Since Von Neumann's introduction of the “universal constructor mechanism” in the 1940s [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ],
educational research has persistently explored the benefits of CA in modeling complex natural
phenomena such as insect colonies, bird flight paths, and even DNA sequencing [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ], as well as in
cultivating computational thinking and enhancing problem-solving skills; more so among students
in CS [
        <xref ref-type="bibr" rid="ref16 ref17 ref18">16-18</xref>
        ]. Yet, its specific use for teaching principles of complexity theory, particularly in AI and
Cybersecurity, is underexplored. The present study fills this gap by examining the role of teaching
CA through an interactive, game-based tool, transforming the experience into an integrated
Exploratory Learning Environment (ELE) that can empower students from diverse backgrounds to
manipulate content in real time. Gamification, recognized for its ability to enhance motivation and
engagement, is turning into an evidently powerful feature of modern-day EdTech [
        <xref ref-type="bibr" rid="ref19 ref20 ref21">19-21</xref>
        ]. By
integrating game-like elements such as challenges, rewards, and leaderboards, educational platforms
can turn passive learning into active participation, shifting from traditional rote learning to a more
immersive experience, where students are motivated by the sense of achievement and progress. In
doing so, the design itself capitalizes on intrinsic motivators, like curiosity and competition,
encouraging students to solve problems, explore concepts, and persevere through challenging tasks
in a low-stakes environment.
      </p>
      <p>
        At the same time, Learning Analytics (LA) is well known to enhance the learning experience (LX)
in its own right, by collecting and processing usage data on student performance and behavior [
        <xref ref-type="bibr" rid="ref22 ref23">22, 23</xref>
        ].
Metrics such as task completion time, accuracy, or engagement levels, provide valuable insights into
how students interact with the content, which is crucial in Higher Education (HE) settings [24, 25].
For one, LA dashboards allow instructors to summarize relevant data at different granularities with
little, or no, programming skill, and thus personalize LXs by adjusting the difficulty or pacing of
tasks based on individual performance. Similarly, they enable the early identification of students who
may struggle or disengage, fostering contextualized interventions with tailored support, feedback,
or resources. This combination of game-based learning and just-in-time analytics creates a
responsive environment where data-driven strategies can improve both student outcomes and
instructional effectiveness alike.
      </p>
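      <p>By way of illustration only (the article does not publish WOLFRAM's data schema, so every field name below is an assumption), the three dashboard metrics just described can be derived from raw task-level events along these lines:</p>

```python
# Hypothetical event records -> per-student dashboard metrics:
# task completion time, task accuracy, and an engagement proxy.

def summarize(events):
    """Aggregate per-student metrics from task-level event records."""
    students = {}
    for e in events:
        s = students.setdefault(e["student"], {"times": [], "correct": 0,
                                               "attempts": 0, "clicks": 0})
        s["times"].append(e["seconds"])    # task completion time
        s["attempts"] += 1
        s["correct"] += int(e["correct"])  # accuracy numerator
        s["clicks"] += e["clicks"]         # interaction frequency
    return {
        sid: {
            "mean_time_s": sum(m["times"]) / len(m["times"]),
            "accuracy": m["correct"] / m["attempts"],
            "engagement": m["clicks"],
        }
        for sid, m in students.items()
    }

log = [
    {"student": "s1", "seconds": 40, "correct": True,  "clicks": 12},
    {"student": "s1", "seconds": 60, "correct": False, "clicks": 8},
    {"student": "s2", "seconds": 30, "correct": True,  "clicks": 20},
]
print(summarize(log))
# s1: mean_time_s 50.0, accuracy 0.5, engagement 20; s2: 30.0, 1.0, 20
```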
    </sec>
    <sec id="sec-3">
      <title>3. Research Objectives</title>
      <p>This study tests the effectiveness of a newly developed EdTech tool as an end-to-end solution for
improving learning outcomes across two distinct educational settings: a Cybersecurity module from
the BSc in Computer Science course and a Machine Learning module from the BSc in Artificial
Intelligence course. This first assessment of the tool aims to address the challenges of teaching
(pseudo-)random generation and contribute to a broader understanding of how data-driven learning
environments can support AI for Education (AIEd), as much as Education for AI (EdAI). To evaluate
the didactic impact of the intervention, it is essential to review how it influences both the user
experience and educational outcomes across these diverse learning contexts, in a systematic fashion;
specifically, to quantifiably measure usability and motivation subscales, and explore potential
differences in reported learning gains.</p>
      <p>In response, the Web-based Orchestrated Learning for Random Automata Modeling
(WOLFRAM) was developed as a unified platform specifically designed for teaching CA in a
structured and accessible way through experiential learning. The tool combines real-time
visualizations with gamified tasks and integrates teacher-centered dashboarding, to create an
immersive experience that allows students to explore randomization interactively. By tracking
metrics such as response times and task accuracy, it enables instructors to monitor student progress
and adapt learning pathways accordingly (see Figure 1).</p>
    </sec>
    <sec id="sec-4">
      <title>4. Methodology</title>
        <p>The broader methodological framework was patterned after the Design-Based Research Collective
(DBRC) paradigm [26], which has been used extensively in the past to align Technology-Enhanced
Learning Environments (TELEs) with their fundamental epistemological and theoretical assumptions [27, 28].
As a short-term educational program, the intervention in its entirety was based upon the
Four-Component Instructional Design (4C/ID) approach for complex learning [29]. To avoid common
detachments of EdTech research from policy and practice, the selected approach engages in iterative
designs and evaluations (collaborating with research subjects in the process), in the frame of two
distinct UG modules, to achieve the right balance between theory-building [30] and practical impact
[31].</p>
      <sec id="sec-4-1">
        <title>4.1. Participants</title>
        <p>Participants were selected based on their enrollment in courses directly aligned with the research
focus. The study involved 60 undergraduate students from our university, divided into two cohorts:
a) 36 first-year BSc CS students enrolled in the Cybersecurity module, split evenly into a control
group (n=18) and a treatment group (n=18); and b) 24 first-year BSc AI students enrolled in the ML
module, similarly divided into a control group (n=12) and a treatment group (n=12). The investigation
aimed to gauge the effectiveness of the tool as a means for explicating RNG through CA, while
comparing the results of the treatment group (who were granted access to use the platform freely)
with the control group who received traditional instruction. The selection of the study population
adheres to well-established guidelines in user-centred EdTech research [32] and aligns with
design-based research principles, where researchers proceed to empirically test the impact of proposed
interventions within real educational settings, while pursuing the generalizability of results to
similar academic environments. This methodological approach draws from multiple theoretical
perspectives and research paradigms so as to build understandings of the nature and conditions of
learning, cognition, and development [33]. Purposeful sampling was employed, to ensure that
participants are key stakeholders, directly engaged in learning the complex concepts the tool was
designed to address. Undergraduates enrolled in these modules were considered as highly relevant
subpopulations, given the CA’s twin capacity (as discrete dynamical systems and
information-processing systems) and the practical applications of random generators being foundational to both
these fields. The cohorts were chosen to assess whether the tool could meet distinct learning
requirements across both academic domains. This caters to the need for ecological validity (findings
being applicable to real-world scenarios), accounting for the experimental circumstances, stimuli
under investigation, and behavioral response [34].</p>
      </sec>
      <sec id="sec-4-2">
        <title>4.2. Study Design</title>
        <p>A quasi-experimental pre-test/post-test design was employed to critically evaluate the impact of the
tool on perceived usability, motivation, and learning outcomes, divided into three distinct phases: (a)
Pretest phase: All participants were given 30 minutes to complete a pre-test designed to evaluate
their baseline knowledge, including conceptual questions and problem-solving tasks, to provide a
comprehensive assessment of their understanding of theoretical/practical aspects of randomness; (b)
Intervention phase: Two days later, (randomly assigned) participants in the treatment groups
attended a single 3-hour session, featuring a series of in-class interactive tasks via WOLFRAM beta,
meant to cover CA discrete evolution and (pseudo-)random generation. At the same time, control
group attended a parallel session, receiving only traditional instruction (lectures and textbook-based
exercises) instead; (c) Post-test phase: A week after completing the intervention, both groups took a
post-test, identical in structure to the pre-test, to evaluate their learning gains. The post-test assessed
conceptual understanding, problem-solving skills, as well as the ability to apply randomization
principles in new contexts, whereby real-world systems reflect local interactions between individual
components leading to emergent global behaviors, or where global order may arise without
centralized control.</p>
        <p>The selected approach aimed for a model that is robust enough to detect moderate effects in a
real-world educational environment and provides sufficient statistical rigor to meet the research
objectives of a small-scale study. To ensure statistical validity, a power analysis was conducted,
assuming a modest effect size (Cohen's d = 0.5), power of 0.80, and a significance level of 0.05 [35],
which suggested a sample size of at least 64 participants (32 per group). However, due to practical
constraints in class settings, we ended up targeting slightly fewer participants per cohort, consistent
with typical EdTech studies [36, 37]. These numbers still provide adequate power, especially given
the inclusion of repeated measures, which tend to enhance statistical efficiency by controlling for
within-subject variability [38].</p>
      </sec>
      <sec id="sec-4-3">
        <title>4.3. Data Collection and Analysis</title>
        <p>The LA dashboard monitored several real-time metrics during the intervention, which included: (a)
Task Completion Times (duration taken to complete tasks involving CA-based randomization); (b)
Task Accuracy (the correctness of responses in problem-solving tasks, e.g., CA expansions); (c)
Engagement Levels (elapsed time spent on tasks and interaction frequency with the platform). In
addition to these metrics (and the pre- and post-tests administered to all participants), the study
utilized two technology-agnostic (validated) instruments to assess self-reported motivation and
usability respectively: a) the Intrinsic Motivation Inventory (IMI), to gauge student engagement,
interest, and perceived competence (clarity) [39, 40]; and b) the standardized 10-item System
Usability Scale (SUS) to calculate the perceived usability of the platform in the treatment group [41].
Importantly, in this study, learning clarity is taken to be subjective (with task accuracy representing
its objective counterpart), allowing for a balanced assessment of both perceived and demonstrated
understanding.</p>
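        <p>For reference, the standard SUS scoring arithmetic used with the 10-item instrument above (the published scale's procedure, not code from WOLFRAM) maps ten 1-5 Likert answers to a 0-100 score:</p>

```python
# Standard SUS scoring: odd-numbered items contribute (response - 1),
# even-numbered items contribute (5 - response); the 0-40 raw total is
# rescaled to 0-100 by multiplying by 2.5.

def sus_score(responses):
    """responses: ten Likert answers (1-5), in item order (item 1 first)."""
    if len(responses) != 10 or not all(r in range(1, 6) for r in responses):
        raise ValueError("SUS needs ten answers in the range 1-5")
    contributions = [
        (r - 1) if i % 2 == 0 else (5 - r)  # index 0 is item 1 (odd item)
        for i, r in enumerate(responses)
    ]
    return sum(contributions) * 2.5

print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # 100.0 (best possible)
print(sus_score([3] * 10))                        # 50.0 (neutral midpoint)
```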
        <p>In terms of primary statistical methods, an Analysis of Variance (ANOVA) was used to compare
usability, motivation, and learning clarity across the two cohorts, providing insight into whether
significant differences existed between the groups in their response to traditional instruction versus
the intervention. A regression analysis was then conducted to examine whether belonging to the AI
cohort predicted higher learning clarity (and/or task accuracy), while controlling for engagement
and usability, for a more focused exploration of the specific factors that could drive looked-for
learning outcomes. Analyses of data gathered from multimodal usage metrics in combination with
pre- and post-tests, SUS scores, and IMI assessments reveal significant findings related to the
effectiveness of the intervention.</p>
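        <p>As a sketch of the primary comparison (using hypothetical Likert scores, not the study's data), the one-way ANOVA F statistic for two cohorts can be computed directly:</p>

```python
# Pure-Python one-way ANOVA: F = MS_between / MS_within.

def anova_f(*groups):
    """Return the one-way ANOVA F statistic for two or more groups."""
    all_vals = [x for g in groups for x in g]
    grand = sum(all_vals) / len(all_vals)
    k, n = len(groups), len(all_vals)
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    ms_between = ss_between / (k - 1)  # between-group mean square
    ms_within = ss_within / (n - k)    # within-group mean square
    return ms_between / ms_within

ai_clarity = [4.2, 4.5, 4.1, 4.6, 4.4]  # hypothetical cohort means
cs_clarity = [3.8, 4.0, 3.7, 4.1, 3.9]
print(round(anova_f(ai_clarity, cs_clarity), 2))  # ≈ 15.56
```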
      </sec>
      <sec id="sec-4-4">
        <title>4.4. Preliminary Results</title>
        <p>Firstly, both cohorts demonstrated notable learning gains. The CS group improved by 42%, while the
AI group saw a 46% improvement, overall (using weighted averages). While the post-test scores were
significantly higher for both (p &lt; 0.01), the difference in improvement between groups was not
statistically significant (p = 0.07) though the AI cohort did show a trend of higher knowledge
retention.</p>
        <p>Nevertheless, the breakdown analysis of various LA metrics traced during the intervention phase
highlighted significant differences in task completion times, accuracy, and engagement
levels, which were further corroborated by the submitted feedback on usability and declared student
motivation (SUS and IMI respectively) across the treatment groups, as summarized in Table 2.</p>
        <p>[Table 2: results by cohort (AI, n=24; CS, n=36) and group (Control, Treatment); table cell values not recoverable.]</p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Discussion</title>
      <p>According to our own in-class observations, integrating RNG principles into CS curriculum through
elementary CA strengthens student understanding of the vital role of randomization in handling
uncertainty, in general. As expected, by simulating genuinely volatile systems and enabling users to
generate and test random numbers, the tool leveraged visual perception to help connect abstract
theoretical concepts to practical real-world applications across the two cohorts, aligning with
well-established constructivist theories [42, 43]. The flow successfully comprised disparate scenes
(interconnecting to form a looping map made up of several branching scenarios) that let learners
independently work through complex search subroutines, explore how their choices can lead to
different outcomes, or start over and see where another path might take them. This has equipped
students with the skills to grasp ML internal mechanics (e.g., overfitting or variance-bias tradeoff),
apply common troubleshooting techniques (e.g., early stopping, regularization, and cross-validation),
and deepen their understanding of cryptographic security and appreciation of true randomness in
defense against evolving cyberthreats. Nonetheless, the AI subgroup demonstrated significantly
higher task accuracy (p = 0.04) and showed higher engagement (in terms of duration), compared to
CS students (p = 0.05). Although both subgroups seem to have rated WOLFRAM highly, with no
significant difference in perceived usability (p = 0.20), AI students reported significantly higher
intrinsic motivation (p = 0.03). Regression analysis revealed that membership in the AI cohort
significantly predicted higher learning clarity (R² = 0.14, p = 0.02) and task accuracy (R² = 0.12, p =
0.04), when controlling for engagement and usability. This suggests that the direct relevance of
randomization in ML has likely contributed to the better outcomes in this group, and that the tool
can meet our expectations in demystifying analogous stochastic processes in the future (e.g., random
walks, Monte Carlo simulations, and noise generation).</p>
      <p>In conclusion, WOLFRAM has proven to be an overall effective tool, versatile enough for teaching
random generation concepts in both AI and Cybersecurity education. Results show that it enhances
teaching, learning, motivation, and engagement, particularly among AI students, where the
alignment of CA with the curriculum is perhaps more pronounced. From an LX perspective, the
findings highlight the role of interactive, gamified environments in improving both student
conceptual understanding and task accuracy, while fostering interest-related motivational constructs
such as active participation, reflection, self-regulation, and sustained autonomy. With regards to
instructional design implications, the differential impact of WOLFRAM across the two cohorts
suggests that content relevance is indeed critical for maximizing learning outcomes [44], whereas
tailoring learning tools to the specific domain—such as integrating CA into straightforward scenarios
relating to cybersecurity—may further enhance accuracy and engagement in fields where direct
applicability is less apparent [45]. This strong relationship between curriculum alignment and
outcome underscores the importance of designing EdTech tools that closely integrate with
course-specific objectives and individual learning paths.</p>
    </sec>
    <sec id="sec-6">
      <title>6. Limitations and Future Work</title>
      <p>While yielding promising immediate results, the study presents acknowledgeable limitations. Firstly,
the small sample size and quasi-experimental design restrict the generalizability of the findings. The
limited participant pool may not fully represent the diverse range of learners and educational
contexts, potentially skewing the portability and replicability of the results. Moreover, the focus on
short-term learning gains does not address the critical question of long-term retention. Assessing
how well participants retain and apply the learned material over extended periods, remains an
essential but unexplored dimension. Finally, perceived usefulness and dependability are yet to be
tested in broader contexts, beyond the scope of theoretical CS. For instance, it is still unclear how
the findings translate to teaching modern combinatorics in other fields, such as pure vs. applied
mathematics or electrical vs. mechanical engineering, which may entail different pedagogical
requirements and learning outcomes.</p>
      <p>To address these gaps, future research should prioritize larger, more diverse samples to enhance
the external validity of the findings. Also, longitudinal studies are needed to empirically assess the
long-term retention of knowledge and the pertinency of the tool in varied academic disciplines.
Specifically, future work should systematically examine the scalability of the WOLFRAM interface
across different domains and educational levels. This includes integrating adaptive learning features
that adjust content difficulty based on real-time performance data, thereby personalizing the learning
experience. Finally, a mixed-methods approach is recommended for future investigations,
incorporating qualitative judgements through semi-structured interviews and/or focus group
discussions, to capture the richness of human experience in terms of learning needs, preferences, or
barriers (e.g., self-efficacy, cognitive load, and affective evaluation aspects). Such an approach will
provide a more nuanced understanding of learner perceptions and the (meta-)cognitive processes
involved. These steps are essential for assessing WOLFRAM's practicality (its ability to function
effectually across diverse scenarios), adaptability (its capacity to adjust to evolving requirements),
and overall consistency (its reliability in providing a uniform experience and maintaining high
performance standards). Together, these factors determine how effectively the tool can support
diverse learning environments (and how successfully it supports various subject-specific learning
contexts) and thus will be key areas of future investigation.
[23] B. Rienties and L. Toetenel, "The impact of 151 learning designs on student satisfaction and
performance: social learning (analytics) matters," in Proceedings of the sixth international
conference on learning analytics &amp; knowledge, 2016, pp. 339-343.
[24] J. T. Avella, M. Kebritchi, S. G. Nunn, and T. Kanai, "Learning analytics methods, benefits, and
challenges in higher education: A systematic literature review," Online Learning, vol. 20, no. 2,
pp. 13-29, 2016.
[25] D. Gasevic, Y.-S. Tsai, S. Dawson, and A. Pardo, "How do we start? An approach to learning
analytics adoption in higher education," The International Journal of Information and Learning
Technology, 2019.
[26] D.-B. R. Collective, "Design-based research: An emerging paradigm for educational inquiry,"</p>
      <p>Educational researcher, vol. 32, no. 1, pp. 5-8, 2003.
[27] [27] F. Wang and M. J. Hannafin, "Design-based research and technology-enhanced learning
environments," Educational technology research and development, vol. 53, no. 4, pp. 5-23, 2005.
[28] L. Zheng, "A systematic literature review of design-based research from 2004 to 2013," Journal
of Computers in Education, vol. 2, no. 4, pp. 399-420, 2015.
[29] J. J. van Merriënboer and P. A. Kirschner, "4C/ID in the context of instructional design and the
learning sciences," in International handbook of the learning sciences: Routledge, 2018, pp.
169179.
[30] M. Warr, P. Mishra, and B. Scragg, "Designing theory," Educational Technology Research and</p>
      <p>Development, vol. 68, no. 2, pp. 601-632, 2020.
[31] R. Tormey, C. Hardebolle, F. Pinto, and P. Jermann, "Designing for impact: a conceptual
framework for learning analytics as self-assessment tools," Assessment &amp; Evaluation in Higher
Education, vol. 45, no. 6, pp. 901-911, 2020.
[32] M. Schmidt, A. A. Tawfik, I. Jahnke, and Y. Earnshaw, "Learner and user experience research:</p>
      <p>An introduction for the field of learning design &amp; technology," EdTech Books, 2020.
[33] S. Barab and K. Squire, "Design-based research: Putting a stake in the ground," in Design-based
Research: Psychology Press, 2016, pp. 1-14.
[34] M. A. Schmuckler, "What is ecological validity? A dimensional analysis," Infancy, vol. 2, no. 4,
pp. 419-436, 2001.
[35] M. A. Kraft, "Interpreting effect sizes of education interventions," Educational Researcher, vol.
49, no. 4, pp. 241-253, 2020.
[36] J. W. Creswell and J. D. Creswell, Research design: Qualitative, quantitative, and mixed methods
approaches. Sage Publications, 2017.
[37] L. Castañeda and B. Williamson, "Assembling New Toolboxes of Methods and Theories for
Innovative Critical Research on Educational Technology," Journal of New Approaches in
Educational Research, vol. 10, no. 1, pp. 1-14, 2021, doi: 10.7821/naer.2021.1.703.
[38] A. C. Cheung and R. E. Slavin, "How methodological features affect effect sizes in education,"
Educational Researcher, vol. 45, no. 5, pp. 283-292, 2016.
[39] M. R. Lepper and T. W. Malone, "Intrinsic motivation and instructional effectiveness in
computer-based education," in Aptitude, learning, and instruction: Routledge, 2021, pp. 255-286.
[40] C. Bosch, "Assessing the Psychometric Properties of the Intrinsic Motivation Inventory in
Blended Learning Environments," Journal of Education and e-Learning Research, vol. 11, no. 2,
pp. 263-271, 2024.
[41] P. Vlachogianni and N. Tselios, "Perceived usability evaluation of educational technology using
the System Usability Scale (SUS): A systematic review," Journal of Research on Technology in
Education, vol. 54, no. 3, pp. 392-409, 2022.
[42] D. C. Phillips, "The good, the bad, and the ugly: The many faces of constructivism," Educational
Researcher, vol. 24, no. 7, pp. 5-12, 1995.
[43] M. Tam, "Constructivism, instructional design, and technology: Implications for transforming
distance learning," Journal of Educational Technology &amp; Society, vol. 3, no. 2, pp. 50-60, 2000.
[44] E. A. Kohler, L. M. Elreda, and K. Tindle, "EdTech Context Inventory: Factor analyses for ten
instruments to measure edtech implementation context features," Computers &amp; Education, vol.
195, p. 104709, 2023.
[45] M. R. N. King, S. J. Rothberg, R. J. Dawson, and F. Batmaz, "Bridging the edtech evidence gap,"
Journal of Systems and Information Technology, vol. 18, no. 1, pp. 18-40, 2016.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>Z.</given-names>
            <surname>Anthis</surname>
          </string-name>
          ,
          <article-title>"The Black-Box Syndrome: Embracing Randomness in Machine Learning Models,"</article-title>
          in
          <source>Artificial Intelligence in Education</source>
          , Cham,
          <string-name>
            <given-names>M. M.</given-names>
            <surname>Rodrigo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Matsuda</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. I.</given-names>
            <surname>Cristea</surname>
          </string-name>
          , and
          <string-name>
            <given-names>V.</given-names>
            <surname>Dimitrova</surname>
          </string-name>
          , Eds.,
          <year>2022</year>
          : Springer International Publishing, pp.
          <fpage>3</fpage>
          -
          <lpage>9</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>D.</given-names>
            <surname>Eastlake</surname>
          </string-name>
          3rd,
          <string-name>
            <given-names>J.</given-names>
            <surname>Schiller</surname>
          </string-name>
          , and
          <string-name>
            <given-names>S.</given-names>
            <surname>Crocker</surname>
          </string-name>
          ,
          <article-title>"RFC 4086: Randomness requirements for security,"</article-title>
          <source>ed: RFC Editor</source>
          ,
          <year>2005</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>A. J.</given-names>
            <surname>Menezes</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P. C.</given-names>
            <surname>Van Oorschot</surname>
          </string-name>
          , and
          <string-name>
            <given-names>S. A.</given-names>
            <surname>Vanstone</surname>
          </string-name>
          , Handbook of applied cryptography. CRC press,
          <year>2018</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>C.</given-names>
            <surname>Vijayakumaran</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Muthusenthil</surname>
          </string-name>
          , and
          <string-name>
            <given-names>B.</given-names>
            <surname>Manickavasagam</surname>
          </string-name>
          ,
          <article-title>"A reliable next generation cyber security architecture for industrial internet of things environment,"</article-title>
          <source>International Journal of Electrical and Computer Engineering</source>
          , vol.
          <volume>10</volume>
          , no.
          <issue>1</issue>
          , p.
          <fpage>387</fpage>
          ,
          <year>2020</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>M.</given-names>
            <surname>Gupta</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Akiri</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Aryal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Parker</surname>
          </string-name>
          , and
          <string-name>
            <given-names>L.</given-names>
            <surname>Praharaj</surname>
          </string-name>
          ,
          <article-title>"From chatgpt to threatgpt: Impact of generative ai in cybersecurity and privacy,"</article-title>
          <source>IEEE Access</source>
          ,
          <year>2023</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>C.</given-names>
            <surname>Abellan</surname>
          </string-name>
          and
          <string-name>
            <given-names>V.</given-names>
            <surname>Pruneri</surname>
          </string-name>
          ,
          <article-title>"The future of cybersecurity is quantum,"</article-title>
          <source>IEEE Spectrum</source>
          , vol.
          <volume>55</volume>
          , no.
          <issue>7</issue>
          , pp.
          <fpage>30</fpage>
          -
          <lpage>35</lpage>
          ,
          <year>2018</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>O. O.</given-names>
            <surname>Malomo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. B.</given-names>
            <surname>Rawat</surname>
          </string-name>
          , and
          <string-name>
            <given-names>M.</given-names>
            <surname>Garuba</surname>
          </string-name>
          ,
          <article-title>"Next-generation cybersecurity through a blockchain-enabled federated cloud framework,"</article-title>
          <source>The Journal of Supercomputing</source>
          , vol.
          <volume>74</volume>
          , no.
          <issue>10</issue>
          , pp.
          <fpage>5099</fpage>
          -
          <lpage>5126</lpage>
          ,
          <year>2018</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>M.</given-names>
            <surname>Mosca</surname>
          </string-name>
          ,
          <article-title>"Cybersecurity in an era with quantum computers: Will we be ready?,"</article-title>
          <source>IEEE Security &amp; Privacy</source>
          , vol.
          <volume>16</volume>
          , no.
          <issue>5</issue>
          , pp.
          <fpage>38</fpage>
          -
          <lpage>41</lpage>
          ,
          <year>2018</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>L.</given-names>
            <surname>Crocetti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Nannipieri</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Di Matteo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Fanucci</surname>
          </string-name>
          , and
          <string-name>
            <given-names>S.</given-names>
            <surname>Saponara</surname>
          </string-name>
          ,
          <article-title>"Review of methodologies and metrics for assessing the quality of random number generators,"</article-title>
          <source>Electronics</source>
          , vol.
          <volume>12</volume>
          , no.
          <issue>3</issue>
          , p.
          <fpage>723</fpage>
          ,
          <year>2023</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>S. M.</given-names>
            <surname>Barnett</surname>
          </string-name>
          and
          <string-name>
            <given-names>S. J.</given-names>
            <surname>Ceci</surname>
          </string-name>
          ,
          <article-title>"When and where do we apply what we learn?: A taxonomy for far transfer,"</article-title>
          <source>Psychological Bulletin</source>
          , vol.
          <volume>128</volume>
          , no.
          <issue>4</issue>
          , p.
          <fpage>612</fpage>
          ,
          <year>2002</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>D. H.</given-names>
            <surname>Schunk</surname>
          </string-name>
          ,
          <article-title>Learning theories: An educational perspective</article-title>
          .
          <source>Pearson Education</source>
          , Inc,
          <year>2012</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>A.</given-names>
            <surname>Pritchard</surname>
          </string-name>
          ,
          <article-title>Ways of learning: Learning theories for the classroom</article-title>
          .
          <source>Routledge</source>
          ,
          <year>2017</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>A. G.</given-names>
            <surname>Hoekstra</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Kroc</surname>
          </string-name>
          , and
          <string-name>
            <given-names>P. M.</given-names>
            <surname>Sloot</surname>
          </string-name>
          ,
          <article-title>Simulating complex systems by cellular automata</article-title>
          .
          <source>Springer Science &amp; Business Media</source>
          ,
          <year>2010</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>J.</given-names>
            <surname>von Neumann</surname>
          </string-name>
          and
          <string-name>
            <given-names>A. W.</given-names>
            <surname>Burks</surname>
          </string-name>
          ,
          <article-title>"Theory of self-reproducing automata,"</article-title>
          <year>1966</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>R. J.</given-names>
            <surname>Gaylord</surname>
          </string-name>
          and
          <string-name>
            <given-names>K.</given-names>
            <surname>Nishidate</surname>
          </string-name>
          ,
          <article-title>Modeling nature: Cellular automata simulations with Mathematica®</article-title>
          . Springer,
          <year>2013</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>G.</given-names>
            <surname>Faraco</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Pantano</surname>
          </string-name>
          , and
          <string-name>
            <given-names>R.</given-names>
            <surname>Servidio</surname>
          </string-name>
          ,
          <article-title>"The use of cellular automata in the learning of emergence,"</article-title>
          <source>Computers &amp; Education</source>
          , vol.
          <volume>47</volume>
          , no.
          <issue>3</issue>
          , pp.
          <fpage>280</fpage>
          -
          <lpage>297</lpage>
          ,
          <year>2006</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>T.</given-names>
            <surname>Staubitz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Teusner</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Meinel</surname>
          </string-name>
          , and
          <string-name>
            <given-names>N.</given-names>
            <surname>Prakash</surname>
          </string-name>
          ,
          <article-title>"Cellular Automata as basis for programming exercises in a MOOC on Test Driven Development,"</article-title>
          in 2016 IEEE International Conference on Teaching, Assessment, and Learning for Engineering (TALE),
          <year>2016</year>
          : IEEE, pp.
          <fpage>374</fpage>
          -
          <lpage>380</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>M.</given-names>
            <surname>Voskoglou</surname>
          </string-name>
          and
          <string-name>
            <given-names>S.</given-names>
            <surname>Buckley</surname>
          </string-name>
          ,
          <article-title>"Problem Solving and Computational Thinking in a Learning Environment,"</article-title>
          12/02
          <year>2012</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>B.</given-names>
            <surname>Marín</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Frez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Cruz-Lemus</surname>
          </string-name>
          , and
          <string-name>
            <given-names>M.</given-names>
            <surname>Genero</surname>
          </string-name>
          ,
          <article-title>"An empirical investigation on the benefits of gamification in programming courses,"</article-title>
          <source>ACM Transactions on Computing Education (TOCE)</source>
          , vol.
          <volume>19</volume>
          , no.
          <issue>1</issue>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>22</lpage>
          ,
          <year>2018</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <given-names>M.</given-names>
            <surname>Sailer</surname>
          </string-name>
          and
          <string-name>
            <given-names>L.</given-names>
            <surname>Homner</surname>
          </string-name>
          ,
          <article-title>"The gamification of learning: A meta-analysis,"</article-title>
          <source>Educational Psychology Review</source>
          , vol.
          <volume>32</volume>
          , no.
          <issue>1</issue>
          , pp.
          <fpage>77</fpage>
          -
          <lpage>112</lpage>
          ,
          <year>2020</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21]
          <string-name>
            <given-names>C. V.</given-names>
            <surname>de Carvalho</surname>
          </string-name>
          and
          <string-name>
            <given-names>A.</given-names>
            <surname>Coelho</surname>
          </string-name>
          ,
          <article-title>"Game-based learning, gamification in education and serious games,"</article-title>
          vol.
          <volume>11</volume>
          , ed: MDPI,
          <year>2022</year>
          , p.
          <fpage>36</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [22]
          <string-name>
            <given-names>M.</given-names>
            <surname>Bienkowski</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Feng</surname>
          </string-name>
          , and
          <string-name>
            <given-names>B.</given-names>
            <surname>Means</surname>
          </string-name>
          ,
          <article-title>"Enhancing Teaching and Learning through Educational Data Mining and Learning Analytics: An Issue Brief,"</article-title>
          Office of Educational Technology
          ,
          <source>US Department of Education</source>
          ,
          <year>2012</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>