=Paper=
{{Paper
|id=Vol-3836/paper4
|storemode=property
|title=Supporting Personalized Lifelong Learning with Human-Centered Artificial Intelligence Systems
|pdfUrl=https://ceur-ws.org/Vol-3836/paper1.pdf
|volume=Vol-3836
|authors=Alireza Gharahighehi,Rani Van Schoors,Paraskevi Topali,Jeroen Ooge
|dblpUrl=https://dblp.org/rec/conf/all/GharahighehiSTO24
}}
==Supporting Personalized Lifelong Learning with Human-Centered Artificial Intelligence Systems==
Alireza Gharahighehi1,2,*, Rani Van Schoors1,3, Paraskevi Topali4 and Jeroen Ooge5

1 KU Leuven, Campus Kulak, Department of Public Health and Primary Care, Kortrijk, Belgium
2 Itec, imec research group at KU Leuven, Kortrijk, Belgium
3 KU Leuven, Centre for Instructional Psychology and Technology, Leuven, Belgium
4 Radboud University, NOLAI | National Education Lab AI, Behavioural Science Institute, Nijmegen, the Netherlands
5 Utrecht University, Department of Information and Computing Sciences, Utrecht, the Netherlands

ALL’24: Workshop on Adaptive Lifelong Learning, July 08–12, 2024, Recife, Brazil
alireza.gharahighehi@kuleuven.be (A. Gharahighehi); rani.vanschoors@kuleuven.be (R. V. Schoors); evi.topali@ru.nl (P. Topali); j.ooge@uu.nl (J. Ooge)
ORCID: 0000-0003-1453-1155 (A. Gharahighehi); 0000-0003-1462-268X (R. V. Schoors); 0000-0002-1951-2327 (P. Topali); 0000-0001-9820-7656 (J. Ooge)
© 2024 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).
CEUR Workshop Proceedings (ceur-ws.org), ISSN 1613-0073
Abstract
Technological advancements supported by artificial intelligence bring exciting promises for lifelong
learning, including deeper insights into large datasets on learning processes, personalized recommen-
dations, and automated scaffolding. Yet, the current rapid evolution can also cause skill gaps among
educational stakeholders and often lacks a human-centered perspective. This paper discusses the op-
portunities and challenges of artificial intelligence for lifelong learning, focusing on three main facets:
adaptivity for personalized learning, explainability and controllability of AI-supported learning systems,
and human-centered learning analytics and AI. Drawing from our discussion of existing research, we
suggest directions for future studies to further advance these areas.
Keywords
adaptive learning, lifelong learning, explainable AI, human-AI interaction, human-centred design
1. Introduction
Over the past few decades, spectacular growth in computing power facilitated the collection
and analysis of huge datasets, often unveiling insights previously hidden. As a result, artificial
intelligence (AI) has boomed, raising high expectations about its potential to realize break-
throughs in many application domains. Interestingly, the histories of AI and education have
long been intertwined [1]. In research circles, the lively interplay between AI and education
became known as the AIEd field. Given this rich shared history, it should come as no surprise that AI is
nowadays embedded in numerous educational technologies, aiming to support and enhance
learning and teaching activities [2]. In fact, the scope has been broadened to lifelong learning,
acknowledging that learning goes beyond formal education at the start of people’s lives.
Compared to traditional learning tools, an essential advantage of AI-supported tools is adap-
tivity. For example, AI can provide scaffolds (e.g., clarification, encouragement, and feedback)
and facilitate connections with peer helpers when needed. Moreover, AI-supported tools such
as intelligent dashboards can support teachers and trainers by visualizing learning processes
and proposing prescriptions or predictions, which in turn enables them to better respond (proac-
tively) to learners’ personal needs [3–8]. Being able to adapt learning systems is especially
relevant from the perspective of lifelong learning: learners continuously improve their skills
and develop new ones, evolving with their changing working and living environments [9].
Considering these benefits, AI is likely to continue changing education [4, 10].
Despite its potential, however, the rapid growth of AI in education makes it hard for people to
keep up. This may result in skill gaps, reduced flexibility, poor understanding of AI functionalities,
and lowered agency [8, 11]. Furthermore, it is clear that AI cannot operate in isolation from
other research disciplines [12]. Consequently, the AIEd field is increasing research efforts into
human-centered AI [2], involving various educational stakeholders such as teachers, students,
and educational designers. For example, human-centered AI research focuses on support for
collaboration between technology and educational stakeholders, developing AI tools based
on different perspectives, and balancing human control and automated adaptivity [13–15].
Another topic relates to how AI-supported systems can explain their outcomes in terms that are
understandable, relevant, and actionable for involved stakeholders [16, 17]. Addressing these
human-centered challenges is not trivial because teachers, for example, show different needs
and interactions with AI in different scenarios [18, 19].
In sum, AI is reshaping education, creating new opportunities and challenges in terms of
research, operationalization, and policy-making. This paper further discusses this evolution,
focusing on three related AIEd topics: (1) adaptivity for personalized learning, (2) explainability
and controllability for AI-supported learning systems, and (3) human-centered learning analytics
and AI, with an emphasis on keeping stakeholders in the loop. We hope our overview sheds
light on ongoing research lines and inspires future work on supporting personalized lifelong
learning with human-centered AI systems.
2. Adaptivity for Personalized Learning
Personalization fosters a unique online experience for each user, which can in turn boost
engagement and satisfaction [20]. In recent years, especially following the COVID-19 pandemic,
personalized education through adaptive digital tools has gained significant attention. Compared
to traditional learning methods, a major benefit of adaptive online learning is the possibility to
tailor learning experiences to individual students or small groups. This new learning experience
moves away from the traditional non-personalized approach and promises advantages such as
access to learning anytime and anywhere, and improved cognitive and non-cognitive learning
outcomes [2, 6, 7, 21–23]. Additionally, as learner diversity continues to grow, adaptive learning
can offer personalized exercises, scaffolding, and assessments, alleviating some of the workload
for teachers and trainers [10].
Learner models are the cornerstone of adaptivity and personalization within educational systems [24]. These dynamic models encapsulate the evolving knowledge and competencies of
learners [25], and are constructed using a variety of approaches, including cognitive, pragmatic,
and data-driven approaches. Given the availability of data on learners’ behavior, data-driven
approaches are increasingly being used to construct learner models. Two pivotal methods for
constructing learner models are knowledge tracing and recommendation systems.
Knowledge tracing (KT) refers to a suite of methods that model learners’ competencies
based on their previous responses to exercises related to certain knowledge concepts. These
methods predict the likelihood of a learner providing a correct answer to future exercises. KT
methods fall into three categories: Bayesian models, logistic models, and deep learning-based
models [26]. The first two categories model learner knowledge with traditional probabilistic
and logistic models, respectively. In contrast, deep KT is a newer category pioneered by Piech
et al. [27], who used a Long Short-Term Memory network. Deep KT now encompasses various
subcategories, including sequential, attentive, graph-based, memory-augmented, and forgetting-
and memory-aware models [28]. For most large public KT datasets, deep KT models tend to
outperform their traditional counterparts, albeit with less transparent predictions.
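To make the Bayesian category concrete, below is a minimal sketch of a classic Bayesian knowledge tracing update in Python. The parameter values (slip, guess, and learning probabilities) are illustrative assumptions, not estimates from any of the systems cited here.

```python
# Minimal Bayesian Knowledge Tracing (BKT) sketch: update the estimated
# probability that a learner has mastered one knowledge concept after
# observing whether each answer was correct. Parameter values are illustrative.

def bkt_update(p_mastery, correct, p_slip=0.1, p_guess=0.2, p_learn=0.15):
    """Return the posterior mastery probability after one observed response."""
    if correct:
        # Probability the learner knew the concept given a correct answer
        evidence = p_mastery * (1 - p_slip)
        posterior = evidence / (evidence + (1 - p_mastery) * p_guess)
    else:
        # Probability the learner knew the concept given an incorrect answer
        evidence = p_mastery * p_slip
        posterior = evidence / (evidence + (1 - p_mastery) * (1 - p_guess))
    # Account for the chance of learning the concept after this practice opportunity
    return posterior + (1 - posterior) * p_learn

# Example: a learner starts at 30% estimated mastery and answers three exercises
p = 0.3
for answer in [True, False, True]:
    p = bkt_update(p, answer)
    print(f"estimated mastery: {p:.2f}")
```

Deep KT models replace this hand-specified update with learned sequence models, which is where the gain in predictive accuracy, and the loss in transparency, comes from.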
Recommendation systems (RSs) encompass machine learning methods that capture users’
preferences to suggest items that align closely with those preferences. There are two main
types of RSs: content-based and collaborative filtering. Content-based RSs recommend items
by matching their features with user profiles, selecting items that best resonate with users’
interests. In contrast, collaborative filtering RSs infer user preferences and needs through
collaborative information among users or items. Although collaborative filtering RSs generally
surpass content-based models in performance [29], they are more susceptible to the cold-start
problem [30] and popularity bias [31]. Ilídio et al. [32] provide an example of how collaborative
filtering can recommend learning materials and learning paths. The challenge in this context is
that negatively labeled training data is unreliable: it is unclear whether learners deliberately chose not to interact with certain learning materials or paths, or simply never encountered them. The proposed algorithm combines multiple tiers
of local and global Random Forests and outperforms various collaborative filtering methods in
terms of normalized discounted cumulative gain and recall.
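For illustration, the following sketch implements a generic item-based collaborative filtering recommender over an implicit learner-material interaction matrix. It is not the Random Forest approach of Ilídio et al. [32], and the toy interaction data is an assumption.

```python
# Generic item-based collaborative filtering sketch on an implicit
# learner x learning-material interaction matrix (1 = interacted, 0 = unknown).
import numpy as np

interactions = np.array([
    [1, 1, 0, 0, 1],   # learner 0
    [1, 0, 1, 0, 0],   # learner 1
    [0, 1, 0, 1, 1],   # learner 2
    [1, 1, 1, 0, 0],   # learner 3
], dtype=float)

# Cosine similarity between learning materials (columns)
norms = np.linalg.norm(interactions, axis=0, keepdims=True)
norms[norms == 0] = 1.0
item_sim = (interactions / norms).T @ (interactions / norms)

def recommend(learner_id, k=2):
    """Score unseen materials by their similarity to the learner's past interactions."""
    seen = interactions[learner_id]
    scores = item_sim @ seen             # aggregate similarity to already-used materials
    scores[seen > 0] = -np.inf           # exclude materials the learner already used
    return np.argsort(scores)[::-1][:k]  # indices of the top-k recommendations

print(recommend(1))  # materials most similar to what learner 1 has already used
```

Note how unobserved entries are simply treated as zeros here, which is exactly the weak-label issue discussed above.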
In contrast to the structured approach of traditional education, adaptivity is essential in
lifelong learning as learners are presented with more choices and greater autonomy. This
necessitates creating learner models that can adapt to the educational context. For example, in
Massive Open Online Courses (MOOCs), learners have the autonomy to decide when, how, and
which courses to pursue from a vast selection of courses and learning paths. It is also crucial to
gain a comprehensive understanding of learners, providing adaptivity that considers multiple
criteria such as preferences, engagement [33], and proficiency within the system.
3. Explainability and Controllability
Adaptive educational systems are just one example of how AI is integrated into education. Other
examples include supporting dropout prediction, assessment, providing feedback, improving
teaching, and training teachers [34]. Integrating AI into education brings two challenges that
also occur in more general AI-supported systems: AI systems should be transparent and still
allow for human control. The following sections discuss these challenges for education.
3.1. Explanations in Educational AI Systems
AI models are often “opaque” or “black boxes,” meaning it is unclear how they obtain outcomes.
Burrell [35] distinguishes between three forms of opacity. First, “opacity as intentional secrecy”
relates to AI models being protected by copyrights or intellectual property measures, which
makes it impossible to check details such as their training process. Second, “opacity due to
scale and how algorithms operate” reflects that well-performing AI models are often inherently
complex due to their huge amount of training data and parameters, which makes it infeasible
for humans to understand their decision-making process. Finally, “opacity as technical illiteracy”
refers to many people lacking the training to grasp the mathematics and coding underlying AI
models. Opening up black-box AI models to make them more explainable and gain insights into
their decision-making is the holy grail in the field of explainable AI (XAI) [36].
Explainability can be tackled from an algorithm and a human perspective, respectively linking
to the second and third forms of opacity described above. Algorithm-centered explainability
focuses on AI models themselves, their input, training data, and outcomes [37]. For example,
researchers developed many post-hoc techniques to estimate feature importance or approximate
complex black-box models with simplified interpretable ones [38–41]. Furthermore, visualiza-
tions can reveal relations between model inputs and outputs, and model behavior. In contrast,
human-centered explainability focuses on the target audience of explanations and acknowledges
that different people and contexts require different explainability solutions [42, 43]. To elicit
which insights people look for, Liao et al. [44] developed an XAI question bank with prototypi-
cal questions for explanation types such as how, why (not), what if, how to be that, and how to still be this.
Furthermore, Miller [45] reviewed research from the social sciences to conclude that ‘good’
explanations are contrastive, selective, social, and refrain from including probabilities.
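As a concrete instance of the algorithm-centered, post-hoc techniques mentioned above, the sketch below estimates feature importance by permutation for a black-box classifier. The learner features, labels, and their names are synthetic assumptions used only for illustration.

```python
# Post-hoc, model-agnostic explanation sketch: permutation feature importance
# for a black-box classifier. The data is synthetic and only for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 500
# Hypothetical learner features: practice time, quiz score, forum activity
X = np.column_stack([
    rng.normal(60, 15, n),   # minutes of practice per week
    rng.uniform(0, 1, n),    # average quiz score
    rng.poisson(2, n),       # forum posts
])
# Synthetic "passed the course" label that mostly depends on the quiz score
y = (X[:, 1] + 0.1 * rng.normal(size=n) > 0.5).astype(int)

model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature in turn and measure how much the score drops
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, imp in zip(["practice_minutes", "quiz_score", "forum_posts"],
                     result.importances_mean):
    print(f"{name}: {imp:.3f}")
```

Whether such importance scores are understandable and actionable for learners or teachers is precisely the human-centered question raised above.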
In education, researchers have long argued for transparency in adaptive learning systems
through open learner models [46], which show learners the personal data used for adaptivity,
for example, skill mastery, engagement, and misconceptions. For example, Abdi et al. [47] found
that an open learner model engaged students more and increased their perceived understanding
of the rationale behind recommendations on a platform that recommends learning activities. A
prerequisite for open learner models that show skill mastery is, of course, generating such estimates.
For instance, Chen et al. [48] applied several statistical and machine learning methods to predict
the number of latent skills and their relation to learning items in an online learning environment.
This information could be used to better justify recommended learning items in terms of the
skills they support. The results suggest that machine learning methods such as random forests
generally outperform their statistical counterparts in terms of correct estimation proportion.
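A generic way to explore such estimates, not necessarily the methods used by Chen et al. [48], is to compare low-rank factorizations of a learner-item response matrix, as in the following sketch with synthetic data.

```python
# Hedged sketch: gauge a plausible number of latent skills from a binary
# learner x item response matrix by comparing non-negative matrix factorizations
# of different ranks. Synthetic data; not the evaluation reported in [48].
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(1)
true_skills = rng.random((50, 3))       # 50 learners, 3 underlying skills
item_loadings = rng.random((3, 20))     # 20 items drawing on those skills
responses = (true_skills @ item_loadings > 0.75).astype(float)

errors = {}
for k in range(1, 7):
    model = NMF(n_components=k, init="nndsvda", max_iter=500, random_state=0)
    model.fit(responses)
    errors[k] = model.reconstruction_err_

# Inspect where adding components stops reducing the error substantially
for k, err in errors.items():
    print(f"{k} latent skills: reconstruction error {err:.2f}")
```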
The recent surge of XAI research has revived transparency research in education, stressing
the importance of involving different stakeholders, studying potential benefits and pitfalls, and
designing educational AI systems in a human-centered way [17]. Concretely, Ooge et al. [16]
justified recommended exercises on an e-learning platform with visual explanations clarifying
a collaborative filtering step and found these explanations increased students’ initial trust
in the platform. Furthermore, Barria-Pineda et al. [49] found that explanations can increase
engagement with recommended learning content, yielding higher success rates.
3.2. Learner Control in Educational AI Systems
Besides explainability, a second important consideration in AI systems is how much control
is still given to people during decision-making. Traditionally, AI systems are positioned on a
spectrum ranging from full human control to full automation [50]. On one end, full human
control bypasses the potential benefits of AI automation, and in educational contexts, inexperi-
enced learners might not be ready to completely control their learning process. On the other
end, full automation ignores people’s domain expertise and can have disadvantages such as
reduced cognitive engagement due to a lack of control. Thus, automating tasks while keeping
people in the loop is a plausible compromise. Turning control into a two-dimensional concept,
Shneiderman [51] argued that human control and automation are not mutually exclusive. In
other words, AI systems can be designed such that they are highly automated and provide
high levels of human control at the same time. In addition, there exist different paradigms for
interacting with AI, including intermittent, continuous, and proactive interaction [52].
In education, many researchers have explored ways to give learners greater responsibility and
control over all aspects of learning and stimulate informed decision-making during practice [15].
This appreciation for learners being actively involved in their learning process led to favoring
the term “learner control” rather than “student control” or “user control.” Examples of learner
control include choosing between learning tools and peers, on-demand learning, controlling
elements of educational systems, and even controlling the amount of control [15]. In particular,
open learner models are an influential example for allowing learners to steer their learning
or negotiate their learner model [46]. Interestingly, little research has covered how learners
can “collaborate” with AI to select learning materials [53, 54]. Yet, initial studies in this context
are promising, showing that learner control over the difficulty level of learning materials
combined with open learner models can boost learning [55, 56] and increase engagement with
recommended learning content, leading to higher success rates [49]. Furthermore, visualizing
the impact of such exercised control can increase initial trust in educational systems [14].
4. Human-Centered Learning Analytics and AI
Integrating AI and learning analytics (LA) into education has become pivotal due to the insights
they provide into teaching and learning practices. Implementing LA and AI solutions makes it possible to monitor learning progress, alleviate administrative tasks, and deliver personalized and prompt
feedback [57]. For example, intelligent tutoring systems can provide customized instructions
according to students’ individual learning paces and styles [58], adaptive platforms facilitate
real-time feedback [59], LA dashboards allow educators to monitor learner progress [60], and
AI-based virtual chatbots can offer immediate assistance within a more interactive environment
addressing queries and providing additional resources [61]. Nevertheless, the adoption of these
technologies is still quite restricted [62]. Various factors may contribute to this reluctance,
such as institutional policies, adoption costs, and the lack of transparent tools. Many authors
additionally criticize the insufficient contextual relevance, lack of consideration of human
needs, and neglect of pedagogical principles [63, 64].
To account for human requirements, values, and perceptions, prior research stressed the
importance of human-centered design (HCD) for developing LA and AIEd systems [65, 66].
HCD considers stakeholders as collaborators while creating technological solutions [67, 68].
For instance, teachers best know their courses’ objectives and design. When designing LA or AI
tools, their expertise may be invaluable for ensuring that proposed solutions support learning
instead of hindering it. Rouse [69] asserts that HCD can enhance human capabilities, uncover
the challenges people face, and foster technology acceptance. Additionally, technological solutions can
be found more reliable, accessible, and socially responsible when tailored to a specific course
context [70]. According to Dimitriadis et al. [64], researchers should consider three things to
effectively implement HCD approaches: (1) the “agentic positioning” of education stakeholders,
(2) the explicit consideration of the learning design cycle, and (3) the pedagogical theories that
inform the design of intended technological solutions.
Previous authors reported employing HCD to enhance their LA and AI solutions. For example,
Long et al. [71] followed a participatory approach, involving students in the development of an intelligent tutoring system aimed at enhancing classroom motivation. Additionally,
Topali et al. [72] involved MOOC instructors in co-designing and developing a tool that supports
semi-automatic LA-informed feedback. Yet, despite the research efforts to actively position
stakeholders as collaborators in such processes, there is limited focus on HCD application in
real-life environments [73, 74]. Potential reasons are stakeholders’ difficulties in expressing
their needs, the time-consuming process of involving various stakeholders, and the difficulty in
coping with diverse needs and expectations.
To overcome the aforementioned difficulties, several techniques and suggestions were pro-
posed to support stakeholder involvement within the HCD processes. One suggestion is consid-
ering specific HCD frameworks that aim to guide stakeholder involvement in different phases of
the design process. Examples include the LATUX [75], HCID [76], and LAT-EP frameworks [77].
Additionally, Molenaar [78] and Shneiderman [79] proposed frameworks for different levels
of control shared between humans and AI. For instance, Molenaar [78] proposed a 6-level
model ranging from full human control to full AI automation with 4 intermediate levels
of shared automation. Using this framework, researchers and designers can support teachers
in identifying varying automation levels based on course tasks and activities. This approach
enables practitioners to work effectively while preserving their autonomy.
Building upon the idea of promoting shared control between humans and AI, Krushinskaia
et al. [80] developed a GPT bot to help teachers create lesson plans. While teachers found the bot
valuable, easy to use, and fit for future use, they also expressed concerns about overreliance and
time-consuming interactions. The authors suggest further studying which steps in instructional
design can be fully automated and which steps should remain under teachers’ full control.
5. Open Challenges and Future Directions
To advance future research in support of lifelong learning, we highlight three promising research
directions related to the themes discussed above.
5.1. Adapting Learning Experiences for Personalized Learning
Adaptive lifelong learning is a dynamic research field where learning content, context, objectives,
and preferences can change rapidly. Lifelong learner models should be able to capture and reflect
such complexities in learning environments to inform personalized adaptivity for learners. For
instance, in the context of MOOCs, Ramírez Luelmo et al. [81] assessed learner models based
on interoperability, knowledge representation, and lifelong learning criteria, and identified only
four models [82–85] that satisfy lifelong learning criteria such as regular updating, re-usability,
forgetting modeling, data interconnection, autonomy, and self-regulated learning instigation.
Further research should be carried out to investigate how the specific characteristics of lifelong
learning should be translated into adjusted or new learner models.
Furthermore, future work should develop lifelong learner models beyond the traditional
scope of measuring knowledge and competence. These models should provide a holistic view
of learners, encompassing learning preferences and styles, engagement, motivation, job market
demands, professional development, and socioeconomic factors. Achieving this can involve
using multi-objective [86], multi-task [87], or multi-modal [88] learner models, or integrating different learner models through ensemble methods [89].
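As a minimal illustration of that last option, the sketch below combines mastery estimates from several hypothetical learner models into one weighted estimate per learner; the models, values, and weights are assumptions for illustration, not the ensemble framework of [89].

```python
# Minimal ensemble sketch: combine estimates from several hypothetical learner
# models (e.g., knowledge-, engagement-, and preference-oriented ones) into a
# single weighted estimate per learner. All numbers are illustrative.
import numpy as np

# Rows: learners; columns: estimates from three different learner models
estimates = np.array([
    [0.72, 0.60, 0.80],
    [0.35, 0.50, 0.30],
    [0.90, 0.85, 0.95],
])
weights = np.array([0.5, 0.3, 0.2])   # assumed relative trust in each model

ensemble = estimates @ weights        # weighted average per learner
for i, score in enumerate(ensemble):
    print(f"learner {i}: combined estimate {score:.2f}")
```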
5.2. Making AI-Supported Educational Systems Explainable and Controllable
Khosravi et al. [17] outlined three opportunities for future research into explainability for
educational systems. First, it is important to design actionable explanations that help learners
make informed decisions, rather than only make them understand AI algorithms and their
outcomes better. Second, previous work has shown promising signs for working towards
personalizing explanations [90], for example, by adapting the type of explanation to personal
traits. Third, the efficacy of explanations should be properly evaluated, for example, in terms of
understandability, appropriate trust-building, and development of metacognition.
Kay [15] and Ooge et al. [14] outlined several challenges regarding learner control, including
potential discomfort for learners who view control as overload or too much responsibility.
Thus, future research could look into balancing the amount of control, clarifying the role of
teachers and other educational stakeholders in controllable educational systems. In addition,
scaffolding could help avoid over- and underestimation while exerting control over, for example,
the difficulty of learning materials. To tackle these challenges, it seems advisable to draw lessons
from pedagogical and educational sciences.
Finally, an interesting avenue for future research is to combine explanations and learner
control. This builds upon the idea of open learner models that can be scrutinized, that is, seeing
how learner models are composed and used while being able to correct or steer them [91]. In
the broader context of XAI, it would be interesting to explore how different forms of learner
control can be combined with different types of explanations and how those combinations affect
student attitudes. For example, what-if explanations could support learners with selecting
learning materials by showing how practicing them would affect their skill mastery. Interesting
research questions could be how this combination of what-if explanations and control shapes
learners’ selection strategies and engagement.
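A rough sketch of such a what-if explanation, assuming a simple BKT-style mastery model like the one sketched in Section 2 and assumed per-exercise correctness probabilities, could look as follows.

```python
# Illustrative what-if sketch: forecast how a learner's estimated mastery would
# change if they practiced a candidate exercise. The mastery update mirrors the
# BKT-style sketch above; all parameters are assumed for illustration.

def bkt_update(p, correct, p_slip=0.1, p_guess=0.2, p_learn=0.15):
    if correct:
        post = p * (1 - p_slip) / (p * (1 - p_slip) + (1 - p) * p_guess)
    else:
        post = p * p_slip / (p * p_slip + (1 - p) * (1 - p_guess))
    return post + (1 - post) * p_learn

def what_if(p_mastery, p_correct):
    """Expected mastery after one more attempt, weighting both possible outcomes."""
    return (p_correct * bkt_update(p_mastery, True)
            + (1 - p_correct) * bkt_update(p_mastery, False))

# Compare two candidate exercises for a learner at 40% estimated mastery;
# the expected correctness per exercise is an assumed model output.
for name, p_correct in {"fractions_easy": 0.8, "fractions_hard": 0.45}.items():
    print(f"{name}: estimated mastery 0.40 -> {what_if(0.4, p_correct):.2f}")
```

How learners interpret and act on such projections, and whether they foster appropriate trust, are open empirical questions in line with the evaluation challenges noted above.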
5.3. Bringing Stakeholders in the Loop With Human-Centered Design
Human-centered design (HCD) is deemed important in LA and AI to ensure the development of
tools that align with the needs, perceptions, and values of users such as trainers, teachers, and
learners. The aim is to enhance the educational experience by accounting for human require-
ments and classroom contextual factors. However, despite the claimed benefits of involving
stakeholders, the empirical evidence is limited since most focus was placed on developing
technologies without fully considering end-users. Thus, more real-life evaluations with real
end-users should be conducted [73].
Recently, UNESCO stressed the importance of adopting human-centered approaches in the
design and development of digital technologies [92]. Regardless of whether or not technologies
are intelligent, UNESCO considers HCD to be an integral element of technology that should be
implemented in educational settings. Regarding AI, UNESCO also outlined principles that guide
the consideration of human-centered approaches to promote teachers’ and students’ agency.
According to their proposal, HCD can have a significant impact on key topics such as trust,
ownership, and explainability with regard to AI’s current risks and challenges [70]. Building on
the insights of UNESCO, we foresee that future research efforts should focus on (a) supporting
stakeholders’ involvement within design processes, and (b) empirically assessing the benefits of
HCD in authentic settings. This can be achieved by involving diverse groups at every stage
of the AI lifecycle, ensuring that their needs, concerns, and values shape the technology. For
example, participatory design methods could empower vulnerable groups, such as disabled
people, to co-create AI-driven accessibility tools. Meanwhile, evidence-based assessments are
needed so that AI’s impact on decision-making, appropriate trust, and potential biases can be
studied, helping refine systems to meet ethical standards and ensure real-world benefits.
6. Conclusion
While artificial intelligence brings promising opportunities to support lifelong learning, many
challenges remain. This paper discussed three key topics: (1) adaptivity for personalized lifelong
learning, (2) explainability and controllability of AI-supported learning systems, and (3) human-
centered learning analytics and AI with an emphasis on keeping stakeholders in the loop.
Advancing the AIEd field in these areas is far from trivial, especially since both algorithmic
and human-centered expertise is required. Thus, to support all stakeholders in the context of
lifelong learning, we argue for continued interdisciplinary collaboration at various stages of the
design, development, and research process for AI-supported educational technologies.
References
[1] S. Doroudi, The Intertwined Histories of Artificial Intelligence and Education,
International Journal of Artificial Intelligence in Education (2022). doi:10.1007/
s40593-022-00313-2.
[2] M. A. Cardona, R. J. Rodríguez, K. Ishmael, et al., Artificial intelligence and the future of
teaching and learning: Insights and recommendations (2023).
[3] M. R. Breines, M. Gallagher, A return to teacherbot: rethinking the development of
educational technology at the university of edinburgh, Teaching in Higher Education 28
(2023) 517–531.
[4] J. Knox, AI and Education in China: Imagining the Future, Excavating the Past, Taylor &
Francis, 2023.
OECD, OECD Digital Education Outlook 2021: Pushing the Frontiers with Artificial Intelligence, Blockchain and Robots, OECD Publishing, 2021.
[6] X. Zhai, X. Chu, C. S. Chai, M. S. Y. Jong, A. Istenic, M. Spector, J.-B. Liu, J. Yuan, Y. Li, A
review of artificial intelligence (ai) in education from 2010 to 2020, Complexity 2021 (2021)
1–18.
[7] K. Zhang, A. B. Aslan, Ai technologies for education: Recent research & future directions,
Computers and Education: Artificial Intelligence 2 (2021) 100025.
[8] M. Zimmerman, Teaching AI: exploring new frontiers for learning, International Society
for Technology in Education, 2018.
[9] L. M. Blaschke, The dynamic mix of heutagogy and technology: Preparing learners for
lifelong learning, British Journal of Educational Technology 52 (2021) 1629–1645.
[10] W. Holmes, M. Bialik, C. Fadel, Artificial intelligence in education: Promises and implica-
tions for teaching and learning, 2019.
[11] OECD, OECD Skills Outlook 2023: Skills for a resilient green and digital transition, 2023.
[12] M. Cukurova, C. Kent, R. Luckin, Artificial intelligence and multimodal data in the service
of human decision-making: A case study in debate tutoring, British Journal of Educational
Technology 50 (2019) 3032–3046.
[13] J. Ooge, J. De Braekeleer, K. Verbert, Nudging Adolescents Towards Recommended Maths
Exercises With Gameful Rewards, in: Artificial Intelligence in Education, Springer Nature
Switzerland, Cham, 2024.
[14] J. Ooge, L. Dereu, K. Verbert, Steering Recommendations and Visualising Its Impact: Effects
on Adolescents’ Trust in E-Learning Platforms, in: Proceedings of the 28th International
Conference on Intelligent User Interfaces, IUI ’23, Association for Computing Machinery,
New York, NY, USA, 2023, pp. 156–170. doi:10.1145/3581641.3584046.
[15] J. Kay, Learner control, User modeling and user-adapted interaction 11 (2001) 111–127.
[16] J. Ooge, S. Kato, K. Verbert, Explaining Recommendations in E-Learning: Effects on
Adolescents’ Trust, in: 27th International Conference on Intelligent User Interfaces,
IUI ’22, Association for Computing Machinery, New York, NY, USA, 2022, pp. 93–105.
doi:10.1145/3490099.3511140.
[17] H. Khosravi, S. B. Shum, G. Chen, C. Conati, Y.-S. Tsai, J. Kay, S. Knight, R. Martinez-
Maldonado, S. Sadiq, D. Gašević, Explainable Artificial Intelligence in education, Comput-
ers and Education: Artificial Intelligence 3 (2022) 100074. doi:10.1016/j.caeai.2022.
100074.
[18] R. Van Schoors, J. Elen, A. Raes, S. Vanbecelaere, F. Depaepe, The charm or chasm of digital
personalized learning in education: Teachers’ reported use, perceptions and expectations,
TechTrends 67 (2023) 315–330.
[19] M. Szymanski, J. Ooge, R. De Croon, V. Vanden Abeele, K. Verbert, Feedback, Control, or
Explanations? Supporting Teachers With Steerable Distractor-Generating AI, in: Proceed-
ings of the 14th Learning Analytics and Knowledge Conference, LAK ’24, Association for
Computing Machinery, New York, NY, USA, 2024, pp. 690–700. doi:10.1145/3636555.
3636933.
[20] S. Pardini, S. Gabrielli, M. Dianti, C. Novara, G. M. Zucco, O. Mich, S. Forti, The role of
personalization in the user experience, preferences and engagement with virtual reality
environments for relaxation, International Journal of Environmental Research and Public
Health 19 (2022) 7237.
[21] W. Holmes, K. Porayska-Pomsta, The Ethics of Artificial Intelligence in education: Practices,
challenges, and debates, Taylor & Francis, 2022.
[22] N. Maslej, L. Fattorini, E. Brynjolfsson, J. Etchemendy, K. Ligett, T. Lyons, J. Manyika,
H. Ngo, J. C. Niebles, V. Parli, et al., Artificial intelligence index report 2023, arXiv preprint
arXiv:2310.03715 (2023).
[23] F. Miao, W. Holmes, R. Huang, H. Zhang, et al., AI and education: A guidance for policy-
makers, UNESCO Publishing, 2021.
[24] J. Self, The defining characteristics of intelligent tutoring systems research: Itss care,
precisely, International journal of artificial intelligence in education 10 (1998) 350–364.
[25] S. Bull, J. Kay, Smili: A framework for interfaces to learning data in open learner models,
learning analytics and related fields, International Journal of Artificial Intelligence in
Education 26 (2016) 293–331.
[26] S. Shen, Q. Liu, Z. Huang, Y. Zheng, M. Yin, M. Wang, E. Chen, A survey of knowledge
tracing: Models, variants, and applications, IEEE Transactions on Learning Technologies
(2024).
[27] C. Piech, J. Bassen, J. Huang, S. Ganguli, M. Sahami, L. J. Guibas, J. Sohl-Dickstein, Deep
knowledge tracing, Advances in neural information processing systems 28 (2015).
[28] G. Abdelrahman, Q. Wang, B. Nunes, Knowledge tracing: A survey, ACM Computing
Surveys 55 (2023) 1–37.
[29] M. Ferrari Dacrema, P. Cremonesi, D. Jannach, Are we really making much progress? a
worrying analysis of recent neural recommendation approaches, in: Proceedings of the
13th ACM conference on recommender systems, 2019, pp. 101–109.
[30] A. Gharahighehi, K. Pliakos, C. Vens, Addressing the cold-start problem in collaborative
filtering through positive-unlabeled learning and multi-target prediction, IEEE Access 10
(2022) 117189–117198.
[31] Z. Zhu, Y. He, X. Zhao, Y. Zhang, J. Wang, J. Caverlee, Popularity-opportunity bias in
collaborative filtering, in: Proceedings of the 14th ACM International Conference on Web
Search and Data Mining, 2021, pp. 85–93.
[32] P. Ilídio, A. Gharahighehi, F. K. Nakano, C. Vens, Personalized learning in k-12 education:
Exploring weak-labels for a random forest-based collaborative filtering approach, in:
ALL’24: Workshop on Adaptive Lifelong Learning, co-located with the 25th International
Conference on Artificial Intelligence in Education, July 08–12, 2024, Recife, Brazil, CEUR-
WS.org, 2024.
[33] A. Gharahighehi, M. Venturini, A. Ghinis, F. Cornillie, C. Vens, Extending bayesian person-
alized ranking with survival analysis for mooc recommendation, in: Adjunct Proceedings
of the 31st ACM Conference on User Modeling, Adaptation and Personalization, 2023, pp.
56–59.
[34] M. Zafari, J. S. Bazargani, A. Sadeghi-Niaraki, S.-M. Choi, Artificial Intelligence Ap-
plications in K-12 Education: A Systematic Literature Review, IEEE Access 10 (2022)
61905–61921. doi:10.1109/ACCESS.2022.3179356.
[35] J. Burrell, How the machine ‘thinks’: Understanding opacity in machine learning algo-
rithms, Big Data & Society 3 (2016) 205395171562251. doi:10.1177/2053951715622512.
[36] D. Gunning, D. Aha, DARPA’s Explainable Artificial Intelligence (XAI) Program, AI
Magazine 40 (2019) 44–58. doi:10.1609/aimag.v40i2.2850.
[37] D. Afchar, A. Melchiorre, M. Schedl, R. Hennequin, E. Epure, M. Moussallam, Explainability
in Music Recommender Systems, AI Magazine 43 (2022) 190–208. doi:10.1002/aaai.
12056.
[38] A. Barredo Arrieta, N. Díaz-Rodríguez, J. Del Ser, A. Bennetot, S. Tabik, A. Barbado,
S. Garcia, S. Gil-Lopez, D. Molina, R. Benjamins, R. Chatila, F. Herrera, Explainable Artificial
Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible
AI, Information Fusion 58 (2020) 82–115. doi:10.1016/j.inffus.2019.12.012.
[39] A. Adadi, M. Berrada, Peeking Inside the Black-Box: A Survey on Explainable Artifi-
cial Intelligence (XAI), IEEE Access 6 (2018) 52138–52160. doi:10.1109/ACCESS.2018.
2870052.
[40] G. Stiglic, P. Kocbek, N. Fijacko, M. Zitnik, K. Verbert, L. Cilar, Interpretability of machine
learning-based prediction models in healthcare, WIREs Data Mining and Knowledge
Discovery 10 (2020) e1379. doi:10.1002/widm.1379.
[41] R. Guidotti, A. Monreale, S. Ruggieri, F. Turini, F. Giannotti, D. Pedreschi, A Survey of
Methods for Explaining Black Box Models, ACM Computing Surveys 51 (2019) 1–42.
doi:10.1145/3236009.
[42] Q. V. Liao, K. R. Varshney, Human-Centered Explainable AI (XAI): From Algorithms to
User Experiences, 2022. doi:10.48550/arXiv.2110.10790. arXiv:2110.10790.
[43] U. Ehsan, M. O. Riedl, Human-Centered Explainable AI: Towards a Reflective Sociotechnical
Approach, in: C. Stephanidis, M. Kurosu, H. Degen, L. Reinerman-Jones (Eds.), HCI Interna-
tional 2020 - Late Breaking Papers: Multimodality and Intelligence, volume 12424, Springer
International Publishing, Cham, 2020, pp. 449–466. doi:10.1007/978-3-030-60117-1_
33.
[44] Q. V. Liao, D. Gruen, S. Miller, Questioning the AI: Informing Design Practices
for Explainable AI User Experiences, in: Proceedings of the 2020 CHI Conference
on Human Factors in Computing Systems, ACM, Honolulu HI USA, 2020, pp. 1–15.
doi:10.1145/3313831.3376590.
[45] T. Miller, Explanation in artificial intelligence: Insights from the social sciences, Artificial
Intelligence 267 (2019) 1–38. doi:10.1016/j.artint.2018.07.007.
[46] S. Bull, There are Open Learner Models About!, IEEE Transactions on Learning Technolo-
gies 13 (2020) 425–448. doi:10.1109/TLT.2020.2978473.
[47] S. Abdi, H. Khosravi, S. Sadiq, D. Gasevic, Complementing educational recommender
systems with open learner models, in: Proceedings of the Tenth International Conference
on Learning Analytics & Knowledge, Association for Computing Machinery, New York,
NY, USA, 2020, pp. 360–365.
[48] C. Chen, R. D’hondt, C. Vens, W. Van Den Noortgate, Using machine learning to predict
the number of latent skills in online learning environments, in: ALL’24: Workshop on
Adaptive Lifelong Learning, co-located with the 25th International Conference on Artificial
Intelligence in Education, July 08–12, 2024, Recife, Brazil, CEUR-WS.org, 2024.
[49] J. Barria-Pineda, K. Akhuseyinoglu, S. Želem-Ćelap, P. Brusilovsky, A. K. Milicevic,
M. Ivanovic, Explainable Recommendations in a Personalized Programming Practice
System, in: I. Roll, D. McNamara, S. Sosnovsky, R. Luckin, V. Dimitrova (Eds.), Artificial
Intelligence in Education, volume 12748, Springer International Publishing, Cham, 2021,
pp. 64–76. doi:10.1007/978-3-030-78292-4_6.
[50] I. Molenaar, Towards hybrid human-ai learning technologies, European Journal of
Education 57 (2022) 632–645.
[51] B. Shneiderman, Human-Centered AI, 1 ed., Oxford University Press, Oxford, 2022. doi:10.
1093/oso/9780192845290.001.0001.
[52] N. van Berkel, M. B. Skov, J. Kjeldskov, Human-AI interaction: Intermittent, continuous,
and proactive, Interactions 28 (2021) 67–71. doi:10.1145/3486941.
[53] P. Brusilovsky, AI in Education, Learner Control, and Human-AI Collaboration,
International Journal of Artificial Intelligence in Education (2023). doi:10.1007/
s40593-023-00356-z.
[54] R. Alfredo, V. Echeverria, Y. Jin, L. Yan, Z. Swiecki, D. Gašević, R. Martinez-Maldonado,
Human-Centred Learning Analytics and AI in Education: A Systematic Literature Review,
Computers and Education: Artificial Intelligence (2024) 100215. doi:10.1016/j.caeai.
2024.100215.
[55] Y. Long, V. Aleven, Enhancing learning outcomes through self-regulated learning support
with an Open Learner Model, User Modeling and User-Adapted Interaction 27 (2017)
55–88. doi:10.1007/s11257-016-9186-6.
[56] Y. Long, V. Aleven, Mastery-Oriented Shared Student/System Control Over Problem
Selection in a Linear Equation Tutor, in: A. Micarelli, J. Stamper, K. Panourgia (Eds.),
Intelligent Tutoring Systems, Lecture Notes in Computer Science, Springer International
Publishing, Cham, 2016, pp. 90–100. doi:10.1007/978-3-319-39583-8_9.
[57] L. Chen, P. Chen, Z. Lin, Artificial Intelligence in Education: A Review, IEEE Access 8
(2020) 75264–75278. doi:10.1109/ACCESS.2020.2988510.
[58] U. Maier, C. Klotz, Personalized feedback in digital learning environments: Classifica-
tion framework and literature review, Computers and Education: Artificial Intelligence
3 (2022) 1–13. doi:10.1016/j.caeai.2022.100080.
[59] S. Dutta, S. Ranjan, S. Mishra, V. Sharma, P. Hewage, C. Iwendi, Enhancing educational
adaptability: A review and analysis of ai-driven adaptive learning platforms, in: 2024
4th International Conference on Innovative Practices in Technology and Management
(ICIPTM), IEEE, 2024, pp. 1–5.
[60] G. M. Fernández-Nieto, S. Buckingham Shum, R. Martínez-Maldonado, Beyond the
Learning Analytics Dashboard: Alternative Ways to Communicate Student Data Insights Combining Visualisation, Narrative and Storytelling, in: 12th International Learning Analytics and Knowledge Conference (LAK22), 2022, pp. 1–11. doi:10.1145/3506860.3506895.
[61] Y. Yuan, An empirical study of the efficacy of ai chatbots for english as a foreign language
learning in primary education, Interactive Learning Environments (2023) 1–16.
[62] M. Sadallah, J.-M. Gilliot, S. Iksal, K. Quelennec, M. Vermeulen, L. Neyssensas, O. Aubert,
R. Venant, Designing lads that promote sensemaking: A participatory tool, in: European
Conference on Technology Enhanced Learning, Springer, 2022, pp. 587–593.
[63] J. P. Sarmiento, A. F. Wise, Participatory and Co-Design of Learning Analytics: An Initial
Review of the Literature, in: LAK22: 12th International Learning Analytics and Knowledge
Conference, 1, 2022, pp. 535–541. doi:10.1145/3506860.3506910.
[64] Y. Dimitriadis, R. Martínez-Maldonado, K. Wiley, Human-Centered Design Principles for
Actionable Learning Analytics, in: Research on E-Learning and ICT in Education, 2021,
pp. 277–296. doi:10.1007/978-3-030-64363-8_15.
[65] S. B. Buckingham-Shum, R. Ferguson, R. Martinez-Maldonado, Human-centred learning
analytics, Journal of Learning Analytics 6 (2019) 1–9. doi:10.18608/jla.2019.62.1.
[66] O. Viberg, I. Jivet, M. Scheffel, Designing culturally aware learning analytics: A value
sensitive perspective, in: Practicable learning analytics, Springer, 2023, pp. 177–192.
[67] M. Zachry, J. H. Spyridakis, Human-centered design and the field of technical com-
munication, Journal of Technical Writing and Communication 46 (2016) 392–401.
doi:10.1177/0047281616653497.
[68] J. Giacomin, What is human centred design?, The design journal 17 (2014) 606–623.
[69] W. B. Rouse, People and Organizations: Explorations of Human-Centered Design, 2007.
doi:10.1002/9780470169568.
[70] M. A. K. Akhtar, M. Kumar, A. Nayyar, The role of human-centered design in developing
explainable ai, in: Towards Ethical and Socially Responsible Explainable AI: Challenges
and Opportunities, Springer, 2024, pp. 99–126.
[71] Y. Long, Z. Aman, V. Aleven, Motivational design in an intelligent tutoring system
that helps students make good task selection decisions, in: Lecture Notes in Computer
Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in
Bioinformatics), volume 9112, 2015, pp. 226–236. doi:10.1007/978-3-319-19773-9_
23.
[72] P. Topali, A. Ortega-Arranz, J. I. Asensio-Pérez, S. L. Villagrá-Sobrino, A. Martínez-
Monés, Y. Dimitriadis, e-feed4mi: human-centred design of personalised and con-
textualised feedback in moocs, Behaviour & Information Technology (2024) 1–18. doi:10.1080/0144929X.2024.2376201.
[73] P. Topali, A. Ortega-Arranz, M. J. Rodríguez-Triana, E. Er, M. Khalil, G. Akçapınar, De-
signing human-centered learning analytics and artificial intelligence in education solu-
tions: a systematic literature review, Behaviour & Information Technology 0 (2024) 1–28.
doi:10.1080/0144929X.2024.2345295.
[74] R. Alfredo, V. Echeverria, Y. Jin, L. Yan, Z. Swiecki, D. Gašević, R. Martinez-Maldonado,
Human-centred learning analytics and ai in education: A systematic literature review,
Computers and Education: Artificial Intelligence 6 (2024) 100215. doi:10.1016/j.caeai.2024.100215.
[75] R. Martinez-Maldonado, A. Pardo, N. Mirriahi, K. Yacef, J. Kay, A. Clayphan, Latux: an
iterative workflow for designing, validating and deploying learning analytics visualisations,
Journal of Learning Analytics 2 (2016) 9–39. URL: https://learning-analytics.info/index.
php/JLA/article/view/4458. doi:10.18608/jla.2015.23.3.
[76] M. A. Chatti, A. Muslim, M. Guesmi, F. Richtscheid, D. Nasimi, A. Shahin, R. Damera,
How to design effective learning analytics indicators? a human-centered design approach,
in: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial
Intelligence and Lecture Notes in Bioinformatics), volume 12315 LNCS, 2020, pp. 303–317.
doi:10.1007/978-3-030-57717-9_22.
[77] R. Martinez-Maldonado, D. Elliott, C. Axisa, T. Power, V. Echeverria, S. B. Shum, Designing
translucent learning analytics with teachers: an elicitation process, Interactive Learning
Environments 30 (2022) 1077–1091. doi:10.1080/10494820.2019.1710541.
[78] I. Molenaar, Towards hybrid human-AI learning technologies, European Journal of
Education 57 (2022) 632–645.
[79] B. Shneiderman, Human-Centered AI, 2022. doi:10.1093/oso/9780192845290.001.0001.
[80] K. Krushinskaia, J. Elen, A. Raes, Design and development of a co-instructional designer
bot using gpt-4 to support teachers in designing instruction, in: ALL’24: Workshop on
Adaptive Lifelong Learning, co-located with the 25th International Conference on Artificial
Intelligence in Education, July 08–12, 2024, Recife, Brazil, CEUR-WS.org, 2024.
[81] S. I. Ramírez Luelmo, N. El Mawas, J. Heutte, Learner models for mooc in a lifelong
learning context: A systematic literature review, in: Computer Supported Education: 12th
International Conference, CSEDU 2020, Virtual Event, May 2–4, 2020, Revised Selected
Papers 12, Springer, 2021, pp. 392–415.
[82] V. Dimitrova, P. Brna, From interactive open learner modelling to intelligent mentoring:
Style-olm and beyond, International Journal of Artificial Intelligence in Education 26
(2016) 332–349.
[83] N. El Mawas, J.-M. Gilliot, S. Garlatti, R. Euler, S. Pascual, As one size doesn’t fit all,
personalized massive open online courses are required, in: Computer Supported Education:
10th International Conference, CSEDU 2018, Funchal, Madeira, Portugal, March 15–17,
2018, Revised Selected Papers 10, Springer, 2019, pp. 470–488.
[84] W. Maalej, P. Pernelle, C. Ben Amar, T. Carron, E. Kredens, Modeling skills in a learner-
centred approach within moocs, in: Advances in Web-Based Learning–ICWL 2016: 15th
International Conference, Rome, Italy, October 26–29, 2016, Proceedings 15, Springer, 2016,
pp. 102–111.
[85] A. Qazdar, C. Cherkaoui, B. Er-Raha, D. Mammass, Aelf: Mixing adaptive learning system
with learning management system, International Journal of Computer Applications 119
(2015) 1–8.
[86] H. Li, Z. Zhong, J. Shi, H. Li, Y. Zhang, Multi-objective optimization-based recommendation
for massive online learning resources, IEEE Sensors Journal 21 (2021) 25274–25281.
[87] M. Geden, A. Emerson, J. Rowe, R. Azevedo, J. Lester, Predictive student modeling in
educational games with multi-task learning, in: Proceedings of the AAAI Conference on
Artificial Intelligence, volume 34, 2020, pp. 654–661.
[88] A. Picciano, Blending with purpose: The multimodal model, Journal of the Research
Center for Educational Technology 5 (2009) 4–14.
[89] A. Gharahighehi, C. Vens, K. Pliakos, An ensemble hypergraph learning framework for
recommendation, in: Discovery Science: 24th International Conference, DS 2021, Halifax,
NS, Canada, October 11–13, 2021, Proceedings 24, Springer, 2021, pp. 295–304.
[90] C. Conati, O. Barral, V. Putnam, L. Rieger, Toward personalized XAI: A case study in
intelligent tutoring systems, Artificial Intelligence 298 (2021) 103503. doi:10.1016/j.
artint.2021.103503.
[91] J. Kay, B. Kummerfeld, Creating personalized systems that people can scrutinize and
control: Drivers, principles and experience, ACM Transactions on Interactive Intelligent
Systems (TiiS) 2 (2013) 1–42.
[92] S. Tawil, F. Miao, Steering the digital transformation of education: Unesco’s human-
centered approach, Frontiers of Digital Education 1 (2024) 51–58.