<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>TrueLearn: A Python Library for Personalised Informational Recommendations with (Implicit) Feedback</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Yuxiang Qiu</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Karim Djemili</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Denis Elezi</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Aaneel Shalman</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>María Pérez-Ortiz</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Sahan Bulathwela</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Centre for Artificial Intelligence, University College London</institution>
          ,
          <country country="UK">UK</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Department of Computer Science, University College London</institution>
          ,
          <country country="UK">UK</country>
        </aff>
      </contrib-group>
      <abstract>
<p>This work describes the TrueLearn Python library, which contains a family of online learning Bayesian models for building educational (or more generally, informational) recommendation systems. This family of models was designed following the "open learner" concept, using humanly-intuitive user representations. For the sake of interpretability and putting the user in control, the TrueLearn library also contains different representations to help end-users visualise the learner models, which may in the future facilitate user interaction with their own models. Together with the library, we include a previously publicly released implicit feedback educational dataset with evaluation metrics to measure the performance of the models. The extensive documentation and coding examples make the library highly accessible to both machine learning developers and educational data mining and learning analytics practitioners. The library and the supporting documentation with examples are available at https://truelearn.readthedocs.io/en/latest.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        It has been shown that personalised one-on-one learning could improve learning
gains by two standard deviations [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. With this goal in sight, and the ambition to democratise
education for the world population, we require responsible Artificial Intelligence systems that can
bring scalable, personalised and governable models to a mass of learners [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. Up until recently,
the go-to solution for scaling education has been Intelligent Tutoring Systems (ITS), heavily
relying on testing users for knowledge, which is a practical option for formal courses with a
limited number of learning materials involved. However, educational recommender systems
now have the opportunity to go one step further, leveraging implicit interaction signals (such
as clicks and watch time) to personalise and support learning for informal lifelong learners [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ].
This is exactly the focus of the models in this library, with the aim of making these methods
more accessible, as publicly available learner models and datasets are currently scarce, and they
can open up huge opportunities for education.
      </p>
      <sec id="sec-1-1">
        <title>1.1. Our Contribution</title>
        <p>This work introduces TrueLearn (documentation available at https://truelearn.readthedocs.io/en/latest), an open-source Python library that packages state-of-the-art
online recommendation models, datasets and visualisation tools. Among its diverse use cases, it
can be used as a personalisation component of e-learning platforms (e.g., YouTube and EdX) to
estimate learners’ potential engagement with learning resources and to model their background
knowledge, interests, and novelty. The library contains different components that enable
i) creating content representations of learning resources, ii) managing user/learner states, iii)
modelling the state evolution of learners using interactions and iv) evaluating engagement
predictions. Requiring minimal data, its design offers a transparent solution that respects the
privacy of its users and enables user interaction. The development of the TrueLearn library
aims to provide both the research and developer communities with the opportunity to use the
TrueLearn family of models. The paper describes the development process and experiments that
demonstrate the utility of this package to the educational data mining community and beyond.</p>
        <p>While the motivation for TrueLearn stems from education, the models are applicable to a wide
variety of applications that relate to informational recommendations and to modelling engagement
in tasks in which human learning is involved. Additionally, note that the models included
are suitable for both implicit and explicit feedback. In our experiments, we used a dataset of
video lecture watch patterns, which we use as a proxy for learner engagement, but the same
models would be applicable if learners also provided, e.g., explicit feedback on the difficulty of
the learning material.</p>
      </sec>
    </sec>
    <sec id="sec-2">
      <title>2. Related Work</title>
      <p>We researched related work on learner models and on designing usable machine learning
libraries to inform the design decisions behind the TrueLearn library. This section reviews
these works and their influence on the development of the library.</p>
      <sec id="sec-2-1">
        <title>2.1. Item Response Theory and Knowledge Tracing</title>
        <p>
          Item Response Theory (IRT) focuses on designing, analysing and scoring ability tests by
modelling the learner’s knowledge and question difficulty, without considering changes in knowledge
over time. The simplest IRT model, the Rasch model [
          <xref ref-type="bibr" rid="ref4">4</xref>
          ], computes the probability of scoring a
correct answer as a function of the learner’s skill θ and the difficulty d of the question/resource:
P(correct | θ, d) = f(θ − d)
(1)
where f is usually a logistic function. The TrueSkill model extends IRT to model the skill of multiple
users playing a video game [
          <xref ref-type="bibr" rid="ref5">5</xref>
          ]. The TrueLearn models implemented in this work extend
TrueSkill for learner engagement prediction.
        </p>
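        <p>As a concrete illustration of Equation (1), the Rasch probability can be computed in a few lines; the function name and the use of the standard logistic for f are illustrative choices, not part of any library.</p>

```python
import math

def rasch_probability(skill: float, difficulty: float) -> float:
    """Probability of a correct answer under the Rasch model,
    using the standard logistic function for f."""
    return 1.0 / (1.0 + math.exp(-(skill - difficulty)))

# A learner whose skill exactly matches the question difficulty answers
# correctly with probability 0.5; higher skill raises the odds.
print(rasch_probability(0.0, 0.0))  # 0.5
print(rasch_probability(2.0, 0.0) > rasch_probability(0.0, 0.0))  # True
```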
        <p>
          An alternative to IRT for modelling learning is Knowledge Tracing (KT) [
          <xref ref-type="bibr" rid="ref6">6</xref>
          ]. Unlike IRT, the KT
model does not consider question difficulty but instead estimates knowledge acquisition as
a function of practice opportunities. Several Bayesian KT (BKT) algorithms that extend the
original KT model have been proposed in the literature. The pyBKT library, a Python library of
KT models, provides a clear Application Programming Interface advocating the separation of
data generation, model fitting and prediction functions [
          <xref ref-type="bibr" rid="ref7">7</xref>
          ]. However, conventional KT models
do not train using online learning, introducing challenges when scaling to real-time scenarios
with a large number of users learning over a long period of time.
        </p>
      </sec>
      <sec id="sec-2-2">
        <title>2.2. TrueLearn Models</title>
        <p>
          The TrueLearn family of online Bayesian learner models uses implicit feedback from learners
to recover their learning state. Prior work has proposed several learner models that capture the
learner’s interests and the knowledge and novelty of the material. Subsequent work combines
these individual models into ensembles that can account for these factors
simultaneously, improving on the predictive performance of the individual models [
          <xref ref-type="bibr" rid="ref3">3</xref>
          ].
        </p>
        <p>
          While being data efficient and privacy-preserving by design, TrueLearn models generate
humanly intuitive learner representations that are inspired by open learner models. An open
learner model is a learner model that has been made accessible to the learner it represents or to
other users (e.g. teachers, parents) [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ]. This involves generating visualisations to construct an
interface that will communicate to learners information about their knowledge and learning
path. Open learner models come with definite advantages, such as promoting learner reflection
by aiding learners in planning and monitoring their learning and allowing them to compare
their knowledge to that of their peers [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ]. Open learner models also come with associated
challenges, since not all learner representations and their visual presentations may be equally
understood by a wide variety of end-users. Among the many different visualisations used to present
the learner knowledge state, user studies have shown that some visualisations are comparatively
more user-friendly than others [
          <xref ref-type="bibr" rid="ref10 ref9">9, 10</xref>
          ]. This work builds on these findings to develop a set of
visualisations that aid this communication process.
        </p>
      </sec>
      <sec id="sec-2-3">
        <title>2.3. Designing a Machine Learning Library</title>
        <p>
          To design a user-friendly, easy-to-use and scalable library, we need to avoid commonly used
bad design practices, such as rigidity, fragility, immobility and viscosity [
          <xref ref-type="bibr" rid="ref11">11</xref>
          ]. Rigidity refers
to the tendency for software to be difficult to change. Fragility is the tendency for software
to break once it has been updated. Immobility is the inability to reuse code within or across
projects. Viscosity refers to the difficulty of retaining the original design when changes to the
software are required. Various design principles and patterns are proposed and employed in
software engineering to overcome these issues [
          <xref ref-type="bibr" rid="ref12">12</xref>
          ].
        </p>
        <p>
          When designing a machine learning library, we also need to account for unique challenges
(e.g. incorporating data, pre-processing, models etc.). A great example of a well-designed
machine learning library that has been taken up by both industry and academia recently is
scikit-learn. scikit-learn [
          <xref ref-type="bibr" rid="ref13">13</xref>
          ] proposes some general design principles (consistency,
inspection and sensible defaults) and interface design (estimators and predictors) [
          <xref ref-type="bibr" rid="ref14">14</xref>
          ] for
building a scalable and user-friendly machine learning library. Consistency emphasises the
importance of establishing a shared and consistent interface across different machine learning
models, as this reduces the learning cost of the library. Inspection is concerned with exposing
the model’s parameters and hyperparameters as public attributes, which makes it easier for
users to access the internal states of the model. Sensible defaults ensure that the model behaves
reasonably well with the default values. The estimator and predictor interfaces in scikit-learn
reflect how the library implements these general guidelines. The estimator interface specifies
a fit function to provide a consistent interface to the training model and exposes the coef_
attribute to facilitate the inspection of the internal state of the model. The predictor interface
specifies the predict and predict_proba functions as methods for utilising the trained model.
Due to the time-tested design decisions that have succeeded in scikit-learn, the design decisions
made in developing the TrueLearn library are inspired by these practices.
        </p>
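        <p>A minimal sketch of the estimator/predictor conventions described above; ThresholdClassifier and its internal logic are hypothetical illustrations of the pattern, not scikit-learn or TrueLearn code.</p>

```python
class ThresholdClassifier:
    """Toy estimator following the scikit-learn conventions: a consistent
    fit/predict/predict_proba interface, a public hyperparameter with a
    sensible default, and an inspectable fitted state (coef_)."""

    def __init__(self, threshold: float = 0.5):
        self.threshold = threshold  # sensible default

    def fit(self, X, y):
        # Learn the mean score of positive examples (illustrative logic).
        positives = [x for x, label in zip(X, y) if label == 1]
        self.coef_ = sum(positives) / len(positives)  # exposed for inspection
        return self  # returning self enables chaining, as in scikit-learn

    def predict_proba(self, X):
        # Squash the distance to the learned mean into (0, 1].
        return [1.0 / (1.0 + abs(x - self.coef_)) for x in X]

    def predict(self, X):
        return [int(p >= self.threshold) for p in self.predict_proba(X)]

clf = ThresholdClassifier().fit([0.0, 1.0, 2.0], [0, 1, 1])
print(clf.coef_)           # 1.5
print(clf.predict([1.5]))  # [1]
```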
        <p>
          Because of the concept of duck typing in Python, third-party model implementations can
interoperate with scikit-learn (e.g., developers can plug them into scikit-learn’s grid search) without
being forced to inherit the above interfaces. This makes scikit-learn extensible and encourages
users to reuse code. However, the use of duck typing makes it difficult to perform static program
analysis [
          <xref ref-type="bibr" rid="ref15">15</xref>
          ], thus postponing the discovery of incorrect implementations until runtime and
increasing the likelihood of software bugs [
          <xref ref-type="bibr" rid="ref16">16</xref>
          ]. Therefore, in our work, we tend to take a hybrid
approach, utilising type annotations [
          <xref ref-type="bibr" rid="ref17">17</xref>
          ] throughout the code base while allowing the use of
static duck types supported by the Protocol class [
          <xref ref-type="bibr" rid="ref18">18</xref>
          ]. Compared to traditional duck typing,
static duck typing allows the library implementer to represent the requirement for parameters
of a method explicitly but also does not force the user to inherit any class, making it easier for
users to understand the intent of the method and helping static type checkers to analyse the
code [
          <xref ref-type="bibr" rid="ref18">18</xref>
          ].
        </p>
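        <p>The static duck typing described above can be sketched with typing.Protocol; the protocol and class names below are illustrative, not the library's actual types.</p>

```python
from typing import Iterable, Protocol

class HasMeanAndVariance(Protocol):
    """Static duck type: any object exposing these attributes satisfies
    the protocol, without inheriting from it."""
    mean: float
    variance: float

class CustomKC:
    """Client-defined knowledge component; note it does NOT inherit
    from the protocol, yet still satisfies it structurally."""
    def __init__(self, mean: float, variance: float, subject: str):
        self.mean = mean
        self.variance = variance
        self.subject = subject  # custom field for the client's domain

def total_mean(kcs: Iterable[HasMeanAndVariance]) -> float:
    # A static type checker can verify the structural requirement here,
    # before runtime, unlike with traditional duck typing.
    return sum(kc.mean for kc in kcs)

print(total_mean([CustomKC(0.4, 0.1, "algebra"), CustomKC(0.6, 0.2, "calculus")]))  # 1.0
```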
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. Library Overview</title>
      <p>This section describes the problem setting, the architecture of the library and how it can be
applied in practice. While TrueLearn provides a probability that can be mapped to a binary
outcome (engaged/not engaged), the probability predictions on different materials can rank them
in relation to the state of the learner, creating personalised recommendations.</p>
      <sec id="sec-3-1">
        <title>3.1. Problem Setting</title>
        <p>The scenario educational recommendation focuses on is modelling a learner ℓ in a learner
population L interacting with a series of educational resources S_ℓ ⊂ {s_1, . . . , s_M}, where s_i are
fragments/parts of different educational videos. The watch interactions happen over a period of
T time steps, M being the total number of resources in the system. In this system, with a total of N
unique knowledge components (KCs), resource s_i is characterised by a set of top KCs or topics
K_i ⊂ {1, . . . , N}. We assume the presence p_k of a KC k in resource s_i and the degree d_k of KC
coverage in the educational resource are observable.</p>
        <p>The key idea is to model the probability of engagement e_{ℓ,t} ∈ {1, −1} between learner ℓ
and resource s_i at time t as a function of the learner interest θ_ℓ^I and knowledge θ_ℓ^NK based on the
top KCs covered K_i, using their presence p_k and depth of topic coverage d_k.</p>
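        <p>As an illustrative sketch of this setting (the identifiers and values below are ours, not the dataset's), a resource's top KCs with their observable presence and coverage depth, together with a single interaction event, could be encoded as:</p>

```python
# Illustrative encoding of the problem setting: a resource is described
# by its top KCs, each with an observable presence flag and coverage depth.
resource_kcs = {
    "wikipedia:Bayesian_inference": {"presence": True, "depth": 0.7},
    "wikipedia:Online_machine_learning": {"presence": True, "depth": 0.3},
}

# One watch interaction: learner, resource fragment, time step, and the
# binary engagement label (1 = engaged, -1 = not engaged).
event = {"learner": "l1", "resource": "video_42/part_3", "t": 5, "engagement": 1}

# The total topic coverage of the resource over its top KCs.
total_depth = sum(kc["depth"] for kc in resource_kcs.values())
print(total_depth)  # 1.0
```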
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Architecture</title>
        <p>
          The TrueLearn library consists of several modules that contain programming logic to execute
different tasks using the library. Figure 1 outlines the main structure of the TrueLearn library.
These modules are described below.
        </p>
        <sec id="sec-3-2-1">
          <title>3.2.1. Datasets</title>
          <p>The TrueLearn dataset module integrates tools for both downloading and parsing learner
engagement datasets. Currently, PEEK [
          <xref ref-type="bibr" rid="ref19">19</xref>
          ], a publicly available learner engagement dataset, is
integrated. This module serves as a helper to integrate publicly available datasets that can be
used for conducting experiments, evaluating model performance, and analysing learner data
using TrueLearn’s visualisation capabilities.</p>
        </sec>
        <sec id="sec-3-2-2">
          <title>3.2.2. Pre-processing</title>
          <p>The pre-processing module contains utility classes designed specifically for extracting content
representations from educational materials. The extracted representations serve as the
foundation for creating KCs that can be used with IRT, KT and TrueLearn models. At present, utility
functions that create the Wikipedia-based KCs used in TrueLearn experiments [
          <xref ref-type="bibr" rid="ref20">20</xref>
          ] are included.</p>
        </sec>
        <sec id="sec-3-2-3">
          <title>3.2.3. Models</title>
          <p>The models module houses the class that stores the learner model. In this context, the
learner model refers to the data structure that represents the learner’s state (e.g. knowledge
or interest). The learner model in the library is loosely coupled with the learning algorithms,
which makes this object reusable with many other learning algorithms beyond the
TrueLearn algorithms currently included in the library. This means that the output of other
learning algorithms such as BKT and IRT can still be used to create learner representations
using this class.</p>
        </sec>
        <sec id="sec-3-2-4">
          <title>3.2.4. Learning</title>
          <p>The learning module contains the implementation of TrueLearn algorithms that can perform
training and prediction of learner engagement with transcribed videos [
          <xref ref-type="bibr" rid="ref21 ref3">21, 3</xref>
          ]. Each classifier
within this module follows an implicit interface inspired by the scikit-learn design. For training,
the fit function is used. For prediction, the predict and predict_proba functions are used to
generate a binary label and a probability value respectively. Currently, i) a set of baseline
models, ii) TrueLearn algorithms that model interest, novelty and knowledge in isolation, and
iii) an ensemble model that combines the isolated models are implemented.</p>
        </sec>
        <sec id="sec-3-2-5">
          <title>3.2.5. Metrics</title>
          <p>To evaluate learning algorithm performance, the metrics module provides an interface to
several key classification metrics, including precision, accuracy, recall and F1 score. We use
the scikit-learn API to support evaluation metrics. This opens up the opportunity to easily
incorporate more evaluation metrics without having to put significant effort into testing and
maintaining them in the future.</p>
        </sec>
        <sec id="sec-3-2-6">
          <title>3.2.6. Visualisations</title>
          <p>To effectively depict the learner state, nine different visualisations have been developed. These
visualisations can sort their output by specific study topics (KCs) or by the learner’s proficiency.
Figure 2 provides a preview of a subset of the available visualisations.
Out of these visualisations, seven are interactive, allowing the end user to click
and hover over the output to explore more details. However, they can also be saved as static
images. The remaining two, namely the Bubble Chart and Word Cloud, are exclusively static
representations due to the limitations of the libraries used for their implementation. This module
also provides the functionality to export these visualisations: dynamic output can be
saved in HTML format while static output can be saved in various image formats such as
PNG, JPEG and SVG.</p>
        </sec>
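        <p>The online fit/predict contract described for the learning module can be sketched with a toy stand-in; the running-mean model below is an illustrative placeholder, not one of the TrueLearn algorithms.</p>

```python
class RunningMeanEngagementModel:
    """Toy online engagement model: keeps a running mean of past {1, -1}
    engagement labels and predicts engagement when that mean is positive.
    Stands in for a TrueLearn classifier to show the online fit/predict flow."""

    def __init__(self):
        self.n = 0
        self.mean = 0.0

    def fit(self, label: int):
        # Online update: incorporate one event without revisiting history.
        self.n += 1
        self.mean += (label - self.mean) / self.n
        return self

    def predict_proba(self) -> float:
        # Map the running mean of {1, -1} labels into [0, 1].
        return (self.mean + 1.0) / 2.0

    def predict(self) -> int:
        return 1 if self.predict_proba() >= 0.5 else -1

model = RunningMeanEngagementModel()
for label in [1, 1, -1, 1]:  # a learner's event stream, in time order
    model.fit(label)
print(round(model.predict_proba(), 2))  # 0.75
print(model.predict())                  # 1
```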
      </sec>
      <sec id="sec-3-3">
        <title>3.3. Visualising the Learner State</title>
        <p>
          Several visualisations that communicate the AI’s learner state representation to a human user
are implemented as part of the TrueLearn library. These visualisations are inspired by the
open learner model concept where models are developed to maintain a humanly intuitive
representation [
          <xref ref-type="bibr" rid="ref22">22</xref>
          ]. Furthermore, the visualisations utilise user-friendly cues and conventions
to minimise the learning curve for the human learner. The visualisations mainly represent the
current state of the learner while there are also visualisations that can depict how the state of the
learner evolves over time (where the x-axis of the plot is time). Bar, bubble, dot, pie,
radar, rose and tree plots are implemented. Figure 2 previews
these visualisations. The TrueLearn family of algorithms represents a state using a mean and a
variance value. In two-dimensional plots such as bar plots and dot plots, the mean is the y-axis
and the confidence intervals mark the variance. In circle-based plots such as bubble plots and
rose plots, the radius of the circle represents the mean. The intensity of the colour maps to the
variance of the estimate (dark being low variance).
        </p>
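        <p>The mapping from a (mean, variance) estimate to the visual encodings described above can be sketched as follows; the 95% interval and the linear colour mapping are illustrative choices, not the library's exact rendering.</p>

```python
import math

def bar_encoding(mean: float, variance: float):
    """Map a (mean, variance) skill estimate to a bar height with a
    confidence interval, as in the two-dimensional plots."""
    half_width = 1.96 * math.sqrt(variance)  # ~95% interval
    return {"height": mean, "ci": (mean - half_width, mean + half_width)}

def colour_intensity(variance: float, max_variance: float = 1.0) -> float:
    """Map variance to colour intensity for circle-based plots:
    1.0 = dark (low variance), 0.0 = light (high variance)."""
    return 1.0 - min(variance / max_variance, 1.0)

enc = bar_encoding(0.8, 0.04)
print(enc["height"])          # 0.8
print(colour_intensity(0.0))  # 1.0 (fully dark: a confident estimate)
```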
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Usage of TrueLearn Python Library</title>
      <p>The TrueLearn Python library has two main uses, which are depicted in Figure 3.</p>
      <sec id="sec-4-1">
        <title>4.1. Personalising E-learning/Information</title>
        <p>This Python package makes it very easy to incorporate personalisation into video-based
e-learning platforms. The pre-processing module allows extracting KCs/topics from text
transcriptions of video-based learning materials by associating them with Wikipedia topics. When
a learner starts watching a video, the interaction signals can be recorded from the web
application. The TrueLearn package can instantiate a learner model for each individual learner in
the platform and use the interaction logs to update the learner model. The online learning
algorithms can continuously fit the events to the learner model. The updated model can be used
to predict engagement with a set of potential future videos (or any other type of educational
material) to rank them and provide them back to the learner as recommendations. Additionally, the
learner may request to see their current state at any given point. The visualisation module
can be used with the current learner state to create both static and interactive learner-state
visualisations that can be presented to the end-user.</p>
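        <p>The ranking step described above reduces to sorting candidate materials by their predicted engagement probability; the candidate names and probabilities below are illustrative, not real model output.</p>

```python
# Hypothetical engagement probabilities produced by a learner model
# for a set of candidate videos (illustrative values).
predictions = {
    "intro_to_probability": 0.81,
    "advanced_measure_theory": 0.35,
    "bayes_theorem_explained": 0.67,
}

# Rank candidates by predicted engagement, highest first, and
# recommend the top-k to the learner.
ranked = sorted(predictions, key=predictions.get, reverse=True)
top_2 = ranked[:2]
print(top_2)  # ['intro_to_probability', 'bayes_theorem_explained']
```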
      </sec>
      <sec id="sec-4-2">
        <title>4.2. Offline/Online Evaluation of Informational Recommenders</title>
        <p>Academics and researchers can use the TrueLearn library for conducting both online and offline
evaluations of educational/informational recommendation algorithms. If researchers need
to benchmark a new learning algorithm, they can implement the algorithm using the
common interface provided by the TrueLearn library. The Python library can then be integrated
with a web application to run online experiments and record user interactions. Similarly, offline
evaluations can be done either by using i) the existing PEEK dataset or ii) a newly integrated
dataset.</p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Experiments</title>
      <p>
        In order to validate the accuracy of the implementation, we ran a few small-scale experiments
attempting to replicate the results published in prior work [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. We used the PEEK dataset [
        <xref ref-type="bibr" rid="ref19">19</xref>
        ]
to evaluate the performance of the primary TrueLearn models proposed in prior work, namely
TrueLearn Interest, TrueLearn Novelty, and TrueLearn INK. The experimental protocol was
similar to the one used earlier. We also used a sequential experimental design: for each
learner, engagement at time t is predicted using the engagements at times 1 to t − 1. We
used the hold-out validation technique in our experiments, where the training data is used
for hyperparameter tuning. The best hyperparameter combination based on the F1 score is
identified. This combination is used with the test set to evaluate the final predictive performance.
Since engagement is predicted as a binary label in the PEEK dataset, the predictions for
each event can be combined into a confusion matrix to compute accuracy, precision, recall, and
F1 score. As in the prior publications, we calculate the weighted average of each learner’s
metrics based on their number of events.
      </p>
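      <p>The weighted averaging of per-learner metrics can be sketched as follows; the per-learner F1 scores and event counts below are illustrative, not results from the experiments.</p>

```python
def weighted_average(per_learner_metric, event_counts):
    """Average a per-learner metric, weighted by each learner's number
    of test events, as in the evaluation protocol."""
    total_events = sum(event_counts.values())
    return sum(
        per_learner_metric[l] * event_counts[l] for l in per_learner_metric
    ) / total_events

# Illustrative per-learner F1 scores and event counts.
f1 = {"l1": 0.8, "l2": 0.5}
events = {"l1": 30, "l2": 10}
print(weighted_average(f1, events))  # (0.8*30 + 0.5*10) / 40 = 0.725
```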
      <sec id="sec-5-1">
        <title>5.1. Empirical Evaluation</title>
        <p>The empirical results obtained are reported in Table 1. The reported metrics capture the predictive
performance each model obtained on the test set of the PEEK dataset.</p>
      </sec>
      <sec id="sec-5-2">
        <title>5.2. Visualisations</title>
        <p>We designed visualisations to empower students to recognise, contrast, and track the
development of their skill level across subjects of their choice. These employ our algorithms’ prediction
of the skill level (‘mean’) and the uncertainty in these predictions (‘variance’).
These computations stem from learners’ interactions with diverse learning resources and subject
matters, classified based on Wikipedia topics. The library is designed to enable learners to
effortlessly generate dynamic and static visualisations and to export them, providing
learners with visually enriched insights and thereby promoting a learner-centric, self-regulated
study experience. Figure 4 previews the learner knowledge state generated for one of the
learners in the PEEK dataset test data.</p>
        <p>
          Our approach was guided by a thorough examination of seminal research on impactful
learning visualisations. By incorporating the results from the questionnaire responses [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ], we
arrived at a selection of nine distinct visualisations: Bar Charts, Line Charts, Dot Plots, Pie
Charts, Rose Charts, Bubble Charts, Tree Maps, Radar Charts, and Word Clouds.
        </p>
      </sec>
    </sec>
    <sec id="sec-6">
      <title>6. Discussion</title>
      <p>We discuss several functional and non-functional aspects of the TrueLearn Python library.</p>
      <sec id="sec-6-1">
        <title>6.1. Library Design and Stability</title>
        <p>Throughout the development process, we adhere to design principles discussed in related
work, such as consistency, inspection, and hybrid typing. By providing a consistent training
and prediction interface for all classifiers in the TrueLearn library, we have achieved the
consistency principle. Simultaneously, for easy inspection, we exposed the internal state of
objects by means of property attributes, public attributes, and getters/setters. The reason why
we use getters/setters instead of public attributes in the classifier is to better facilitate the
hybrid typing approach. With these methods, we can perform type and value checks when the
hyperparameters of the classifier are modified, which ensures the robustness of the classifier
implementation and guides the user to pass in the correct hyperparameters. Considering that
users may want to implement their version of the KC (in order to include custom topic fields
needed by their subject domain), the TrueLearn classifier interacts with the KC by using static
duck typing whenever possible, which promotes interoperability between TrueLearn and the
client code as well as the scalability of TrueLearn itself.</p>
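        <p>The getter/setter-based hyperparameter validation described above might look like the following sketch; the classifier name, hyperparameter and its valid range are illustrative, not the library's actual code.</p>

```python
class EngagementClassifier:
    """Illustrates validating hyperparameters through a property setter,
    so type and value errors surface as soon as the attribute is modified."""

    def __init__(self, threshold: float = 0.5):
        self.threshold = threshold  # routed through the setter below

    @property
    def threshold(self) -> float:
        return self._threshold

    @threshold.setter
    def threshold(self, value):
        # Type and value checks run on every assignment, guiding the
        # user towards correct hyperparameters.
        if not isinstance(value, float):
            raise TypeError("threshold must be a float")
        if not 0.0 <= value <= 1.0:
            raise ValueError("threshold must be in [0, 1]")
        self._threshold = value

clf = EngagementClassifier()
clf.threshold = 0.7      # accepted
try:
    clf.threshold = 1.5  # rejected: out of range
except ValueError as e:
    print(e)             # threshold must be in [0, 1]
```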
        <p>TrueLearn benefits from 100% test coverage achieved through a combination of integration,
unit, and documentation tests. These tests were run on Python versions ranging from 3.7 to
3.11 on all major operating systems to ensure compatibility. By making use of periodic testing
with Continuous Integration, we can better ensure that TrueLearn works as intended
regardless of the operating environment.</p>
      </sec>
      <sec id="sec-6-2">
        <title>6.2. Maintainability</title>
        <p>
          Modularity is an important aspect of the TrueLearn library, achieved by using a collection of
Base classes that define a common interface and shared functionality. This design establishes
a foundation that can be easily extended and customised moving forward. To ensure the
straightforward integration of the TrueLearn library, our documentation includes a wide range
of examples showcasing the functionality offered by our API and visualisations. These examples
are accompanied by the necessary code to generate them, allowing for a clear understanding of
their implementation. To further enhance maintainability, we have taken several additional
measures. Firstly, we focused on minimising external dependencies wherever feasible to reduce
the risk of compatibility issues and make it easier to maintain our codebase independently.
Code consistency and readability are further enhanced by following the PEP 8 guidelines [
          <xref ref-type="bibr" rid="ref23">23</xref>
          ],
which define a set of best practices for Python code.
        </p>
        <p>Furthermore, we provide a comprehensive API reference, ofering detailed information for
each class and function. The project website 2 describes relevant information for both a potential
end-user or a contributor to familiarise with the library. Each of these comes with its own
ifne-grained examples along with descriptions of their purpose. To better explain the rationale
of the technical and design decisions made, information is provided concerning the library’s
design and style guide in the contributing section to minimise the learning curve associated
with future development.</p>
      </sec>
      <sec id="sec-6-3">
        <title>6.3. Relevance and Impact</title>
        <p>Modelling learner state in a humanly intuitive manner, requiring minimal data and exclusively
relying on individual user actions, TrueLearn offers a transparent learner model that respects
the privacy of its users and can scale to lifelong education. The development of the TrueLearn
library aims to provide both the research and developer communities with the opportunity
to seamlessly use the TrueLearn family of models in their work. The learner models utilise
Wikipedia-based entity linking to create KCs that are grounded in a publicly available
knowledge base. The content annotation can also scale to thousands of materials created in
different modalities (video, text, audio etc.).</p>
        <p>
          The impact of TrueLearn is two-fold. For development and research, the TrueLearn library
employs a design that conforms with popular machine learning libraries such as scikit-learn
and pyBKT [
          <xref ref-type="bibr" rid="ref14 ref7">14, 7</xref>
          ]. The documentation is extensive and contains detailed examples that help
with implementation. For end-use, the models employ probabilistic graphical models that are
data efficient while providing humanly intuitive visualisations that trigger meta-cognition. The
online learning algorithm updates the learner state in real time, enabling better personalisation.
A platform implementing TrueLearn can scale to a large population of users and support them
through lifelong education.
        </p>
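        <p>As an illustration of the kind of online Bayesian update such probabilistic graphical models perform, the sketch below applies a simplified TrueSkill-style correction to a Gaussian skill belief after one binary engagement event. The equations are the standard TrueSkill truncated-Gaussian correction factors; the function names, the beta value and the treatment of content difficulty are illustrative assumptions, not TrueLearn's exact implementation.</p>

```python
import math

def pdf(x):
    """Standard normal density."""
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def cdf(x):
    """Standard normal cumulative distribution."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def update_skill(mu, sigma, difficulty, engaged, beta=0.5):
    """One TrueSkill-style online update of a Gaussian skill belief
    N(mu, sigma^2) against a fixed content difficulty, given a binary
    (implicit) engagement outcome."""
    c = math.sqrt(sigma ** 2 + beta ** 2)      # combined uncertainty
    t = (mu - difficulty) / c                  # standardised skill gap
    if not engaged:
        t = -t                                 # mirror for a negative outcome
    v = pdf(t) / cdf(t)                        # additive mean correction
    w = v * (v + t)                            # variance shrinkage factor
    sign = 1.0 if engaged else -1.0
    new_mu = mu + sign * (sigma ** 2 / c) * v
    new_sigma = sigma * math.sqrt(max(1e-12, 1.0 - (sigma ** 2 / c ** 2) * w))
    return new_mu, new_sigma
```

        <p>Because the posterior variance shrinks after every observation, the model becomes more confident about a learner's skill with each interaction, which is what makes such updates data efficient.</p>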
        <p>
          The TrueLearn models advocate open learner modelling and employ open data sources
such as Wikipedia, supporting the democratisation of education [
          <xref ref-type="bibr" rid="ref2">2</xref>
          ]. The data and computational
efficiency of the models also minimises their carbon footprint. The models are currently being used
to build a platform that connects Open Educational Resources to lifelong learners, supporting
lifelong, equitable education. In an era where AI is reaching a position where it can impact society
significantly, the TrueLearn library unlocks the power of personalised information while
taking into account multiple human values that go beyond knowledge management (e.g. climate
responsibility, privacy and transparency, to name a few).
        </p>
      </sec>
      <sec id="sec-6-4">
        <title>6.4. Limitations</title>
        <p>
          While inspired by the scikit-learn library, the learning algorithms in the TrueLearn library
are not compatible with some helper functions available in scikit-learn (such as grid search).
Building seamless compatibility with these utilities would enable the TrueLearn library to be
adopted by a wider audience while minimising the development effort required to support such
powerful features. The exclusive support of online learning algorithms can also be seen as a
limitation of the current library, as many batch learning algorithms have been proposed
for educational recommendation [
          <xref ref-type="bibr" rid="ref24 ref25">24, 25</xref>
          ]. The library also does not support state-of-the-art deep
learning algorithms [
          <xref ref-type="bibr" rid="ref26 ref27">26, 27</xref>
          ]. These limitations are to be addressed in the future.
        </p>
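        <p>For context, compatibility with utilities such as grid search mainly hinges on an estimator exposing its hyper-parameters through the get_params/set_params contract. The following sketch shows that contract in isolation; the class and parameter names are illustrative assumptions, not TrueLearn's actual API.</p>

```python
class OnlineEstimatorSketch:
    """A sketch of the get_params/set_params contract that scikit-learn
    utilities (e.g. grid search) expect of an estimator. Class and
    parameter names here are illustrative, not TrueLearn's API."""

    def __init__(self, init_skill=0.0, def_var=0.5):
        # hyper-parameters must be stored verbatim under their own names
        self.init_skill = init_skill
        self.def_var = def_var

    def get_params(self, deep=True):
        # scikit-learn discovers hyper-parameters through this method
        return {"init_skill": self.init_skill, "def_var": self.def_var}

    def set_params(self, **params):
        # grid search clones an estimator by re-setting its parameters
        for name, value in params.items():
            if name not in self.get_params():
                raise ValueError(f"Unknown parameter: {name}")
            setattr(self, name, value)
        return self
```

        <p>An estimator implementing this contract (plus fit and predict) can, in principle, be dropped into scikit-learn's model-selection machinery.</p>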
      </sec>
    </sec>
    <sec id="sec-7">
      <title>7. Conclusion</title>
      <p>This work showcases TrueLearn, a Python library that models learner knowledge and interest
states to predict the engagement of learners with educational videos. The library contains
several online learning models, including ensemble models that jointly model multiple factors
affecting learner engagement. It also includes a set of visualisations that can be used to interpret
the learner’s interest/knowledge state. The learner representations and state visualisations
are comparable to the outputs of knowledge tracing models, except that TrueLearn uses watch-time
interactions rather than relying on test taking. The empirical results demonstrate that the new
implementation of the library achieves performance similar to the prior work that introduced
these algorithms, assuring correctness. The new implementation encourages educational data
mining practitioners to use this library to incorporate educational video recommendations into
e-learning systems. Researchers are encouraged to extend this library with new datasets and
online learning algorithms for educational or informational recommendations.</p>
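      <p>To make the contrast with test-based knowledge tracing concrete, implicit watch-time feedback can be reduced to a binary engagement signal by thresholding the normalised watch ratio, as sketched below. The 0.75 threshold and function name are illustrative assumptions rather than the exact labelling rule used in prior TrueLearn work.</p>

```python
def engagement_label(watch_time_s, duration_s, threshold=0.75):
    """Turn implicit watch-time feedback into a binary engagement label
    by thresholding the normalised watch ratio. The 0.75 threshold is an
    illustrative assumption, not the value used in prior TrueLearn work."""
    if duration_s <= 0:
        raise ValueError("video duration must be positive")
    ratio = min(watch_time_s / duration_s, 1.0)  # cap re-watching at 1.0
    return ratio >= threshold
```

      <p>Labels derived this way require no test taking, which is what allows engagement to be modelled from viewing behaviour alone.</p>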
      <sec id="sec-7-1">
        <title>7.1. Future Work</title>
        <p>The immediate future work entails running several user studies to evaluate the effectiveness
of the visualisations and identify ways to improve them. Incorporating the library into a
real-world e-learning platform to run online evaluations is also a top priority. We also aim to
evaluate the performance of the TrueLearn models in the context of information recommenders
that present informational content such as news and podcasts, to demonstrate the generalisability
of the models. Incorporating models that can exploit explicit feedback alongside implicit
feedback could also enhance the library and its utility. In the long term, we aim to add more
general informational recommendation algorithms to the library and mobilise the research
community to contribute various models, pre-processing techniques and evaluation metrics
from which the library can benefit.</p>
      </sec>
    </sec>
    <sec id="sec-8">
      <title>Acknowledgments</title>
      <p>This work is partially supported by the European Commission-funded project "Humane AI:
Toward AI Systems That Augment and Empower Humans by Understanding Us, our Society
and the World Around Us" (grant 820437) and the X5GON project funded by the EU’s Horizon
2020 research programme under grant No. 761758.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>B. S.</given-names>
            <surname>Bloom</surname>
          </string-name>
          ,
          <article-title>The 2 sigma problem: The search for methods of group instruction as effective as one-to-one tutoring</article-title>
          ,
          <source>Educational Researcher</source>
          <volume>13</volume>
          (
          <year>1984</year>
          )
          <fpage>4</fpage>
          -
          <lpage>16</lpage>
          . URL: http://www.jstor.org/stable/1175554.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>S.</given-names>
            <surname>Bulathwela</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Pérez-Ortiz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Halloway</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Shawe-Taylor</surname>
          </string-name>
          ,
          <article-title>Could AI Democratise Education? Socio-Technical Imaginaries of an EdTech Revolution</article-title>
          , in:
          <source>In Proc. of the NeurIPS Workshop on Machine Learning for the Developing World (ML4D)</source>
          ,
          <year>2021</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>S.</given-names>
            <surname>Bulathwela</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Pérez-Ortiz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Yilmaz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Shawe-Taylor</surname>
          </string-name>
          ,
          <article-title>Power to the Learner: Towards Human-Intuitive and Integrative Recommendations with Open Educational Resources</article-title>
          ,
          <source>Sustainability</source>
          <volume>14</volume>
          (
          <year>2022</year>
          ). URL: https://www.mdpi.com/2071-1050/14/18/11682. doi:10.3390/su141811682.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>G.</given-names>
            <surname>Rasch</surname>
          </string-name>
          ,
          <article-title>Probabilistic Models for Some Intelligence and Attainment Tests</article-title>
          , volume
          <volume>1</volume>
          ,
          <year>1960</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>R.</given-names>
            <surname>Herbrich</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Minka</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Graepel</surname>
          </string-name>
          ,
          <article-title>TrueSkill(TM): A Bayesian skill rating system</article-title>
          ,
          <source>in: Advances in Neural Information Processing Systems</source>
          <volume>20</volume>
          , MIT Press,
          <year>2007</year>
          , pp.
          <fpage>569</fpage>
          -
          <lpage>576</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>A. T.</given-names>
            <surname>Corbett</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. R.</given-names>
            <surname>Anderson</surname>
          </string-name>
          ,
          <article-title>Knowledge tracing: Modeling the acquisition of procedural knowledge</article-title>
          ,
          <source>User Modeling and User-Adapted Interaction</source>
          <volume>4</volume>
          (
          <year>1994</year>
          )
          <fpage>253</fpage>
          -
          <lpage>278</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>O.</given-names>
            <surname>Bulut</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Shin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. N.</given-names>
            <surname>Yildirim-Erbasli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Gorgun</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z. A.</given-names>
            <surname>Pardos</surname>
          </string-name>
          ,
          <article-title>An introduction to bayesian knowledge tracing with pybkt</article-title>
          ,
          <source>Psych</source>
          <volume>5</volume>
          (
          <year>2023</year>
          )
          <fpage>770</fpage>
          -
          <lpage>786</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>S.</given-names>
            <surname>Bull</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Kay</surname>
          </string-name>
          ,
          <source>Open Learner Models</source>
          , volume
          <volume>308</volume>
          ,
          <year>2010</year>
          , pp.
          <fpage>301</fpage>
          -
          <lpage>322</lpage>
          . doi:10.1007/978-3-642-14363-2_15.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>S.</given-names>
            <surname>Bull</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Brusilovsky</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Guerra</surname>
          </string-name>
          ,
          <article-title>Which learning visualisations to offer students?</article-title>
          , in:
          <string-name>
            <given-names>V.</given-names>
            <surname>Pammer-Schindler</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Pérez-Sanagustín</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Drachsler</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Elferink</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Scheffel</surname>
          </string-name>
          (Eds.),
          <source>Lifelong Technology-Enhanced Learning</source>
          , Springer International Publishing, Cham,
          <year>2018</year>
          , pp.
          <fpage>524</fpage>
          -
          <lpage>530</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>S.</given-names>
            <surname>Bull</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Brusilovsky</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Guerra</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Araújo</surname>
          </string-name>
          ,
          <article-title>Individual and peer comparison open learner model visualisations to identify what to work on next</article-title>
          ,
          <year>2016</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>R. C.</given-names>
            <surname>Martin</surname>
          </string-name>
          ,
          <article-title>Design principles and design patterns</article-title>
          ,
          <source>Object Mentor</source>
          <volume>1</volume>
          (
          <year>2000</year>
          )
          <fpage>597</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>E.</given-names>
            <surname>Gamma</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Helm</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Johnson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. M.</given-names>
            <surname>Vlissides</surname>
          </string-name>
          ,
          <article-title>Design patterns: elements of reusable object-oriented software</article-title>
          ,
          <source>Pearson Deutschland GmbH</source>
          ,
          <year>1995</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>F.</given-names>
            <surname>Pedregosa</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Varoquaux</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Gramfort</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Michel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Thirion</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Grisel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Blondel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Prettenhofer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Weiss</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Dubourg</surname>
          </string-name>
          , et al.,
          <article-title>Scikit-learn: Machine learning in python</article-title>
          ,
          <source>Journal of Machine Learning Research 12</source>
          (
          <year>2011</year>
          )
          <fpage>2825</fpage>
          -
          <lpage>2830</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>L.</given-names>
            <surname>Buitinck</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Louppe</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Blondel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Pedregosa</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Mueller</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Grisel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Niculae</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Prettenhofer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Gramfort</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Grobler</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Layton</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>VanderPlas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Joly</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Holt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Varoquaux</surname>
          </string-name>
          ,
          <article-title>API design for machine learning software: experiences from the scikit-learn project</article-title>
          ,
          <source>CoRR abs/1309.0238</source>
          (
          <year>2013</year>
          ). URL: http://arxiv.org/abs/1309.0238. arXiv:1309.0238.
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>N.</given-names>
            <surname>Milojkovic</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Ghafari</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Nierstrasz</surname>
          </string-name>
          ,
          <article-title>It's duck (typing) season!</article-title>
          ,
          <source>in: 2017 IEEE/ACM 25th International Conference on Program Comprehension (ICPC)</source>
          ,
          <year>2017</year>
          , pp.
          <fpage>312</fpage>
          -
          <lpage>315</lpage>
          . doi:10.1109/ICPC.2017.10.
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>Z.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Ma</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Xu</surname>
          </string-name>
          ,
          <article-title>An empirical study on dynamic typing related practices in python systems</article-title>
          ,
          <source>in: Proceedings of the 28th International Conference on Program Comprehension</source>
          , ICPC '20, Association for Computing Machinery, New York, NY, USA,
          <year>2020</year>
          , p.
          <fpage>83</fpage>
          -
          <lpage>93</lpage>
          . URL: https://doi.org/10.1145/3387904.3389253. doi:10.1145/3387904.3389253.
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>G.</given-names>
            <surname>van Rossum</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Lehtosalo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Ł.</given-names>
            <surname>Langa</surname>
          </string-name>
          ,
          <source>Type Hints, PEP 484</source>
          ,
          <year>2014</year>
          . URL: https://peps.python.org/pep-0484/.
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>I.</given-names>
            <surname>Levkivskyi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Lehtosalo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Ł.</given-names>
            <surname>Langa</surname>
          </string-name>
          ,
          <article-title>Protocols: Structural subtyping (static duck typing)</article-title>
          ,
          <source>PEP 544</source>
          ,
          <year>2017</year>
          . URL: https://peps.python.org/pep-0544/.
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>S.</given-names>
            <surname>Bulathwela</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Perez-Ortiz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Novak</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Yilmaz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Shawe-Taylor</surname>
          </string-name>
          ,
          <article-title>PEEK: A Large Dataset of Learner Engagement with Educational Videos</article-title>
          ,
          <source>in: Proc. of RecSys Workshop on Online Recommender Systems and User Modeling (ORSUM'21)</source>
          ,
          <year>2021</year>
          . URL: https://arxiv.org/abs/2109.03154.
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <given-names>S.</given-names>
            <surname>Bulathwela</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Pérez-Ortiz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Yilmaz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Shawe-Taylor</surname>
          </string-name>
          ,
          <article-title>TrueLearn: A Family of Bayesian Algorithms to Match Lifelong Learners to Open Educational Resources</article-title>
          ,
          <source>in: AAAI Conf. on Artificial Intelligence, AAAI 20</source>
          ,
          <year>2020</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21]
          <string-name>
            <given-names>S.</given-names>
            <surname>Bulathwela</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Pérez-Ortiz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Yilmaz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Shawe-Taylor</surname>
          </string-name>
          ,
          <article-title>Towards an Integrative Educational Recommender for Lifelong Learners</article-title>
          ,
          <source>in: AAAI Conference on Artificial Intelligence, AAAI 20</source>
          ,
          <year>2020</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [22]
          <string-name>
            <given-names>S.</given-names>
            <surname>Bull</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. D.</given-names>
            <surname>Johnson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Alotaibi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Byrne</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Cierniak</surname>
          </string-name>
          ,
          <article-title>Visualising multiple data sources in an independent open learner model</article-title>
          ,
          <source>in: Artificial Intelligence in Education: 16th International Conference, AIED 2013, Memphis, TN, USA, July 9-13, 2013, Proceedings 16</source>
          , Springer,
          <year>2013</year>
          , pp.
          <fpage>199</fpage>
          -
          <lpage>208</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          [23]
          <string-name>
            <given-names>G.</given-names>
            <surname>van Rossum</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Warsaw</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Coghlan</surname>
          </string-name>
          ,
          <source>Style Guide for Python Code, PEP 8</source>
          ,
          <year>2001</year>
          . URL: https://www.python.org/dev/peps/pep-0008/.
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          [24]
          <string-name>
            <given-names>C.</given-names>
            <surname>Lin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Chi</surname>
          </string-name>
          ,
          <article-title>Intervention-bkt: Incorporating instructional interventions into bayesian knowledge tracing</article-title>
          , in:
          <string-name>
            <given-names>A.</given-names>
            <surname>Micarelli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Stamper</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Panourgia</surname>
          </string-name>
          (Eds.),
          <source>Proc. of Int. Conf. on Intelligent Tutoring Systems</source>
          ,
          <year>2016</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          [25]
          <string-name>
            <given-names>Z. A.</given-names>
            <surname>Pardos</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. M.</given-names>
            <surname>Gowda</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. S.</given-names>
            <surname>Baker</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N. T.</given-names>
            <surname>Heffernan</surname>
          </string-name>
          ,
          <article-title>The sum is greater than the parts: Ensembling models of student knowledge in educational software</article-title>
          ,
          <source>SIGKDD Explor. Newsl</source>
          .
          <volume>13</volume>
          (
          <year>2012</year>
          )
          <fpage>37</fpage>
          -
          <lpage>44</lpage>
          . doi:10.1145/2207243.2207249.
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          [26]
          <string-name>
            <given-names>C.</given-names>
            <surname>Piech</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Bassen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Huang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Ganguli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Sahami</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L. J.</given-names>
            <surname>Guibas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Sohl-Dickstein</surname>
          </string-name>
          ,
          <article-title>Deep knowledge tracing</article-title>
          ,
          <source>in: Advances in Neural Information Processing Systems</source>
          ,
          <year>2015</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          [27]
          <string-name>
            <given-names>Z. A.</given-names>
            <surname>Pardos</surname>
          </string-name>
          , W. Jiang,
          <article-title>Designing for serendipity in a university course recommendation system</article-title>
          ,
          <source>in: Proceedings of the Tenth International Conference on Learning Analytics &amp; Knowledge</source>
          , LAK '20, Association for Computing Machinery, New York, NY, USA,
          <year>2020</year>
          , p.
          <fpage>350</fpage>
          -
          <lpage>359</lpage>
          . URL: https://doi.org/10.1145/3375462.3375524. doi:10.1145/3375462.3375524.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>