<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <issn pub-type="ppub">1613-0073</issn>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>EasyDKT: an easy-to-use framework for Deep Knowledge Tracing</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Gabriella Casalino</string-name>
          <email>gabriella.casalino@uniba.it</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Mattia Di Gangi</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Francesco Ranieri</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Daniele Schicchi</string-name>
          <email>daniele.schicchi@itd.cnr.it</email>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Davide Taibi</string-name>
          <email>davide.taibi@itd.cnr.it</email>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <kwd-group>
          <kwd>Deep Knowledge Tracing</kwd>
          <kwd>Education</kwd>
          <kwd>Deep Learning</kwd>
          <kwd>Artificial Intelligence</kwd>
        </kwd-group>
        <aff id="aff0">
          <label>0</label>
          <institution>AppTek GmbH</institution>
          ,
          <addr-line>Aachen</addr-line>
          ,
          <country country="DE">Germany</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Computer Science Department, University of Bari</institution>
          ,
          <addr-line>Bari</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>Institute for Education Technology, National Research Council of Italy</institution>
          ,
          <addr-line>Palermo</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2023</year>
      </pub-date>
      <fpage>2009</fpage>
      <lpage>2010</lpage>
      <abstract>
        <p>The goal of knowledge tracing (KT) is to track a student’s progress over time by analyzing their historical data, so as to predict their future performance on tests related to the topics they have covered. The rise of online platforms for education, where the learning process is embedded, has unlocked the potential of customized teaching, such as in intelligent tutoring systems. Thanks to ongoing advancements in KT algorithms, teachers can now be aware of students’ needs and recommend appropriate learning resources. They can also rank learning content, skipping or delaying content based on difficulty. In recent years, Deep Knowledge Tracing (DKT) has proven highly effective in solving KT tasks due to its ability to model complex long-range dependencies in test sequences, resulting in better prediction quality. The field of DKT is expanding, with numerous algorithms being proposed and implemented using various technologies. This paper introduces a new framework called EasyDKT, which simplifies the development and evaluation process for DKT algorithms. The framework aims at offering users a high level of technological abstraction, with a modular structure that covers data processing, evaluation metrics, and neural network models to be trained on custom datasets. Currently, EasyDKT supports PyTorch and TensorFlow, with plans to incorporate additional technologies in the future. Experiments on the ASSISTments skill-builder dataset 2009-2010 show a case study of students’ data analysis through EasyDKT.</p>
        <p>AIxEDU: 1st International Workshop on High-performance Artificial Intelligence Systems in Education, November 06-09.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-2">
      <title>1. Introduction</title>
      <p>Learning is the process of obtaining fresh knowledge, adopting new behaviors, acquiring new
skills, and developing new values, attitudes, and preferences. A meaningful learning experience
concerns integrating recently acquired information with existing knowledge to exploit the
acquired knowledge in several situations and contexts.</p>
      <p>
        The learning process is a highly personalized experience encompassing many activities, such as
reading, writing, listening, observing, thinking, and testing. Recognizing that every student has a
unique learning pace and requirements is crucial, so personalized learning is essential to optimize
the learning experience and ensure maximum benefit. Moreover, each student follows a
personal learning path. In this sense, Knowledge Tracing (KT) supports personalized learning
by analyzing students’ previous interactions with specific topics to predict their performance
on future tests. Teachers can use KT algorithms to pinpoint their students’ learning needs and
suggest relevant materials accordingly. This also allows them to prioritize learning content by
postponing or skipping material that may be particularly challenging. These advancements
have significantly improved the educational experience for students and have enabled teachers
to provide more individualized and effective instruction [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ].
      </p>
      <p>
        According to the Beijing consensus [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], Artificial Intelligence (AI) can support personalized
learning, offering systems capable of recognizing students’ needs and offering them valid
support [
        <xref ref-type="bibr" rid="ref3 ref4 ref5 ref6 ref7">3, 4, 5, 6, 7</xref>
        ]. New studies on KT have leveraged AI to develop self-governing systems
to monitor student competencies. Deep Knowledge Tracing (DKT) utilizes deep learning to
enhance the analysis of intricate, far-reaching connections in assessment sequences that depict
a student’s abilities [
        <xref ref-type="bibr" rid="ref8 ref9">8, 9</xref>
        ]. DKT is an expanding area investigating many algorithms developed
through various technologies. To make the usage of DKT easier, this paper proposes EasyDKT, an
innovative framework that offers users a high level of technological abstraction in implementing
DKT. We leveraged a modular design that makes the framework easy to extend with other DKT
models and to integrate custom datasets. In addition, EasyDKT abstracts the data
processing and the evaluation stage, facilitating tasks such as comparing several DKT models.
It has been implemented in Python and supports PyTorch [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ], and TensorFlow [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ].
Currently, EasyDKT implements the original DKT model proposed by Piech et al. [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ]. Such
a choice is due to the model’s importance and to the difficulty the scientific community faces in
implementing it with modern frameworks, since the original software libraries no longer work.
In this way, we contribute to the research field by offering a modern version of the original
DKT model that achieves the same performance and is easily accessible. To validate
the implementation, we present a case study using EasyDKT to analyze the well-known
ASSISTments skill-builder dataset 2009-2010. Experiments were conducted by varying the
model’s hyperparameters to validate the effectiveness of the proposed tool. We achieved a
maximum AUC value of 0.84, very close to the 0.86 reported in [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ].
      </p>
      <p>The paper is organized as follows: Section 2 briefly introduces the main ideas behind
Knowledge Tracing. Section 3 reviews the literature on Deep Knowledge Tracing, from the first model
to more recent advancements. The EasyDKT framework is then presented in Section 4, together
with details of the modules it is composed of. The experimental design and the evaluation
results are presented in Section 5. Finally, Section 6 concludes the paper and outlines future
directions for this research.</p>
    </sec>
    <sec id="sec-3">
      <title>2. Knowledge Tracing</title>
      <p>
        Corbett and Anderson proposed the first model of Knowledge Tracing (KT) in their ACT
Programming Tutor (APT), which was intended to guide students in Lisp programming activities
[
        <xref ref-type="bibr" rid="ref13">13</xref>
        ]. As illustrated in figure 1, the idea behind the KT theory is that the student’s knowledge
can be modeled if the domain knowledge is organized into a hierarchical structure of skills that
is proposed during the learning experience, so that students can master low-level skills before
approaching higher-level skills. It is assumed that students first acquire knowledge in
declarative form, followed by the acquisition of domain-specific procedural knowledge through
practical tasks. In particular, a set of rules (skills and sub-skills) that the student should
learn is defined, and for each exercise, the probability that the student has learned each rule
is evaluated. By analyzing the student’s interactions, the system can recommend activities that
improve the student’s competencies, such as analyzing and studying simpler topics preparatory
to the main topic [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ].
      </p>
      <p>Autonomous Knowledge Tracing belongs to the Intelligent Tutoring System (ITS) field, which
utilizes cognitive models to evaluate students’ understanding. The system adapts its feedback
and guidance based on the student’s knowledge, thereby enhancing the quality and speed of
learning. A model is developed for the student by analyzing their progress in a sequence of tasks.
The algorithm closely monitors each exercise’s outcomes, noting any successful or unsuccessful
attempts. This data is then used to predict how well the student will perform in the subsequent
exercises.</p>
      <p>
        Probabilistic models based on Bayes theory were mostly used to estimate learners’
knowledge states over time [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ]. However, recent advancements in Deep Learning have shown its
effectiveness in tackling KT. In Deep Knowledge Tracing, students are modeled individually.
Each student’s abilities are represented by predicted probabilities of using specific skills to solve
exercises. The model automatically extracts hidden skills from the student’s past interactions
and uses historical student data to predict their likelihood of mastering the next item and the
skills involved, along with their probability. This enables identifying students who require
extra assistance or recommending learning resources based on the acquired skills. For further
information on this topic, please refer to [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ].
      </p>
    </sec>
    <sec id="sec-4">
      <title>3. Literature Review</title>
      <p>Recurrent Neural Networks have been commonly used to address the KT problem,
demonstrating impressive results in forecasting a student’s performance by reviewing their previous
interactions. These models examine the sequence of question-answer pairs {q_t, a_t} over a period
to forecast the student’s response at time t + 1.</p>
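      <p>The question-answer pairs can be made concrete with a small sketch. In the one-hot scheme used by Piech et al. [12], each interaction becomes a vector of length twice the number of skills; this is a minimal illustration, where the function name and the 124-skill count (taken from the ASSISTments data) are our assumptions.</p>

```python
NUM_SKILLS = 124  # e.g. the ASSISTments skill-builder dataset

def encode_interaction(skill_id, correct, num_skills=NUM_SKILLS):
    """One-hot encode a (skill, correctness) pair as a vector of length
    2 * num_skills: index skill_id flags an incorrect answer, while index
    num_skills + skill_id flags a correct one."""
    x = [0.0] * (2 * num_skills)
    x[skill_id + correct * num_skills] = 1.0
    return x

# A student's history becomes a sequence of such vectors, fed one timestep
# at a time to the RNN, which outputs a per-skill probability for time t + 1.
sequence = [encode_interaction(3, 1), encode_interaction(7, 0)]
```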
      <p>
        Piech et al. [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ] were the first to introduce the concept of “Deep Knowledge Tracing”. They
suggested that the task should be formulated as a temporal application because the student’s
knowledge increases over time. The authors experimented with both classical and LSTM RNNs
to analyze the data representing the student’s history. Their goal was to trace the acquired
knowledge and predict future performances without hard-coding the student competencies,
which is a demanding task that requires expert annotators. This work leverages three different
sets of data: Simulated-5, Khan Math, and ASSISTments “skill builder”. Simulated-5 involves
4,000 virtual students answering 50 exercises based on 5 concepts. The students’ knowledge is
modeled by the Item Response Theory, and the skills improve gradually. Each exercise covers a
specific concept and is labeled with a level of difficulty. Khan Math is a collection of data
from Khan Academy, consisting of 1.4 million exercises completed by 47,495 students across 69
categories. ASSISTments is a dataset used for building an Intelligent Tutor System that helps
students with math problems. The tutor outputs a log of actions performed by the student every
time they correctly complete an exercise. This publicly available data covers the time period of
2009-2010 and is a significant resource for addressing the KT problem using ML.
      </p>
      <p>
        Subsequently, Xiong et al. [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ] have revised the preliminary results presented by Piech et al.
[
        <xref ref-type="bibr" rid="ref12">12</xref>
        ], uncovering issues that were not considered. The authors have considered the issues and
tested the RNNs on more reliable data sets. The final results show high performance achieved
by using RNNs, but the performance gap with previous models was reduced.
Since its innovative introduction, researchers have been studying the qualities of DKT. According
to Khajah et al. [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ], RNNs incorporate the recency effect, meaning the model aligns with human
reasoning processes as it gives more importance to recent events over past ones. Since DKT’s
input is the sequence of exercises a student receives in the same order, it can contextualize trial
sequences. This helps us understand how the exercise sequences impact the student’s learning.
DKT can predict a student’s performance on the next exercise based on their achievement
history and can also determine the degree of relatedness among skills.
      </p>
      <p>
        It is not possible for the DKT system to determine if a student has fully grasped a particular
concept. To tackle this problem, a new model called Dynamic Key-Value Memory Networks
(DKVMN) was suggested by Zhang et al. [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ]. This model has the ability to learn the
relationships between different concepts and give an accurate assessment of a student’s understanding
level for each individual concept. DKVMN is inspired by the Memory-Augmented Neural
Network [
        <xref ref-type="bibr" rid="ref19">19</xref>
        ], a particular neural network (NN) which exploits an external memory module to
enhance the ability of the model to capture long-term dependencies. DKVMN uses two
memory modules: a static matrix for knowledge concepts (key) and a dynamic matrix for student
competencies (value) for each concept. A comparison was made between the approach used
in the DKVMN model and the classical DKT model introduced by Piech et al. [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ] using four
distinct sets of data. The results indicate that the DKVMN model performs better in tracking
the student’s knowledge and provides a comprehensive outline of the level of mastery of each
concept for every student.
      </p>
      <p>
        Zhang et al. [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ] have suggested an alternative method to enhance the performance of RNNs
for the KT problem. They recommend analyzing the range of additional features captured by
computer-based learning platforms and incorporating them into DKT models. The authors
conducted an experiment to determine the impact of three factors on student performance:
response time, number of attempts, and whether the first action was to request help. They
then proposed a method for incorporating this information into RNN analysis. The results
showed that augmenting the features considered led to better outcomes than the original DKT
model. Minor changes were made to the original DKT structure to accommodate the richer
set of student information and contextual insights that were deemed important for achieving
improved results.
      </p>
      <p>
        Currently, deep learning models utilizing Transformers are at the forefront of solving tasks
involving temporal sequences. Such models have been exploited to tackle KT in several respects.
A study conducted by Pandey et al. [
        <xref ref-type="bibr" rid="ref20">20</xref>
        ] focused on improving Deep Knowledge Tracing (DKT)
by utilizing self-attention-based neural networks. Their approach, called SAKT, analyzes a
student’s previous actions to determine which concepts they have mastered. SAKT outperforms
previous deep learning-based systems in addressing the issue of sparse data, where students
only interact with a few concepts, resulting in limited information. The SAKT system calculates
attention weights to determine the importance of completed exercises when predicting a
student’s performance on a given exercise. By visualizing these attention weights, it becomes
easier to see which completed exercises the network relied on to make a prediction. This helps
to identify the relevant past exercises that the student used to solve the current exercise. SAKT
has been extensively tested on real-world datasets and has shown an average improvement of
4.43% in AUC compared to previous DL models.
      </p>
      <p>
        Transformers do not use feedback connections, and instead track time through positional
encoding, according to a study by [
        <xref ref-type="bibr" rid="ref21">21</xref>
        ]. However, recent developments have resulted in a
new architecture called Transformer-XL. This architecture includes a recurrence mechanism
and an updated positional encoding scheme, which allows for better capturing of longer-term
dependencies compared to both RNNs and traditional Transformer models. In their work, He et
al. [
        <xref ref-type="bibr" rid="ref22">22</xref>
        ] utilized the unique features of Transformer-XL to address issues arising from analyzing
lengthy input exercise sequences, which have negatively impacted past DL models’ performance.
The system they developed, KT-XL, was thoroughly tested on three real-world datasets and
compared with previous models such as DKT, DKVMN, and SAKT. KT-XL outperformed all
other models across the datasets, with an average improvement of 3.6%.
      </p>
      <p>
        New directions of Deep Knowledge Tracing research aim at improving students’ knowledge
modeling. A cognitive representation of students’ skills that overcomes the common assumption
of questions’ equivalent contributions has been proposed in [
        <xref ref-type="bibr" rid="ref23">23</xref>
        ]. A module to interpret the
prediction results has also been included, to facilitate the use of DKT for the analysis of
students’ behavior. The use of augmented knowledge has been proposed to better model
students’ skills. In [
        <xref ref-type="bibr" rid="ref24">24</xref>
        ] hierarchical heterogeneous knowledge structures are modeled through
knowledge-graphs, whilst Tato et al. explored the use of multi-modal data to enhance the latent
representations of students [
        <xref ref-type="bibr" rid="ref25">25</xref>
        ]. Spatial and temporal features obtained from students’ activity
history have been used to extract deeper hidden information in [
        <xref ref-type="bibr" rid="ref26">26</xref>
        ]. Students’ exercises are
used to derive spatial information that is then connected with temporal characteristics. Results
show that using more informative representations of students’ knowledge helps in creating
effective user models, leading to better predictive results than state-of-the-art algorithms for
similar tasks.
      </p>
    </sec>
    <sec id="sec-5">
      <title>4. Framework</title>
      <p>
        In this paper, we introduce EasyDKT, a user-friendly framework that enables users to
experiment with DKT algorithms through a convenient command-line interface. The framework,
shown in figure 2, consists of four modules for data management, neural network model creation,
experimentation, and validation. The framework allows users to extend its modules with
interchangeable classes and functions to further enhance its functionalities, which can be easily
selected through a configuration file. EasyDKT implements Piech et al. [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ]’s original neural
network. However, since the technologies they used are no longer available, their experiments
are not reproducible. Thus, the code has been refactored using two of the most widely used neural
network libraries, TensorFlow and PyTorch. Users can select the preferred library and
compare results obtained with different libraries and settings (as we did in the
experimental part). The configuration file outlines the necessary data for training and evaluating the
model and which module to use for managing data loading and preprocessing. It also includes
the DKT algorithm, its hyperparameters, and the evaluation metrics used to monitor training
progress and final results.
      </p>
      <p>In particular, we considered the following hyperparameters and their values, used to
create and tune Deep Knowledge Tracing models:
- Library: deep learning library (PyTorch, TensorFlow - TF);
- Optimizer: RMSProp or Adam;
- Dropout: dropout rate;
- Hidden Units: number of LSTM hidden units;
- Batch size: number of sequences to process in a batch;
- Learning rate;
- Epochs: number of epochs;
- Time Window: number of timesteps to process in a batch.</p>
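      <p>For illustration, a configuration covering these hyperparameters could look like the following sketch, shown as a Python dictionary; the concrete file format and key names used by EasyDKT are assumptions here.</p>

```python
# Hypothetical configuration mirroring the hyperparameters listed above;
# the actual EasyDKT configuration file may use different keys and syntax.
config = {
    "library": "pytorch",     # deep learning library: "pytorch" or "tensorflow"
    "optimizer": "adam",      # "adam" or "rmsprop"
    "dropout": 0.2,           # dropout rate
    "hidden_units": 200,      # number of LSTM hidden units
    "batch_size": 32,         # sequences per batch
    "learning_rate": 0.0001,
    "epochs": 50,
    "time_window": 100,       # timesteps per batch
}
```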
      <p>The framework is meant to be used as a baseline for comparisons, and as a base technology
on which to build new algorithms. For this reason, we separated four modules encapsulating the
functionalities required during a Deep Knowledge Tracing process. The four modules are
detailed in the following:</p>
      <sec id="sec-5-1">
        <title>Data managing</title>
        <p>The first module is devoted to preparing students’ data in the format required by the given
library. Only information related to the student and the answer to a given question is
considered for processing.</p>
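        <p>As a rough sketch of this step, the (student id, skill id, answer) tuples described in Section 5 can be grouped into one chronological sequence per student; this is an illustration of the idea, not the framework’s actual code.</p>

```python
from collections import defaultdict

def build_sequences(records):
    """Group raw (student_id, skill_id, correct) tuples into one
    interaction sequence per student, preserving their order."""
    sequences = defaultdict(list)
    for student_id, skill_id, correct in records:
        sequences[student_id].append((skill_id, correct))
    return dict(sequences)

records = [(1, 3, 1), (1, 7, 0), (2, 3, 1)]
sequences = build_sequences(records)
```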
      </sec>
      <sec id="sec-5-2">
        <title>Neural network model creation</title>
        <p>Based on the configuration settings, the neural network model is created. The technologies are
hidden in this module, which exposes an interface to communicate with the user through the
configuration file.</p>
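        <p>The interface this module exposes might be sketched as a small factory keyed on the configuration, so that the chosen library never leaks outside the module; the class names and configuration keys here are illustrative assumptions, not EasyDKT’s actual API.</p>

```python
class DKTModelPyTorch:
    """Stand-in for the PyTorch LSTM implementation (illustrative only)."""
    def __init__(self, hidden_units, num_skills):
        self.backend = "pytorch"
        self.hidden_units = hidden_units
        self.num_skills = num_skills

class DKTModelTensorFlow:
    """Stand-in for the TensorFlow LSTM implementation (illustrative only)."""
    def __init__(self, hidden_units, num_skills):
        self.backend = "tensorflow"
        self.hidden_units = hidden_units
        self.num_skills = num_skills

BACKENDS = {"pytorch": DKTModelPyTorch, "tensorflow": DKTModelTensorFlow}

def create_model(config, num_skills=124):
    """Single entry point: the configuration decides the backend."""
    try:
        backend = BACKENDS[config["library"].lower()]
    except KeyError:
        raise ValueError("Unsupported library: %s" % config["library"])
    return backend(config["hidden_units"], num_skills)

model = create_model({"library": "pytorch", "hidden_units": 200})
```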
      </sec>
      <sec id="sec-5-3">
        <title>Experimentation</title>
        <p>Data is then divided into a training set to create the model and a testing set to evaluate it. Since
data is sequentially analyzed, and this sequence is crucial for the deep learning models, we
considered a train-test setting rather than a more general cross-validation setting.</p>
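        <p>One simple way to realize such a split, sketched under the assumption that whole student sequences, rather than individual interactions, are assigned to either side, so that no chronological sequence is ever broken:</p>

```python
def train_test_split_sequences(sequences, train_fraction=0.8):
    """Split per-student sequences into train and test sets, keeping each
    student's whole chronological sequence on one side of the split."""
    students = sorted(sequences)
    cut = int(len(students) * train_fraction)
    train = {s: sequences[s] for s in students[:cut]}
    test = {s: sequences[s] for s in students[cut:]}
    return train, test

# Toy example: 10 students with one interaction each.
sequences = {i: [(i, 1)] for i in range(10)}
train, test = train_test_split_sequences(sequences)
```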
      </sec>
      <sec id="sec-5-4">
        <title>Validation</title>
        <p>The predictive task has been evaluated in terms of the standard classification measure Area
Under the Curve (AUC). Graphs of AUC values over epochs are also generated in order to
compare the stability and robustness of different configuration settings.</p>
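        <p>The AUC used here equals the probability that a correctly answered item receives a higher predicted score than an incorrectly answered one. A dependency-free sketch of this rank-based formulation follows (EasyDKT itself may rely on a standard library routine):</p>

```python
def auc_score(labels, scores):
    """Area under the ROC curve via the Mann-Whitney formulation: the
    fraction of (positive, negative) pairs where the positive example is
    scored higher, counting ties as half a win."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    if not pos or not neg:
        raise ValueError("AUC needs both positive and negative labels")
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

auc = auc_score([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8])  # 0.75
```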
      </sec>
    </sec>
    <sec id="sec-6">
      <title>5. Experiments</title>
      <sec id="sec-6-0">
        <title>5.1. Data</title>
        <p>The exploited data is the ASSISTments skill-builder dataset 2009-2010, created through an
online platform and made freely available. This dataset includes mathematical skill-builder
problems that students can solve, and their answers are recorded. The problems presented are
designed to test specific skills, and some questions may require knowledge of multiple skills. To
complete the test, students must answer three consecutive questions correctly. It is important to
note that if a student uses any support or tutoring system provided by the platform itself, the
question will be marked as incorrect. Additionally, students receive instant feedback to know if
they answered the question correctly.</p>
        <p>The dataset contains 4217 problems and a total of 124 skills. As some students may solve the
same problem, the dataset actually consists of 522,000 tuples. Each tuple comprises three parts:
a student identifier (id), a skill identifier, and the answer to the problem. If a problem relates to
multiple skills, there will be multiple tuples with the same student id and answer but different
skill identifiers.</p>
        <p>The dataset was divided within the framework to use 3361 items for Deep Learning model
training and 856 items for testing.</p>
      </sec>
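      <p>The multi-skill convention above can be illustrated with a small helper (ours, not part of the dataset tooling): an answer to a problem tagged with several skills expands into one tuple per skill.</p>

```python
def expand_multiskill(student_id, skill_ids, correct):
    """Expand one answer to a multi-skill problem into one
    (student, skill, answer) tuple per skill involved."""
    return [(student_id, skill_id, correct) for skill_id in skill_ids]

rows = expand_multiskill(42, [3, 7, 11], 1)
```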
      <sec id="sec-6-1">
        <title>5.2. Results</title>
        <p>During the experimental phase, our main objective was to determine the most efective approach
for implementing the DKT via PyTorch and TensorFlow deep learning libraries. To achieve this,
we conducted a series of rigorous experiments, carefully adjusting various model parameters,
including the dropout rate, learning rate, and optimizer (i.e. RMSProp and Adam), and ensuring
a comprehensive evaluation.</p>
        <p>The model evaluation process has been carried out by computing the AUC (Area Under the
ROC Curve) on the tests performed on the ASSISTments skill-builder data. The ROC curve is
a statistical method that gauges the accuracy of a diagnostic test across the full spectrum of
potential values. Measuring the area beneath the ROC curve is a widely recognized approach
for assessing machine learning models.</p>
        <p>Figures 3 and 4 display the highest and lowest performance results we obtained while
conducting our experiments using the PyTorch and TensorFlow frameworks. We experimented
with various configurations by adjusting the model’s hyperparameters and looking at the AUC
over 50 epochs. A full overview of the conducted experiments can be found in table
1.</p>
        <p>
          Concerning PyTorch, the worst performance was observed when Adam was the optimizer
with a learning rate of 0.01. In this case, there is an increase in the instability of the AUC value
obtained at each iteration. The best performance was achieved with the same configuration
but with the learning rate reduced to 0.0001. Starting from the twelfth training cycle, the AUC
value stabilized at 0.84 until the end of the execution, resulting in a 2% increase in the final AUC
value. With the best configuration, our framework achieves an AUC score of 0.84, comparable
to the original score of 0.86 reported in [
          <xref ref-type="bibr" rid="ref12">12</xref>
          ], which was developed using outdated technologies
that are now difficult to replicate. Figure 4 instead reports the results of our framework
when using TensorFlow. The lowest performance was observed with the configuration that
employed the Adam optimizer, did not use dropout, and had a learning rate of 0.0001. Despite using
the configuration that performed best with PyTorch, there was no improvement in the
outcome with TensorFlow. In fact, the AUC value remained constant at 0.76, which
is worse than the previous performance. On the other hand, the highest performance was
achieved with the same configuration as the worst case but with a learning rate of 0.001. In this
case, changing the optimizer from RMSProp to Adam does not significantly impact performance,
as the AUC fluctuates between 0.78 and 0.79.
        </p>
        <p>We have observed that the learning rate has the most significant impact on improving the
model’s performance. In addition, even when using the same configuration, models developed
with the TensorFlow and PyTorch libraries show different performances. This highlights the
differences between these two libraries, despite both aiming to achieve the same goal. Probably,
different initialization parameters affect the training of the model, leading to a result gap of
5%. In conclusion, the PyTorch implementation with the Adam optimizer provided the best results.
This highlights the importance of having an easy-to-use parameterizable workflow for DKT to
identify the best configuration for a given problem quickly.</p>
      </sec>
    </sec>
    <sec id="sec-7">
      <title>6. Conclusion and Future Works</title>
      <p>We have introduced a modular framework that makes Deep Knowledge Tracing more accessible
and efficient. Our framework incorporates PyTorch and TensorFlow, the two most significant
deep-learning libraries, giving users the flexibility to choose their preferred one, while
encapsulating the implementation details, so that users are not necessarily required to know how
to use these libraries and their syntax. A simple configuration file is used to define the
experimental setup. EasyDKT is a first attempt to develop a simple tool for DKT, implemented with
recent technologies. It has been conceived as the core of a more complex tool where more
recent DKT methodologies are encapsulated as hierarchical building blocks. A case study
exploring the use of EasyDKT with the ASSISTments skill-builder dataset has been presented.
In particular, we studied how the neural-network hyperparameters could affect the learning
performance of the tool. Our experiments have demonstrated that when PyTorch is utilized, our
framework attains state-of-the-art performance. However, some challenges are encountered
when using TensorFlow.</p>
      <p>Our future work involves improving the software structure of the framework to enable the
execution of various algorithms with multiple execution parameters, which can result in more
accurate outcomes. Additionally, we aim to modify the NN model structure based on the
preferences of experienced users. We plan to analyze specific components of the TensorFlow
implementation, such as tensor initialization, and compare them with those of Theano to
improve on the current results (AUC = 0.78). We also propose modifying the dataset read-from-file section to
make it more adaptable to different datasets and implementations. This will make the framework
more versatile and usable with different datasets, structures, and formats. Finally, we plan to
enhance the tool by including more advanced DKT algorithms, and by providing an intuitive
interface to facilitate the user experience.</p>
      <sec id="sec-7-1">
        <title>Acknowledgment</title>
        <p>Gabriella Casalino acknowledges funding from the European Union PON project Ricerca e
Innovazione 2014-2020, DM 1062/2021. This work is partially funded by Bando per Progetti di
Ricerca GNCS 2023 - CUP E53C22001930001. G.C. is a member of the INdAM GNCS research
group.</p>
      </sec>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>A. T.</given-names>
            <surname>Corbett</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. R.</given-names>
            <surname>Anderson</surname>
          </string-name>
          ,
          <article-title>Knowledge tracing: Modeling the acquisition of procedural knowledge</article-title>
          ,
          <source>User Modeling and User-Adapted Interaction</source>
          4 (
          <year>1994</year>
          )
          <fpage>253</fpage>
          -
          <lpage>278</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          UNESCO,
          <source>Beijing consensus on artificial intelligence and education</source>
          ,
          <year>2019</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>D.</given-names>
            <surname>Taibi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Fulantelli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Monteleone</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Schicchi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Scifo</surname>
          </string-name>
          ,
          <article-title>An innovative platform to promote social media literacy in school contexts</article-title>
          ,
          <source>in: ECEL 2021 20th European Conference on e-Learning</source>
          , Academic Conferences International limited,
          <year>2021</year>
          , p.
          <fpage>460</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>G.</given-names>
            <surname>Lo Bosco</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Pilato</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Schicchi</surname>
          </string-name>
          ,
          <article-title>Deepeva: a deep neural network architecture for assessing sentence complexity in italian and english languages</article-title>
          ,
          <source>Array</source>
          <volume>12</volume>
          (
          <year>2021</year>
          )
          <fpage>100097</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>D.</given-names>
            <surname>Schicchi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Pilato</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Lo Bosco</surname>
          </string-name>
          ,
          <article-title>Attention-based model for evaluating the complexity of sentences in english language</article-title>
          ,
          <source>in: 2020 IEEE 20th Mediterranean Electrotechnical Conference (MELECON)</source>
          , IEEE,
          <year>2020</year>
          , pp.
          <fpage>221</fpage>
          -
          <lpage>225</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>G.</given-names>
            <surname>Casalino</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Castellano</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Zaza</surname>
          </string-name>
          ,
          <article-title>Neuro-fuzzy systems for learning analytics</article-title>
          ,
          <source>in: International Conference on Intelligent Systems Design and Applications</source>
          , Springer,
          <year>2021</year>
          , pp.
          <fpage>1341</fpage>
          -
          <lpage>1350</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>G.</given-names>
            <surname>Casalino</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Castellano</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Mencar</surname>
          </string-name>
          ,
          <article-title>Incremental and adaptive fuzzy clustering for virtual learning environments data analysis</article-title>
          ,
          <source>in: 2019 23rd International Conference Information Visualisation (IV)</source>
          , IEEE,
          <year>2019</year>
          , pp.
          <fpage>382</fpage>
          -
          <lpage>387</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>G.</given-names>
            <surname>Casalino</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Grilli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Limone</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Santoro</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Schicchi</surname>
          </string-name>
          , et al.,
          <article-title>Deep learning for knowledge tracing in learning analytics: an overview</article-title>
          .,
          <source>TeleXbe</source>
          (
          <year>2021</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>X.</given-names>
            <surname>Song</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Cai</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Yang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Yang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <article-title>A survey on deep learning based knowledge tracing</article-title>
          ,
          <source>Knowledge-Based Systems</source>
          <volume>258</volume>
          (
          <year>2022</year>
          )
          <fpage>110036</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>A.</given-names>
            <surname>Paszke</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Gross</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Massa</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Lerer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Bradbury</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Chanan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Killeen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Lin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Gimelshein</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Antiga</surname>
          </string-name>
          , et al.,
          <article-title>Pytorch: An imperative style, high-performance deep learning library</article-title>
          ,
          <source>Advances in neural information processing systems</source>
          <volume>32</volume>
          (
          <year>2019</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>M.</given-names>
            <surname>Abadi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Agarwal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Barham</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Brevdo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Citro</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G. S.</given-names>
            <surname>Corrado</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Davis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Dean</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Devin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Ghemawat</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Goodfellow</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Harp</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Irving</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Isard</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Jia</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Jozefowicz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Kaiser</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Kudlur</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Levenberg</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Mané</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Monga</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Moore</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Murray</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Olah</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Schuster</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Shlens</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Steiner</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Sutskever</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Talwar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Tucker</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Vanhoucke</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Vasudevan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Viégas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Vinyals</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Warden</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Wattenberg</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Wicke</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Yu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Zheng</surname>
          </string-name>
          ,
          <source>TensorFlow: Large-scale machine learning on heterogeneous systems</source>
          ,
          <year>2015</year>
          . URL: https://www.tensorflow.org/, software available from tensorflow.org.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>C.</given-names>
            <surname>Piech</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Bassen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Huang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Ganguli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Sahami</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L. J.</given-names>
            <surname>Guibas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Sohl-Dickstein</surname>
          </string-name>
          ,
          <article-title>Deep knowledge tracing</article-title>
          ,
          <source>Advances in Neural Information Processing Systems</source>
          (
          <year>2015</year>
          )
          <fpage>505</fpage>
          -
          <lpage>513</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>A. T.</given-names>
            <surname>Corbett</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. R.</given-names>
            <surname>Anderson</surname>
          </string-name>
          ,
          <article-title>Knowledge tracing: Modeling the acquisition of procedural knowledge</article-title>
          ,
          <source>User Modeling and User-Adapted Interaction</source>
          (
          <year>1994</year>
          )
          <fpage>253</fpage>
          -
          <lpage>278</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>O.</given-names>
            <surname>Bulut</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Shin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. N.</given-names>
            <surname>Yildirim-Erbasli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Gorgun</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z. A.</given-names>
            <surname>Pardos</surname>
          </string-name>
          ,
          <article-title>An introduction to bayesian knowledge tracing with pybkt</article-title>
          ,
          <source>Psych</source>
          <volume>5</volume>
          (
          <year>2023</year>
          )
          <fpage>770</fpage>
          -
          <lpage>786</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>G.</given-names>
            <surname>Abdelrahman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Q.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Nunes</surname>
          </string-name>
          ,
          <article-title>Knowledge tracing: A survey</article-title>
          ,
          <source>ACM Computing Surveys</source>
          <volume>55</volume>
          (
          <year>2023</year>
          )
          <fpage>1</fpage>
          -
          <lpage>37</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>X.</given-names>
            <surname>Xiong</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Zhao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E. G.</given-names>
            <surname>Van Inwegen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. E.</given-names>
            <surname>Beck</surname>
          </string-name>
          ,
          <article-title>Going deeper with deep knowledge tracing</article-title>
          ,
          <source>Proceedings of the 9th International Conference on Educational Data Mining</source>
          (
          <year>2016</year>
          )
          <fpage>545</fpage>
          -
          <lpage>550</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>M.</given-names>
            <surname>Khajah</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. V.</given-names>
            <surname>Lindsey</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. C.</given-names>
            <surname>Mozer</surname>
          </string-name>
          ,
          <article-title>How deep is knowledge tracing?</article-title>
          ,
          <source>arXiv preprint arXiv:1604.02416v2</source>
          (
          <year>2016</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>L.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Xiong</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Zhao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Botelho</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N. T.</given-names>
            <surname>Heffernan</surname>
          </string-name>
          ,
          <article-title>Incorporating rich features into deep knowledge tracing</article-title>
          ,
          <source>L@S '17: Proceedings of the Fourth (2017) ACM Conference on Learning @ Scale</source>
          (
          <year>2017</year>
          )
          <fpage>169</fpage>
          -
          <lpage>172</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>A.</given-names>
            <surname>Graves</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Wayne</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Reynolds</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Harley</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Danihelka</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Grabska-Barwińska</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. G.</given-names>
            <surname>Colmenarejo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Grefenstette</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Ramalho</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Agapiou</surname>
          </string-name>
          , et al.,
          <article-title>Hybrid computing using a neural network with dynamic external memory</article-title>
          ,
          <source>Nature</source>
          <volume>538</volume>
          (
          <year>2016</year>
          )
          <fpage>471</fpage>
          -
          <lpage>476</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <given-names>S.</given-names>
            <surname>Pandey</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Karypis</surname>
          </string-name>
          ,
          <article-title>A self-attentive model for knowledge tracing</article-title>
          ,
          <source>arXiv preprint arXiv:1907.06837</source>
          (
          <year>2019</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21]
          <string-name>
            <given-names>A.</given-names>
            <surname>Vaswani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Shazeer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Parmar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Uszkoreit</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Jones</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. N.</given-names>
            <surname>Gomez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Ł.</given-names>
            <surname>Kaiser</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Polosukhin</surname>
          </string-name>
          ,
          <article-title>Attention is all you need</article-title>
          ,
          <source>Advances in neural information processing systems</source>
          <volume>30</volume>
          (
          <year>2017</year>
          )
          <fpage>5998</fpage>
          -
          <lpage>6008</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [22]
          <string-name>
            <given-names>Y.</given-names>
            <surname>He</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Hu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Zu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Sun</surname>
          </string-name>
          ,
          <article-title>Kt-xl: A knowledge tracing model for predicting learning performance based on transformer-xl</article-title>
          ,
          <source>ACM TURC'20: Proceedings of the ACM Turing Celebration Conference - China</source>
          (
          <year>2020</year>
          )
          <fpage>175</fpage>
          -
          <lpage>179</lpage>
          . doi:10.1145/3393527.3393557.
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          [23]
          <string-name>
            <given-names>J.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Huang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Q.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Luo</surname>
          </string-name>
          ,
          <article-title>Improving interpretability of deep sequential knowledge tracing models with question-centric cognitive representations</article-title>
          ,
          <source>arXiv preprint arXiv:2302.06885</source>
          (
          <year>2023</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          [24]
          <string-name>
            <given-names>Q.</given-names>
            <surname>Ni</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Wei</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Zhao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>He</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Zheng</surname>
          </string-name>
          ,
          <article-title>Hhskt: A learner-question interactions based heterogeneous graph neural network model for knowledge tracing</article-title>
          ,
          <source>Expert Systems with Applications</source>
          <volume>215</volume>
          (
          <year>2023</year>
          )
          <fpage>119334</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          [25]
          <string-name>
            <given-names>A.</given-names>
            <surname>Tato</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Nkambou</surname>
          </string-name>
          ,
          <article-title>Towards a multi-modal deep learning architecture for user modeling</article-title>
          ,
          <source>in: The International FLAIRS Conference Proceedings</source>
          , volume
          <volume>36</volume>
          ,
          <year>2023</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          [26]
          <string-name>
            <given-names>L.</given-names>
            <surname>Lyu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Yun</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Yang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <article-title>Deep knowledge tracing based on spatial and temporal representation learning for learning performance prediction</article-title>
          ,
          <source>Applied Sciences</source>
          <volume>12</volume>
          (
          <year>2022</year>
          )
          <fpage>7188</fpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>