<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Integrating Expert Knowledge in Matrix Factorization</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Changsheng Chen</string-name>
          <email>changsheng.chen@kuleuven.be</email>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Robbe D'hondt</string-name>
          <email>robbe.dhondt@kuleuven.be</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Alireza Gharahighehi</string-name>
          <email>alireza.gharahighehi@kuleuven.be</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Celine Vens</string-name>
          <email>celine.vens@kuleuven.be</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Wim Van den Noortgate</string-name>
          <email>wim.vandennoortgate@kuleuven.be</email>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Department of Public Health and Primary Care, KU Leuven</institution>
          ,
          <addr-line>Campus KULAK, Kortrijk</addr-line>
          ,
          <country country="BE">Belgium</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Faculty of Psychology and Educational Sciences, KU Leuven</institution>
          ,
          <addr-line>Campus KULAK, Kortrijk</addr-line>
          ,
          <country country="BE">Belgium</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>imec research group itec, KU Leuven</institution>
          ,
          <addr-line>Kortrijk</addr-line>
          ,
          <country country="BE">Belgium</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>Matrix Factorization (MF) has been widely used to build recommender systems, which play an important role in adaptive lifelong learning systems. Yet, as a data-driven approach, the optimization of the MF estimation mainly focuses on reducing prediction errors in a black-box way, which leads to a lack of interpretability of the latent factors and prevents a validity examination of the resulting estimates and predictions. To address this problem, we propose a revised version of MF that integrates expert knowledge. Specifically, the item-factor matrix is constrained in the estimation based on expert-defined item-factor information, and these constraints deliver an interpretation of the resulting latent factors. We illustrate this method with an empirical data set with 60 items and 4,645 students from the Trends in International Mathematics and Science Study (TIMSS). The results show that the revised MF has slightly lower prediction performance than the traditional MF, but provides interpretable latent factors and validated user-factor estimates, and accelerates the hyperparameter tuning.</p>
      </abstract>
      <kwd-group>
        <kwd>matrix factorization</kwd>
        <kwd>expert knowledge</kwd>
        <kwd>interpretability</kwd>
        <kwd>validity</kwd>
        <kwd>recommender system</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Recommending personalized learning materials or exercises to facilitate continuous learning in
learners or trainees constitutes a pivotal component in the development of adaptive systems in
a lifelong learning context [
        <xref ref-type="bibr" rid="ref1 ref2 ref3 ref4">1-4</xref>
        ]. Personalization entails the selection or design of relevant
materials based on the unique characteristics of each user, whereas adaptivity signifies the
system's ability to adjust to evolving needs and circumstances of users over time. In the past
years, several techniques have been proposed to make personalized and adaptive
recommendations within learning systems, and one of those is Matrix Factorization (MF) [
        <xref ref-type="bibr" rid="ref5 ref6 ref7 ref8 ref9">5–9</xref>
        ].
      </p>
      <p>
        MF was originally proposed to recommend commercial products for online merchants (such
as Netflix) based on a given user-item score matrix (rows: users; columns: items; entries: scores
given by users to the items) [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ]. The scores can be binary, indicating whether users watched
or liked a movie, or polytomous or continuous, referring to users’ levels of interest. The input
scores provide personalized information on users’ historical interaction with the given items.
The principle behind the basic MF is that it decomposes a score matrix into two low-rank
matrices, i.e., the user-factor matrix and the item-factor matrix. The user-factor matrix provides
the relationship information between users and latent factors, while the item-factor matrix
offers information on the link between items and latent factors. The product of the two low-rank
matrices constitutes the reconstructed scores, which provide the predictions. With these
predictions, the system can personalize item recommendations. For example, items with higher
predicted scores can be prioritized in movie recommendations. In addition, the relevant
estimates can be updated over time to realize adaptivity.
      </p>
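      <p>The decomposition described above can be sketched numerically; the matrix sizes and values below are invented for illustration, not taken from the study.</p>
      <preformat>
```python
# Minimal sketch of the MF reconstruction: a user-factor matrix P and an
# item-factor matrix Q, multiplied together, yield predicted scores for
# every user-item pair. Sizes and values here are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
K = 2                      # number of latent factors (the rank)
P = rng.random((3, K))     # user-factor matrix: 3 users
Q = rng.random((4, K))     # item-factor matrix: 4 items

R_hat = P @ Q.T            # reconstructed score matrix (3 x 4)
print(R_hat.shape)
```
      </preformat>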
      <p>
        Theoretically, MF can be seen as a data-driven black-box approach and it has been criticized
because it is difficult to interpret resulting latent factors and hence predictions or
recommendations [
        <xref ref-type="bibr" rid="ref11 ref12">11–13</xref>
        ]. From the perspective of the application domain, traditional MF
produces predictions based only on a score matrix, without considering any other
human-defined information, which might cause concerns about the validity and interpretability of
results. Some studies have proposed approaches to account for those issues. For example,
Abdollahi and Nasraoui [14] and Vlachos et al. [15] proposed co-cluster approaches with different
methodological designs to give explanations for resulting recommendations. These co-cluster
approaches quantify similarities within items or users based on the given user-item interaction
patterns, and the recommendations can be justified by identifying certain clusters that the
corresponding scores belong to [14,15]. While these similarities can be seen as a form of trusted
information regarding reasoning recommendations, it remains challenging to comprehend the
resulting factors and predictions for both approaches. In addition, the two co-cluster approaches
are based on business scenarios that show different properties compared to learning systems.
In educational sciences, learning materials or test items are usually designed for improving or
measuring students’ skills (considered as latent factors), so the interpretation of latent factors
is always important for learners and teachers. For example, an item designed for measuring or
improving students’ English verbal skill cannot provide information about students’ geometry
skill, and recommendations need to consider the improvement on targeted skills. Furthermore,
in business applications, missing rating scores are typically coded as zero. In learning systems,
a zero usually indicates an incorrect answer to a test item or a failed learning task. Those
domain-specific differences cause special considerations in the development of analysis
techniques.
      </p>
      <p>To address the aforementioned concerns, this study proposes a revised version of MF that
integrates expert knowledge to make the resulting latent factors and estimates interpretable. Specifically, in
the traditional MF, a given response matrix is approximated by a user-factor matrix and an
item-factor matrix. In the revised MF, we add constraints to the item-factor matrix with the help
of expert knowledge by fixing certain entries to zero and skipping them in the optimization
routine. In other words, we only optimize the full user-factor matrix and the non-zero entries
of the item-factor matrix. These constraints match expert-defined factor tags with latent factors to
give interpretation to relevant estimates, which is the central idea in the revised MF. With the
revised MF, the follow-up research questions include how integrating domain experts’ opinion
will affect the prediction performance and the user-factor matrix compared to the original MF.
To answer these questions, an empirical study is presented below.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Method</title>
      <p>The following sections start with an introduction to the empirical data set. After that, the
technical details of the proposed revised MF are introduced. Then, the analysis design for
comparing the traditional MF and the revised MF is explained.</p>
      <p>2.1. Data</p>
      <p>An empirical data set from Trends in International Mathematics and Science Study (TIMSS) was
used in this study. The selected data were collected from 4th-grade students on the mathematics
test in TIMSS 2019 in Flanders, Belgium [16]. In total, there were 213 items designed for 4,655
students. The response matrix contained a large number of missing values because of the item
administration design [17]. In particular, 86.73% of entries in the response matrix were missing.
On average, each student had 28 responses and each item had 617. Furthermore, we
followed relevant instructions of the codebook from the corresponding database in TIMSS 2019
to convert original multi-categorical responses to binary responses (i.e., right or wrong) [17,18].
Specifically, “Correct Response” and “Partially Correct Response” were all coded as 1 and
“Incorrect Response” was coded as 0. Additionally, to enable the implementation of cross-validation
for tuning hyperparameters, we excluded 10 students and 4 items with fewer than five
responses. Then, for the sake of illustration, we randomly selected 60 items. After that, the final
response matrix contained 4,645 students as rows and 60 items as columns.</p>
      <p>In TIMSS, each item is designed and labelled for assessing certain latent factors defined by
domain experts, and we adopted latent factor tags under the “Topic Area” label system provided
by the online codebook [16,17]. Table 1 provides examples of items with those latent factor tags.
There are seven factor tags in total and each item is linked to one of seven factor tags. We used
the tag information to construct an expert-defined item-factor matrix where the entry 1
indicated that the item can contribute information to the corresponding factor and the entry 0
referred to the opposite.</p>
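      <p>As a sketch, the construction of such an expert-defined item-factor matrix from one-tag-per-item labels could look as follows; the item codes and the tag subset here are invented for illustration, not taken from the TIMSS codebook.</p>
      <preformat>
```python
# Hypothetical sketch: build a binary item-factor matrix from expert tags,
# one tag per item (as in the TIMSS "Topic Area" system). Entry 1 means the
# item contributes information to that factor; 0 means it does not.
import numpy as np

factors = ["Fractions and Decimals", "Measurement",
           "Expressions, Simple Equations, and Relationships"]
item_tags = [("item_01", "Measurement"),
             ("item_02", "Fractions and Decimals"),
             ("item_03", "Measurement")]          # invented example items

M = np.zeros((len(item_tags), len(factors)), dtype=int)
for row, (_, tag) in enumerate(item_tags):
    M[row, factors.index(tag)] = 1

print(M.sum(axis=1))   # each item maps to exactly one factor
```
      </preformat>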
      <sec id="sec-2-1">
        <title>2.2. Method Implementation</title>
        <p>Suppose that an observed user-item rating matrix R is given, where r_ui refers to the rating
of user u for item i. The rating matrix can be approximated by a product of two decomposed
matrices P and Q with a defined rank K (equal to the number of latent factors), which is
R ≈ R̂ = PQ^T. (1)
It can be further expressed as r_ui ≈ r̂_ui = p_u q_i^T, where r̂_ui refers to the predicted rating that is the
product of the user-factor vector p_u = (p_u1, …, p_uK) for user u and the transpose of the
item-factor vector q_i = (q_i1, …, q_iK) for item i [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ]. To obtain these two decomposed matrices, a loss
function is constructed to minimize the reconstruction error over the non-missing entries in R, which is
min Σ_(u,i) (r_ui − p_u q_i^T)^2 + λ (‖p_u‖^2 + ‖q_i‖^2). (2)
Here, λ is the regularization parameter for controlling the overfitting. To minimize the defined
loss function, we slightly revised the stochastic gradient descent optimization method proposed
by Simon Funk [19], which is
p_uk ← p_uk + γ (e_ui q_ik − λ p_uk)
q_ik ← q_ik + γ (e_ui p_uk − λ q_ik) when q_ik ≠ 0,
where the prediction error is defined as e_ui = r_ui − p_u q_i^T and γ denotes the learning rate.</p>
        <p>The user-factor and item-factor entries, denoted as p_uk and q_ik, are repeatedly computed
with γ and λ, and q_ik is only computed when it is not equal to zero. In other words, when q_ik is equal
to zero, that entry is skipped in the optimization routine. The iteration for p_uk and q_ik
is stopped when the defined error threshold or the defined number of iteration steps is reached.
Furthermore, P and Q are initialized with random numbers. In order to integrate expert
knowledge, certain entries of Q are constrained to zero based on a given expert-defined
binary item-factor matrix that maps items to expert-defined factors and delivers
factor tags to the corresponding latent factors. The zero constraint means that certain items cannot
contribute any information to the defined factors.</p>
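        <p>The constrained optimization can be sketched as follows; this is a minimal Python illustration of masked Funk-style updates, not the authors’ R implementation, and the function name and default values are assumptions.</p>
        <preformat>
```python
# Sketch of the revised MF: Funk-style SGD in which item-factor entries that
# the expert-defined binary mask sets to zero stay zero and are skipped.
import numpy as np

def revised_mf(R, mask, gamma=0.01, lam=0.02, steps=200, seed=0):
    """R: ratings with np.nan for missing entries; mask: items x factors (0/1)."""
    rng = np.random.default_rng(seed)
    n_users, n_items = R.shape
    K = mask.shape[1]                              # rank = number of factors
    P = rng.uniform(0, 1, (n_users, K))
    Q = rng.uniform(0, 1, (n_items, K)) * mask     # constrained entries start at 0
    for _ in range(steps):
        for u, i in np.argwhere(~np.isnan(R)):     # only non-missing ratings
            p_old = P[u].copy()
            e = R[u, i] - p_old @ Q[i]             # prediction error e_ui
            P[u] += gamma * (e * Q[i] - lam * p_old)
            Q[i] += gamma * (e * p_old - lam * Q[i]) * mask[i]  # skip zeros
    return P, Q
```
        </preformat>
        <p>With this masking, the k-th column of Q (and hence of P) inherits the meaning of the k-th expert-defined factor tag.</p>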
        <p>In the illustration analysis, specifically, P and Q were initialized with numbers
generated from a uniform distribution U(0, 1). Then, for the revised MF, certain entries in Q
were constrained to zero based on the aforementioned expert-defined item-factor matrix for the
TIMSS data set. The number of factors (i.e., the defined rank K) was fixed to seven. The error
threshold was defined as less than 0.01 and the iteration steps followed the hyperparameter settings
(see Table 2).</p>
      </sec>
      <sec id="sec-2-2">
        <title>2.3. Design and Analysis</title>
        <p>The evaluation of the two versions of MF (with and without the adaptation) was performed with R 4.3.2 [20]
at the Flemish Supercomputer Center (Vlaams Supercomputer Centrum; VSC) on an Intel Xeon
8360Y processor with 72 cores. In particular, 71 cores were used for parallel computation for
tuning relevant hyperparameters, and 1 core was used for training the final model and the result
analysis.</p>
        <p>For the evaluation, first, 80% of entries from the input response matrix were
selected as training data to estimate the model parameters, and the remaining 20% of entries
were used as test data to compare the observed and predicted scores. The selection
was random, but under the condition that each item and each student had at
least one available response in both the training and test data. This is because MF cannot make
predictions for completely new items or users, which is denoted as the cold-start problem in the
literature [13,21].</p>
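        <p>A naive way to realize this constrained random split is rejection sampling: redraw until every student and item keeps at least one response on each side. The sketch below is an assumption about the procedure, not the authors’ code.</p>
        <preformat>
```python
# Rejection-sampling sketch of the constrained 80/20 split: resample until
# every row (student) and column (item) has at least one observed entry in
# both the training and the test part.
import numpy as np

def constrained_split(R, test_frac=0.2, seed=0, max_tries=1000):
    rng = np.random.default_rng(seed)
    obs = np.argwhere(~np.isnan(R))                # observed (user, item) pairs
    for _ in range(max_tries):
        in_test = rng.random(len(obs)) < test_frac
        train, test = obs[~in_test], obs[in_test]
        covered = all(
            np.isin(np.arange(n), part[:, axis]).all()
            for part in (train, test)
            for axis, n in ((0, R.shape[0]), (1, R.shape[1]))
        )
        if covered:
            return train, test
    raise RuntimeError("no valid split found; resample or relax the condition")
```
        </preformat>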
        <p>Second, a 2-fold cross-validation was implemented within the training step to tune
hyperparameters. In particular, training data was further divided into training and validation
data with the same random selection operation. The average Mean Squared Error (MSE)
across folds was used to quantify the performance of each set of hyperparameters. Table 2
presents defined ranges for three hyperparameters, including the iteration steps, the learning
rate and the regularization parameter. Those defined values were based on recommended values
from relevant software [22] and computation power consumption. It is worth noting that the
number of factors is fixed to seven as known information. In total, 6 (iteration steps) × 20
(learning rates) × 10 (regularization parameters) = 1,200 scenarios were created, and the grid
search was implemented in tuning procedures.</p>
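        <p>The grid search can be sketched as below; the grids are illustrative stand-ins, not the exact values from Table 2, and <italic>evaluate</italic> is a placeholder for the cross-validated MSE computation.</p>
        <preformat>
```python
# Sketch of the grid search over the three hyperparameters: every combination
# is scored by its mean validation MSE, and the set with the lowest MSE wins.
# The grid values below are illustrative, not the authors' exact settings.
from itertools import product

steps_grid = [100, 200, 500]
lr_grid = [0.001, 0.005, 0.01]
reg_grid = [0.01, 0.1]

def tune(evaluate):
    # evaluate(steps, lr, reg) -> mean validation MSE across folds
    scores = {combo: evaluate(*combo)
              for combo in product(steps_grid, lr_grid, reg_grid)}
    return min(scores, key=scores.get)   # hyperparameter set with lowest MSE
```
        </preformat>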
        <p>Finally, the set of hyperparameters with the lowest MSE was selected as the final one, which
was implemented to train the final MF model. After that, the final MF model made predictions
based on test data and the obtained MSE was used for evaluating methods’ performance. Apart
from that, relevant estimates for three selected students were further investigated to examine
the interpretability and validity performance.</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. Results</title>
      <sec id="sec-3-1">
        <title>3.1. Methods Evaluation</title>
        <p>Table 3 shows the results of the two versions of MF. For the revised MF (integrating
expert-defined latent factor information), the lowest MSE for tuning hyperparameters in the training
stage was 0.272, which was 30% higher than that of the traditional MF (0.210). The MSE for the revised
MF on test data was 0.262, which was 26% higher than that of the traditional MF (0.208). The difference
in both cases was around 0.06. In terms of time usage, compared to the traditional MF, the revised
MF saved 209 seconds for tuning hyperparameters. Regarding the time for training the final
tuned model, due to the higher number of iteration steps, training the final revised MF model
took 8 seconds longer than the traditional MF. Additionally, as item-factor estimates were
constrained in the revised MF, which affected user-factor estimates, the overall differences in
resulting user-factor estimates were investigated as well. Figure 1 presents the pairwise MSE
based on user-factor estimates between two approaches, and the overall average MSE was
around 0.099. It is worth noting that the order of the user-factor vectors in the traditional MF
does not correspond to the order in the revised MF, so the pairwise MSE looped over all columns
in the user-factor matrix.
Note. “Training Time” here refers to the time for training the final tuned model.</p>
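        <p>The pairwise comparison over all column pairs, needed because the factor order is arbitrary, can be sketched as follows; the function name is an assumption for illustration.</p>
        <preformat>
```python
# Sketch of the pairwise MSE between user-factor matrices from two MF runs:
# since latent factors carry no fixed order, every column of one matrix is
# compared with every column of the other.
import numpy as np

def pairwise_column_mse(P1, P2):
    K = P1.shape[1]
    return np.array([[np.mean((P1[:, a] - P2[:, b]) ** 2)
                      for b in range(K)]
                     for a in range(K)])
```
        </preformat>
        <p>Matching each factor to its closest counterpart then amounts to taking the minimum per row of the resulting K-by-K matrix.</p>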
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Single Case Evaluation</title>
        <p>To further evaluate the interpretability and validity performance of the two approaches, three students
were randomly selected: student No. 822, student No. 4091, and student No. 4396. Table 4
presents relevant information for selected students based on the final trained model. Generally,
thanks to the item-factor constraints, namely the integration of expert knowledge, the resulting
user-factor estimates in the revised MF were interpretable. When the constraints were added to
each factor column in the item-factor matrix in the revised MF, factor columns with respective
constraints corresponded to expert-defined tags. Specifically, for student No. 822 who answered
7 items, two items designed for measuring “Expressions, Simple Equations, and Relationships”
were answered correctly and the rest of items for measuring “Fractions and Decimals”,
“Measurement”, and “Reading, Interpreting, and Representing” were answered incorrectly. From
corresponding estimated user-factor scores, it can be found that the scores for correct answers
were higher than the scores for wrong answers in the revised MF. Furthermore, the score from
the revised MF indicating the relationship between student No. 822 and the defined factor
“Expressions, Simple Equations, and Relationships” was 0.662, higher than the scores for “Fractions
and Decimals” (0.601), “Measurement” (0.491), and “Reading, Interpreting, and Representing”
(0.294). This pattern existed for the student No. 4396 and No. 4091 as well.</p>
        <p>In contrast, scores estimated by the traditional MF were distributed differently. In the
traditional MF, no item-factor constraints were used to deliver interpretable information for
each factor, so it was difficult to interpret the resulting factors and the relevant user-factor scores. In
other words, the corresponding positions of the user-factor scores for the traditional MF in
Table 4 can be rearranged freely. In addition, we also randomly selected other single cases to
examine the robustness of the detected patterns. We found some exceptional cases in the results of the
revised MF, and they were usually associated with high prediction errors.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Discussion and Conclusion</title>
      <p>In the present study, we proposed a revised version of MF to integrate expert knowledge in
estimation procedures. Specifically, in the revised MF the item-factor matrix was constrained
in the estimation based on the expert-defined item-factor relationship. The added constraints
delivered interpretable information to resulting latent factors with relevant estimates and
further provides possibilities for the theoretical validity examination. We illustrated the revised
MF and compared it with the traditional MF based on empirical assessment data from the TIMSS
2019. The empirical analyses show that the revised MF generally makes the estimation
faster than the traditional MF, while the prediction performance of the former is slightly
lower than that of the latter. This is expected, because adding constraints to the
item-factor matrix shrinks the available parameter space for optimization, which
reduces the model’s expressiveness. At the same time, thanks to the constraints, the time usage for
tuning hyperparameters decreases, because certain entries in the item-factor matrix are fixed to
zero and skipped in the optimization routine.
      <p>Beyond the results on prediction performance and time usage, the analysis of individual
students suggested that integrating expert-defined item-factor information produced different
user-factor estimates. In detail, regarding the interpretability, adding constraints from an
expert-defined item-factor matrix helps match resulting factors with defined factor tags. In
terms of validity, user-factor scores of giving correct answers are higher than the case of giving
wrong answers in the revised MF. Both improvements cannot be observed in the traditional MF,
which is a key reason for proposing the revised MF. As mentioned before, the traditional MF is
a purely data-driven method, and the only consideration of estimation optimization is to
minimize a defined loss function (mainly related to the prediction error). From the perspective
of application domains, it is difficult to interpret and validate the resulting item-factor and
user-factor estimates. In contrast, using the revised MF largely alleviates this issue. In practice, the
latent factor is usually interpreted as skills in the context of educational science. It is much more
logical that when students give wrong answers to certain questions, corresponding skills are
lower than when students give right answers, which is in line with results from the revised MF.</p>
      <p>Compared to previous approaches [14,15], our approach provides an easy way to integrate
external information to improve the interpretability in the MF. Previous methods focus on
giving explainable information on resulting recommendations in business applications by
quantifying similarities to link predicted recommendations to given ratings. In contrast, our
approach concentrates on giving interpretable information on latent factors and estimates, and
this kind of information can also be used to provide explainable information for possible
recommendations. For example, student No. 4396 had lower values for “Measurement” and
learning systems could provide materials related to “Measurement” for that student. Apart from
that, learning systems can also examine predicted responses to items that students do not try
and focus on items that students cannot answer correctly. Overall, materials in learning systems
are always designed to reach certain learning goals, such as improving students’ math skills,
which is significantly different from materials in commercial systems. This crucial difference
calls for the information that can be interpreted and validated based on educational theories
regarding technologies applied in learning systems.</p>
      <p>Several limitations of this study need to be acknowledged. First, MF has evolved into
different versions in the past years. The proposed version of MF is based on the basic MF
without considering newly developed features, which can be improved in the future. Second,
the illustration analysis assumed that each item had the same difficulty and was designed for
measuring one skill. When items have different difficulty levels and are developed to measure
multiple skills, patterns may change. In addition, we only considered binary responses, which
could be extended to graded responses. Third, expert-defined item-factor information might be
biased or limited, which could be detected by comparing or validating estimates based on
different sets of constraints in the revised MF, and this operation is not studied in the above
analysis. For example, some studies have explored data-driven refinement methods for
Q-matrices based on the performance of a defined index [24]. Furthermore, constraints such as
fixing some item-factor entries to zero might be too strict to reflect realistic, complex situations,
and could be further relaxed. Fourth, the study is mainly based on empirical data
rather than simulation or synthetic data. In a simulation study, true models behind data are
known or controlled by researchers, which offers grounds for more comprehensive comparison.</p>
    </sec>
    <sec id="sec-5">
      <title>Acknowledgements</title>
      <p>This work was partly supported by Research Fund Flanders (FWO) 1S38023N (Robbe D’hondt).
We also acknowledge the Flemish Government (AI Research Program). Additionally, the
computational resources and services used in this work were provided by the VSC (Flemish
Supercomputer Center), funded by the Research Foundation Flanders (FWO) and the Flemish
Government, department EWI.</p>
      <p>[13] A. Gharahighehi, K. Pliakos, C. Vens, Addressing the cold-start problem in collaborative
filtering through positive-unlabeled learning and multi-target prediction, IEEE Access 10
(2022) 117189–117198. https://doi.org/10.1109/ACCESS.2022.3219071.
[14] B. Abdollahi, O. Nasraoui, Using explainability for constrained matrix factorization, in:
Proceedings of the Eleventh ACM Conference on Recommender Systems, ACM, Como,
Italy, 2017: pp. 79–83. https://doi.org/10.1145/3109859.3109913.
[15] M. Vlachos, C. Dunner, R. Heckel, V.G. Vassiliadis, T. Parnell, K. Atasu, Addressing
interpretability and cold-start in matrix factorization for recommender systems, IEEE
Trans. Knowl. Data Eng. 31 (2019) 1253–1266. https://doi.org/10.1109/TKDE.2018.2829521.
[16] B. Fishbein, P. Foy, L. Yin, TIMSS 2019 user guide for the international database, 2nd
ed., Boston College, TIMSS &amp; PIRLS International Study Center:
https://timssandpirls.bc.edu/timss2019/international-database/, n.d.
[17] I.V.S. Mullis, M.O. Martin (Eds.), TIMSS 2019 assessment frameworks, International
Association for the Evaluation of Educational Achievement, 2017.
[18] M.O. Martin, M. von Davier, I.V.S. Mullis, Methods and procedures: TIMSS 2019 technical
report, International Association for the Evaluation of Educational Achievement, 2020.
https://eric.ed.gov/?id=ED610099 (accessed August 13, 2024).
[19] S. Funk, Netflix update: Try this at home, (2006).
https://sifter.org/~simon/journal/20061211.html.
[20] R Core Team, R: A language and environment for statistical computing, (2024).
https://www.R-project.org/.
[21] U. Ocepek, J. Rugelj, Z. Bosnić, Improving matrix factorization recommendations for
examples in cold start, Expert Systems with Applications 42 (2015) 6784–6794.
https://doi.org/10.1016/j.eswa.2015.04.071.
[22] N. Hug, Surprise: A Python library for recommender systems, JOSS 5 (2020) 2174.
https://doi.org/10.21105/joss.02174.
[23] W.-S. Chin, Y. Zhuang, Y.-C. Juan, C.-J. Lin, A fast parallel stochastic gradient method for
matrix factorization in shared memory systems, ACM Trans. Intell. Syst. Technol. 6 (2015)
2:1-2:24. https://doi.org/10.1145/2668133.
[24] J. Delafontaine, C. Chen, J.Y. Park, W. Van den Noortgate, Using country-specific
Q-matrices for cognitive diagnostic assessments with international large-scale data,
Large-Scale Assessments in Education 10 (2022) 19. https://doi.org/10.1186/s40536-022-00138-4.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>S.I. Ramírez</given-names>
            <surname>Luelmo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N. El</given-names>
            <surname>Mawas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Heutte</surname>
          </string-name>
          ,
          <article-title>Learner models for MOOC in a lifelong learning context: a systematic literature review</article-title>
          , in: H.
          <string-name>
            <surname>C. Lane</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          <string-name>
            <surname>Zvacek</surname>
          </string-name>
          , J. Uhomoibhi (Eds.), Computer Supported Education, Springer International Publishing, Cham,
          <year>2021</year>
          : pp.
          <fpage>392</fpage>
          -
          <lpage>415</lpage>
          . https://doi.org/10.1007/978-3-030-86439-2_20.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>J.</given-names>
            <surname>Kay</surname>
          </string-name>
          ,
          <article-title>Lifelong Learner Modeling for Lifelong Personalized Pervasive Learning</article-title>
          ,
          <source>IEEE Trans. Learning Technol</source>
          .
          <volume>1</volume>
          (
          <year>2008</year>
          )
          <fpage>215</fpage>
          -
          <lpage>228</lpage>
          . https://doi.org/10.1109/TLT.2009.9.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>A.</given-names>
            <surname>Gharahighehi</surname>
          </string-name>
          ,
          <string-name>
            <surname>R. Van Schoors</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Topali</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Ooge</surname>
          </string-name>
          ,
          <article-title>Adaptive Lifelong Learning (ALL)</article-title>
          ,
          <source>in: International Conference on Artificial Intelligence in Education</source>
          , Springer Nature Switzerland, Cham,
          <year>2024</year>
          : pp.
          <fpage>452</fpage>
          -
          <lpage>459</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>J.</given-names>
            <surname>Conesa</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.-M.</given-names>
            <surname>Batalla-Busquets</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Bañeres</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Carrion</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Conejero-Arto</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Del Carmen Cruz Gil</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Garcia-Alsina</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Gómez-Zúñiga</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.J.</given-names>
            <surname>Martinez-Argüelles</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Mas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Monjo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Mor</surname>
          </string-name>
          ,
          <article-title>Towards an educational model for lifelong learning</article-title>
          , in:
          <string-name>
            <given-names>L.</given-names>
            <surname>Barolli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Hellinckx</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Natwichai</surname>
          </string-name>
          (Eds.),
          <source>Advances on P2P, Parallel, Grid, Cloud and Internet Computing</source>
          , Springer International Publishing, Cham,
          <year>2020</year>
          : pp.
          <fpage>537</fpage>
          -
          <lpage>546</lpage>
          . https://doi.org/10.1007/978-3-030-33509-0_50.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>A.</given-names>
            <surname>Gharahighehi</surname>
          </string-name>
          ,
          <article-title>Recommender systems for personalized applications</article-title>
          , Doctoral dissertation,
          <source>KU Leuven</source>
          ,
          <year>2022</year>
          . https://lirias.kuleuven.be/retrieve/675420.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>K.</given-names>
            <surname>Mouri</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Suzuki</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Shimada</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Uosaki</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Yin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Kaneko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Ogata</surname>
          </string-name>
          ,
          <article-title>Educational data mining for discovering hidden browsing patterns using non-negative matrix factorization</article-title>
          ,
          <source>Interactive Learning Environments</source>
          <volume>29</volume>
          (
          <year>2021</year>
          )
          <fpage>1176</fpage>
          -
          <lpage>1188</lpage>
          . https://doi.org/10.1080/10494820.2019.1619594.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>M.C.</given-names>
            <surname>Desmarais</surname>
          </string-name>
          ,
          <article-title>Mapping question items to skills with non-negative matrix factorization</article-title>
          ,
          <source>ACM SIGKDD Explorations Newsletter</source>
          <volume>13</volume>
          (
          <year>2012</year>
          )
          <fpage>30</fpage>
          -
          <lpage>36</lpage>
          . https://doi.org/10.1145/2207243.2207248.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>O.C.</given-names>
            <surname>Santos</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.G.</given-names>
            <surname>Boticario</surname>
          </string-name>
          ,
          <article-title>Educational recommender systems and technologies: practices and challenges</article-title>
          ,
          <source>IGI Global</source>
          ,
          <year>2012</year>
          . https://doi.org/10.4018/978-1-61350-489-5.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>J.</given-names>
            <surname>Kim</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.Y.</given-names>
            <surname>Park</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Van den Noortgate</surname>
          </string-name>
          ,
          <article-title>On the use of bayesian probabilistic matrix factorization for predicting student performance in online learning environments</article-title>
          ,
          <source>in: 6th International Conference on Higher Education Advances (HEAd'20)</source>
          , Universitat Politècnica de València,
          <year>2020</year>
          . https://doi.org/10.4995/HEAd20.2020.11137.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Koren</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Bell</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Volinsky</surname>
          </string-name>
          ,
          <article-title>Matrix factorization techniques for recommender systems</article-title>
          ,
          <source>Computer</source>
          <volume>42</volume>
          (
          <year>2009</year>
          )
          <fpage>30</fpage>
          -
          <lpage>37</lpage>
          . https://doi.org/10.1109/MC.2009.263.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>M.</given-names>
            <surname>Aleksandrova</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Brun</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Boyer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.G.</given-names>
            <surname>Chertov</surname>
          </string-name>
          ,
          <article-title>What about interpreting features in matrix factorization-based recommender systems as users?</article-title>
          ,
          <source>in: ACM Conference on Hypertext &amp; Social Media</source>
          ,
          <year>2014</year>
          . https://api.semanticscholar.org/CorpusID:1761880.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>M.</given-names>
            <surname>Rossetti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Stella</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Zanker</surname>
          </string-name>
          ,
          <article-title>Towards explaining latent factors with topic models in collaborative recommender systems</article-title>
          ,
          <source>in: 2013 24th International Workshop on Database and Expert Systems Applications</source>
          , IEEE, Los Alamitos, CA, USA,
          <year>2013</year>
          : pp.
          <fpage>162</fpage>
          -
          <lpage>167</lpage>
          . https://doi.org/10.1109/DEXA.2013.26.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>