                         Designing and Evaluating an Educational Recommender
                         System with Different Levels of User Control
                         Qurat Ul Ain* , Mohamed Amine Chatti* , William Kana Tsoplefack, Rawaa Alatrash and
                         Shoeb Joarder
                         Social Computing Group, Faculty of Computer Science, University of Duisburg-Essen, Duisburg, Germany


                                      Abstract
                                      Educational recommender systems (ERSs) play a crucial role in personalizing learning experiences and enhancing
                                      educational outcomes by providing recommendations of personalized resources and activities to learners, tailored
                                      to their individual learning needs. However, their effectiveness is often diminished by insufficient user control and
                                      limited transparency. To address these challenges, in this paper, we present the systematic design and evaluation
                                      of an interactive ERS, in which we introduce different levels of user control. Concretely, we introduce user control
                                      around the input (i.e., user profile), process (i.e., recommendation algorithm), and output (i.e., recommendations)
                                      of the ERS. To evaluate our system, we conducted an online user study (N=30) to explore the impact of user control
                                      on users’ perceptions of the ERS in terms of several important user-centric aspects. Moreover, we investigated the
                                      effects of user control on multiple recommendation goals, namely transparency, trust, and satisfaction, as well as
                                      the interactions between these goals. Our results demonstrate the positive impact of user control on user perceived
                                      benefits of the ERS. Moreover, our study shows that user control strongly correlates with transparency and
                                      moderately correlates with trust and satisfaction. In terms of interaction between these goals, our results reveal
that transparency moderately correlates and trust strongly correlates with satisfaction, whereas transparency and trust stand out as less correlated with each other.

                                      Keywords
                                      Educational Recommender Systems, Interactive Recommender Systems, User Control, Transparency, Trust




                         1. Introduction
                         Recommender systems (RSs) are widely used across various application domains, such as e-commerce
                         sites, online streaming websites, and social media platforms. These systems have proven effective at
                         enhancing user experience and aiding decision-making through personalized recommendations. In
                         recent decades, RSs have also been applied to the field of education, leading to the development of
educational recommender systems (ERSs) [1, 2]. In this context, RSs are used, for example, to create
                         personalized learning experiences [3], recommend suitable formal or informal learning materials [4],
                         suggest MOOCs [5], and adapt to context-aware learning environments [6].
                            Conventional RSs usually offer minimal feedback options in the user interface, permitting users
merely to indicate whether they like/dislike a recommendation [7]. Interactive RSs (IntRSs) have come into the limelight as an approach to empower users to control and interact with the RS [8, 9, 10]. Controllability
                         refers to the extent to which the system allows users to adjust the recommendation process to enhance
                         the quality of recommendations [8]. Concretely, the users can control the RS at three different levels,
                         namely interacting with the input (i.e., user profile), process (i.e., recommendation algorithm), or output
                         (recommendations) of the RS [8, 11]. The control is provided in the RS by allowing users to adjust
                         preferences, change parameters of the underlying algorithm, directly interact with recommendations,
                         and provide feedback, resulting in greater perceived control and a more transparent recommendation
                         process [12]. Compared to the application of IntRSs in e-commerce, entertainment, and social media
                         domains, providing user control is under-explored in ERSs [13].
User control has proven to have a positive impact on different recommendation goals. These include

                          IntRS’24: Joint Workshop on Interfaces and Human Decision Making for Recommender Systems, October 18, 2024, Bari (Italy)
                          $ qurat.ain@stud.uni-due.de (Q. U. Ain); mohamed.chatti@uni-due.de (M. A. Chatti);
                          william.kana-tsoplefack@stud.uni-due.de (W. K. Tsoplefack); rawaa.alatrash@stud.uni-due.de (R. Alatrash);
                          shoeb.joarder@uni-due.de (S. Joarder)
                                     © 2024 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).


perceived accuracy of recommendations [14, 15, 16, 17, 18], usability [19, 20, 21, 22], perceived usefulness
[23, 20, 24, 22], user-perceived transparency [19, 25], trust [14, 21, 26], user experience [12, 15, 16, 27],
cognitive load and recommendation acceptance [7], user satisfaction [19, 28, 29, 18, 30], and user
acceptance [20, 29]. These findings highlight the impact of user control on various recommendation
goals, suggesting that these goals may interact with each other. However, to the best of our knowledge,
no work has yet investigated the impact of user control on transparency (i.e., explain how the system
works), trust (i.e., increase user’s confidence in the system), and satisfaction (i.e., increase the ease of
use or enjoyment) [31] together, nor has there been an investigation into how these goals interact with
each other in an interactive recommendation context.
   This paper addresses these gaps by introducing user control at multiple levels within the ERS module
of the MOOC platform CourseMapper [32]. In this way, we enable users to interact with the input,
process, and output of the ERS. Moreover, we present the systematic design of the ERS which is also
lacking in the existing literature on IntRS and ERS. Furthermore, we examine the impact of user control
on multiple recommendation goals, namely transparency, trust, and satisfaction. We also explore how
these goals interact with each other. The following research questions guide our investigation:

    • RQ1. How does complementing an ERS with user control impact users’ perceptions of the ERS?

    • RQ2. What are the effects of user control on transparency of, trust in, and satisfaction with the
      ERS?

    • RQ3. How do the recommendation goals of transparency, trust, and satisfaction interact with
      each other in an interactive recommendation setting?

   To answer these research questions, we conducted an online user study (N=30). Our findings
demonstrate the positive impact of user control in ERSs in terms of several important user-centric aspects
including perceived accuracy, novelty, interaction adequacy, perceived user control, transparency, trust,
user satisfaction, and use intentions. Moreover, our analysis shows that user control has at least a moderate correlation with all goals, while some pairs are particularly strongly correlated with each
other. More specifically, user control strongly correlates with transparency and moderately correlates
with trust and satisfaction. Referring to the interaction between these goals, our study indicates that while
transparency moderately correlates with satisfaction and trust strongly correlates with satisfaction,
transparency and trust stand out as less correlated with each other.
   The remainder of this paper is organized as follows. We first introduce two branches of related work,
namely educational recommender systems and interactive recommender systems in Section 2. We then
describe the systematic design and implementation of our interactive ERS in Section 3. Next, we present
the details of the online user study that we conducted to evaluate our ERS in Section 4 and the analysis
of the results in Section 5. Finally, in Section 6, we summarize the work and outline our future research
plans.


2. Background and Related Work
This section discusses related work on the application of recommender systems in the educational
domain and interactive recommender systems that support user interaction with and control of recom-
mender systems.

2.1. Educational Recommender Systems
Educational recommender systems (ERSs) have become a vital tool in personalized learning envi-
ronments, offering tailored recommendations to enhance the educational experience. These systems
leverage various algorithms and data sources to suggest resources, courses, and learning activities that
align with individual needs, preferences, and learning styles [33]. ERSs have facilitated learning and teaching in various ways, for example, to recommend learning materials to support instructors in
online programming courses [4], to recommend educational activities to a group [34], and to provide
recommendations while preparing for the oral examination of a language learning course [6]. Most
of these ERSs either propose algorithmic enhancements or new frameworks for recommendation, or
implement existing/new recommendation techniques in an educational context, such as collaborative fil-
tering [35], emotion detection [35], content-based similarity [5], and data mining and machine learning
[34]. We refer the interested reader to two recent literature reviews in this area [2, 36].
    While many ERSs have been proposed in the literature, there has been limited emphasis on enhancing
the interactivity of these systems by incorporating various control options into the user interface (UI).
Only a few attempts have been made to provide interactivity and control in ERSs. Bustos López et al.
[35] presented an interactive ERS where the user can search for educational resources based on three
main criteria, namely keywords, category, and type of resource. The user can view recommendations
generated through collaborative filtering or emotion detection. In the list of recommendations, the user can view more details using a ’details’ button. Moreover, they can provide feedback on the recommendations using a five-star rating, as well as mark them as favorites to view them later. Furthermore,
the user can write reviews about the recommended resources in the comments section. Another in-
teresting attempt to interact with the ERS has been presented in [6] to offer context-aware affective
educational recommendations in computer-assisted language learning in an Arduino-based platform.
The recommendation module takes input through sensors and provides interactive support to learners
using different communication methods like visuals, sounds, or touch. Bousbahi and Chorfi [5] pro-
posed an interactive MOOC recommender, where users can interact with the system to formulate the
request as an input to the RS, e.g., add a keyword for the course title (text input), and select features (e.g., course fee, availability, language) using checkboxes and dropdowns. Zapata et al. [37] presented a group
recommender (DELPHOS) that recommends learning objects (LOs) to a group of individuals. As an
input, similar to a search engine, the user defines the desired search parameters based on a required text
query or keywords, some optional metadata values and different filtering or recommendation criteria
using sliders and checkboxes. Afterwards, DELPHOS shows the user a ranked list of recommended LOs, which users can rate on a five-star Likert scale; group members can add one or more tags to the LOs, as well as personal comments or additional information. In the area of conversational RSs
in education, Valtolina et al. [3] presented an intelligent chatbot-based RS to assist teachers in their
activities by suggesting the best LOs and how to combine them according to their prerequisites and
outcomes. The interaction with the RS is via text input, where the chatbot asks specific questions and the user provides answers in textual format. While there are a few attempts to make ERSs more transparent
by introducing open learner models (OLMs) (e.g., [13, 38]), the OLMs used, however, only show learners their system-generated interests and do not allow learners to interact with or modify them.
    In summary, ERSs generally provide less user control and interactivity compared to interactive
RSs in other domains such as e-commerce, entertainment, and social media. A possible reason is the risk that, when user control is introduced in ERSs, the user interfaces and the complexity of the interactive recommendation task might overwhelm learners and, consequently, negatively affect the learning experience. Moreover, the recent attempts to introduce interactivity with ERSs
have mainly focused on providing user control mechanisms either with the input or the output of the
ERS. There exists a significant research gap and potential opportunities to make ERSs more transparent
by incorporating user control and interaction. To this end, in this paper, we present an interactive ERS
in which we introduce different levels of user control by allowing users to interact with all the three
parts of the ERS, namely input (i.e., user profile), process (i.e., recommendation algorithm), and output
(i.e., recommendations).

2.2. Interactive Recommender Systems
Research on RSs has traditionally focused on improving the accuracy of recommendations by developing
new algorithms or integrating additional data sources into the recommendation process [10]. However,
many studies have demonstrated that higher accuracy does not always enhance the user experience of
the RS [8]. Consequently, recent research has shifted towards understanding how different interface
elements and user characteristics impact the overall user experience with RSs. Furthermore, an effective
RS should also take into account factors such as transparency to ensure societal value and trust [31, 11].
This shift in focus from purely algorithmic improvements to enhancing user experience has led to the
development of what are known as interactive recommender systems (IntRS), which emphasize user
control and interactivity to achieve greater transparency in RS [8, 9, 10].
    IntRSs offer visual and exploratory UIs, allowing users to inspect the recommendation process and
control the system to receive better recommendations [8]. These interactive, visual, and exploratory
UIs progressively guide users toward their objectives, enhance their understanding of the system’s
functionality, and ultimately contribute to transparency [8]. IntRSs have been developed in various
domains including movies [39, 14, 26, 15, 40, 17], music [16, 41, 7], news [11], publications [27, 21],
tweets [23], group recommenders [24], social recommenders [42, 22, 18], conference recommenders
[30], and job recommenders [15]. To gain a deeper understanding of IntRSs, we refer the interested
reader to the excellent literature reviews on this topic in [10, 8].
IntRSs can roughly be grouped by the level at which they allow users to take control, namely the RS input
(i.e., user profile), process (i.e., recommendation algorithm), and/or output (i.e., recommendations)
[8, 10]. Interaction with the input of the RS allows users to create or modify their interests as they want.
This level of control is provided by either allowing users to add, delete, or re-rate items in their profile
using various UI elements [14, 16, 29, 21, 28, 40, 23, 7, 11] or allowing them to visually interact with
visualizations of their interest profile to modify them [20, 42, 15]. Users can control the process of the
RS by either choosing the recommendation algorithm [43, 30, 11], or by manipulating the algorithmic
parameters [43, 17, 16, 27, 29, 7], using the UI elements provided. Lastly, users can control the output of
the recommender by providing feedback to the recommendations [26, 24], or ordering and sorting the
recommendations as they want [16, 11, 17, 7, 23, 30, 18, 22, 42, 21], based on the interactive elements
provided in the UI.
    In summary, IntRSs offer varying levels of user interaction and control at three different levels,
namely input, process, and output. As summarized in Table 1, only a few IntRSs support interaction at
all three levels [16, 41, 7, 11]. Most of the IntRSs enable user interaction at the input level, allowing
users to provide or adjust their preferences. A smaller number of IntRSs facilitate interaction with
the process where users can adjust algorithmic parameters. Only few recommenders allow switching
between different algorithms. At the output level, most systems provide users with the ability to sort
recommendations, while fewer offer options to give feedback on the recommendations. With regard to
UI design, only the study in [11] focused on the systematic design of the different control mechanisms
in the UI of their proposed IntRS. To address these gaps, in this work, we introduce an interactive ERS that provides a level of interactivity and user control not commonly found in the educational domain.
Furthermore, we present the systematic design of the ERS in the MOOC platform CourseMapper in
which we provide control across all three levels, i.e., input, process, and output of the ERS.
    The impact of user control has been investigated in the literature on interactive recommendations in
various ways. Many researchers have studied the impact of user control on one or more recommendation
goals, namely, perceived quality of recommendations [25, 28], perceived accuracy of recommendations
[14, 15, 21, 16, 17, 18], recommendation novelty [26], recommendation diversity [42], usability [20, 21, 22,
19], ease of use and playfulness [24], perceived usefulness [20, 24, 23, 22], user-perceived transparency
[19, 25], trust in the RS [14, 21, 26, 28, 44], user experience with the RS [12, 15, 16, 27], cognitive load
and recommendation acceptance [7], confidence with the RS [40, 28], user satisfaction with the RS
[14, 29, 18, 30, 19, 28], behavioural intentions of the user [28], and user acceptance of the RS [20, 29, 28].
While trust and satisfaction were the focus of a considerable number of studies, transparency remains
under-explored. Moreover, there is a notable gap that no research has comprehensively studied the
effects of user control on transparency, trust, and satisfaction together in the same study. Additionally,
the impact of these goals on each other has yet to be explored in the interactive recommendation
context. To address this research gap, in this paper, we study the impact of user control on transparency,
trust, and satisfaction. Moreover, we investigate how these goals interact with each other.
3. System Design
In this section, we present the design of the ERS module in the MOOC platform CourseMapper [32],
which introduces user control at three different levels, namely input, process, and output (see Figure 1).




Figure 1: User Interface of the ERS in CourseMapper with three levels of user control: input (A), process (B),
and output (C)



3.1. User Interface Design
In this section, we discuss the systematic approach taken to design interactive components for the UI of
the ERS in CourseMapper, focusing on enhancing user control. We began by investigating the existing
literature on IntRSs to identify a range of user control mechanisms and interaction options commonly
employed in these systems, and to analyze their effectiveness in enhancing user control. The literature was explored focusing on answering the question: How has user control been added to the UI of RSs to enable users to interact with different parts of the RS, i.e., input, process, and output? Once the interaction
mechanisms were identified, we chose the ones that are equally applicable to our context to ensure a
better user experience. In this way, we designed interaction and control options in the UI of our ERS,
focusing on the interaction with the input, process, and output of the ERS.
   Beginning with the input of the IntRS, interaction with the input gives users control to manage their preferences, rather than relying on the traditional method in which user preferences are estimated by observing user behavior over time. Many interaction options have been implemented at the input
part of the recommenders to help users personalize their recommendations. Typically, to refine their
requirements, users are asked to mark a set of items extracted by the system based on their past activity,
using a binary scale in terms of “like/dislike” options, for instance, using “Yes” or “No” buttons [41], or “Like” or “Dislike” buttons [28]. Another option to interact with the input is provided by letting
the users change the weights of their interests or re-rate items using sliders [23, 29, 14, 16], or using
a pre-defined sliding scale ranging from ‘Strongly Disagree’ to ‘Strongly Agree’ [42]. Another way
is to let the users choose or modify their interests, for example, add or delete items in their profile
using buttons with icons [7, 14, 29], or radio buttons [23], and re-rate items using sliders [16, 14, 21, 11].
Alternatively, there are more complex methods for obtaining preferences. Examples include using filters
for specific items [21], drop down lists, checkboxes, and radio buttons to specify different dimensions
of the interests [28, 40], or using toggle buttons to enable/disable certain interests [11]. Furthermore,
more advanced visualization-based interactions have been provided to users, such as an intent radar in which users can move interest items with the mouse [20], or a graph visualization of interests in which users can drag and drop items [15].
   Interaction with the process is commonly provided by allowing users to select or change the recom-
mendation algorithm or tune the algorithm parameters. The selection of the algorithm is provided using
radio buttons [43], text- and icon-based buttons [11], or selection using checkboxes [30]. To fine-tune the algorithm parameters, multiple control options are provided to the users, including radio buttons for
feature selection [41], buttons to add or remove parameters [29], and sliders to adjust weights of the
parameters [16, 27, 21, 19, 7].
   Once the user’s preferences are identified and the recommendation algorithm selected or modified, the system
can provide tailored recommendations. Several control options and UI components have been proposed
in the literature to interact with the output of the RS. Users can change the number of recommendations
or filter the recommendations list using sliders [39, 16], give feedback to the system using Yes/No
[41, 26] or Like/Dislike [7] buttons, give feedback about recommendations using buttons of different
sizes referring to the intensity of emotions [24], or give feedback using radio buttons [17]. Users are
provided with the options to sort the recommendation list based on multiple options using buttons
[7, 23], remove recommendations from the list using a remove icon [7], sort recommendations using
drag and drop [7], and reorder recommendations using toggles and sliders [11]. Moreover, advanced
interaction options are provided to the users when recommendations are presented visually instead of
lists. The users can interact with the Venn diagram to examine and filter the recommended items [27],
explore the opinion space using mouse interaction to increase or decrease the diameter of a circular opinion space [42], mouse interaction with a word cloud [22], drag and drop nodes in a graph [18], and arrange
items in a clustermap using mouse drag interaction [30].
   From this analysis, we identified and selected the most widely adopted interaction techniques that
align with our specific context and features, ensuring that our UI design is both intuitive and effective
for our users (see Table 1). After that, we started with the design of our ERS UI. Based on the UI elements
identified to interact with the input, process, and output, we created initial prototypes (see Figure 2a).
The prototypes were discussed within the authors’ team, and different UI elements were refined and improved. For example, we decided on the colors, the optimal labels for the buttons and for representing the algorithms, the UI for ranking the recommendations, and whether or not to show the impact of ranking in progress bars. The improved prototypes (see Figure 2b) were then translated into
the final design of the system, presented in the next sections.




Figure 2: Prototypes for different levels of user control in the ERS. (a) Initial prototype; (b) improved prototype



3.2. Interaction with the Recommendation Input
In the context of recommending learning resources, it is crucial to recommend accurate resources
tailored to learners’ needs for better learning outcomes [33]. Therefore, providing interaction around
the input of the ERS enables learners to directly communicate their interests to the system. A
simple way to give users more control is to allow them to explicitly specify their interests and preferences
Table 1
UI design elements for interaction with different components of the RS identified from literature

Paper | User control with input | User control with process | User control with output
Jin et al. [28] | Like/Dislike buttons; selection using radio buttons and dropdowns | - | -
Schafer et al. [40] | Selection using radio buttons, checkboxes, and dropdowns | - | -
Schaffer et al. [14] | Add or delete buttons; sliders to re-rate items | - | -
Kangasrääsiö et al. [20] | Mouse interaction with radar | - | -
O’Donovan et al. [15] | Mouse interaction with graph | - | -
Bostandjiev et al. [29] | Sliders to adjust weights | Buttons to add or remove parameters | -
Bruns et al. [21] | Filter items using buttons | Sliders to adjust weights | -
Bostandjiev et al. [16] | Sliders to adjust weights | Sliders to adjust weights | Filter using sliders
Saito and Itoh [41] | Yes/No buttons | Feature selection using radio buttons | Feedback using Yes/No buttons
Loepp et al. [26] | - | - | Feedback using Yes/No buttons
Vlachos and Svonava [39] | - | - | Filter using sliders
Jin et al. [7] | Select input using buttons | Sliders to adjust weights | Feedback using Like/Dislike buttons; drag to sort; remove item
Harper et al. [17] | - | - | Feedback using radio buttons
Chen and Pu [24] | - | - | Feedback using size of buttons
Harambam et al. [11] | Toggle to enable/disable interests | Select using text buttons | Reorder using toggle
Tintarev et al. [23] | Sliders to adjust weights | - | Sort using buttons
Parra et al. [27] | - | Sliders to adjust weights | Filter in Venn diagram
Wong et al. [42] | Fixed sliders to adjust weights | - | Mouse interaction with circular opinion space
Zhao et al. [22] | - | - | Mouse interaction with word cloud
Gretarsson et al. [18] | - | - | Drag and drop nodes in a graph
Verbert et al. [30] | - | Select using checkboxes | Drag and drop in cluster map
Tsai and Brusilovsky [19] | - | Sliders to adjust weights | -
Ekstrand et al. [43] | - | Select using radio buttons | -
ERS in CourseMapper | Add, delete, select using buttons; include/exclude using checkboxes; adjust weights using sliders | Select using radio buttons; adjust weights for ranking using sliders; view impact of adjustments in progress bars | Save using buttons; sort using dropdown options; feedback with connected input selection using Helpful/Not Helpful button



rather than relying on the system to determine these preferences from their past interactions [10].
Similarly, in our ERS, to facilitate interaction with the input, we provide control to the users to choose
their interests and preferences explicitly to construct their learner model. Since CourseMapper [32] is a
MOOC platform where learners can enroll in multiple courses that contain learning materials, the learners have the option to indicate their knowledge deficiencies by explicitly marking concepts as ’Did Not Understand’ (DNU concepts) on various slides of the PDF learning materials. To proceed with recommendations, the learners have various control options to decide the input for the recommender.
Figure 3 shows the UI of the ERS with multiple options to interact with the input. First, the users can
decide which DNU concepts they want the recommendations for, using the radio buttons to select
’Current Slide’, ’All Slides’, or ’Select manually’ options (Figure 3a (A)). Clicking on ’Current Slide’ and
’All Slides’ retrieves the DNU concepts related to the current slide or all slides of the learning material,
respectively (Figure 3a (B), 3b). In contrast, clicking on the ’Select manually’ option lets the user choose any concepts from the whole learning material using a dropdown list with a search option (Figure
3c (A)). Once the DNUs are selected as input for the recommender, in all three options, the user is given the opportunity to adjust the weights of the concepts based on their preferences if they find some concepts more important than others (Figure 3a (B)); the recommendations are impacted by the changes in weights accordingly. Furthermore, the user can include/exclude certain DNUs using checkboxes (Figure 3b (C)), as well as remove DNUs from the input list using the ’-’ icon (Figure 3c (B)).

Figure 3: Interaction with the input of the ERS. (a) Select DNU concepts from the current slide as input for recommendation; (b) select DNU concepts from all slides as input for recommendation; (c) manually select concepts from the learning material as input for recommendation
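To make the shape of this user-controlled input concrete, the following minimal Python sketch models a learner profile of weighted DNU concepts; the class, field, and scope names are illustrative assumptions rather than CourseMapper’s actual implementation.

```python
from dataclasses import dataclass, field

@dataclass
class DNUConcept:
    """A 'Did Not Understand' concept marked by the learner on a slide."""
    name: str
    weight: float = 1.0    # adjustable via the weight sliders
    included: bool = True  # toggled via the include/exclude checkboxes

@dataclass
class RecommendationInput:
    """User-controlled input of the ERS: scope plus weighted DNU concepts."""
    scope: str = "current_slide"  # 'current_slide' | 'all_slides' | 'manual'
    concepts: list = field(default_factory=list)

    def active_concepts(self):
        # Only included concepts with a positive weight reach the recommender.
        return [c for c in self.concepts if c.included and c.weight > 0]

# Example: a learner manually selects concepts and boosts one of them.
profile = RecommendationInput(scope="manual", concepts=[
    DNUConcept("backpropagation", weight=2.0),
    DNUConcept("gradient descent"),
    DNUConcept("softmax", included=False),  # excluded via checkbox
])
print([c.name for c in profile.active_concepts()])
# -> ['backpropagation', 'gradient descent']
```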

3.3. Interaction with the Recommendation Process
Interacting with the recommendation process allows users to choose or influence the recommendation
strategy or algorithm parameters [45]. To introduce user control around the process of our ERS, we
provide two options to the user. The first is to let them choose between the four recommendation
algorithms using radio buttons (Figure 4a, (A)). To make the recommendation process more understandable to users, we provide a text-based abstract description of the underlying algorithms. The algorithm options are presented as answers to the question "How should the recommendations be generated?", with four possible answers: 1) Comparing only keyphrases from articles and videos with
DNU concepts, 2) Comparing the whole content of articles and videos with DNU concepts, 3) Comparing
only keyphrases from articles and videos with main concepts of the current slide, and 4) Comparing the
whole content of articles and videos with whole content of the current slide. For more details about
how these recommendation algorithms work, please refer to our earlier publication [33]. Once the user
selects the algorithm, the second control option is to decide how they want the recommendations to be
ranked. The user can view multiple factors impacting the ranking of the recommendations with their
default weights. The users can adjust the weights of the factors using sliders (Figure 4a, (B)), where the
impact of the change in weight on the ranking is displayed in real time in the progress bar adjacent to each factor (Figure 4a, (C)). Furthermore, the user can select/de-select factors from the list using checkboxes if they do not want to include a factor in the ranking (Figure 4a, (D)).
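As a rough illustration of this ranking control, the following Python sketch re-ranks candidate recommendations based on user-adjusted factor weights and derives the per-factor impact shares that the progress bars could display; the factor names and the linear weighting scheme are simplifying assumptions, not the system’s exact formula.

```python
def rank_recommendations(candidates, slider_weights):
    """Rank candidates by a weighted sum of normalized ranking factors.

    candidates:     dicts with one value in [0, 1] per ranking factor.
    slider_weights: user-adjusted weight per factor; a weight of 0
                    corresponds to de-selecting the factor's checkbox.
    """
    active = {f: w for f, w in slider_weights.items() if w > 0}
    total = sum(active.values())
    # Relative impact of each factor, as shown in the progress bars.
    impact = {f: w / total for f, w in active.items()}
    scored = [(sum(impact[f] * c[f] for f in impact), c) for c in candidates]
    return sorted(scored, key=lambda pair: pair[0], reverse=True), impact

videos = [
    {"title": "Intro video", "similarity": 0.92, "recency": 0.30, "views": 0.80},
    {"title": "Recent talk", "similarity": 0.70, "recency": 0.95, "views": 0.20},
]
ranking, impact = rank_recommendations(
    videos, {"similarity": 0.5, "recency": 0.4, "views": 0.1})
for score, video in ranking:
    print(f"{video['title']}: {score:.2f}")
print("factor impact:", impact)
```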

3.4. Interaction with the Recommendation Output
Providing feedback to the system about its recommendations is an important factor influencing the user experience of the RS [14]. We provide multiple control options to the user to interact with the output of the ERS (see Figure 4b). The first is to let users give feedback on the recommendations. To facilitate this, we provide a ’Helpful/Not Helpful’ button with each recommended video or article (Figure 4b, (A)). Once the user clicks on ’Helpful’, a dropdown menu
appears which prompts the user to select the DNU concepts that the recommended video or article
helped them understand (Figure 4b, (B)). This extended level of control enables the user to provide
detailed feedback to the ERS, which can be used to improve future recommendations. Furthermore,
the user can sort the recommendations using the sort option which opens a dropdown with multiple
options (Figure 4b, (C)). The user can sort the recommendations based on their similarity score, creation
date, or number of views. The options are presented as ’Most similar’, ’Most recent’ and ’Most viewed’.
The third interaction option is to save the recommendations using a save icon (Figure 4b, (D)); saved recommendations can be accessed later in the ’Saved’ tab (Figure 4b, (E)).
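The output-level controls amount to simple operations over the recommendation list. The following sketch shows one plausible way to implement the three sort options and to record the concept-level ’Helpful’ feedback; all keys, field names, and data structures are illustrative assumptions.

```python
SORT_OPTIONS = {
    "Most similar": lambda r: r["similarity"],
    "Most recent":  lambda r: r["created_at"],  # ISO date strings sort correctly
    "Most viewed":  lambda r: r["views"],
}

def sort_recommendations(recs, option):
    """Sort the recommendation list by the selected dropdown option."""
    return sorted(recs, key=SORT_OPTIONS[option], reverse=True)

def record_feedback(feedback_log, item_id, helpful, understood_concepts=()):
    """Store concept-level feedback: which DNU concepts the item clarified."""
    feedback_log.append({
        "item": item_id,
        "helpful": helpful,
        "concepts": list(understood_concepts),  # chosen in the dropdown
    })

recs = [
    {"id": "v1", "similarity": 0.9, "created_at": "2024-01-10", "views": 1200},
    {"id": "v2", "similarity": 0.7, "created_at": "2024-06-02", "views": 300},
]
print([r["id"] for r in sort_recommendations(recs, "Most recent")])  # -> ['v2', 'v1']

log = []
record_feedback(log, "v1", helpful=True, understood_concepts=["backpropagation"])
```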




Figure 4: Interaction with the process and output of the ERS. (a) Interaction with the process of the ERS; (b) interaction with the output of the ERS




4. User Evaluation
To evaluate our system, we conducted a detailed user study with end users, employing key measures
from the ResQue evaluation framework [46] to evaluate users’ perceived benefits in terms of perceived
system qualities (recommendation accuracy, recommendation novelty, recommendation diversity,
interface adequacy, information sufficiency, interaction adequacy), beliefs (perceived ease of use, control,
transparency, perceived usefulness), attitudes (overall satisfaction, confidence, trust), and behavioral
intentions (use intentions, watch intention).

4.1. Participants and Procedure
We recruited participants from our current and past courses via emails and phone contacts. In total, 30 students (5 female, 25 male) participated in the study, including 10 Bachelor’s, 14 Master’s, and 6 Ph.D. students, from various age groups (21 aged 20-30, 8 aged 30-40, and 1 above 40 years) and nationalities (6 Cameroonian, 2 Chinese, 6 Egyptian, 2 German, 1 Nigerian, 4 Pakistani, 7 Syrian, and 2 Tunisian). Participants came from various backgrounds, including computer science and applied computer science, physics, and engineering. Most of the participants were familiar with the use of RSs (n=24, 80%) and with interacting with RSs (n=19, 63%). However, only a few of them (n=12, 40%) were familiar with technical implementations of RSs.
   The study was conducted online through individual sessions with each participant using video
conferencing on Zoom. At the start of each session, participants were informed about the voluntary
nature of their participation, the confidentiality and anonymity of data collection, and were asked for
their consent to record the meeting. Following this, they completed a demographic survey and were
then introduced to the ERS via a demo video. Afterwards, participants were asked to perform specific
tasks using the ERS by taking control of the host’s shared screen. Once done, they filled in an online questionnaire (Google Forms) based on measures from ResQue [46]. The sessions took 90 minutes on average.

4.2. User Tasks
The main tasks for the participants were to interact with multiple control options provided with the
input, process, and output of the ERS. The tasks were divided into sub-tasks to distribute the cognitive
load and facilitate a smooth experience with the ERS.

   1. Task 1: Interaction with the ERS input
        a. Imagine that you are reading a learning material in CourseMapper and while reading you
           do not understand something from the slides. Please interact with some slides and collect
           the concepts you do not understand. (Goal: to add items to their interests).
        b. Now, you want to view the recommended YouTube videos based on the specific concepts
           that you do not understand from a particular slide. How will you proceed with it using this
           system? (Goal: to test ’current slide’ option).
        c. Once you have viewed the provided recommendations, now you want to generate new
           recommendations based on some other concepts that you did not understand. Moreover,
           you feel that some concepts are more important than others and should be included with higher importance while generating the recommendations, or that some concepts
           should not be included in the recommendations. How will you proceed with it? (Goal: to test
           ’all slides/select manually’, weight adjustment using sliders, include/exclude using checkboxes,
           and remove options).

   2. Task 2: Interaction with the ERS process
        a. In this recommender system, you have the option to decide the way you want the recommen-
           dations to be generated. There are multiple recommendation algorithms that you can select
           based on your requirements. How would you proceed with changing the recommendation
           algorithm as per your choice? (Goal: to test ’select algorithm’ option).
        b. The recommendations are ranked after generation in the system’s default way. Imagine
           that you want the recommendations to be ranked differently as you prefer. How would you
           proceed? (Goal: to test ’modify ranking’ options).

   3. Task 3: Interaction with the ERS output
        a. Now, you have the videos recommended to you and you want to view the latest videos first.
           Imagine that you don’t have time to watch all of the recommended videos now and want to ensure that you can easily access a few of them later when you have more time to watch
           them. How will you proceed? How will you access them later? (Goal: to test ’sort’ and ’save’
           options).
        b. Imagine that you find some of the recommended videos more useful than others in under-
           standing the concepts you previously did not understand. How will you give this feedback
           to the system? (Goal: to test ’Helpful/Not helpful’ option).
5. Results
We asked all study participants to fill in a questionnaire after their interaction with the ERS. This
questionnaire assessed their experience of controlling different parts of the ERS on a five-point Likert
scale, using items from the ResQue evaluation framework [46]. Table 2 lists the items and the results
based on data obtained from the questionnaires.

Table 2
Post-study questionnaire and results based on ResQue [46]

Measure | Questions | Score (Mean (SD))
Recommendation Accuracy | The items recommended to me matched my interests. | 4.03 (0.75)
Recommendation Novelty | The recommender system helped me discover new videos. | 4.13 (0.56)
Recommendation Diversity | The items recommended to me are diverse. | 3.86 (0.56)
Interface Adequacy | The labels of the recommender interface are clear. / The labels of the recommender interface are adequate. / The layout of the recommender interface is attractive. / The layout of the recommender interface is adequate. | 3.86 (0.7)
Information Sufficiency | The information provided for the recommended videos is sufficient for me to make a decision to watch them. | 4.33 (0.59)
Interaction Adequacy | I found it easy to tell the system what I like/dislike. / I found it easy to inform the system if I like/dislike the recommended item. | 4.45 (0.75)
Perceived Ease of Use | I became familiar with the recommender system very quickly. / I easily found the recommended items. | 4.13 (0.79)
Control | I feel in control of modifying my interest profile. / I found it easy to modify my interest profile in the recommender. | 4.16 (0.8)
Transparency | I understood why the videos were recommended to me. | 4.2 (0.7)
Perceived Usefulness | The recommender helped me find the ideal item. / Using the recommender to find what I like is easy. / The recommender gave me good suggestions. | 3.94 (0.74)
Overall Satisfaction | Overall, I am satisfied with the recommender. | 4.26 (0.62)
Confidence | I am confident I will like the items recommended to me. / The recommender made me more confident about my selection/decision. | 4.05 (0.92)
Trust | The recommender can be trusted. | 3.9 (0.9)
Use Intentions | I will use this recommender again. / I will use this recommender frequently. / I will tell my friends about this recommender. | 4.04 (0.88)
Watch Intention | I would watch the videos recommended, given the opportunity. | 4.3 (0.58)
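For illustration, a score in Table 2 can in principle be obtained by averaging each participant’s ratings over a measure’s items and then computing the mean and standard deviation across participants. The following minimal sketch, with hypothetical ratings, demonstrates this aggregation; it is a plausible reading of the ResQue scoring rather than an excerpt of our analysis scripts.

```python
from statistics import mean, stdev

def measure_score(responses):
    """Aggregate one ResQue measure: average each participant's ratings over
    the measure's items, then report mean and SD across participants.

    responses: one list of 1-5 Likert ratings per participant
               (one rating per questionnaire item of the measure).
    """
    per_participant = [mean(items) for items in responses]
    return mean(per_participant), stdev(per_participant)

# Hypothetical ratings for the two-item 'Control' measure from four participants.
control_responses = [[4, 5], [4, 4], [5, 5], [3, 4]]
m, sd = measure_score(control_responses)
print(f"Control: {m:.2f} ({sd:.2f})")
```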




5.1. Perceived Benefits for Users
Concerning RQ1 ("How does complementing an ERS with user control impact users’ perceptions of the
ERS?"), the analysis of the questionnaire data shows that users’ feedback on the ERS is positive across
all perceived aspects. Regarding recommendation-related perceived system qualities, the scores for
recommendation accuracy (Mean=4.03, SD=0.75) and recommendation novelty (Mean=4.13, SD=0.56) are
relatively high, suggesting that user control over the ERS can lead to relevant and novel recommendations
that align with users’ interests and preferences. Similarly, for UI-related perceived system qualities,
the scores for information sufficiency (Mean=4.33, SD=0.59) and interaction adequacy (Mean=4.45,
SD=0.75) are also high, with interaction adequacy scoring the highest among all measures. This indicates
that users appreciated the interaction with the ERS input, process, and output and found it easy to
communicate their preferences and feedback to the ERS. Compared to information sufficiency and
interaction adequacy, interface adequacy received a lower score (Mean=3.86, SD=0.7), indicating that
the interface design needs improvements in terms of labels and layout.
   Regarding user beliefs, which refer to a higher level of user perception of a system influenced by its perceived qualities [46], the score for user control (Mean=4.16, SD=0.8) indicates that users felt
in control of their interactions with the ERS. Additionally, a high score for transparency (Mean=4.2,
SD=0.7) suggests that the ability to control different parts of the ERS helped users understand how
the ERS works. These high scores suggest that user control is positively associated with transparency.
We explore in detail the impact of user control on transparency and other recommendation goals in
Section 5.2. Furthermore, the score for perceived ease of use (Mean=4.13, SD=0.79) suggests that the ERS helps users find their preferred items quickly and that users found the ERS easy to navigate and quickly became familiar with it. In contrast, perceived usefulness received a comparatively lower score (Mean=3.94, SD=0.74), possibly because the diverse recommendations were not always perceived as optimal suggestions by some participants.

Figure 5: Results of user evaluation based on ResQue [46]
   With regard to user attitudes, referring to a user’s overall feeling towards an RS [46], a highly positive score for confidence (Mean=4.05, SD=0.92) reflects the recommender’s ability to convince users of the
recommended items. Furthermore, users showed a high level of overall satisfaction with the ERS
(Mean=4.26, SD=0.62), indicating an increased ease of use and enjoyment with the ERS. According to
Tintarev and Masthoff [47], satisfaction can also be measured indirectly by measuring user loyalty. Thus,
users’ behavioral intentions can be seen as an indirect measure of loyalty and satisfaction with the
system. To this end, the highly positive scores for use intentions (Mean=4.04, SD=0.88) and watch intention (Mean=4.3, SD=0.58) suggest that the system effectively influenced users’ decisions to continue using the ERS and reflect their satisfaction with the ERS. In terms of trust, the positive score (Mean=3.9, SD=0.9) shows increased user confidence in the ERS. However, trust scored comparatively lower than
transparency and satisfaction. This suggests that, while the recommendation goals of transparency,
trust, and satisfaction appear to move together, trust seems to be less correlated with the other two
goals. We investigate in detail the relationships and correlations between these goals in Section 5.3.

5.2. Impact of User Control on Different Goals
To identify the impact of user control on different recommendation goals, namely transparency, trust,
and satisfaction (RQ2), and to explore how these goals interact with each other (RQ3), we computed
Pearson Correlations between them. Figure 6 shows the Pearson Correlation between each pair of goals, computed from the participants’ responses in the user study. For each correlation, the figure also shows a 95% confidence interval measured using bootstrap sampling, as well as whether the correlation is statistically significant (adjusted p-value < 0.05).

Figure 6: Pearson Correlation between different goals, with 95% confidence intervals using bootstrap sampling; statistically significant correlations (adjusted p-value < 0.05) are marked with an asterisk (*)

The results from the correlation analysis reveal that user control has at least a moderate correlation with all goals. More specifically, user control strongly correlates with transparency with statistical significance. Therefore, our study provides evidence that user control over the ERS leads to increased transparency. This confirms
findings from previous studies from the recommendation domain which pointed out that user control is
very closely tied to transparency [31] and that control significantly affects transparency [46]. This is
also in line with studies from interactive recommendation literature which show that user control can
also contribute to increased transparency of RSs [25, 19]. Furthermore, in our study, most users (n=27,
90%) regardless of their background knowledge concurred that the ERS was perceived as transparent.
One reason we deem responsible for the high level of transparency among the majority of participants is
that they were able to control the ERS in a way that best fits their needs and preferences. Consequently,
we argue that, regardless of the users’ background knowledge, empowering users to take control of the
recommendation should be an integral feature in any ERS, if transparency is a desirable property for
the ERS. We refer to this type of transparency as transparency through controllability.
   Furthermore, our analysis shows that user control moderately correlates with trust with a statistical
significance. This confirms that providing more control over the RS is an important strategy to secure trust
[11, 10, 45] and that interactive features that allow users to control the RS increase their trust in the
system [14, 21, 26].
   Our study further shows that user control moderately correlates with user satisfaction with a
statistical significance. This suggests that user control over the RS positively influenced users’ satisfaction,
indicating an enhanced user experience. Our results confirm the findings by Pu et al. [46], who noted
that user control weighs heavily on the overall user experience with the RS. Furthermore, several
studies from the recommendation domain evidenced the positive effects of user control on satisfaction
[19, 28, 29, 18, 30] and user experience [12, 15, 16, 27]. In summary, our findings clearly demonstrate
the positive impact of user control in ERSs, showing that providing user control over the ERS leads to
increased perceived transparency, trust, and satisfaction.
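As a minimal sketch of such a correlation analysis, the following Python code computes Pearson correlations with 95% percentile-bootstrap confidence intervals and adjusts the p-values across the six goal pairs; the per-participant scores are synthetic placeholders, and the Holm correction is shown as one possible adjustment method.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

def pearson_with_bootstrap_ci(x, y, n_boot=2000, seed=0):
    """Pearson r with a 95% percentile-bootstrap confidence interval."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    r = stats.pearsonr(x, y)[0]
    rng = np.random.default_rng(seed)
    idx = rng.integers(0, len(x), size=(n_boot, len(x)))  # resample participants
    boot = [stats.pearsonr(x[i], y[i])[0] for i in idx]
    lo, hi = np.percentile(boot, [2.5, 97.5])
    return r, (lo, hi)

# Synthetic per-participant scores (N=30) for the four constructs.
rng = np.random.default_rng(1)
scores = {g: rng.uniform(3, 5, size=30)
          for g in ["control", "transparency", "trust", "satisfaction"]}

pairs = [("control", "transparency"), ("control", "trust"),
         ("control", "satisfaction"), ("transparency", "trust"),
         ("transparency", "satisfaction"), ("trust", "satisfaction")]
raw_p = [stats.pearsonr(scores[a], scores[b])[1] for a, b in pairs]
# Adjust p-values for the six comparisons (Holm correction as an example).
_, adj_p, _, _ = multipletests(raw_p, alpha=0.05, method="holm")

for (a, b), p in zip(pairs, adj_p):
    r, (lo, hi) = pearson_with_bootstrap_ci(scores[a], scores[b])
    print(f"{a} vs {b}: r={r:.2f}, 95% CI [{lo:.2f}, {hi:.2f}], adj. p={p:.3f}")
```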

5.3. Interaction Between Different Goals
Regarding RQ3, the results from the correlation analysis (see Figure 6) reveal that all three goals, transparency, trust, and satisfaction, appear to move together; however, the correlation is not strong in all cases. We found in our study that transparency moderately correlates with satisfaction with statistical significance. Moreover, trust strongly correlates with satisfaction with statistical significance. We conclude that satisfaction is the goal most correlated with all the other goals. This aligns with previous research
suggesting that overall user satisfaction with RS is strongly related to transparency and trust. For
instance, in the explainable RS domain, Balog and Radlinski [48], and Guesmi et al. [49, 50] found that
satisfaction is positively correlated with transparency and trust. In the same context, Gedikli et al.
[51] reported results from experiments with different explanations clearly showing that transparency
has a significant positive effect on user satisfaction. Moreover, a lower-transparency RS is known to
negatively affect user satisfaction with the RS [31]. Overall, our results provide evidence that, similar to
findings from the explainable RS domain, transparency and trust have significant positive effects on
user satisfaction in the interactive RS domain as well.
   Additionally, we found in our study that transparency and trust stand out as less correlated with each
other. This suggests that transparency and trust, if they are desirable properties for an IntRS, should be
evaluated as distinct, even if they may interact. One reason we consider responsible for the relatively low
correlation between transparency and trust is that offering interaction mechanisms at different levels
provides a significant degree of user control which is, as discussed in Section 5.2, highly influential to
transparency, but also introduces the risk of overwhelming users [45]. As pointed out by Jin et al. [7],
providing additional controls can also increase cognitive load, and different users have different needs
for control. This suggests that providing too much control does not necessarily lead to greater trust,
as it can increase the cognitive load on users. This further highlights that there is a trade-off between
the amount of user control and the level of trust users develop when interacting with the RS. Further
research is needed to find an optimal level of user control that will generate the highest level of users’
trust in the RS.
   Another reason that can explain the relatively low correlation between transparency and trust is
that while transparency can be achieved through controllability, interacting with the input, process,
or output of the RS cannot always assure that users understand the underlying rationale of the RS,
especially when the recommendation mechanism is too complicated for non-expert users [25], which
can negatively impact the perceived trust of the RS. Considerable transparency could also be achieved through explanation, which is recognized as an important factor that fosters user trust in RSs, as it
can improve users’ understanding of how the system works [31]. We hypothesize that a different result
may be observed if explanations were provided in the ERS together with interactions. It is therefore
important to explore in future work the effects of different types of transparency (i.e., transparency
through controllability vs. transparency through explanation) on users’ trust in RSs [52].
   A further possible reason for the relatively low correlation between transparency and trust is that,
as found in [46], trust depends primarily on the RS’s ability to formulate good recommendations (i.e.,
perceived usefulness) and provide useful explanations (i.e., transparency through explanation), rather
than on control (i.e., transparency through controllability).


6. Conclusion and Future Work
In this paper, we aimed to shed light on an aspect that remains under-researched in the literature
on educational recommender systems (ERSs), namely the effects of providing user control on users’
perceptions of ERSs. To this end, we systematically designed and evaluated user control in the ERS
module of the MOOC platform CourseMapper. Specifically, we introduced user control with the input
(i.e., user profile), process (i.e., recommendation algorithm), and output (i.e., recommendations) of
the ERS. We conducted an online user study (N=30) to explore the impact of user control on users’
perceptions of the ERS in terms of several important user-centric aspects. Moreover, we examined the
impact of user control on transparency, trust, and satisfaction, and explored the interactions among
these recommendation goals. Our findings indicate that users responded positively to the control
mechanisms provided in the ERS. Our results further reveal that satisfaction is the goal most strongly correlated with all the others, and that transparency and trust stand out as less correlated with each other. This suggests that it may be necessary to consider transparency and trust separately in the evaluation of interactive ERSs. As future work, we plan to conduct a more comprehensive user study, incorporating qualitative analysis to gain deeper insights into the relationships between user control, transparency, trust, and satisfaction. Another interesting direction would be to investigate the effects of the different levels of user control (i.e., input, process, output) on users’ perceptions of and interactions with the ERS.


References
 [1] N. Manouselis, H. Drachsler, R. Vuorikari, H. Hummel, R. Koper, Recommender systems in
     technology enhanced learning, Recommender systems handbook (2011) 387–415.
 [2] S. S. Khanal, P. Prasad, A. Alsadoon, A. Maag, A systematic review: machine learning based
     recommendation systems for e-learning, Education and Information Technologies (2020).
 [3] S. Valtolina, R. A. Matamoros, F. Epifania, Design of a conversational recommender system in
     education, User Modeling and User-Adapted Interaction (2024) 1–29.
 [4] H. Chau, J. Barria-Pineda, P. Brusilovsky, Learning content recommender system for instructors
     of programming courses, in: Artificial Intelligence in Education: 19th International Conference,
     AIED 2018, London, UK, June 27–30, 2018, Proceedings, Part II 19, Springer, 2018, pp. 47–51.
 [5] F. Bousbahi, H. Chorfi, MOOC-Rec: A case based recommender system for MOOCs, Procedia - Social
     and Behavioral Sciences 195 (2015) 1813–1822. World Conference on Technology, Innovation and
     Entrepreneurship.
 [6] O. C. Santos, M. Saneiro, J. G. Boticario, M. C. Rodriguez-Sanchez, Toward interactive context-
     aware affective educational recommendations in computer-assisted language learning, New
     Review of Hypermedia and Multimedia 22 (2016) 27–57.
 [7] Y. Jin, N. Tintarev, K. Verbert, Effects of personal characteristics on music recommender sys-
     tems with different levels of controllability, in: Proceedings of the 12th ACM Conference on
     Recommender Systems, 2018, pp. 13–21.
 [8] C. He, D. Parra, K. Verbert, Interactive recommender systems: A survey of the state of the art and
     future research challenges and opportunities, Expert Systems with Applications 56 (2016) 9–27.
 [9] M. Jugovac, D. Jannach, Interacting with recommenders—overview and research directions, ACM
     Transactions on Interactive Intelligent Systems (TiiS) 7 (2017) 1–46.
[10] D. Jannach, S. Naveed, M. Jugovac, User control in recommender systems: Overview and interaction
     challenges, in: E-Commerce and Web Technologies: 17th International Conference, EC-Web 2016,
     Porto, Portugal, September 5-8, 2016, Revised Selected Papers 17, Springer, 2017, pp. 21–33.
[11] J. Harambam, D. Bountouridis, M. Makhortykh, J. van Hoboken, Designing for the better by taking
     users into account: a qualitative evaluation of user control mechanisms in (news) recommender
     systems, in: Proceedings of the 13th ACM Conference on Recommender Systems, RecSys ’19, 2019.
[12] B. P. Knijnenburg, S. Bostandjiev, J. O’Donovan, A. Kobsa, Inspectability and control in social
     recommenders, in: Proceedings of the Sixth ACM Conference on Recommender Systems, 2012.
[13] J. Barria-Pineda, P. Brusilovsky, Explaining educational recommendations through a concept-level
     knowledge visualization, in: Companion Proceedings of the 24th International Conference on
     Intelligent User Interfaces, 2019, pp. 103–104.
[14] J. Schaffer, T. Höllerer, J. O’Donovan, Hypothetical recommendation: A study of interactive profile
     manipulation behavior for recommender systems, in: 28th International FLAIRS Conference, 2015.
[15] J. O’Donovan, B. Smyth, B. Gretarsson, S. Bostandjiev, T. Höllerer, Peerchooser: visual interactive
     recommendation, in: Proceedings of the SIGCHI Conference on Human Factors in Computing
     Systems, CHI ’08, Association for Computing Machinery, New York, NY, USA, 2008, pp. 1085–1088.
[16] S. Bostandjiev, J. O’Donovan, T. Höllerer, Tasteweights: a visual interactive hybrid recommender
     system, in: Proceedings of the 6th ACM Conference on Recommender Systems, Association for
     Computing Machinery, New York, USA, 2012.
[17] F. M. Harper, F. Xu, H. Kaur, K. Condiff, S. Chang, L. Terveen, Putting users in control of their
     recommendations, in: Proceedings of the 9th ACM Conference on Recommender Systems, RecSys
     ’15, Association for Computing Machinery, New York, NY, USA, 2015, pp. 3–10.
[18] B. Gretarsson, J. O’Donovan, S. Bostandjiev, C. Hall, T. Höllerer, Smallworlds: Visualizing social
     recommendations, Computer Graphics Forum 29 (2010) 833–842.
[19] C.-H. Tsai, P. Brusilovsky, Providing control and transparency in a social recommender system for
     academic conferences, in: Proceedings of the 25th conference on user modeling, adaptation and
     personalization, 2017, pp. 313–317.
[20] A. Kangasrääsiö, D. Glowacka, S. Kaski, Improving controllability and predictability of interactive
     recommendation interfaces for exploratory search, in: Proceedings of the 20th International
     Conference on Intelligent User Interfaces, IUI ’15, 2015.
[21] S. Bruns, A. C. Valdez, C. Greven, M. Ziefle, U. Schroeder, What should I read next? A personalized
     visual publication recommender system, in: International Conference on Human Interface and
     the Management of Information, Springer, 2015, pp. 89–100.
[22] S. Zhao, M. X. Zhou, Q. Yuan, X. Zhang, R. Zheng, Who is talking about what: social map-based
     recommendation for content-centric social websites, in: Proceedings of the 4th ACM Conference
     on Recommender Systems, New York, USA, 2010.
[23] N. Tintarev, B. Kang, T. Höllerer, J. O’Donovan, Inspection mechanisms for community-based
     content discovery in microblogs, in: IntRS@RecSys, 2015, pp. 21–28.
[24] Y. Chen, P. Pu, Cofeel: Using emotions for social interaction in group recommender systems
     (2012).
[25] C.-H. Tsai, P. Brusilovsky, The effects of controllability and explainability in a social recommender
     system, User Modeling and User-Adapted Interaction 31 (2021) 591–627.
[26] B. Loepp, T. Hussein, J. Ziegler, Choice-based preference elicitation for collaborative filtering
     recommender systems, in: Proceedings of the SIGCHI Conference on Human Factors in Computing
     Systems, 2014, pp. 3085–3094.
[27] D. Parra, P. Brusilovsky, C. Trattner, See what you want to see: visual user-driven approach for
     hybrid recommendation, in: Proceedings of the 19th International Conference on Intelligent User
     Interfaces, IUI ’14, Association for Computing Machinery, New York, USA, 2014, pp. 235–240.
[28] Y. Jin, K. Seipp, E. Duval, K. Verbert, Go with the flow: effects of transparency and user control on
     targeted advertising using flow charts, in: Proceedings of the international working conference
     on advanced visual interfaces, 2016, pp. 68–75.
[29] S. Bostandjiev, J. O’Donovan, T. Höllerer, Linkedvis: Exploring social and semantic career recom-
     mendations, in: Proceedings of the 2013 International Conference on Intelligent User Interfaces,
     IUI ’13, 2013, pp. 107–116. doi:10.1145/2449396.2449412.
[30] K. Verbert, D. Parra, P. Brusilovsky, E. Duval, Visualizing recommendations to support explo-
     ration, transparency and controllability, in: Proceedings of the 2013 International Conference on
     Intelligent User Interfaces, IUI ’13, ACM, New York, NY, USA, 2013.
[31] N. Tintarev, J. Masthoff, Explaining recommendations: Design and evaluation, in: Recommender
     systems handbook, Springer, 2015, pp. 353–382.
[32] Q. U. Ain, M. A. Chatti, S. Joarder, I. Nassif, B. S. Wobiwo Teda, M. Guesmi, R. Alatrash, Learning
     channels to support interaction and collaboration in CourseMapper, in: Proceedings of the 14th
     International Conference on Education Technology and Computers, ICETC ’22, 2023.
[33] Q. U. Ain, M. A. Chatti, P. A. Meteng Kamdem, R. Alatrash, S. Joarder, C. Siepmann, Learner
     modeling and recommendation of learning resources using personal knowledge graphs, in:
     Proceedings of the 14th Learning Analytics and Knowledge Conference, 2024, pp. 273–283.
[34] E. Fotopoulou, A. Zafeiropoulos, M. Feidakis, D. Metafas, S. Papavassiliou, An interactive rec-
     ommender system based on reinforcement learning for improving emotional competences in
     educational groups, in: International Conference on Intelligent Tutoring Systems, Springer, 2020.
[35] M. Bustos López, G. Alor-Hernández, J. L. Sánchez-Cervantes, M. A. Paredes-Valverde, M. d. P.
     Salas-Zárate, Edurecomsys: an educational resource recommender system based on collaborative
     filtering and emotion detection, Interacting with Computers 32 (2020) 407–432.
[36] F. L. da Silva, B. K. Slodkowski, K. K. A. da Silva, S. C. Cazella, A systematic literature review
     on educational recommender systems for teaching and learning: research trends, limitations and
     opportunities, Education and Information Technologies 28 (2023) 3289–3328.
[37] A. Zapata, V. H. Menéndez, M. E. Prieto, C. Romero, Evaluation and selection of group recom-
     mendation strategies for collaborative searching of learning objects, International Journal of
     Human-Computer Studies 76 (2015) 22–39.
[38] S. Abdi, H. Khosravi, S. Sadiq, D. Gasevic, Complementing educational recommender systems with
     open learner models, in: Proceedings of the tenth international conference on learning analytics
     & knowledge, 2020, pp. 360–365.
[39] M. Vlachos, D. Svonava, Graph embeddings for movie visualization and recommendation, in: First
     International Workshop on Recommendation Technologies for Lifestyle Change, 2012.
[40] J. B. Schafer, J. A. Konstan, J. Riedl, Meta-recommendation systems: user-controlled integration of
     diverse recommendations, in: Proceedings of the Eleventh International Conference on Information
     and Knowledge Management, Association for Computing Machinery, NY, USA, 2002.
[41] Y. Saito, T. Itoh, Musicube: a visual music recommendation system featuring interactive evolution-
     ary computing, in: Proceedings of the 2011 Visual Information Communication - International
     Symposium, VINCI ’11, Association for Computing Machinery, New York, USA, 2011.
[42] D. Wong, S. Faridani, E. Bitton, B. Hartmann, K. Goldberg, The diversity donut: enabling participant
     control over the diversity of recommended responses, in: CHI’11 Extended Abstracts on Human
     Factors in Computing Systems, 2011, pp. 1471–1476.
[43] M. D. Ekstrand, D. Kluver, F. M. Harper, J. A. Konstan, Letting users choose recommender
     algorithms: An experimental study, in: Proceedings of the 9th ACM Conference on Recommender
     Systems, 2015.
[44] J. Ooge, L. Dereu, K. Verbert, Steering recommendations and visualising its impact: Effects on
     adolescents’ trust in e-learning platforms, in: Proceedings of the 28th International Conference on
     Intelligent User Interfaces, IUI ’23, 2023, p. 156–170.
[45] D. Jannach, M. Jugovac, I. Nunes, Explanations and user control in recommender systems, in:
     Proceedings of the 23rd International Workshop on Personalization and Recommendation on the
     Web and Beyond, 2019, p. 31.
[46] P. Pu, L. Chen, R. Hu, A user-centric evaluation framework for recommender systems, in: ACM
     Conference on Recommender Systems, 2011.
[47] N. Tintarev, J. Masthoff, A survey of explanations in recommender systems, in: 2007 IEEE 23rd
     International Conference on Data Engineering Workshop, 2007, pp. 801–810.
[48] K. Balog, F. Radlinski, Measuring recommendation explanation quality: The conflicting goals of
     explanations, in: Proceedings of the 43rd international ACM SIGIR conference on research and
     development in information retrieval, 2020, pp. 329–338.
[49] M. Guesmi, M. A. Chatti, S. Joarder, Q. U. Ain, R. Alatrash, C. Siepmann, T. Vahidi, Interactive
     explanation with varying level of details in an explainable scientific literature recommender system,
     International Journal of Human–Computer Interaction (2023) 1–22.
[50] M. Guesmi, M. A. Chatti, S. Joarder, Q. U. Ain, C. Siepmann, H. Ghanbarzadeh, R. Alatrash, Justifi-
     cation vs. transparency: Why and how visual explanations in a scientific literature recommender
     system, Information 14 (2023) 401.
[51] F. Gedikli, D. Jannach, M. Ge, How should i explain? a comparison of different explanation types
     for recommender systems, International Journal of Human-Computer Studies 72 (2014) 367–382.
[52] C. Siepmann, M. A. Chatti, Trust and transparency in recommender systems, arXiv preprint
     arXiv:2304.08094 (2023).