<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Designing and Evaluating an Educational Recommender System with Different Levels of User Control</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Qurat Ul Ain</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Mohamed Amine Chatti</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>William Kana Tsoplefack</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Rawaa Alatrash</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Shoeb Joarder</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Social Computing Group, Faculty of Computer Science, University of Duisburg-Essen</institution>
          ,
          <addr-line>Duisburg</addr-line>
          ,
          <country country="DE">Germany</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>Educational recommender systems (ERSs) play a crucial role in personalizing learning experiences and enhancing educational outcomes by providing learners with recommendations of resources and activities tailored to their individual learning needs. However, their effectiveness is often diminished by insufficient user control and limited transparency. To address these challenges, in this paper, we present the systematic design and evaluation of an interactive ERS, in which we introduce different levels of user control. Concretely, we introduce user control around the input (i.e., user profile), process (i.e., recommendation algorithm), and output (i.e., recommendations) of the ERS. To evaluate our system, we conducted an online user study (N=30) to explore the impact of user control on users' perceptions of the ERS in terms of several important user-centric aspects. Moreover, we investigated the effects of user control on multiple recommendation goals, namely transparency, trust, and satisfaction, as well as the interactions between these goals. Our results demonstrate the positive impact of user control on the perceived benefits of the ERS. Moreover, our study shows that user control strongly correlates with transparency and moderately correlates with trust and satisfaction. In terms of the interaction between these goals, our results reveal that transparency moderately correlates and trust strongly correlates with satisfaction, whereas transparency and trust are less correlated with each other.</p>
      </abstract>
      <kwd-group>
        <kwd>Educational Recommender Systems</kwd>
        <kwd>Interactive Recommender Systems</kwd>
        <kwd>User Control</kwd>
        <kwd>Transparency</kwd>
        <kwd>Trust</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Recommender systems (RSs) are widely used across various application domains, such as e-commerce
sites, online streaming websites, and social media platforms. These systems have proven effective at
enhancing user experience and aiding decision-making through personalized recommendations. In
recent decades, RSs have also been applied to the field of education, leading to the development of
educational recommender systems (ERSs) [
        <xref ref-type="bibr" rid="ref1 ref2">1, 2</xref>
        ]. In this context, RSs are for example used to create
personalized learning experiences [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ], recommend suitable formal or informal learning materials [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ],
suggest MOOCs [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ], and adapt to context-aware learning environments [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ].
      </p>
      <p>
        Conventional RSs usually offer minimal feedback options in the user interface, permitting users
merely to indicate whether they like or dislike a recommendation [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]. Interactive RSs (IntRSs) have come into the
limelight as an approach to empower users to control and interact with the RS [
        <xref ref-type="bibr" rid="ref10 ref8 ref9">8, 9, 10</xref>
        ]. Controllability
refers to the extent to which the system allows users to adjust the recommendation process to enhance
the quality of recommendations [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. Concretely, users can control the RS at three different levels,
namely interacting with the input (i.e., user profile), process (i.e., recommendation algorithm), or output
(i.e., recommendations) of the RS [
        <xref ref-type="bibr" rid="ref11 ref8">8, 11</xref>
        ]. The control is provided in the RS by allowing users to adjust
preferences, change parameters of the underlying algorithm, directly interact with recommendations,
and provide feedback, resulting in greater perceived control and a more transparent recommendation
process [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ]. Compared to the application of IntRSs in e-commerce, entertainment, and social media
domains, providing user control is under-explored in ERSs [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ].
      </p>
      <p>
        User control has proven to have a positive impact on different recommendation goals. These include
perceived accuracy of recommendations [
        <xref ref-type="bibr" rid="ref14 ref15 ref16 ref17 ref18">14, 15, 16, 17, 18</xref>
        ], usability [19, 20, 21, 22], perceived usefulness
[23, 20, 24, 22], user-perceived transparency [19, 25], trust [
        <xref ref-type="bibr" rid="ref14">14, 21, 26</xref>
        ], user experience [
        <xref ref-type="bibr" rid="ref12 ref15 ref16">12, 15, 16, 27</xref>
        ],
cognitive load and recommendation acceptance [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ], user satisfaction [
        <xref ref-type="bibr" rid="ref18">19, 28, 29, 18, 30</xref>
        ], and user
acceptance [20, 29]. These findings highlight the impact of user control on various recommendation
goals, suggesting that these goals may interact with each other. However, to the best of our knowledge,
no work has yet investigated the impact of user control on transparency (i.e., explaining how the system
works), trust (i.e., increasing the user’s confidence in the system), and satisfaction (i.e., increasing the ease of
use or enjoyment) [31] together, nor has there been an investigation into how these goals interact with
each other in an interactive recommendation context.
      </p>
      <p>This paper addresses these gaps by introducing user control at multiple levels within the ERS module
of the MOOC platform CourseMapper [32]. In this way, we enable users to interact with the input,
process, and output of the ERS. Moreover, we present the systematic design of the ERS, which is also
lacking in the existing literature on IntRSs and ERSs. Furthermore, we examine the impact of user control
on multiple recommendation goals, namely transparency, trust, and satisfaction. We also explore how
these goals interact with each other. The following research questions guide our investigation:
• RQ1. How does complementing an ERS with user control impact users’ perceptions of the ERS?
• RQ2. What are the effects of user control on transparency of, trust in, and satisfaction with the
ERS?</p>
      <p>• RQ3. How do the recommendation goals of transparency, trust, and satisfaction interact with
each other in an interactive recommendation setting?</p>
      <p>To answer these research questions, we conducted an online user study (N=30). Our findings
demonstrate the positive impact of user control in ERSs in terms of several important user-centric aspects
including perceived accuracy, novelty, interaction adequacy, perceived user control, transparency, trust,
user satisfaction, and use intentions. Moreover, our analysis shows that user control has at least
moderate correlation with all goals, while some pairs are particularly strongly correlated with each
other. More specifically, user control strongly correlates with transparency and moderately correlates
with trust and satisfaction. Regarding the interaction between these goals, our study indicates that while
transparency moderately correlates with satisfaction and trust strongly correlates with satisfaction,
transparency and trust are less correlated with each other.</p>
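For readers who want the statistic behind such statements, correlation strengths like those reported here are conventionally described using rule-of-thumb thresholds on the Pearson coefficient. The sketch below is our own illustration of that convention (the thresholds are a common rule of thumb, not taken from this paper's analysis, whose details appear in Section 5):

```python
from math import sqrt

def pearson(xs, ys):
    # Pearson correlation coefficient between two equal-length samples.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def strength(r):
    # Common rule-of-thumb labels for |r|: >= 0.7 strong, >= 0.4 moderate, else weak.
    a = abs(r)
    return "strong" if a >= 0.7 else "moderate" if a >= 0.4 else "weak"
```

For instance, two questionnaire constructs whose per-participant scores rise and fall together would yield r close to 1 and be labeled "strong" under this convention.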
      <p>The remainder of this paper is organized as follows. We first introduce two branches of related work,
namely educational recommender systems and interactive recommender systems in Section 2. We then
describe the systematic design and implementation of our interactive ERS in Section 3. Next, we present
the details of the online user study that we conducted to evaluate our ERS in Section 4 and the analysis
of the results in Section 5. Finally, in Section 6, we summarize the work and outline our future research
plans.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Background and Related Work</title>
      <p>This section discusses related work on the application of recommender systems in the educational
domain and interactive recommender systems that support user interaction with and control of
recommender systems.</p>
      <sec id="sec-2-1">
        <title>2.1. Educational Recommender Systems</title>
        <p>
          Educational recommender systems (ERSs) have become a vital tool in personalized learning
environments, offering tailored recommendations to enhance the educational experience. These systems
leverage various algorithms and data sources to suggest resources, courses, and learning activities that
align with individual needs, preferences, and learning styles [33]. ERSs have been facilitating learning
and teaching in various ways. For example, to recommend learning materials to support instructors in
online programming courses [
          <xref ref-type="bibr" rid="ref4">4</xref>
          ], to recommend educational activities to a group [34], and to provide
recommendations while preparing for the oral examination of a language learning course [
          <xref ref-type="bibr" rid="ref6">6</xref>
          ]. Most
of these ERSs either propose algorithmic enhancements or new frameworks for recommendation, or
implement existing/new recommendation techniques in an educational context, such as collaborative
filtering [35], emotion detection [35], content-based similarity [
          <xref ref-type="bibr" rid="ref5">5</xref>
          ], and data mining and machine learning
[34]. We refer the interested reader to two recent literature reviews in this area [
          <xref ref-type="bibr" rid="ref2">2, 36</xref>
          ].
        </p>
        <p>
          While many ERSs have been proposed in the literature, there has been limited emphasis on enhancing
the interactivity of these systems by incorporating various control options into the user interface (UI).
Only a few attempts have been made to provide interactivity and control in ERSs. Bustos López et al.
[35] presented an interactive ERS where the user can search for educational resources based on three
main criteria, namely keywords, category, and type of resource. The user can view recommendations
generated through collaborative filtering or emotion detection. In the list of recommendations, the
user can view more details using a ’Details’ button. Moreover, they can provide feedback on the
recommendations using a five-star rating, as well as mark them as favorites to view them later. Furthermore,
the user can write reviews about the recommended resources in the comments section. Another
interesting attempt to interact with the ERS has been presented in [
          <xref ref-type="bibr" rid="ref6">6</xref>
          ] to offer context-aware affective
educational recommendations in computer-assisted language learning in an Arduino-based platform.
The recommendation module takes input through sensors and provides interactive support to learners
using different communication methods like visuals, sounds, or touch. Bousbahi and Chorfi [
          <xref ref-type="bibr" rid="ref5">5</xref>
          ]
proposed an interactive MOOC recommender, where users can interact with the system to formulate a
request as input to the RS, e.g., add a keyword for the course title (text input), and select features (e.g.,
course fee, availability, language) using checkboxes and dropdowns. Zapata et al. [37] presented a group
recommender (DELPHOS) that recommends learning objects (LOs) to a group of individuals. As an
input, similar to a search engine, the user defines the desired search parameters based on a required text
query or keywords, some optional metadata values, and different filtering or recommendation criteria
using sliders and checkboxes. Afterwards, DELPHOS shows the user a ranked list of recommended LOs
which users can rate on a five-star Likert scale; group members can add one or more tags to LOs, as
well as add personal comments or additional information to them. In the area of conversational RSs
in education, Valtolina et al. [
          <xref ref-type="bibr" rid="ref3">3</xref>
          ] presented an intelligent chatbot-based RS to assist teachers in their
activities by suggesting the best LOs and how to combine them according to their prerequisites and
outcomes. The interaction with the RS is via text input, where the chatbot asks specific questions and the
user provides answers in textual format. While there are a few attempts to make ERSs more transparent
by introducing open learner models (OLMs) (e.g., [
          <xref ref-type="bibr" rid="ref13">13, 38</xref>
          ]), these OLMs, however, only show learners
their system-generated interests and do not allow learners to interact with them or modify them.
        </p>
        <p>In summary, ERSs generally provide less user control and interactivity compared to interactive
RSs in other domains such as e-commerce, entertainment, and social media. A possible reason is
that introducing user control in ERSs carries the risk that the user interfaces and the complexity of the
interactive recommendation task might overwhelm learners and, consequently, have a negative
effect on the learning experience. Moreover, recent attempts to introduce interactivity in ERSs
have mainly focused on providing user control mechanisms either with the input or the output of the
ERS. There exists a significant research gap and potential opportunities to make ERSs more transparent
by incorporating user control and interaction. To this end, in this paper, we present an interactive ERS
in which we introduce different levels of user control by allowing users to interact with all three
parts of the ERS, namely input (i.e., user profile), process (i.e., recommendation algorithm), and output
(i.e., recommendations).</p>
      </sec>
      <sec id="sec-2-2">
        <title>2.2. Interactive Recommender Systems</title>
        <p>
          Research on RSs has traditionally focused on improving the accuracy of recommendations by developing
new algorithms or integrating additional data sources into the recommendation process [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ]. However,
many studies have demonstrated that higher accuracy does not always enhance the user experience of
the RS [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ]. Consequently, recent research has shifted towards understanding how different interface
elements and user characteristics impact the overall user experience with RSs. Furthermore, an effective
RS should also take into account factors such as transparency to ensure societal value and trust [
          <xref ref-type="bibr" rid="ref11">31, 11</xref>
          ].
This shift in focus from purely algorithmic improvements to enhancing user experience has led to the
development of what are known as interactive recommender systems (IntRSs), which emphasize user
control and interactivity to achieve greater transparency in RSs [
          <xref ref-type="bibr" rid="ref10 ref8 ref9">8, 9, 10</xref>
          ].
        </p>
        <p>
          IntRSs offer visual and exploratory UIs, allowing users to inspect the recommendation process and
control the system to receive better recommendations [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ]. These interactive, visual, and exploratory
UIs progressively guide users toward their objectives, enhance their understanding of the system’s
functionality, and ultimately contribute to transparency [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ]. IntRSs have been developed in various
domains including movies [
          <xref ref-type="bibr" rid="ref14 ref15 ref17">39, 14, 26, 15, 40, 17</xref>
          ], music [
          <xref ref-type="bibr" rid="ref16 ref7">16, 41, 7</xref>
          ], news [
          <xref ref-type="bibr" rid="ref11">11</xref>
          ], publications [27, 21],
tweets [23], group recommenders [24], social recommenders [
          <xref ref-type="bibr" rid="ref18">42, 22, 18</xref>
          ], conference recommenders
[30], and job recommenders [
          <xref ref-type="bibr" rid="ref15">15</xref>
          ]. To gain a deeper understanding of IntRSs, we refer the interested
reader to the excellent literature reviews on this topic in [
          <xref ref-type="bibr" rid="ref10 ref8">10, 8</xref>
          ].
        </p>
        <p>
          IntRSs can roughly be grouped by the level at which they allow users to take control, namely the RS input
(i.e., user profile), process (i.e., recommendation algorithm), and/or output (i.e., recommendations)
[
          <xref ref-type="bibr" rid="ref10 ref8">8, 10</xref>
          ]. Interaction with the input of the RS allows users to create or modify their interests as they want.
This level of control is provided by either allowing users to add, delete, or re-rate items in their profile
using various UI elements [
          <xref ref-type="bibr" rid="ref11 ref14 ref16 ref7">14, 16, 29, 21, 28, 40, 23, 7, 11</xref>
          ] or allowing them to visually interact with
visualizations of their interest profile to modify them [
          <xref ref-type="bibr" rid="ref15">20, 42, 15</xref>
          ]. Users can control the process of the
RS by either choosing the recommendation algorithm [
          <xref ref-type="bibr" rid="ref11">43, 30, 11</xref>
          ], or by manipulating the algorithmic
parameters [
          <xref ref-type="bibr" rid="ref16 ref17 ref7">43, 17, 16, 27, 29, 7</xref>
          ], using the UI elements provided. Lastly, users can control the output of
the recommender by providing feedback to the recommendations [26, 24], or ordering and sorting the
recommendations as they want [
          <xref ref-type="bibr" rid="ref11 ref16 ref17 ref18 ref7">16, 11, 17, 7, 23, 30, 18, 22, 42, 21</xref>
          ], based on the interactive elements
provided in the UI.
        </p>
        <p>
          In summary, IntRSs offer user interaction and control at three different levels,
namely input, process, and output. As summarized in Table 1, only a few IntRSs support interaction at
all three levels [
          <xref ref-type="bibr" rid="ref11 ref16 ref7">16, 41, 7, 11</xref>
          ]. Most of the IntRSs enable user interaction at the input level, allowing
users to provide or adjust their preferences. A smaller number of IntRSs facilitate interaction with
the process, where users can adjust algorithmic parameters. Only a few recommenders allow switching
between different algorithms. At the output level, most systems provide users with the ability to sort
recommendations, while fewer offer options to give feedback on the recommendations. With regard to
UI design, only the study in [
          <xref ref-type="bibr" rid="ref11">11</xref>
          ] focused on the systematic design of the different control mechanisms
in the UI of their proposed IntRS. To address these gaps, in this work, we introduce an interactive ERS
that provides a degree of interactivity and user control not commonly found in the educational domain.
Furthermore, we present the systematic design of the ERS in the MOOC platform CourseMapper in
which we provide control across all three levels, i.e., input, process, and output of the ERS.
        </p>
        <p>
          The impact of user control has been investigated in the literature on interactive recommendations in
various ways. Many researchers have studied the impact of user control on one or more recommendation
goals, namely, perceived quality of recommendations [25, 28], perceived accuracy of recommendations
[
          <xref ref-type="bibr" rid="ref14 ref15 ref16 ref17 ref18">14, 15, 21, 16, 17, 18</xref>
          ], recommendation novelty [26], recommendation diversity [42], usability [20, 21, 22,
19], ease of use and playfulness [24], perceived usefulness [20, 24, 23, 22], user-perceived transparency
[19, 25], trust in the RS [
          <xref ref-type="bibr" rid="ref14">14, 21, 26, 28, 44</xref>
          ], user experience with the RS [
          <xref ref-type="bibr" rid="ref12 ref15 ref16">12, 15, 16, 27</xref>
          ], cognitive load
and recommendation acceptance [
          <xref ref-type="bibr" rid="ref7">7</xref>
          ], confidence with the RS [40, 28], user satisfaction with the RS
[
          <xref ref-type="bibr" rid="ref14 ref18">14, 29, 18, 30, 19, 28</xref>
          ], behavioural intentions of the user [28], and user acceptance of the RS [20, 29, 28].
While trust and satisfaction were the focus of a considerable number of studies, transparency remains
under-explored. Moreover, there is a notable gap in that no research has comprehensively studied the
effects of user control on transparency, trust, and satisfaction together in the same study. Additionally,
the impact of these goals on each other has yet to be explored in the interactive recommendation
context. To address this research gap, in this paper, we study the impact of user control on transparency,
trust, and satisfaction. Moreover, we investigate how these goals interact with each other.
        </p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. System Design</title>
      <p>In this section, we present the design of the ERS module in the MOOC platform CourseMapper [32],
which introduces user control at three different levels, namely input, process, and output (see Figure 1).</p>
      <sec id="sec-3-1">
        <title>3.1. User Interface Design</title>
        <p>In this section, we discuss the systematic approach taken to design interactive components for the UI of
the ERS in CourseMapper, focusing on enhancing user control. We began by investigating the existing
literature on IntRSs to identify a range of user control mechanisms and interaction options commonly
employed in these systems, and analyzing their effectiveness in enhancing user control. The literature
was explored with a focus on answering the question: How has user control been added to the UI of RSs to
enable users to interact with different parts of the RS, i.e., input, process, and output? Once the interaction
mechanisms were identified, we chose the ones that are equally applicable to our context to ensure a
better user experience. In this way, we designed interaction and control options in the UI of our ERS,
focusing on the interaction with the input, process, and output of the ERS.</p>
        <p>
          Beginning with the input of the IntRS, interaction at this level gives users control to
manage their preferences directly, rather than the traditional approach in which user preferences are estimated
by observing their behavior over time. Many interaction options have been implemented at the input
part of the recommenders to help users personalize their recommendations. Typically, to refine their
requirements, users are asked to mark a set of items extracted by the system based on their past activity,
using a binary scale in terms of "like/dislike" options, for instance, using "Yes" or "No" buttons [41],
or "Like" or "Dislike" buttons [28]. Another option to interact with the input is provided by letting
the users change the weights of their interests or re-rate items using sliders [
          <xref ref-type="bibr" rid="ref14 ref16">23, 29, 14, 16</xref>
          ], or using
a pre-defined sliding scale ranging from ‘Strongly Disagree’ to ‘Strongly Agree’ [42]. Another way
is to let the users choose or modify their interests, for example, add or delete items in their profile
using buttons with icons [
          <xref ref-type="bibr" rid="ref14 ref7">7, 14, 29</xref>
          ], or radio buttons [23], and re-rate items using sliders [
          <xref ref-type="bibr" rid="ref11 ref14 ref16">16, 14, 21, 11</xref>
          ].
Alternatively, there are more complex methods for obtaining preferences. Examples include using filters
for specific items [21], drop-down lists, checkboxes, and radio buttons to specify different dimensions
of the interests [28, 40], or using toggle buttons to enable/disable certain interests [
          <xref ref-type="bibr" rid="ref11">11</xref>
          ]. Furthermore,
more advanced interaction methods using visualizations are provided, such as an intent radar that
users can manipulate with the mouse to move interest items [20], or a graph
visualization of interests using a mouse to drag and drop items [
          <xref ref-type="bibr" rid="ref15">15</xref>
          ].
        </p>
        <p>
          Interaction with the process is commonly provided by allowing users to select or change the
recommendation algorithm or tune the algorithm parameters. The selection of the algorithm is provided using
radio buttons [43], text- and icon-based buttons [
          <xref ref-type="bibr" rid="ref11">11</xref>
          ], or selection using checkboxes [30]. To fine-tune
the algorithm parameters, multiple control options are provided to the users, including radio buttons for
feature selection [41], buttons to add or remove parameters [29], and sliders to adjust weights of the
parameters [
          <xref ref-type="bibr" rid="ref16 ref7">16, 27, 21, 19, 7</xref>
          ].
        </p>
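To make this process-level control concrete, it can be sketched as a mapping from the UI's algorithm choice (e.g., radio buttons) to interchangeable scoring strategies. The strategy names and scoring functions below are our own illustrative assumptions, not the implementation of any cited system:

```python
# Hypothetical sketch of process-level control: the user's algorithm choice
# selects among interchangeable scoring strategies over a learner profile.

def keyphrase_overlap(item, profile):
    # Score by overlap between the item's keyphrases and the profile concepts.
    return len(set(item["keyphrases"]) & set(profile["concepts"]))

def content_overlap(item, profile):
    # Score by overlap between the item's full text and the profile concepts.
    return len(set(item["content"].lower().split()) & set(profile["concepts"]))

# Radio buttons in the UI would map onto keys of this strategy table.
STRATEGIES = {
    "keyphrases_vs_dnu": keyphrase_overlap,
    "content_vs_dnu": content_overlap,
}

def recommend(items, profile, strategy="keyphrases_vs_dnu", top_k=3):
    # Rank items by the chosen strategy and return the top-k titles.
    score = STRATEGIES[strategy]
    ranked = sorted(items, key=lambda it: score(it, profile), reverse=True)
    return [it["title"] for it in ranked[:top_k]]
```

Switching the `strategy` argument changes the ranking without altering the rest of the pipeline, which is what makes exposing the choice in the UI cheap to support.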
        <p>
          Once the user’s preferences are identified and the recommendation algorithm is selected or modified, the system
can provide tailored recommendations. Several control options and UI components have been proposed
in the literature to interact with the output of the RS. Users can change the number of recommendations
or filter the recommendations list using sliders [
          <xref ref-type="bibr" rid="ref16">39, 16</xref>
          ], give feedback to the system using Yes/No
[41, 26] or Like/Dislike [
          <xref ref-type="bibr" rid="ref7">7</xref>
          ] buttons, give feedback about recommendations using buttons of different
sizes referring to the intensity of emotions [24], or give feedback using radio buttons [
          <xref ref-type="bibr" rid="ref17">17</xref>
          ]. Users are
provided with options to sort the recommendation list by multiple criteria using buttons
[
          <xref ref-type="bibr" rid="ref7">7, 23</xref>
          ], remove recommendations from the list using a remove icon [
          <xref ref-type="bibr" rid="ref7">7</xref>
          ], sort recommendations using
drag and drop [
          <xref ref-type="bibr" rid="ref7">7</xref>
          ], and reorder recommendations using toggles and sliders [
          <xref ref-type="bibr" rid="ref11">11</xref>
          ]. Moreover, advanced
interaction options are provided to the users when recommendations are presented visually instead of
lists. The users can interact with a Venn diagram to examine and filter the recommended items [27],
explore the opinion space using mouse interaction to increase or decrease the diameter of the circular opinion
space [42], interact with a word cloud using the mouse [22], drag and drop nodes in a graph [
          <xref ref-type="bibr" rid="ref18">18</xref>
          ], and arrange
items in a clustermap using mouse drag interaction [30].
        </p>
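As a minimal illustration of these output-level controls, a recommendation list can be re-sorted, capped (e.g., via a slider value), and pruned through dislike feedback. The field names and sort keys below are our own assumptions, not taken from any cited system:

```python
# Illustrative sketch of output-level control over a recommendation list.
recs = [
    {"title": "Intro video", "relevance": 0.72, "year": 2021},
    {"title": "Survey article", "relevance": 0.91, "year": 2018},
    {"title": "Tutorial", "relevance": 0.85, "year": 2023},
]

def sort_recs(recs, key="relevance", descending=True):
    # Re-order the list by a user-chosen criterion (sort buttons in the UI).
    return sorted(recs, key=lambda r: r[key], reverse=descending)

def limit(recs, n):
    # Cap the list length, e.g., bound by a slider value.
    return recs[:n]

def remove_disliked(recs, disliked_titles):
    # Dislike feedback removes items from the presented list.
    return [r for r in recs if r["title"] not in disliked_titles]
```

Because each control is a pure transformation of the same list, they compose freely, e.g., sort by relevance, drop disliked items, then limit to the top few.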
        <p>From this analysis, we identified and selected the most widely adopted interaction techniques that
align with our specific context and features, ensuring that our UI design is both intuitive and effective
for our users (see Table 1). After that, we started with the design of our ERS UI. Based on the UI elements
identified to interact with the input, process, and output, we created initial prototypes (see Figure 2a).
The prototypes were discussed within the authors’ team, and different UI elements were refined and
improved. For example, we decided on the colors, optimal labels for the buttons and for representing the
algorithms, the UI for ranking the recommendations, and whether or not to show the
impact of ranking in progress bars. The improved prototypes (see Figure 2b) were then translated into
the final design of the system, presented in the next sections.</p>
        <p>(a) Initial prototype
(b) Improved prototype</p>
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Interaction with the Recommendation Input</title>
        <p>
          In the context of recommending learning resources, it is crucial to recommend accurate resources
tailored to learners’ needs for better learning outcomes [33]. Therefore, providing interaction around
the input of the ERS enables learners to directly communicate their interests to the system. A
simple way to give users more control is to allow them to explicitly specify their interests and preferences
rather than relying on the system to determine these preferences from their past interactions [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ].
Similarly, in our ERS, to facilitate interaction with the input, we provide control to the users to choose
their interests and preferences explicitly to construct their learner model. Since CourseMapper [32] is a
MOOC platform where learners can enroll in multiple courses that contain learning materials, the
learners have the option to indicate their knowledge deficiencies by explicitly marking concepts as
’Did Not Understand’ (DNU concepts) on various slides of the PDF learning materials. To proceed
with recommendations, the learners have various control options to decide the input for the recommender.
Figure 3 shows the UI of the ERS with multiple options to interact with the input. First, the users can
decide which DNU concepts they want the recommendations for, using the radio buttons to select
’Current Slide’, ’All Slides’, or ’Select manually’ options (Figure 3a (A)). Clicking on ’Current Slide’ and
’All Slides’ retrieves the DNU concepts related to the current slide or all slides of the learning material,
respectively (Figure 3a (B), 3b). Clicking on the ’Select manually’ option, in contrast, lets the user choose
any concepts from the whole learning material using the dropdown list with a search option (Figure
3c (A)). Once the DNUs are selected as input for the recommender, in all three options, the user is
given the opportunity to adjust the weights of the concepts based on their preferences if they
find some concepts more important than others (Figure 3a (B)); the recommendations change
correspondingly with the adjusted weights. Furthermore, the user can include/exclude certain DNUs
using checkboxes (Figure 3b (C)), as well as remove DNUs from the input list using the ’-’ icon (Figure 3c
(B)). (Figure 3: (a) select DNU concepts from the current slide, (b) select DNU concepts from all
slides, or (c) manually select concepts from the learning material, as input for recommendation.)
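To make the weighting mechanism concrete, the following minimal sketch (hypothetical data structures and function names, not the actual CourseMapper implementation) illustrates how user-adjusted concept weights could change a candidate resource’s score:

```python
# Minimal sketch (hypothetical, for illustration): each DNU concept carries a
# user-adjusted weight, and a candidate resource's score is the weighted
# average of its per-concept similarities. Excluded/removed concepts simply
# have no weight and therefore no influence.

def score_resource(concept_similarities: dict[str, float],
                   concept_weights: dict[str, float]) -> float:
    """Weight-average one resource's per-concept similarities,
    skipping concepts the user excluded or removed."""
    total_weight = sum(concept_weights.get(c, 0.0) for c in concept_similarities)
    if total_weight == 0:
        return 0.0
    return sum(sim * concept_weights.get(c, 0.0)
               for c, sim in concept_similarities.items()) / total_weight

# Example: the user doubled the weight of 'backpropagation'.
weights = {"backpropagation": 2.0, "gradient descent": 1.0}
sims = {"backpropagation": 0.9, "gradient descent": 0.3}
score = score_resource(sims, weights)  # (0.9*2 + 0.3*1) / 3 ≈ 0.7
```

Doubling a concept’s weight thus pulls resources that match it ahead of resources matching only the lower-weighted concepts, which is the behavior the weight sliders expose to the learner.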
        </p>
      </sec>
      <sec id="sec-3-3">
        <title>3.3. Interaction with the Recommendation Process</title>
        <p>Interacting with the recommendation process allows users to choose or influence the recommendation
strategy or algorithm parameters [45]. To introduce user control around the process of our ERS, we
provide two options to the user. The first is to let them choose between the four recommendation
algorithms using radio buttons (Figure 4a, (A)). To help users better understand the recommendation
process, we provide a text-based abstract description of the underlying algorithms. The options
to select the algorithm are presented as an answer to the question "How should the recommendations be
generated?", with four possible answers as: 1) Comparing only keyphrases from articles and videos with
DNU concepts, 2) Comparing the whole content of articles and videos with DNU concepts, 3) Comparing
only keyphrases from articles and videos with main concepts of the current slide, and 4) Comparing the
whole content of articles and videos with whole content of the current slide. For more details about
how these recommendation algorithms work, please refer to our earlier publication [33]. Once the user
selects the algorithm, the second control option is to decide how they want the recommendations to be
ranked. The user can view multiple factors impacting the ranking of the recommendations with their
default weights. The users can adjust the weights of the factors using sliders (Figure 4a, (B)), where the
impact of the change in weight on the ranking is displayed in real-time in the progress bar adjacent
to each factor (Figure 4a, (C)). Furthermore, the user can select/de-select factors from the list using
checkboxes if they do not want to include a factor in ranking (Figure 4a, (D)).</p>
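As an illustration of this ranking step, the sketch below (with assumed factor names and default weights; the actual factors and formula in the ERS may differ) computes a weighted sum over only the enabled factors, normalizing the slider weights so that each factor’s relative impact, as the progress bars display it, sums to 100%:

```python
# Hypothetical sketch of the ranking step: the final rank score is a weighted
# sum of normalized factor values. Slider weights come from the user,
# and de-selected factors (checkboxes) are dropped from the sum.

def rank_score(factor_values: dict[str, float],
               slider_weights: dict[str, float],
               enabled: set) -> float:
    active = {f: w for f, w in slider_weights.items() if f in enabled}
    total = sum(active.values())
    if total == 0:
        return 0.0
    # Normalize weights so the relative impacts (progress bars) sum to 100%.
    return sum(factor_values[f] * (w / total) for f, w in active.items())

factors = {"similarity": 0.8, "recency": 0.5, "views": 0.2}
weights = {"similarity": 3.0, "recency": 1.0, "views": 1.0}
score = rank_score(factors, weights, enabled={"similarity", "recency"})
# (0.8*3 + 0.5*1) / 4 = 0.725; 'views' is de-selected and ignored
```

Because the weights are re-normalized after every slider move or checkbox toggle, the displayed relative impact of each factor updates in real time, matching the behavior described above.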
      </sec>
      <sec id="sec-3-4">
        <title>3.4. Interaction with the Recommendation Output</title>
        <p>
          Providing feedback to the system about the provided recommendations is an important factor influencing
the user experience of the RS [
          <xref ref-type="bibr" rid="ref14">14</xref>
          ]. We provide multiple control options to the user to interact with the
output of the ERS (see Figure 4b). The first is to let users give the system feedback on the
recommendations. To facilitate this, we provide a ’Helpful/Not Helpful’ button with each
recommended video or article (Figure 4b, (A)). Once the user clicks on ’Helpful’, a dropdown menu
appears that prompts the user to select the DNU concepts that the recommended video or article
helped them understand (Figure 4b, (B)). This extended level of control enables the user to provide
detailed feedback to the ERS, which can be used to improve future recommendations. Furthermore,
the user can sort the recommendations using the sort option which opens a dropdown with multiple
options (Figure 4b, (C)). The user can sort the recommendations based on their similarity score, creation
date, or number of views. The options are presented as ’Most similar’, ’Most recent’ and ’Most viewed’.
The third interaction option is to save the recommendations using a save icon (Figure 4b, (D));
saved recommendations can later be accessed in the ’Saved’ tab (Figure 4b, (E)).
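The sort option can be thought of as choosing one comparable key per menu entry. The sketch below uses hypothetical record fields (`similarity`, `created`, `views`), not the ERS’s actual data model, for illustration:

```python
# Hypothetical sketch of the sort option: each menu entry maps to one
# comparable record field and a sort direction.
from operator import itemgetter

recs = [
    {"title": "Intro video", "similarity": 0.9, "created": "2023-01-10", "views": 120},
    {"title": "Deep dive",   "similarity": 0.7, "created": "2024-03-02", "views": 540},
]

SORT_KEYS = {
    "Most similar": ("similarity", True),
    "Most recent":  ("created", True),   # ISO dates sort lexicographically
    "Most viewed":  ("views", True),
}

def sorted_recs(items, option):
    key, reverse = SORT_KEYS[option]
    return sorted(items, key=itemgetter(key), reverse=reverse)

titles = [r["title"] for r in sorted_recs(recs, "Most viewed")]
# → ['Deep dive', 'Intro video']
```

Keeping the sort purely client-side means re-sorting never re-queries the recommender, so the user can switch between the three orderings instantly.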
        </p>
        <p>Figure 4. (a) Interaction with the process of the ERS; (b) interaction with the output of the ERS.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. User Evaluation</title>
      <p>To evaluate our system, we conducted a detailed user study with end users, employing key measures
from the ResQue evaluation framework [46] to evaluate users’ perceived benefits in terms of perceived
system qualities (recommendation accuracy, recommendation novelty, recommendation diversity,
interface adequacy, information suficiency, interaction adequacy), beliefs (perceived ease of use, control,
transparency, perceived usefulness), attitudes (overall satisfaction, confidence, trust), and behavioral
intentions (use intentions, watch intention).</p>
      <sec id="sec-4-1">
        <title>4.1. Participants and Procedure</title>
        <p>We recruited participants from our current and past courses via emails and phone contacts. In total, 30
students (5F, 25M) participated in the study, including 10 Bachelor, 14 Masters and 6 Ph.D., from various
age groups (21 aged 20-30, 8 aged 30-40, and 1 above 40 years) and countries (6 Cameroonian, 2 Chinese,
6 Egyptian, 2 German, 1 Nigerian, 4 Pakistani, 7 Syrian, and 2 Tunisian). Participants were from various
backgrounds including computer science and applied computer science, physics, and engineering. Most
of the participants were familiar with the use of RSs (n=24, 80%) and with interacting with RSs (n=19, 63%).
However, only a few of them (n=12, 40%) were familiar with technical implementations of RSs.</p>
        <p>The study was conducted online through individual sessions with each participant using video
conferencing on Zoom. At the start of each session, participants were informed about the voluntary
nature of their participation, the confidentiality and anonymity of data collection, and were asked for
their consent to record the meeting. Following this, they completed a demographic survey and were
then introduced to the ERS via a demo video. Afterwards, participants were asked to perform specific
tasks using the ERS by taking control of the host’s shared screen. Once done, they filled in an online
questionnaire on Google Forms based on measures from ResQue [46]. The study took 90 minutes on
average.</p>
      </sec>
      <sec id="sec-4-2">
        <title>4.2. User Tasks</title>
        <p>The main tasks for the participants were to interact with multiple control options provided with the
input, process, and output of the ERS. The tasks were divided into sub-tasks to distribute the cognitive
load and facilitate a smooth experience with the ERS.</p>
        <p>1. Task 1: Interaction with the ERS input
a. Imagine that you are reading a learning material in CourseMapper and while reading you
do not understand something from the slides. Please interact with some slides and collect
the concepts you do not understand. (Goal: to add items to their interests).
b. Now, you want to view the recommended YouTube videos based on the specific concepts
that you do not understand from a particular slide. How will you proceed with it using this
system? (Goal: to test ’current slide’ option).
c. Once you have viewed the provided recommendations, now you want to generate new
recommendations based on some other concepts that you did not understand. Moreover,
you feel that some concepts are more important than others and should be given higher
importance while generating the recommendations, or that some concepts
should not be included in the recommendations. How will you proceed with it? (Goal: to test
’all slides/select manually’, weight adjustment using sliders, include/exclude using checkboxes,
and remove options).
2. Task 2: Interaction with the ERS process
a. In this recommender system, you have the option to decide the way you want the
recommendations to be generated. There are multiple recommendation algorithms that you can select
based on your requirements. How would you proceed with changing the recommendation
algorithm as per your choice? (Goal: to test ’select algorithm’ option).
b. The recommendations are ranked after generation in the system’s default way. Imagine
that you want the recommendations to be ranked differently as you prefer. How would you
proceed? (Goal: to test ’modify ranking’ options).
3. Task 3: Interaction with the ERS output
a. Now that you have the videos recommended to you, you want to view the latest videos first.</p>
        <p>Imagine that you don’t have time to watch all of the recommended videos now, and want to
ensure that you can easily access a few of them later when you have more time to watch
them. How will you proceed? How will you access them later? (Goal: to test ’sort’ and ’save’
options).
b. Imagine that you find some of the recommended videos more useful than others in
understanding the concepts you previously did not understand. How will you give this feedback
to the system? (Goal: to test ’Helpful/Not helpful’ option).</p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Results</title>
      <p>We asked all study participants to fill in a questionnaire after their interaction with the ERS. This
questionnaire assessed their experience of controlling different parts of the ERS on a five-point Likert
scale, using items from the ResQue evaluation framework [46]. Table 2 lists the items and the results
based on data obtained from the questionnaires.</p>
      <sec id="sec-5-1">
        <title>5.1. Perceived Benefits for Users</title>
        <p>Concerning RQ1 ("How does complementing an ERS with user control impact users’ perceptions of the
ERS?"), the analysis of the questionnaire data shows that users’ feedback on the ERS is positive across
all perceived aspects. Regarding recommendation-related perceived system qualities, the scores for
recommendation accuracy (Mean=4.03, SD=0.75) and recommendation novelty (Mean=4.13, SD=0.56) are
relatively high, suggesting that user control over the ERS can lead to relevant and novel recommendations
that align with users’ interests and preferences. Similarly, for UI-related perceived system qualities,
the scores for information sufficiency (Mean=4.33, SD=0.59) and interaction adequacy (Mean=4.45,
SD=0.75) are also high, with interaction adequacy scoring the highest among all measures. This indicates
that users appreciated the interaction with the ERS input, process, and output and found it easy to
communicate their preferences and feedback to the ERS. Compared to information sufficiency and
interaction adequacy, interface adequacy received a lower score (Mean=3.86, SD=0.7), indicating that
the interface design needs improvements in terms of labels and layout.</p>
        <p>Regarding user beliefs which refers to a higher level of user perception of a system, which is influenced
by perceived qualities [46], the score for user control (Mean=4.16, SD=0.8) indicates that users felt
in control of their interactions with the ERS. Additionally, a high score for transparency (Mean=4.2,
SD=0.7) suggests that the ability to control diferent parts of the ERS helped users understand how
the ERS works. These high scores suggest that user control is positively associated with transparency.
We explore in detail the impact of user control on transparency and other recommendation goals in
Section 5.2. Furthermore, the score for perceived ease of use (Mean=4.13, SD=0.79) suggests that the
ERS facilitates users to find their preferred items quickly and that users found the ERS easy to navigate
and quickly became familiar with it. In contrast, perceived usefulness received a comparatively lower
score (Mean=3.94, SD=0.74), possibly because the diverse recommendations were not always perceived
as optimal suggestions for some participants.</p>
        <p>With regard to user attitudes referring to a user’s overall feeling towards an RS [46], a highly positive
score of confidence (Mean=4.05, SD=0.92) reflects the recommender’s ability to convince users of the
recommended items. Furthermore, users showed a high level of overall satisfaction with the ERS
(Mean=4.26, SD=0.62), indicating an increased ease of use and enjoyment with the ERS. According to
Tintarev and Masthof [47], satisfaction can also be measured indirectly, measuring user loyalty. Thus,
users’ behavioral intentions can be seen as an indirect measure of loyalty and satisfaction with the
system. To this end, the highly positive scores for use intentions (Mean=4.04, SD=0.88) and watch
intention (Mean=4.3, SD=0.58) suggest that the system efectively influenced users’ decisions to continue
using the ERS and reflects their satisfaction with the ERS. In terms of trust, the positive score (Mean=3.9,
SD=0.9) shows an increased users’ confidence in the ERS. However, trust scored comparatively less than
transparency and satisfaction. This suggests that, while the recommendation goals of transparency,
trust, and satisfaction appear to move together, trust seems to be less correlated with the other two
goals. We investigate in detail the relationships and correlations between these goals in Section 5.3.</p>
      </sec>
      <sec id="sec-5-2">
        <title>5.2. Impact of User Control on Different Goals</title>
        <p>To identify the impact of user control on diferent recommendation goals, namely transparency, trust,
and satisfaction (RQ2), and to explore how these goals interact with each other (RQ3), we computed
Pearson Correlation between them. Figure 6 shows the Pearson Correlation between each pair of goals,
by analyzing the responses of participants from the user study. For each correlation, the figure also
shows a 95% confidence interval for the correlation measured using bootstrap sampling, as well as if the
correlations are statistically significant or not (adjusted p-value &lt; 0.05). The results from the correlation
analysis reveal that user control has at least moderate correlation with all goals. More specifically,
user control strongly correlates with transparency with a statistical significance. Therefore, our study
provides evidence that user control with the ERS leads to an increased transparency. This confirms
ifndings from previous studies from the recommendation domain which pointed out that user control is
very closely tied to transparency [31] and that control significantly afects transparency [ 46]. This is
also in line with studies from interactive recommendation literature which show that user control can
also contribute to increased transparency of RSs [25, 19]. Furthermore, in our study, most users (n=27,
90%), regardless of their background knowledge, perceived the ERS as transparent.
One reason we deem responsible for the high level of transparency among the majority of participants is
that they were able to control the ERS in a way that best fits their needs and preferences. Consequently,
we argue that, regardless of the users’ background knowledge, empowering users to take control of the
recommendation should be an integral feature in any ERS, if transparency is a desirable property for
the ERS. We refer to this type of transparency as transparency through controllability.</p>
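For concreteness, the correlation analysis described above can be sketched as follows. The ratings below are illustrative only; the actual study data, bootstrap sample count, and p-value adjustment are not reproduced here:

```python
# Sketch of the analysis: Pearson correlation between two questionnaire
# scales, with a 95% bootstrap percentile confidence interval.
# (Illustrative data and parameters only, not the study's.)
import random
from statistics import mean, pstdev

def pearson(x, y):
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / len(x)
    return cov / (pstdev(x) * pstdev(y))

def bootstrap_ci(x, y, n_boot=2000, seed=0):
    """95% percentile CI for Pearson's r via bootstrap resampling."""
    rng = random.Random(seed)
    n = len(x)
    stats = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        xs, ys = [x[i] for i in idx], [y[i] for i in idx]
        if pstdev(xs) > 0 and pstdev(ys) > 0:  # skip degenerate resamples
            stats.append(pearson(xs, ys))
    stats.sort()
    return stats[int(0.025 * len(stats))], stats[int(0.975 * len(stats)) - 1]

control      = [4, 5, 4, 3, 5, 4, 4, 5, 3, 4]  # illustrative Likert ratings
transparency = [4, 5, 4, 3, 5, 4, 5, 5, 3, 4]
r = pearson(control, transparency)
ci = bootstrap_ci(control, transparency)
```

The percentile bootstrap makes no normality assumption about the rating distributions, which is why it is a common choice for small-sample Likert data.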
        <p>
          Furthermore, our analysis shows that user control moderately correlates with trust with statistical
significance. This confirms that providing more control over the RS is an important strategy to secure trust
[
          <xref ref-type="bibr" rid="ref10 ref11">11, 10, 45</xref>
          ] and that interactive features that allow users to control the RS increase their trust in the
system [
          <xref ref-type="bibr" rid="ref14">14, 21, 26</xref>
          ].
        </p>
        <p>
          Our study further shows that user control moderately correlates with user satisfaction with
statistical significance. This suggests that control over the RS positively influenced users’ satisfaction,
indicating an enhanced user experience. Our results confirm the findings by Pu et al. [46], who noted
that user control weighs heavily on the overall user experience with the RS. Furthermore, several
studies from the recommendation domain evidenced the positive effects of user control on satisfaction
[
          <xref ref-type="bibr" rid="ref18">19, 28, 29, 18, 30</xref>
          ] and user experience [
          <xref ref-type="bibr" rid="ref12 ref15 ref16">12, 15, 16, 27</xref>
          ]. In summary, our findings clearly demonstrate
the positive impact of user control in ERS, showing that providing user control over the ERS leads to
increased perceived transparency, trust, and satisfaction.
        </p>
      </sec>
      <sec id="sec-5-3">
        <title>5.3. Interaction Between Different Goals</title>
        <p>Regarding RQ3, the results from the correlation analysis (see Figure 6) reveal that all the three goals,
transparency, trust, and satisfaction appear to move together, however, the correlation is not strong in all
cases. We found in our study that transparency moderately correlates with satisfaction with a statistical
significance. Moreover, trust strongly correlates with satisfaction with a statistical significance. We
conclude that satisfaction is most correlated metric with all goals. This aligns with previous research
suggesting that overall user satisfaction with RS is strongly related to transparency and trust. For
instance, in the explainable RS domain, Balog and Radlinski [48], and Guesmi et al. [49, 50] found that
satisfaction is positively correlated with transparency and trust. In the same context, Gedikli et al.
[51] reported results from experiments with diferent explanations clearly showing that transparency
has a significant positive efect on user satisfaction. Moreover, a lower-transparency RS is known to
negatively afect user satisfaction with the RS [ 31]. Overall, our results provide evidence that, similar to
ifndings from the explainable RS domain, transparency and trust have significant positive efects on
user satisfaction in the interactive RS domain as well.</p>
        <p>
          Additionally, we found in our study that transparency and trust stand out as less correlated with each
other. This suggests that transparency and trust, if they are desirable properties for an IntRS, should be
evaluated as distinct, even if they may interact. One reason we consider responsible for the relatively low
correlation between transparency and trust is that offering interaction mechanisms at different levels
provides a significant degree of user control, which, as discussed in Section 5.2, is highly influential on
transparency but also introduces the risk of overwhelming users [45]. As pointed out by Jin et al. [
          <xref ref-type="bibr" rid="ref7">7</xref>
          ],
providing additional controls can also increase cognitive load, and different users have different needs
for control. This suggests that providing too much control does not necessarily lead to greater trust,
as it can increase the cognitive load on users. This further highlights that there is a trade-off between
the amount of user control and the level of trust users develop when interacting with the RS. Further
research is needed to find the optimal level of user control that generates the highest level of users’
trust in the RS.
        </p>
        <p>Another reason that can explain the relatively low correlation between transparency and trust is
that while transparency can be achieved through controllability, interacting with the input, process,
or output of the RS cannot always assure that users understand the underlying rationale of the RS,
especially when the recommendation mechanism is too complicated to non-expert users [25], which
can negatively impact the perceived trust of the RS. Some considerable transparency could be achieved
through explanation which is also recognized as an important factor that fosters user trust in RS, as it
can improve users’ understanding of how the system works [31]. We hypothesize that a diferent result
may be observed if explanations were provided in the ERS together with interactions. It is therefore
important to explore in the future work the efects of diferent types of transparency (i.e., transparency
through controllability vs. transparency through explanation) on users’ trust in RSs [52].</p>
        <p>A further possible reason for the relative low correlation between transparency and trust is that,
as found in [46], trust depends primarily on the RS’s ability to formulate good recommendations (i.e.,
perceived usefulness) and provide useful explanations (i.e., transparency through explanation), rather
than on control (i.e., transparency through controllability).</p>
      </sec>
    </sec>
    <sec id="sec-6">
      <title>6. Conclusion and Future Work</title>
      <p>In this paper, we aimed to shed light on an aspect that remains under-researched in the literature
on educational recommender systems (ERSs), namely the effects of providing user control on users’
perceptions of ERSs. To this end, we systematically designed and evaluated user control in the ERS
module of the MOOC platform CourseMapper. Specifically, we introduced user control with the input
(i.e., user profile), process (i.e., recommendation algorithm), and output (i.e., recommendations) of
the ERS. We conducted an online user study (N=30) to explore the impact of user control on users’
perceptions of the ERS in terms of several important user-centric aspects. Moreover, we examined the
impact of user control on transparency, trust, and satisfaction, and explored the interactions among
these recommendation goals. Our findings indicate that users responded positively to the control
mechanisms provided in the ERS. Our results further reveal that satisfaction is the most correlated
metric with all goals and that transparency and trust stand out as less correlated with each other. This
suggests that it may be necessary to separately consider transparency and trust in the evaluation of
interactive ERSs. As future work, we plan to conduct a more comprehensive user study, incorporating
qualitative analysis to gain deeper insights into the relationships between user control, transparency,
trust, and satisfaction. An interesting direction for future work would also be to investigate the effects of
different levels of user control (i.e., input, process, output) on users’ perceptions of and interactions
with the ERS.
[19] C.-H. Tsai, P. Brusilovsky, Providing control and transparency in a social recommender system for
academic conferences, in: Proceedings of the 25th conference on user modeling, adaptation and
personalization, 2017, pp. 313–317.
[20] A. Kangasrääsiö, D. Glowacka, S. Kaski, Improving controllability and predictability of interactive
recommendation interfaces for exploratory search, in: Proceedings of the 20th International
Conference on Intelligent User Interfaces, IUI ’15, 2015.
[21] S. Bruns, A. C. Valdez, C. Greven, M. Ziefle, U. Schroeder, What should i read next? a personalized
visual publication recommender system, in: International Conference on Human Interface and
the Management of Information, Springer, 2015, pp. 89–100.
[22] S. Zhao, M. X. Zhou, Q. Yuan, X. Zhang, R. Zheng, Who is talking about what: social map-based
recommendation for content-centric social websites, in: Proceedings of the 4th ACM Conference
on Recommender Systems, New York, USA, 2010.
[23] N. Tintarev, B. Kang, T. Höllerer, J. O’Donovan, Inspection mechanisms for community-based
content discovery in microblogs., in: IntRS@ RecSys, 2015, pp. 21–28.
[24] Y. Chen, P. Pu, Cofeel: Using emotions for social interaction in group recommender systems
(2012).
[25] C.-H. Tsai, P. Brusilovsky, The effects of controllability and explainability in a social recommender
system, User Modeling and User-Adapted Interaction 31 (2021) 591–627.
[26] B. Loepp, T. Hussein, J. Ziegler, Choice-based preference elicitation for collaborative filtering
recommender systems, in: Proceedings of the SIGCHI Conference on Human Factors in Computing
Systems, 2014, pp. 3085–3094.
[27] D. Parra, P. Brusilovsky, C. Trattner, See what you want to see: visual user-driven approach for
hybrid recommendation, in: Proceedings of the 19th International Conference on Intelligent User
Interfaces, IUI ’14, Association for Computing Machinery, New York, USA, 2014, p. 235–240.
[28] Y. Jin, K. Seipp, E. Duval, K. Verbert, Go with the flow: effects of transparency and user control on
targeted advertising using flow charts, in: Proceedings of the international working conference
on advanced visual interfaces, 2016, pp. 68–75.
[29] S. Bostandjiev, J. O’Donovan, T. Höllerer, Linkedvis: Exploring social and semantic career
recommendations, 2013, pp. 107–116. doi:10.1145/2449396.2449412.
[30] K. Verbert, D. Parra, P. Brusilovsky, E. Duval, Visualizing recommendations to support
exploration, transparency and controllability, in: Proceedings of the 2013 International Conference on
Intelligent User Interfaces, IUI ’13, ACM, New York, NY, USA, 2013.
[31] N. Tintarev, J. Masthoff, Explaining recommendations: Design and evaluation, in: Recommender
systems handbook, Springer, 2015, pp. 353–382.
[32] Q. U. Ain, M. A. Chatti, S. Joarder, I. Nassif, B. S. Wobiwo Teda, M. Guesmi, R. Alatrash, Learning
channels to support interaction and collaboration in coursemapper, in: Proceedings of the 14th
International Conference on Education Technology and Computers, ICETC ’22, 2023.
[33] Q. U. Ain, M. A. Chatti, P. A. Meteng Kamdem, R. Alatrash, S. Joarder, C. Siepmann, Learner
modeling and recommendation of learning resources using personal knowledge graphs, in:
Proceedings of the 14th Learning Analytics and Knowledge Conference, 2024, pp. 273–283.
[34] E. Fotopoulou, A. Zafeiropoulos, M. Feidakis, D. Metafas, S. Papavassiliou, An interactive
recommender system based on reinforcement learning for improving emotional competences in
educational groups, in: International Conference on Intelligent Tutoring Systems, Springer, 2020.
[35] M. Bustos López, G. Alor-Hernández, J. L. Sánchez-Cervantes, M. A. Paredes-Valverde, M. d. P.</p>
      <p>Salas-Zárate, Edurecomsys: an educational resource recommender system based on collaborative
filtering and emotion detection, Interacting with Computers 32 (2020) 407–432.
[36] F. L. da Silva, B. K. Slodkowski, K. K. A. da Silva, S. C. Cazella, A systematic literature review
on educational recommender systems for teaching and learning: research trends, limitations and
opportunities, Education and Information Technologies 28 (2023) 3289–3328.
[37] A. Zapata, V. H. Menéndez, M. E. Prieto, C. Romero, Evaluation and selection of group
recommendation strategies for collaborative searching of learning objects, International Journal of
Human-Computer Studies 76 (2015) 22–39.
[38] S. Abdi, H. Khosravi, S. Sadiq, D. Gasevic, Complementing educational recommender systems with
open learner models, in: Proceedings of the tenth international conference on learning analytics
&amp; knowledge, 2020, pp. 360–365.
[39] M. Vlachos, D. Svonava, Graph embeddings for movie visualization and recommendation, in: First</p>
      <p>International Workshop on Recommendation Technologies for Lifestyle Change, 2012.
[40] J. B. Schafer, J. A. Konstan, J. Riedl, Meta-recommendation systems: user-controlled integration of
diverse recommendations, in: Proceedings of the Eleventh International Conference on Information
and Knowledge Management, Association for Computing Machinery, NY, USA, 2002.
[41] Y. Saito, T. Itoh, Musicube: a visual music recommendation system featuring interactive
evolutionary computing, in: Proceedings of the 2011 Visual Information Communication - International
Symposium, VINCI ’11, Association for Computing Machinery, New York, USA, 2011.
[42] D. Wong, S. Faridani, E. Bitton, B. Hartmann, K. Goldberg, The diversity donut: enabling participant
control over the diversity of recommended responses, in: CHI’11 Extended Abstracts on Human
Factors in Computing Systems, 2011, pp. 1471–1476.
[43] M. D. Ekstrand, D. Kluver, F. M. Harper, J. A. Konstan, Letting users choose recommender
algorithms: An experimental study, Proceedings of the 9th ACM Conference on Recommender
Systems (2015).
[44] J. Ooge, L. Dereu, K. Verbert, Steering recommendations and visualising its impact: Effects on
adolescents’ trust in e-learning platforms, in: Proceedings of the 28th International Conference on
Intelligent User Interfaces, IUI ’23, 2023, p. 156–170.
[45] D. Jannach, M. Jugovac, I. Nunes, Explanations and user control in recommender systems, in:
Proceedings of the 23rd International Workshop on Personalization and Recommendation on the
Web and Beyond, 2019, pp. 31–31.
[46] P. Pu, L. Chen, R. Hu, A user-centric evaluation framework for recommender systems, in: ACM</p>
      <p>Conference on Recommender Systems, 2011.
[47] N. Tintarev, J. Masthoff, A survey of explanations in recommender systems, in: 2007 IEEE 23rd
      <p>International Conference on Data Engineering Workshop, 2007, pp. 801–810.
[48] K. Balog, F. Radlinski, Measuring recommendation explanation quality: The conflicting goals of
explanations, in: Proceedings of the 43rd international ACM SIGIR conference on research and
development in information retrieval, 2020, pp. 329–338.
[49] M. Guesmi, M. A. Chatti, S. Joarder, Q. U. Ain, R. Alatrash, C. Siepmann, T. Vahidi, Interactive
explanation with varying level of details in an explainable scientific literature recommender system,
International Journal of Human–Computer Interaction (2023) 1–22.
[50] M. Guesmi, M. A. Chatti, S. Joarder, Q. U. Ain, C. Siepmann, H. Ghanbarzadeh, R. Alatrash,
Justification vs. transparency: Why and how visual explanations in a scientific literature recommender
system, Information 14 (2023) 401.
[51] F. Gedikli, D. Jannach, M. Ge, How should I explain? a comparison of different explanation types
for recommender systems, International Journal of Human-Computer Studies 72 (2014) 367–382.
[52] C. Siepmann, M. A. Chatti, Trust and transparency in recommender systems, arXiv preprint
arXiv:2304.08094 (2023).</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>N.</given-names>
            <surname>Manouselis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Drachsler</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Vuorikari</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Hummel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Koper</surname>
          </string-name>
          ,
          <article-title>Recommender systems in technology enhanced learning</article-title>
          ,
          <source>Recommender systems handbook (</source>
          <year>2011</year>
          )
          <fpage>387</fpage>
          -
          <lpage>415</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>S. S.</given-names>
            <surname>Khanal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Prasad</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Alsadoon</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Maag</surname>
          </string-name>
          ,
          <article-title>A systematic review: machine learning based recommendation systems for e-learning</article-title>
          ,
          <source>Education and Information Technologies</source>
          (
          <year>2020</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>S.</given-names>
            <surname>Valtolina</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. A.</given-names>
            <surname>Matamoros</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Epifania</surname>
          </string-name>
          ,
          <article-title>Design of a conversational recommender system in education</article-title>
          ,
          <source>User Modeling and User-Adapted Interaction</source>
          (
          <year>2024</year>
          )
          <fpage>1</fpage>
          -
          <lpage>29</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>H.</given-names>
            <surname>Chau</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Barria-Pineda</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Brusilovsky</surname>
          </string-name>
          ,
          <article-title>Learning content recommender system for instructors of programming courses</article-title>
          ,
          <source>in: Artificial Intelligence in Education: 19th International Conference, AIED</source>
          <year>2018</year>
          , London, UK, June 27-30,
          <year>2018</year>
          , Proceedings,
          <source>Part II 19</source>
          , Springer,
          <year>2018</year>
          , pp.
          <fpage>47</fpage>
          -
          <lpage>51</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>F.</given-names>
            <surname>Bousbahi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Chorfi</surname>
          </string-name>
          ,
          <article-title>MOOC-Rec: A case based recommender system for MOOCs</article-title>
          ,
          <source>Procedia - Social and Behavioral Sciences</source>
          <volume>195</volume>
          (
          <year>2015</year>
          )
          <fpage>1813</fpage>
          -
          <lpage>1822</lpage>
          . World Conference on Technology, Innovation and Entrepreneurship.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>O. C.</given-names>
            <surname>Santos</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Saneiro</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. G.</given-names>
            <surname>Boticario</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. C.</given-names>
            <surname>Rodriguez-Sanchez</surname>
          </string-name>
          ,
          <article-title>Toward interactive context-aware affective educational recommendations in computer-assisted language learning</article-title>
          ,
          <source>New Review of Hypermedia and Multimedia</source>
          <volume>22</volume>
          (
          <year>2016</year>
          )
          <fpage>27</fpage>
          -
          <lpage>57</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Jin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Tintarev</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Verbert</surname>
          </string-name>
          ,
          <article-title>Effects of personal characteristics on music recommender systems with different levels of controllability</article-title>
          ,
          <source>in: Proceedings of the 12th ACM Conference on Recommender Systems</source>
          ,
          <year>2018</year>
          , pp.
          <fpage>13</fpage>
          -
          <lpage>21</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>C.</given-names>
            <surname>He</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Parra</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Verbert</surname>
          </string-name>
          ,
          <article-title>Interactive recommender systems: A survey of the state of the art and future research challenges and opportunities</article-title>
          ,
          <source>Expert Systems with Applications</source>
          <volume>56</volume>
          (
          <year>2016</year>
          )
          <fpage>9</fpage>
          -
          <lpage>27</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>M.</given-names>
            <surname>Jugovac</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Jannach</surname>
          </string-name>
          ,
          <article-title>Interacting with recommenders-overview and research directions</article-title>
          ,
          <source>ACM Transactions on Interactive Intelligent Systems (TiiS)</source>
          <volume>7</volume>
          (
          <year>2017</year>
          )
          <fpage>1</fpage>
          -
          <lpage>46</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>D.</given-names>
            <surname>Jannach</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Naveed</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Jugovac</surname>
          </string-name>
          ,
          <article-title>User control in recommender systems: Overview and interaction challenges</article-title>
          ,
          <source>in: E-Commerce and Web Technologies: 17th International Conference, EC-Web</source>
          <year>2016</year>
          , Porto, Portugal, September 5-8,
          <year>2016</year>
          ,
          <source>Revised Selected Papers 17</source>
          , Springer,
          <year>2017</year>
          , pp.
          <fpage>21</fpage>
          -
          <lpage>33</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>J.</given-names>
            <surname>Harambam</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Bountouridis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Makhortykh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>van Hoboken</surname>
          </string-name>
          ,
          <article-title>Designing for the better by taking users into account: a qualitative evaluation of user control mechanisms in (news) recommender systems</article-title>
          ,
          <source>in: Proceedings of 13th ACM Conference on Recommender Systems, RecSys '19</source>
          ,
          <year>2019</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>B. P.</given-names>
            <surname>Knijnenburg</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Bostandjiev</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>O'Donovan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Kobsa</surname>
          </string-name>
          ,
          <article-title>Inspectability and control in social recommenders</article-title>
          ,
          <source>in: Proceedings of the sixth ACM conference on Recommender systems</source>
          ,
          <year>2012</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>J.</given-names>
            <surname>Barria-Pineda</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Brusilovsky</surname>
          </string-name>
          ,
          <article-title>Explaining educational recommendations through a concept-level knowledge visualization</article-title>
          ,
          <source>in: Companion Proceedings of the 24th International Conference on Intelligent User Interfaces</source>
          ,
          <year>2019</year>
          , pp.
          <fpage>103</fpage>
          -
          <lpage>104</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>J.</given-names>
            <surname>Schaffer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Höllerer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>O'Donovan</surname>
          </string-name>
          ,
          <article-title>Hypothetical recommendation: A study of interactive profile manipulation behavior for recommender systems</article-title>
          ,
          <source>in: 28th international flairs conference</source>
          ,
          <year>2015</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>J.</given-names>
            <surname>O'Donovan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Smyth</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Gretarsson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Bostandjiev</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Höllerer</surname>
          </string-name>
          ,
          <article-title>Peerchooser: visual interactive recommendation</article-title>
          ,
          <source>in: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '08</source>
          ,
          , Association for Computing Machinery, New York, NY, USA,
          <year>2008</year>
          , pp.
          <fpage>1085</fpage>
          -
          <lpage>1088</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>S.</given-names>
            <surname>Bostandjiev</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>O'Donovan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Höllerer</surname>
          </string-name>
          ,
          <article-title>Tasteweights: a visual interactive hybrid recommender system</article-title>
          ,
          <source>in: Proceedings of the 6th ACM Conference on Recommender Systems</source>
          , Association for Computing Machinery, New York, USA,
          <year>2012</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>F. M.</given-names>
            <surname>Harper</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Xu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Kaur</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Condiff</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Chang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Terveen</surname>
          </string-name>
          ,
          <article-title>Putting users in control of their recommendations</article-title>
          ,
          <source>in: Proceedings of the 9th ACM Conference on Recommender Systems</source>
          , RecSys '15, Association for Computing Machinery, New York, NY, USA,
          <year>2015</year>
          , pp.
          <fpage>3</fpage>
          -
          <lpage>10</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>B.</given-names>
            <surname>Gretarsson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>O'Donovan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Bostandjiev</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Hall</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Höllerer</surname>
          </string-name>
          ,
          <article-title>Smallworlds: Visualizing social recommendations</article-title>
          ,
          <source>Computer Graphics Forum</source>
          <volume>29</volume>
          (
          <year>2010</year>
          )
          <fpage>833</fpage>
          -
          <lpage>842</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>