<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>March</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Making Educational Recommendations Transparent through a Fine-Grained Open Learner Model</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Jordan Barria-Pineda</string-name>
          <email>jab464@pitt.edu</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Peter Brusilovsky</string-name>
          <email>peterb@pitt.edu</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>University of Pittsburgh</institution>
          ,
          <addr-line>Pittsburgh, PA</addr-line>
          ,
          <country country="US">USA</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2019</year>
      </pub-date>
      <volume>20</volume>
      <issue>2019</issue>
      <abstract>
        <p>Recommendations for online educational systems generally differ from recommendations generated in other contexts (e.g., movies, e-commerce), given that students' level of knowledge, rather than their interests, is key for suggesting the most appropriate content. Thus, the challenge of making recommendations more transparent is closely tied to how student skills are estimated and conveyed. In this paper, we present an approach based on Open Learner Model visualization as a first step for making the learning content recommendation process more transparent. A preliminary analysis of students who used the visualization for navigating the content of an introductory programming course showed that considerable time was spent exploring the explanatory interface, which could be linked to the significant likelihood of opening/attempting the recommended activities.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>CCS CONCEPTS</title>
      <p>• Applied computing → Interactive learning environments;
• Human-centered computing → Information visualization.
IUI Workshops’19, March 20, 2019, Los Angeles, USA.
Copyright ©2019 for the individual papers by the papers’ authors. Copying
permitted for private and academic purposes. This volume is published and copyrighted
by its editors.</p>
    </sec>
    <sec id="sec-2">
      <title>INTRODUCTION</title>
      <p>
        Over the past few years, research on explanations for
recommender systems has attracted the attention of many researchers, along with
the broader trend of explainable AI and machine learning. These efforts
aim at helping recommender system users understand why a
specific item or a certain decision is being recommended. Explanations
have been studied in many contexts, such as e-commerce, people, and
location recommender systems [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]. However, little work has been
done in the context of online educational systems, i.e., exploring
how explanations can benefit or hinder the adoption of
recommendations in learning scenarios. In fact, [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] argues that explainability
is one of the challenges for educational recommender systems and
points to information visualizations as a possible way to address
this issue.
      </p>
    </sec>
    <sec id="sec-3">
      <title>EXPLANATIONS AND KNOWLEDGE</title>
    </sec>
    <sec id="sec-4">
      <title>VISUALIZATION IN ONLINE EDUCATIONAL</title>
    </sec>
    <sec id="sec-5">
      <title>SCENARIOS</title>
      <p>
        There is a small body of research on how explanations in
recommender systems for learning can improve factors related to student
engagement with recommendations, such as persuasiveness,
learning efficiency, and satisfaction [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ].
      </p>
      <p>
        On the other hand, there is a solid body of work on Open Learner
Models (OLMs) focused on visualizing student knowledge [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. In
particular, in our earlier work [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] we explored a fine-grained
visualization of student knowledge, which reflected the distribution of
knowledge gained on every programming concept associated with
every learning activity in the platform. This visualization helped
students to understand their knowledge on a deeper level [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. The
work presented below attempts to fill the gap between OLMs and
educational recommendations. We argue that OLM interfaces could
be used to explain learning content recommendations when they
are generated based on the student’s level of domain knowledge.
      </p>
    </sec>
    <sec id="sec-6">
      <title>NAVIGATION SUPPORT AND CONTENT</title>
    </sec>
    <sec id="sec-7">
      <title>RECOMMENDATION IN MASTERY GRIDS</title>
      <p>
        Mastery Grids is an intelligent interface which offers access to
different kinds of practice content for introductory programming
courses. To help students access the most relevant content, it
offers several kinds of navigation support as well as direct
recommendation. Figure 1 shows a version of Mastery Grids for
a Java programming course reviewed in [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. The system organizes
course contents into topics, displayed as columns of the grid. The
first row shows the topic-by-topic knowledge progress of the current
student using shades of green: the darker the color, the higher
the progress. This is, technically, a topic-level OLM of the
student’s Java knowledge. The third row shows the aggregated progress
of the rest of the students in the class in shades of orange. The
second row presents a differential color comparing the student’s
progress and the class progress. For example, in Figure 1 the student
has higher progress than the class in most of the topics, where the
cells in the second row are green, but the class is more advanced
in two of the topics (13th and 20th columns), where the cells in the
second row are orange. The student has the same progress as the class
in four topics shown in light gray (11th, 15th, 18th, and 19th
columns). By clicking on cells, the student can access learning content
for each topic. For example, in Figure 1, the student has clicked
the topic Classes and the system displays cells to access questions
and examples related to this topic. Note that the social and
comparison rows can be hidden to help students focus on their
own knowledge.
      </p>
      <p>By presenting the student’s own knowledge, group knowledge, and
their comparison, the system offers several kinds of navigation
support, which can help students find the most appropriate content
for different kinds of learning goals. For example, the personal part
of the OLM can help students focus on the least learned topics, the group
model can help in locating “safe” topics that are already mastered by a good
part of the class, while the comparison can help students focus on their
knowledge gaps. To augment this kind of navigation support, we
also explored several personalized recommendation approaches.
The older version of our recommendation interface shown in
Figure 1 selects the top three recommended content items at any given
moment and displays their presence using red stars
that appear on both recommended items and their containing
topics. The size of the stars indicates the position of the recommended
items in the top-3 list. This presentation of recommended items
is consistent with the navigation support nature of the interface:
it does not force students to go to the recommended content, but
informs them and helps them make their next
navigational step. The resulting interface combines the social guidance of
the social OLM with the personal guidance provided by
recommendation algorithms. Yet, directly recommended content differs from the
navigation support provided by the OLM in the total lack of
communicated reasons behind the recommendation. While a low or high
level of individual or social topic knowledge can be easily traced
back to extensive or little work with the topic’s content (clearly
visualized in the content browser when the topic is opened), the system
offered no hints on why a specific content item was recommended.
In this paper we present an interface that attempts to address this
problem by connecting recommended content with the finer-grained
picture of student knowledge offered by a concept-level OLM.</p>
    </sec>
    <sec id="sec-8">
      <title>ENABLING LEARNING CONTENT</title>
    </sec>
    <sec id="sec-9">
      <title>RECOMMENDATION TRANSPARENCY</title>
    </sec>
    <sec id="sec-10">
      <title>THROUGH A FINE-GRAINED OLM</title>
      <p>
        The design of visual explanation of content recommendation is
based on our earlier work on concept-level OLM [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. This work
explored the role of a finer-grained OLM in student motivation and
navigation support. Our visualization allowed students to see the
overall level of their knowledge concept by concept, as well as to
see the concepts associated with each learning activity by mousing
over the activity’s cell (see Figure 2B).
      </p>
      <p>Given the recent interest in using visual interfaces to make
recommendation processes more transparent to users, it was natural to
explore the use of OLMs as an interface that could add transparency
to educational recommendations. In this paper we show our first
attempt to use concept-level knowledge visualization to explain
the choices made by the learning content recommendation engine,
in order to make the reasons behind these recommendations clearer
to the users.</p>
    </sec>
    <sec id="sec-11">
      <title>The Visual Explanation Interface</title>
      <p>The main features of our visual explanation interface are:
(1) Concept mastery bar chart: as can be seen in Figure 2C,
the estimation of the student’s mastery of domain concepts
is shown through a simple bar chart. In order to emphasize
when the student model is more or less confident about the
student’s mastery of a concept, we use 50% as the zero of the
y-axis: at this percentage the model is unsure about the
student’s mastery or lack of it, and the model probabilities
are also initialized to this value in the cold-start scenario (no
evidence of student activity). Accordingly, whenever the
student shows evidence of learning a concept, the
mastery percentage increases above this base probability
and the corresponding concept bar grows
along the positive y-axis. In contrast, if the learner starts
failing, i.e., giving evidence of having trouble
learning a specific concept, the estimated mastery probability
decreases below the base value, and we reflect this through
an increase in the corresponding bar’s length along
the negative part of the y-axis. We encode the bars’ colors
following the same rule: when the mastery probability is
above 50%, we use green, which gets more intense
closer to 100%, whereas below 50% we use red, which
gets more intense closer to 0%.</p>
      <p>Further, in order to give more context about the concepts
that the student should set as her/his study goal, the "focus
concepts" for the current topic are highlighted with a dashed
frame (see C in Figure 2). It is important to mention that this
visualization component can be used regardless of the student
modeling approach used for estimating the student’s knowledge
level, as it only uses the mastery estimates’ values.
(2) Recommendation gauge: the score that represents the
suitability of a certain piece of learning content, given its conceptual
composition, is shown through a gauge. When a learning
activity is moused over, one of three gauge segments is
targeted by the needle (see C in Figure 2), according to
the activity’s appropriateness for the student’s level of knowledge. The three
categories are the following: (1) Too hard: the estimated
probability of a successful attempt is too low (red segment);
(2) Learning opportunity: some of the underlying
concepts are not mastered yet, but some important ones are
mastered, so the activity can help increase student learning (green
segment); and (3) Too easy: the content will not yield any
important learning gain, given that the underlying
concepts are already mastered (gray segment).
(3) Textual explanation: a textual explanation of the
recommendation rule that was triggered for the recommended item is
shown when the activity cell is moused over (see C in Figure
2). We detail the rule-based recommendation algorithm used
in the present study in the next section.</p>
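      <p>The gauge’s three-way classification can be sketched as follows. This is an illustrative sketch: the paper specifies the three categories but not the exact thresholds, so the cutoff values, the function name, and the parameter names here are our assumptions.</p>
      <preformat>
```python
def gauge_segment(success_prob, hard_cutoff=0.3, easy_cutoff=0.9):
    """Pick the gauge segment for an activity from the estimated
    probability of a successful attempt.

    hard_cutoff and easy_cutoff are illustrative values, not the
    system's actual thresholds.
    """
    if success_prob >= easy_cutoff:
        # Underlying concepts already mastered: no important learning gain.
        return "too easy"  # gray segment
    if success_prob >= hard_cutoff:
        # Some concepts unmastered, but enough mastered ones to build on.
        return "learning opportunity"  # green segment
    # Probability of success too low for the student's current knowledge.
    return "too hard"  # red segment
```
      </preformat>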
    </sec>
    <sec id="sec-12">
      <title>Recommendation Approach</title>
      <p>
        For this study, we used a rule-based recommendation algorithm
based on the current level of knowledge of the student, which is
updated every time an activity is attempted [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. According to the
correctness of each attempt, the node values of the Bayesian
network that represents the student model are recomputed (increased
or decreased). These nodes reflect the probability of mastering each
fine-grained concept, as well as the probability of solving a problem
correctly or understanding an example. These latter probability
values serve as appropriateness scores for each activity; if
the value is above 0.7, the activity is considered a good candidate for being
recommended.
      </p>
      <p>
        It is important to mention that examples and challenges
(Parsons-like activities) were created in groups that share the same
learning goals [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. Given this, the rule-based recommendation
algorithm gives maximum priority to recommending a specific
challenge whenever a related example has been explored, regardless
of the challenge’s appropriateness score. Otherwise, based on the
appropriateness scores, the coding problems, non-related challenges, and
examples with the highest scores (in that order of priority) are suggested
until a set of three recommended activities per topic is complete. The
whole rule set is described in more detail in [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ].
      </p>
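      <p>The rule priorities described above can be sketched as follows. This is an illustrative sketch only: the field names ("kind", "score", "related_example") and the helper itself are our assumptions, not the system’s actual data model, and the full rule set is richer than what is shown here.</p>
      <preformat>
```python
def recommend(activities, explored_examples, k=3, threshold=0.7):
    """Select up to k activities for a topic using the rule priorities
    described above (illustrative sketch, not the system's code).

    activities: list of dicts with keys "id", "kind" ("problem",
    "challenge", or "example"), "score" (appropriateness), and, for
    challenges, "related_example" (id of the example sharing its
    learning goals). explored_examples: set of explored example ids.
    """
    recs = []
    # Rule 1: a challenge whose related example was explored gets top
    # priority, regardless of its appropriateness score.
    for a in activities:
        if a["kind"] == "challenge" and a.get("related_example") in explored_examples:
            recs.append(a)
    # Rule 2: fill the remaining slots with activities scoring above the
    # threshold, in priority order: problems, other challenges, examples
    # (higher scores first within each kind).
    priority = {"problem": 0, "challenge": 1, "example": 2}
    rest = [a for a in activities if a not in recs and a["score"] > threshold]
    rest.sort(key=lambda a: (priority[a["kind"]], -a["score"]))
    recs.extend(rest)
    return recs[:k]
```
      </preformat>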
      <p>As stated in the previous section, the rule that triggered one of
the top three recommended items is shown when the activity cell
is moused over in the interface.</p>
    </sec>
    <sec id="sec-13">
      <title>PRELIMINARY RESULTS</title>
      <p>We released this version of Mastery Grids with visual/textual
elements for making learning activity recommendations more
transparent in an intermediate Java programming course at the
University of Pittsburgh (Fall term, 2018). In order to motivate students
to use this non-mandatory practice system during the term, we
offered extra credit for completing a minimum activity threshold (7
coding problems, 5 Parsons-like problems, and 3 program example
explorations). Half of the students had access to textual explanations
and half did not. Only 36 of the 105 students who had access to
this interface version fulfilled the extra-credit requirement (13 with
access to textual explanations and 23 without). This subset was used
for the analysis of students’ behavior in the system. We focused
this general analysis on students’ navigation within the system
and the likelihood of opening/attempting the recommended learning
activities.</p>
      <p>From the navigational side, we calculated the proportion of time
that students spent using the Mastery Grids interface, i.e., not
solving problems or reading program examples. On average, students
explored the interface components 49.4% of the time (SD=13.2%).
This shows that students spent almost half of their time on the
platform exploring the Open Learner Model components, which could
be a sign that they took time to understand why the
recommended content was suggested to them at each moment.</p>
      <p>In order to study the influence of the recommendations shown
in the system, we explored whether there were differences between the
attempts on recommended and non-recommended activities for the
whole group of students. The first metric we computed was the
probability of opening a moused-over activity (p_open_mouseover ),
calculated as the number of opened activities divided by the number of
mouseovers on the Mastery Grids activity cells. A paired Wilcoxon
Signed Rank test showed that p_open_mouseover was
significantly higher (V =552, p&lt;.01) for recommended activities (Mdn =
.171) than for the ones that were not recommended (Mdn=.127). For
details, see Figure 3.</p>
      <p>Furthermore, as students sometimes open an activity but then
close it after feeling it is not the right activity to attempt, we
decided to compute the probability of attempting an opened
activity as a second metric of the recommendations’ influence
(p_attempt _open). This probability was computed as the number
of activities that were attempted divided by the number of activities
that were opened. A paired Wilcoxon Signed Rank test showed
that p_attempt _open was significantly higher (V =224, p&lt;.05) for
recommended activities (Mdn=.941) than for non-recommended
ones (Mdn=.839).</p>
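      <p>For concreteness, the two metrics can be computed from a per-student event log as sketched below. The event-tuple layout is our illustrative assumption, not the system’s actual log format. The per-student values for recommended vs. non-recommended activities can then be compared with a paired Wilcoxon Signed Rank test (e.g., scipy.stats.wilcoxon).</p>
      <preformat>
```python
def engagement_metrics(events):
    """Compute p_open_mouseover and p_attempt_open for one student.

    events: list of tuples (activity_id, recommended, n_mouseovers,
    opened, attempted) -- an illustrative layout, not the system's
    actual log format. Returns a dict keyed by the recommended flag,
    with values (p_open_mouseover, p_attempt_open).
    """
    out = {}
    for rec in (True, False):
        group = [e for e in events if e[1] == rec]
        mouseovers = sum(e[2] for e in group)
        opened = sum(1 for e in group if e[3])
        attempted = sum(1 for e in group if e[4])
        # Opened activities per mouseover, and attempts per opened activity.
        p_open = opened / mouseovers if mouseovers else 0.0
        p_attempt = attempted / opened if opened else 0.0
        out[rec] = (p_open, p_attempt)
    return out
```
      </preformat>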
      <p>Finally, we explored in more depth the differences between students
with and without textual explanations (i.e., more and less transparency).
We focused this analysis on the likelihood of opening/attempting
recommended activities. After performing four paired Wilcoxon
Signed Rank tests (see Table 1), we found that the general trend
of a higher probability of opening a recommended (rec)
activity than a non-recommended (non_rec) one is still significant
(p_open_mouseover ), but only marginally so for students with textual
explanations (p&lt;.1). On the other hand, only the group with textual
explanations exhibited a significantly higher probability of attempting
an opened activity when it was recommended rather than
non-recommended (p_attempt _open). This result suggests that
including textual explanations is related to higher student
confidence about the appropriateness of the activity, which could
be triggering more attempts. It is important to mention that we
need to be careful in interpreting this set of results given the small
differences in medians (.05) and the unbalanced number of students
in each subgroup.</p>
    </sec>
    <sec id="sec-14">
      <title>CONCLUSION</title>
      <p>In this paper we proposed the use of a fine-grained Open Learner
Model to support students’ understanding of how a learning
content recommender engine works. In this way, the recommendation
process became partially transparent to the students, as it
was made visible by showing estimations of students’ concept-level
knowledge (the recommender’s input) and part of the recommendation
rules (the recommender’s algorithm).</p>
      <p>After releasing the system for testing in a real introductory
programming class, we found that the activities transparently
recommended by the system seemed to have an influence: the probability of
opening a recommended activity, and of attempting it once opened,
was significantly higher than for non-recommended activities.
Moreover, it can be inferred from this study that adding transparency
by explaining the outcome of the recommendation could lead to
higher confidence in attempting the activities that are
recommended; however, deeper data, including students’ opinions, should
be collected to be sure about this claim.</p>
    </sec>
    <sec id="sec-15">
      <title>FUTURE WORK</title>
      <p>We plan to evaluate this interface in a controlled user study
using an eye-tracking setup, in order to study how students
explore and make use of the different explanatory components when
deciding whether to attempt activities, as otherwise this
information is very difficult to obtain. We also plan to gather
students’ thoughts about the value of adding transparency to an
educational recommender system, with the aim of studying whether the
benefit of making the system more transparent surpasses the cost
of the added complexity in understanding it.</p>
      <p>Additionally, we are working on analyzing previous student
activity data in order to define a more “data-driven” set of rules
for the recommendation algorithm, instead of the ad-hoc approach
used in the setup of this study, which could open up other
visualization setups for making the recommendations transparent.</p>
    </sec>
    <sec id="sec-16">
      <title>ACKNOWLEDGMENTS</title>
      <p>This work was funded by CONICYT PFCHA/Doctorado Becas
Chile/2018 - 72190680.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>Jordan</given-names>
            <surname>Barria-Pineda</surname>
          </string-name>
          , Julio Guerra, Yun Huang, and
          <string-name>
            <given-names>Peter</given-names>
            <surname>Brusilovsky</surname>
          </string-name>
          .
          <year>2017</year>
          .
          <article-title>Concept-Level Knowledge Visualization For Supporting Self-Regulated Learning</article-title>
          .
          <source>In Proceedings of the 22nd Int. Conference on Intelligent User Interfaces (IUI '17 Companion)</source>
          .
          <fpage>141</fpage>
          -
          <lpage>144</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>Jordan</given-names>
            <surname>Barria-Pineda</surname>
          </string-name>
          ,
          Julio Guerra-Hollstein, and
          <string-name>
            <given-names>Peter</given-names>
            <surname>Brusilovsky</surname>
          </string-name>
          .
          <year>2018</year>
          .
          <article-title>A Fine-Grained Open Learner Model for an Introductory Programming Course</article-title>
          .
          <source>In Proceedings of the 26th Conference on User Modeling, Adaptation and Personalization (UMAP '18)</source>
          .
          <fpage>53</fpage>
          -
          <lpage>61</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>Susan</given-names>
            <surname>Bull</surname>
          </string-name>
          and
          <string-name>
            <given-names>Judy</given-names>
            <surname>Kay</surname>
          </string-name>
          .
          <year>2007</year>
          .
          <article-title>Student Models That Invite the Learner In: The SMILI:() Open Learner Modelling Framework</article-title>
          .
          <source>Int. Journal of Artificial Intelligence in Education 17</source>
          ,
          <issue>2</issue>
          (
          <year>2007</year>
          ),
          <fpage>89</fpage>
          -
          <lpage>120</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>Roya</given-names>
            <surname>Hosseini</surname>
          </string-name>
          .
          <year>2018</year>
          .
          <article-title>Program Construction Examples in Computer Science Education: From Static to Adaptive Engaging Learning Technology</article-title>
          .
          <source>Ph.D. Dissertation</source>
          . University of Pittsburgh.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>Roya</given-names>
            <surname>Hosseini</surname>
          </string-name>
          , Kamil Akhuseyinoglu, Andrew Petersen,
          <string-name>
            <given-names>Christian D.</given-names>
            <surname>Schunn</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Peter</given-names>
            <surname>Brusilovsky</surname>
          </string-name>
          .
          <year>2018</year>
          .
          <article-title>PCEX: Interactive Program Construction Examples for Learning Programming</article-title>
          .
          <source>In Proceedings of the 18th Koli Calling International Conference on Computing Education Research (Koli Calling '18).</source>
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>Roya</given-names>
            <surname>Hosseini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I-Han</given-names>
            <surname>Hsiao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Julio</given-names>
            <surname>Guerra</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Peter</given-names>
            <surname>Brusilovsky</surname>
          </string-name>
          .
          <year>2015</year>
          .
          <article-title>What Should I Do Next? Adaptive Sequencing in the Context of Open Social Student Modeling</article-title>
          . In
          <source>Design for Teaching and Learning in a Networked World</source>
          , Gráinne Conole, Tomaž Klobučar, Christoph Rensing, Johannes Konert, and Elise Lavoué (Eds.). Springer International Publishing, Cham,
          <fpage>155</fpage>
          -
          <lpage>168</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>Nikos</given-names>
            <surname>Manouselis</surname>
          </string-name>
          , Hendrik Drachsler, Katrien Verbert, and
          <string-name>
            <given-names>Erik</given-names>
            <surname>Duval</surname>
          </string-name>
          .
          <year>2012</year>
          .
          <article-title>Recommender Systems for Learning</article-title>
          . Springer Publishing Company, Incorporated.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>Nava</given-names>
            <surname>Tintarev</surname>
          </string-name>
          and
          <string-name>
            <given-names>Judith</given-names>
            <surname>Masthoff</surname>
          </string-name>
          .
          <year>2011</year>
          .
          <article-title>Designing and Evaluating Explanations for Recommender Systems</article-title>
          . In
          <source>Recommender Systems Handbook</source>
          , Francesco Ricci, Lior Rokach, Bracha Shapira, and Paul B. Kantor (Eds.)
          . Springer US, Boston, MA,
          <fpage>479</fpage>
          -
          <lpage>510</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>Yongfeng</given-names>
            <surname>Zhang</surname>
          </string-name>
          and
          <string-name>
            <given-names>Xu</given-names>
            <surname>Chen</surname>
          </string-name>
          .
          <year>2018</year>
          .
          <article-title>Explainable Recommendation: A Survey and New Perspectives</article-title>
          .
          <source>Foundations and Trends in Information Retrieval</source>
          (
          <year>2018</year>
          ).
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>