<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>IEEE Multimedia Journal</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Individual and Peer Comparison Open Learner Model Visualisations to Identify What to Work On Next</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Susan BULL</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Peter BRUSILOVSKY</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Julio GUERRA</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Rafael ARAUJO</string-name>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Univ. College London</institution>
          ,
          <country country="GB">United Kingdom</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Univ. of Pittsburgh</institution>
          ,
          <country country="US">USA</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>Univ. of Uberlandia</institution>
          ,
          <country country="BR">Brazil</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2011</year>
      </pub-date>
      <volume>5</volume>
      <fpage>44</fpage>
      <lpage>55</lpage>
      <abstract>
        <p>Open learner models (OLMs) can support self-regulated learning, collaborative interaction, and navigation in adaptive educational systems. Previous research has found that learners have a range of preferences for learner model visualisation. However, that research has focused mainly on visualisations available within a single system, so not all visualisations have been compared to each other. We present a study using screen shots of OLM visualisations for individuals and for comparing one's own learner model to the models of other individuals or of the group, to identify, across a wider range of options, the visualisations that students would expect to use to determine what to work on next.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. INTRODUCTION</title>
      <p>In some systems the learner model can be shown to the learner
in different forms, and in such cases it has been found that,
while some visualisations may be more popular, users do not all
use the same visualisations, and will often use more than one
[7,10,16,17,20,26]. To date, this research has compared use of
visualisations within specific systems, and so not all
visualisations have been compared against each other. We here introduce
a study using screen shots from a range of successful systems,
some with single, and some multiple model visualisations, to
gain a more general picture of the visualisations that students
expect to find useful to support their self-regulated learning.
Some systems show two sets of skills or beliefs to allow the
learner to directly compare their own beliefs (e.g. from
self-assessments) to the beliefs about the learner’s knowledge or
skills that the system has inferred [1,12,21,34]; or to compare
their level of understanding to the expected level for the current
stage of a course [10]. Some systems allow learners to view
learner model information of others [4,6,11,14,15,23,29,31,32,
33] and/or an average or aggregate model of the group [3,4,6,11,
13,18,19,23,24,31,32]. The second part of the study showed
screen shots of OLM visualisations allowing the learner to
compare a learner model to that of other individuals or the group (in
some cases with some editing to the screen shot). We also
investigate whether participants would expect to find the same
visualisations useful for inspecting their own learner model, as when
comparing their learner model to the learner models of others,
and whether they would expect to use their individual or the
comparison models when deciding what to work on next. This
provides some insight into which visualisations to investigate
further in systems with multiple or single visualisations, and
which can be viewed on an individual and/or comparison basis.</p>
    </sec>
    <sec id="sec-2">
      <title>2. LEARNER MODEL VISUALISATIONS</title>
      <p>Figure 1 shows the comparison visualisations used in the study
reported in Section 3. They variously use fill, colour, position or
size to indicate the strength of understanding or skills. Because
the participants were taking computing-related courses, each
screen shot was taken from a system for a computing course, to
provide some familiarity in the domain content. However,
because the screen shots are from real systems, the domains are not
identical – they were the same only when more than one
visualisation was used from a multiple-view OLM.</p>
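As a minimal sketch of the fill-based encoding described above (illustrative only: the function, labels and values are hypothetical, not taken from any of the systems studied), a skill meter can be rendered by mapping an estimated mastery level to the filled proportion of a bar:

```python
def skill_meter(competency: str, mastery: float, width: int = 20) -> str:
    """Render a text-mode skill meter: the filled proportion of the bar
    encodes the learner's estimated level of understanding (0.0-1.0),
    mirroring the fill-based encoding of the Skill Meters views."""
    filled = round(max(0.0, min(1.0, mastery)) * width)
    return f"{competency:<12} [{'#' * filled}{'-' * (width - filled)}] {mastery:.0%}"

# Placing two bars side by side gives the peer-comparison variant:
# one bar for 'my model' and one for the group average.
print(skill_meter("loops", 0.75))
print(skill_meter("group avg", 0.55))
```

The same mastery value could equally be mapped to colour, position or size, which is the main axis along which the visualisations in Figure 1 differ.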
      <p>The individual visualisations are similar to the comparison
visualisations shown in Figure 1, but without the comparison
features. For example, where the skill meters and graph show
comparison of an individual model to the individual models of other
students, or an overall group model or a specific individual peer,
the individual versions lack the comparison components (e.g.
showing the top ‘my model’ only, for Skill Meters 3 and Graph
2; the second ‘my knowledge’ column in Skill Meters 2 and
Graph 1; or the top skill meter in each row for Skill Meters 1).
Some of our examples are specifically for comparison to a range
of other individual students, as each peer model is shown
separately (Skill Meters 3, Graph 2, Bullets 2, Grid 2, Circle 2). The
remainder could either represent a comparison to another
individual peer’s learner model, or a combined model of the group.
The Skill Meters (1 [7]; 2,3 [10]) show level of understanding
by the proportion of the meter that is filled; the Graph views
[10] show this with the positive knowledge on the right of the
axis, and areas of difficulty on the left. (Skill Meters 2 and
Graph 1 actually show the learner’s current knowledge
alongside the instructor’s stated expected levels for the stage of
the course, but were edited for this study to indicate peer
knowledge; and Skill Meters 1 show data from different sources,
but are used here to indicate peer competencies). The Bullets [5]
indicate level of knowledge by the amount of fill in the bullet.
(The actual individual Bullets visualisation has only one column
of bullets. These screens have been edited to add extra columns
to show peer knowledge.) The Grids [4] use colour to indicate
level of understanding, with Grid 1 comparing against the group
(or, for this study, also a single individual); and Grid 2
comparing against a ranked list of individuals. Table 1 [7] lists the
competencies in the first column, with the remaining columns
ranging from weak to strong, with a dot in the cell indicating the
strength of the competency in each case. (The actual comparison
visualisation shows data from different sources, but is used here
to indicate peer competencies. The corresponding individual
table has only one dot per row.) Table 2 [10] ranks topics from
high to low, with the comparison shown in a separate column.
The Word Clouds show strong competencies in larger text on
the left, and weak competencies in larger text on the right; and
comparison has to be made between the upper (individual’s)
learner model in this case, and the lower word clouds. On the
Radar Plot, the comparison data is overlaid, with the
competencies listed around the rim. (The Word Clouds and Radar Plot
also actually show a comparison of data from different sources,
but are used here to illustrate peer comparison.) Treemap 1 [7]
shows the individual’s level of competency by the size of the
corresponding square, and has been edited by adding dashed
lines around two competencies – in grey if the learner has a
higher level than the comparison peer(s), and red if their own
competency level is lower; and Treemap 2 [2] uses colour to
show level of understanding (size relates to the number of
problems related to the skill). We used the visualisation in its
original way for the individual part of the study, and informed
participants in the comparison part of the study that colour
represented the individual’s understanding in comparison to that of others
– an individual or the group – with green showing they had
stronger understanding than others, and red, weaker. Both
Treemaps are zoomable, allowing users to access the next layers
in the hierarchical structure. The Circles [19] also use colour to
indicate strength of knowledge, with Circle 1 comparing against
an individual or the group; and Circle 2 showing multiple peer
OLMs. Histogram 1 [7] shows data from different sources in the
two examples given, but here we advised students that the
comparison was between the individual and a single other peer or
the group. Histogram 2 [10] indicates the learner’s own level of
knowledge for each topic by a star, on the scale of weak to
strong, with other students’ knowledge distributed along the
scale as appropriate. The remaining visualisations are more
obviously structured. While Skill Meters 1 and Table 1 do show
the hierarchical structure by indenting sub-competencies, in the
Pre-requisites [26], Concept Map [26], Hierarchical Tree [26]
and Network [7], the layout of the visualisation makes this more
apparent. Each of these has been edited with dashed lines
around nodes (as Treemap 1), to show comparison information.</p>
    </sec>
    <sec id="sec-3">
      <title>3. EVALUATION</title>
      <p>The study presented here investigates the perceived utility of a
range of visualisations from existing OLMs, to determine
preferences for visualisations of one’s own learning and for
comparison of their learning to that of individual peers or a combined
model of the group, to make decisions about their learning.</p>
    </sec>
    <sec id="sec-4">
      <title>3.1 Participants, Materials and Methods</title>
      <p>
        Participants were 33 volunteers who responded to an email
invitation to students studying in the School of Information
Sciences, University of Pittsburgh. They attended one of two 1.5-hour
sessions, and were compensated USD 20 for their participation.
Participants were shown 17 examples of OLM screen shots
relating to an individual’s learner model, and the main features of
each were explained. Participants were able to ask questions at
any point. They then received the first questionnaire about their
perceptions of the individual learner model visualisations, which
required responses on a five-point scale: strongly agree (5),
agree (4), neutral (3), disagree (2), strongly disagree (1). They
also received paper copies of the screen shots as a reminder, but
could also ask for the screen shots to be projected again, while
they completed the questionnaire. The procedure was then
repeated using 23 comparison visualisations. There are more
comparison visualisations because some show comparisons to a
single peer or the group, while some show comparisons to
multiple other individuals. Participants were instructed to interpret
the visualisations comparing to just one other, or the group, as
being applicable to both cases.
      </p>
    </sec>
    <sec id="sec-5">
      <title>3.2 Results</title>
      <p>
        Table 1 gives results from the questionnaire item asking whether
students could easily identify what to work on next for each of
the 17 individual visualisations, and Table 2 shows results for
the same item for the 23 comparison visualisations. For both
types of visualisation, the range of responses is similar. Each
visualisation has some people claiming it to be easy to use to
identify what to work on next, and some claiming not to be able
to use it easily for this purpose. However, in most cases, there
are more people agreeing with the statement than disagreeing.
Those that stand out as more towards the negative for the
individual visualisations are Table 1, Word Cloud, Treemap 1,
Treemap 2, Circle and Network, where the means and medians
are in the neutral range; and Bullets (marginally), Grid, Table 2,
Radar Plot, Histogram and Concept Map, where although the
medians are 4 (agree), there is a higher proportion of
participants responding neutrally and/or negatively. Particularly strong
are the responses for the Pre-requisites and Hierarchical Tree
visualisations. The remaining three visualisations had mostly
positive responses (Skill Meters 1, Skill Meters 2, Graph).
The results for the comparison visualisations in Table 2 are
generally lower than for the individual visualisations. Those that
scored lower on the individual visualisations also scored lower for
the comparison (Table 1, Word Cloud, Treemap 1, Circle 2,
Network). In contrast, the Prerequisites and Hierarchical Tree,
that had the strongest results in the individual visualisations,
were not so strong for comparison purposes, with the means and
medians both being lower. However, these were not out of line
with other comparison visualisations. The other Treemap (2)
and Circle (1), while having medians of 4 (higher than for the
corresponding individual visualisations), each had a mean of 3.6
reflecting a greater tendency for neutral/negative responses than
for some of the other visualisations. Both Graph comparisons
have lower means than the individual Graph visualisation. The
Bullets, Grids, Radar Plot, Histograms and Concept Map
comparisons are similar to the corresponding individual results. Skill
Meters 1 and 2, while high for the individual visualisations,
scored highest for comparison. Skill Meters 3 was lower.
Ranking the visualisations for ease of identifying what to work
on next, we obtain the order in Table 3. The ‘structure’ columns
relate to whether the structure of the domain is represented
within the visualisation. For example, the highly structured
visualisations are the Pre-requisites, Concept Map, Hierarchical Tree and
Network, which all display relationships between nodes. Those
labelled medium indicate some structure, but this is less obvious
from looking at the screen shot. For example, the Treemaps are
zoomable to the next level in the hierarchy, but relationships
between different parts of the tree are not shown simultaneously;
the Circles show concepts grouped in segments, but the
relationships are not obvious; and Skill Meters 1 and Table 1 indent
sub-competencies, but this hierarchical structure is not as clear
as in the visualisations defined as highly structured. Indeed,
these require scrolling to see all competencies when there are a
large number. The ‘sing/mult’ column for the comparison
visualisations indicates whether the corresponding visualisation was
for comparison to a single other individual or the combined
group; or for multiple individual peer models.
As indicated above, the rankings of the visualisations for ease of
identifying what to work on next, were not consistent across
individual and comparison views. While the most popular
individual visualisations were highly structured, this is not the sole
reason for their choice, since other structured visualisations are
lower on the ranked list. In the comparison visualisations, the
highly structured visualisations are spread throughout the list.
There is no clear difference between whether participants can
identify what to work on next from visualisations with a single
other comparison (individual peer or group), or where many
individual peer models are available.
      </p>
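<p>The ranking procedure implied above can be sketched as follows (the response data here are hypothetical placeholders, not the study's actual results, which appear in Tables 1-3):</p>

```python
from statistics import mean, median

# Hypothetical 5-point Likert responses (5 = strongly agree) for the item
# "I could easily identify what to work on next", one list per visualisation.
responses = {
    "Skill Meters 1": [5, 4, 4, 5, 4, 3, 5],
    "Word Cloud":     [3, 2, 4, 3, 3, 2, 4],
    "Treemap 1":      [3, 3, 2, 4, 3, 3, 2],
}

# Rank visualisations by mean rating, descending, as in Table 3's ordering;
# the median indicates the typical agree/neutral/disagree response.
ranked = sorted(responses, key=lambda v: mean(responses[v]), reverse=True)
for v in ranked:
    r = responses[v]
    print(f"{v:<15} mean={mean(r):.2f} median={median(r)}")
```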
      <p>Table 4 shows the frequency with which participants claimed
that they would use individual and comparison visualisations to
decide what to work on next; and whether they would expect to
use the same or different visualisations for this purpose,
assuming that multiple options were available in a system.
The medians are high (agreeing with the statement) for
anticipated use of individual and comparison views for identifying
what to work on next, but with a greater tendency towards
neutral or negative responses for the comparison visualisations.
Some participants stated that they expected to use the same, and
some that they expected to use different visualisations to
monitor their own learning and to compare to others. The relative
rankings of individual and corresponding comparison
visualisations were generally reflected in responses about whether
participants could use each to identify what to work on next. The top
ranked visualisation pairings were: Individual Skill Meters 1 /
Comparison Skill Meters 1; Individual Skill Meters 2 /
Comparison Skill Meters 2; Individual Pre-requisites Map / Comparison
Pre-requisites Map; Individual Hierarchical Tree / Comparison
Hierarchical Tree. The bottom ranked pairs were: Individual
Table 1 / Comparison Table 1; Individual Network /
Comparison Network; Individual Word Cloud / Comparison Word
Cloud; Individual Treemap 1 / Comparison Treemap 1.</p>
    </sec>
    <sec id="sec-6">
      <title>3.3 Discussion</title>
      <p>Table 1 showed that the individual Prerequisites and
Hierarchical Tree had particularly strong responses. These
visualisations are more obviously highly structured than most of the
others; and the structure can be easily seen at a glance. However, it
is probably not simply the existence of structure that appeals,
since the Concept Map is also highly structured, also uses
colour of nodes to show level of understanding, but scored lower
despite being from the same system as the two visualisations
that had the very strong responses. It may be that the particular
relationships shown in the Hierarchical Tree (topics and
subtopics) and Pre-requisites (pre-requisite relationships) were
easier to understand than the conceptual relationships portrayed
in the Concept Map, or in the circular display of hierarchical
links in the Network. At this stage, therefore, we tentatively
propose that it is the nature of the relationships and/or
familiarity of the layout that makes the difference for these participants,
and such relationships might be usefully included in OLM
visualisations. For simpler (less or unstructured) visualisations, Skill
Meters and similar (Graph, Bullets), are generally claimed to be
the most useful. Skill Meters are also used relatively often in
practice in systems that have multiple visualisations, where skill
meters are amongst the options available [7,10,17]. Therefore
these or similar might also be usefully considered as options in a
system – unless it is important in a particular context to include
the domain structure in the display. In that case, the hierarchical
Skill Meters 1 might be useful if it is the hierarchical structure
that is important.</p>
      <p>We believe that the lower ranking for the Prerequisites and
Hierarchical Tree in the comparison visualisations is because the
dashed outlines are harder to see easily and/or the comparison of
being behind or ahead of others is harder to interpret. However,
because these are strong for the individual visualisations, and
fare well when individual and comparison visualisation
preferences are considered together, we suggest it useful to retain
these in systems. Visualisations using enclosed areas to show
skills (Treemaps, Circle) are not generally considered as useful
for individual or comparison models, and Word Cloud is also
thought hard to use to identify what to work on next. Although
some would expect to use these, it is likely less useful to include
them in a system if there is to be only one visualisation.
Visualisations that here showed only one comparison could
also be used to display more individual peer models. For
example, the visualisations that in reality show data from different
sources (Skill Meters 1, Table 1, Radar plot, Word Cloud,
Treemap 1, Histogram 1, Network), can actually show more
than just two sources. Therefore, they could also have been used
to show multiple individual models – Skill Meters 1 and Table 1
can show several sets of data in each row; the Radar plot can
have multiple overlays of data; and the Word Cloud, Treemap 1,
Histogram 1 and Network can show multiple models in separate
displays. We did not investigate the latter here, as we anticipated
that such repetition would be difficult to use in practice.
However, future work could usefully investigate the extent to which
Skill Meters 1 in particular, could support comparison to
multiple peers, given their popularity for all cases studied.
As the screen shots were taken from existing systems, the
domains, while all related to computing, were still different.
Furthermore, some learner models comprised many concepts, while
others were more limited, and levels of understanding of topics
were consistent only in cases where more than one visualisation
of a learner model was used from a multiple-view OLM. This
therefore limits our findings. Conversely, because we used a
range of existing OLMs that have been used successfully in
practice, the advantage is that we are comparing a wider range
of real visualisations. This is clearly a trade-off that needs to be
kept in mind when interpreting the results. Ongoing studies
(such as [22]) may complement this, as new combinations of
visualisations are implemented to compare the same data; and
this could also be extended to include comparison models.
Another limitation is that participants were not aware of the
comparison visualisations until after they had responded to the
questionnaire on the individual visualisations. This was done to
avoid the comparison visualisations having an effect on the
choices for individual visualisations. Thus, the results for
individual visualisations may be used more easily for making
decisions about what to include in a system, or for further study, if
not also considering comparisons. However, when considering
both, our results provide a starting point for further
investigation. Even though some participants considered that they would
use different visualisations to monitor their own learning and to
compare themselves to others, when using a system in practice,
if using both individual and comparison visualisations they
might find it easier to routinely use the same one(s). This needs
further investigation.</p>
      <p>Asking participants what they would expect to do before they
actually do it has limitations – there is no guarantee that they
will actually behave in the way they predicted [28]. However,
the alternative is to implement a prototype containing many
individual and comparison visualisations, and then have
students use it. Our current work aims to help reduce the space of
choices before such a study is undertaken. Those visualisations
at the bottom of the ranked lists might be reasonably omitted,
while those at the top might be especially useful to include.
Given that all the visualisations used in the study were taken
from (or adapted from) visualisations that have been used in
systems in practice, they have, at some stage, been considered
useful by the system designers.</p>
    </sec>
    <sec id="sec-7">
      <title>4. SUMMARY</title>
      <p>This paper has introduced a range of visualisations previously
used in OLMs, and presented a study comparing responses to
questionnaire items about whether the visualisations would help
participants identify what to work on next, with reference to: (i)
an individual learner model; (ii) comparing an individual model
to that of another student or the group; and (iii) comparing an
individual model to the models of several individual peers. It
was found that some of the highly structured visualisations are
perceived useful for this task when it comes to the individual
model, and that skill meters and similar visualisations are
considered easy to use for this purpose especially in comparison
visualisations. While there are individual differences, the
abovementioned visualisation types also do well when considering
individual and comparison visualisations together. Based on our
results, and following previous research showing that multiple
visualisations will be used in practice, we recommend offering
several options in systems that open the learner model to the
learner. Our ranked lists aim to help designers of future studies,
and system developers, to select those most appropriate to their
context (individual, comparison or both).</p>
    </sec>
    <sec id="sec-8">
      <title>ACKNOWLEDGEMENTS</title>
      <p>
        This work was undertaken while the first and last authors were
visiting the University of Pittsburgh, USA.
[12] Bull, S. &amp; Pain, H. (1995). 'Did I Say What I Think I Said,
And Do You Agree With Me?': Inspecting and Questioning
the Student Model, in J. Greer (ed), AIED, AACE, 501-508.
[27] Mitrovic, A. &amp; Martin, B. (2007). Evaluating the Effect of
Open Student Models on Self Assessment, IJAIED 17(2),
121-144.
      </p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <surname>Al-Shanfari</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Demmans Epp</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          &amp;
          <string-name>
            <surname>Bull</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          <article-title>(in press). Uncertainty in Open Learner Models: Visualising Inconsistencies in the Underlying Data</article-title>
          , in S. Bull,
          <string-name>
            <given-names>B.</given-names>
            <surname>Ginon</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Kickmeier-Rust</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Kay</surname>
          </string-name>
          &amp;
          <string-name>
            <given-names>M.D.</given-names>
            <surname>Johnson</surname>
          </string-name>
          (eds),
          <source>Workshop on Learning Analytics for Learners, LAK16</source>
          , CEUR.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <surname>Brusilovsky</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Baishya</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hosseini</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Guerra</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          &amp;
          <string-name>
            <surname>Liang</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          (
          <year>2013</year>
          ).
          <article-title>KnowledgeZoom for Java: A ConceptBased Exam Study Tool with a Zoomable Open Student Model</article-title>
          ,
          <source>Proceedings of ICALT</source>
          , IEEE.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <surname>Brusilovsky</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hsiao</surname>
            ,
            <given-names>I.H.</given-names>
          </string-name>
          &amp;
          <string-name>
            <surname>Folajimi</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          (
          <year>2011</year>
          ).
          <article-title>QuizMap: Open Social Student Modeling and Adaptive Navigation Support with TreeMaps</article-title>
          , in C.D.
          <string-name>
            <surname>Kloos</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          <string-name>
            <surname>Gillet</surname>
            ,
            <given-names>R.M.</given-names>
          </string-name>
          <string-name>
            <surname>Crespo Garcia</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          <string-name>
            <surname>Wild</surname>
          </string-name>
          &amp; M. Wolpers (eds), ECTEL, Springer-Verlag, Berlin Heidelberg,
          <fpage>71</fpage>
          -
          <lpage>82</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <surname>Brusilovsky</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Somyurek</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Guerra</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hosseini</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          &amp;
          <string-name>
            <surname>Zadorozhny</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          (
          <year>2015</year>
          ).
          <article-title>The Value of Social: Comparing Open Student Modeling</article-title>
          and Open Social Student Model-
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>