CEUR-WS Vol-1596, paper 4: https://ceur-ws.org/Vol-1596/paper4.pdf
Uncertainty in Open Learner Models: Visualising Inconsistencies in the Underlying Data

Lamiya Al-Shanfari, University of Birmingham, UK (lsa339@bham.ac.uk)
Carrie Demmans Epp, University of Pittsburgh, USA (cdemmans@pitt.edu)
Susan Bull, University College London, UK (s.bull@ucl.ac.uk)


ABSTRACT

This paper suggests different methods for visualising uncertainty in open learner models (OLM). In order to visualise the uncertainty in OLMs, two factors need to be measured: the source of the uncertainty in the data and the level of uncertainty in the learner model. This paper proposes a method to detect the source of uncertainty within a learner model: outlier analysis is employed to identify inconsistencies in the data set from which the OLM is built. The level of uncertainty that is present in the model is determined by summing the influence weights of the learner model data that was identified as being inconsistent. Different approaches to visualising this uncertainty within OLMs are proposed, and benefits are argued for OLMs that visualise uncertainty in learner models that can be jointly maintained by student and system.

Keywords

Uncertainty, open learner models, visualisation

Copyright © 2016 for the individual papers by the papers' authors. Copying permitted only for private and academic purposes. This volume is published and copyrighted by its editors. LAL 2016 workshop at LAK '16, April 26, 2016, Edinburgh, Scotland.

1. INTRODUCTION

Learner models represent what a teaching system believes about the learner's knowledge, beliefs, competencies, or other learning-relevant constructs; the information contained in these models is usually used to drive the adaptivity in intelligent teaching systems [17]. In most adaptive systems, the learner model is hidden from the learner. However, open learner models (OLM) allow the learner to view the information that is contained within the system's model of the learner [8]. Opening the learner model to the learner, or allowing it to be viewed, may increase learners' metacognitive skills, e.g. promote learner reflection, and help them to plan and monitor their learning [7].

OLMs allow learners to access learner model information through one or more visualisations, such as the very common skill meters [4,5,6,11,15,26], concept maps [15,22,30], hierarchical tree structures [15,19,21,22], networks [4,6], tree maps [3,4,6,21], word clouds [4,6], and radar plots [4,6,21]. Figure 1 shows some of the visualisations of the learner model from the Next-TELL [4] and LEA's Box [6] OLMs. Skill meters, at the top of Figure 1, indicate the level of knowledge by filling in the bar, and can be useful with a low number of topics. If the learner model has a larger number of topics, the user would need to scroll down to view all the topics. In contrast, the network visualisation that is below the skill meters in Figure 1 shows a larger number of topics in the same screen space, but it can be difficult to read if many nodes sit very close together. The network uses different visual variables, such as size and colour, to indicate the learner's level of understanding (the larger the node and the brighter the colour, the higher the level of understanding). The radar plot that is below the network on the right of Figure 1 can also show the relative weaknesses or strengths of learner knowledge for different topics in a smaller space than the skill meters, but does not allow the domain structure to be shown. The tree map that is to the left of the radar plot can be useful when a large number of topics need to be shown because learner understanding of sub-topics can be explored by clicking on the parent topic. However, this means that users cannot compare topics from different parts of the tree. The level of understanding for each topic is indicated by the size of the corresponding rectangle. The topics in the word cloud at the bottom of Figure 1 are separated into two boxes: strong and weak topics. In the weak box, larger words indicate weaker skills, whereas those that are larger in the strong box are stronger skills.

Figure 1: Examples of open learner model visualisations from Next-TELL OLM [4] and LEA's Box OLM [6].

The data in the learner model can come from the same system, as has traditionally been the case (e.g., [3,5,9,10,14,19,20,22,26]), or a variety of external sources (e.g., [4,6,25,28]). For example, in Next-TELL's
OLM, data can come from different automated sources (e.g., quizzes, problems, virtual world activities) or manually entered sources (e.g., self-assessments, peer-assessments, or teacher assessments of the learner's skills) [4]. Using different data sources can allow different activities to be taken into account during the inference process that creates the learner model, much as portfolio assessment and e-portfolios use a variety of evidence when assessing learners [31,33]. However, using varied data sources may increase the likelihood of model uncertainty because of the variability in the data that is included.

Researchers whose focus is on managing uncertainty have recognised the problem of uncertainty within the learner modelling process [18]. Model uncertainty in general is based on the quality of the data, which is influenced by key components such as error, accuracy, consistency, completeness and precision [12]. To address these and other types of uncertainty, numerical techniques that account for uncertainty within the learner model have occasionally been used. These methods include Bayesian networks and fuzzy logic [18].

In this paper, we focus on uncertainty visualisation in open learner models in terms of inconsistency in the data over which the model reasons. For instance, a student may receive a low score on one quiz and score highly on all of the other quizzes; or there may be inconsistencies between automatically inferred data and self-assessments. Visualising uncertainty in the learner model can reveal these inconsistencies. Uncertainty in the learner model data can be indicated using different aspects of the visualisation (i.e., visual variables [13]). The use of well-selected visual variables can permit users to automatically identify the pattern depicted by those visual variables without having to focus their attention on this task [23]. Different methods of visually representing uncertainty within OLMs, using visual variables such as blur, opacity and arrangement, have been proposed [13]. We here extend that work to measure the uncertainty, which is a precursor to visualising uncertainty in the learner model. Measuring uncertainty in our current research is based on identifying inconsistencies within the data set.

This paper is organised as follows. Section 2 presents work related to uncertainty in OLMs and Section 3 discusses uncertainty visualisation. Section 4 proposes how a teaching system can identify the uncertainty that is caused by inconsistency within a data set. Following this, Section 5 suggests the further benefit of uncertainty visualisation in OLMs that are jointly maintained by student and system.

2. UNCERTAINTY IN OLMs

The visualisation community has recently become increasingly aware of the importance of visualising the uncertainty that is present in data [2]. Understanding uncertainty in the data is important to allow users to make better decisions based on the information given in the OLM [13]. Unreliable data evidence can result from students correctly guessing or accidentally making a mistake, both of which affect the state of the learner model [32]. Student modelling has sometimes bypassed the issue of model uncertainty, or handled it by using simple techniques, such as fuzzy logic, or complex techniques, such as Bayesian reasoning [18]. Fuzzy logic uses simple variables to represent the level of understanding for a learner, with imprecise values from within a range assigned to a variable to make it easier to understand and modify; Bayesian networks, which are more complex, assign probability values to each node in the learner model, representing the possibilities of different paths in a cause and effect relationship [18].

Different methods can be used to visualise the level of knowledge and the beliefs represented in the learner model. For example, the VisMod Bayesian Belief Network [37] is a learner model visualised as a concept map with nodes that relate to the level of understanding and links that indicate the learning sequences. The level of understanding is constructed based on the probability value within a particular node, including the previous knowledge and the current data evidence. VisMod [37] uses different data sources to construct the learner model (self-assessment, teacher-assessment and evidence provided by the system). VisMod then manipulates different visual elements, including colour, size, proximity, line thickness and animation, to show changes in the node at different time intervals and from different data sources. The student's and the system/teacher's beliefs are taken into account in the visualisation of the student model to indicate uncertainty that results from having these two sets of beliefs. Each belief is represented with a separate node, using colour and size to indicate the strength of the level of understanding. The overall level of understanding across both beliefs is visualised as another node using the average of the two beliefs (the student's and the system's beliefs about the student's understanding). The colour of the combined node comes from the belief (from either the system or student) that most influences the modelled level of student understanding.

Fuzzy logic is another way of dealing with uncertainty in OLMs. For example, the LOZ open learner model [27] uses vague linguistic values (strong, medium and weak) to represent the learner's level of knowledge. The learner model is used to select multiple choice questions, which are classified into three levels of difficulty (high, moderate and low), for learners based on their level of knowledge. When a student with a weak level of knowledge correctly answers an assessment task from the difficult level, the system indicates that there is uncertainty due to inconsistency between the two sources (level of knowledge and level of difficulty), so the system provides another question to avoid having a lucky guess or a slip unduly influence the learner model.

While these approaches provide information about uncertainty in the underlying model, learners and teachers could still benefit from viewing additional information about model uncertainty. However, this information can only be displayed once it has been measured, and to date little effort has been expended on quantifying or representing student model uncertainty with a view to visualising this information to the user. As indicated above, visual variables can be used to represent uncertainty and communicate it to the user [24]. Figure 2 shows some visual variables that can be used in the context of this paper, with three levels of uncertainty indicated from left to right (low, medium and high) [24]. The uncertainty levels represented in Figure 2 can be applied to different types of OLM visualisations, for example, those presented in Figure 1.

Arrangement is used to indicate uncertainty, where messier arrangements show higher uncertainty [29]. Opacity can be used to show uncertainty by increasing the transparency of uncertain data [23], and blur can be used to represent uncertainty by increasing the fuzziness of the visual element with the uncertainty that is present in that element's underlying data [23]. The size or thickness of a dashed outline can also indicate uncertainty: the thicker the dashed line, the higher the uncertainty [1].

Figure 2: Levels of uncertainty for visual variables.
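As a concrete illustration, the mapping from an uncertainty score to these visual variables can be sketched as a simple function. The parameter names and the linear ranges below are our own illustrative assumptions, not values taken from any of the cited systems:

```python
def uncertainty_encoding(uncertainty: float) -> dict:
    """Map an uncertainty value in [0, 1] to illustrative visual variables.

    Higher uncertainty -> lower opacity, more blur, thicker dashed outline.
    The specific ranges are assumptions for demonstration only.
    """
    if not 0.0 <= uncertainty <= 1.0:
        raise ValueError("uncertainty must be in [0, 1]")
    return {
        # transparency increases with uncertainty [23]
        "opacity": round(1.0 - 0.8 * uncertainty, 2),
        # blur radius (px) grows with uncertainty [23]
        "blur_px": round(4.0 * uncertainty, 2),
        # dashed outline gets thicker with uncertainty [1]
        "dash_width_px": round(1.0 + 3.0 * uncertainty, 2),
    }

# Example: a highly uncertain topic is rendered faint, fuzzy and thickly dashed.
print(uncertainty_encoding(0.9))
```

Any renderer (a web front end, a plotting library) could consume such a record when drawing a skill meter, node or tree map cell.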
3. EXAMPLES OF UNCERTAINTY VISUALISATION FOR OLMS

To take a step towards providing the learner with information about model uncertainty, we first demonstrate two uncertainty visualisations that have been integrated into the OLM of an existing teaching system: OLMlets [5]. OLMlets constructs a learner model using numerical weightings of student responses to multiple choice questions, with the learner model based on the last five questions that the learner has attempted in each topic. The age of the evidence affects its weight or influence on the model, with newer evidence being weighted more heavily. The learner model visualisation uses green to indicate correct knowledge and grey to indicate difficulty.

OLMlets has been extended to allow student self-assessments to be entered after their response to each multiple choice question (Figure 3). This additional source of learner model evidence complements the system's assessment of learner knowledge.

Figure 3: Question and self-assessment options.

OLMlets uses five visualisations to show the learner model [5]. In this paper, we focus on using the skill meters (similar to those in Figure 1) to visualise uncertainty because skill meters are commonly used in OLMs [8], and they are often popular when multiple visualisations are available (e.g. [4,5,15]). The first of our new OLM visualisations (Figure 4) shows skill meters placed side by side to represent the two models: the system's assessment of the learner and the student's self-assessment, with the skill meter fill (green) indicating level of understanding, and the remaining area of the skill meter (grey) showing the proportion of the topic in which the learner has difficulties. In this case, uncertainty (variability) can be seen in the discrepancy between the two models. The second approach (Figure 5) uses skill meters that combine the model that is based on the automatically inferred values (system model) with the model that is based on the student's self-assessments (student model) into a single set of skill meters. This version uses opacity (see Figure 2) to indicate where the two data sources conflict: the higher the transparency of a topic's green colour, the more inconsistent the data.

As in other work that used confidence ratings [10,20], the student model in the two visualisations is based in part on system inference, and in part on students selecting their level of confidence from a scale of 'very sure', 'sure', 'unsure', and 'very unsure' (see Figure 3). If the student selects 'very sure' or 'very unsure', this is interpreted to mean that the student is 100% confident about the correctness or incorrectness of their answer. If the student selects 'sure' as their confidence level, the system will weight the new evidence as 75% correct knowledge and 25% difficulty when visualising that information in the OLM. This is because the student believes more strongly that their answer is correct, but still acknowledges that they might be wrong. Selecting the 'unsure' option is correspondingly represented as 75% difficulty and 25% correct knowledge.

Figure 4: Uncertainty visualisation using two skill meters.

Figure 5: Uncertainty visualisation using opacity in skill meters.

Students viewing the two skill meters are given an indirect representation of model uncertainty that can be seen by comparing their beliefs to the system's beliefs about their level of knowledge (Figure 4). Placing these models side by side should enable students to see the discrepancy between these two measures of their knowledge and enable them to recognise any inconsistency that is present. In the second approach to visualising uncertainty within an OLM (Figure 5), the opacity of the fill colour in the skill meter should similarly draw the learner's attention to topics where the data is inconsistent. This version indicates different uncertainty levels by increasing or decreasing skill meter opacity: database management is the least opaque topic and most uncertain, whereas the evidence used to infer the learner's knowledge of programming languages (system inference and student confidence ratings) is highly consistent, which is why the skill meter is opaque.

To illustrate other visual variables (see Figure 2) for showing learner model uncertainty, we present OLM designs based on the Next-TELL [4] and LEA's Box [6] OLMs, shown in Figure 1. Figure 6 shows how arrangement could be applied in skill meters, where an untidy arrangement in the skill meter fill (structure in Figure 6) indicates high uncertainty. Figure 6 also shows the hierarchy levels of topics and sub-topics within the underlying learner model (not present in the previous example of skill meters from OLMlets).

Figure 6: Uncertainty visualisation using arrangement in skill meters.
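The confidence-to-evidence mapping described above, combined with OLMlets' use of the last five attempts with newer evidence weighted more heavily, can be sketched as follows. The 100%/75%/25% splits come from the text; the linear recency weights are an illustrative assumption, since OLMlets' exact weighting scheme is not reproduced here:

```python
# Sketch of a confidence-weighted student model, as described above.
# 'very sure'/'sure'/'unsure'/'very unsure' map to the student's believed
# proportion of correct knowledge (the remainder counts as difficulty).
CONFIDENCE_TO_KNOWLEDGE = {
    "very sure": 1.00,   # 100% confident the answer is correct
    "sure": 0.75,        # 75% correct knowledge, 25% difficulty
    "unsure": 0.25,      # 25% correct knowledge, 75% difficulty
    "very unsure": 0.00, # 100% confident the answer is incorrect
}

def topic_knowledge(attempt_confidences):
    """Estimate knowledge for one topic from up to the last five attempts.

    attempt_confidences: confidence labels, ordered oldest to newest.
    Returns the recency-weighted proportion of correct knowledge (0..1).
    """
    attempts = attempt_confidences[-5:]        # model uses the last 5 questions
    weights = range(1, len(attempts) + 1)      # newer evidence weighs more (assumed linear)
    total = sum(weights)
    return sum(w * CONFIDENCE_TO_KNOWLEDGE[c]
               for w, c in zip(weights, attempts)) / total

# A learner who grows more confident over five questions:
print(topic_knowledge(["unsure", "unsure", "sure", "sure", "very sure"]))
```

The returned proportion would fill the green part of the student-model skill meter, with the remainder shown in grey.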
Figure 7: Uncertainty visualisation using a dashed line around nodes for uncertain topics in a network.

Figure 8: Uncertainty visualisation using opacity on nodes for uncertain topics in a network.

In addition to skill meters, Figure 1 showed network, radar plot, word cloud and tree map based visualisations in the Next-TELL [4] and LEA's Box [6] OLMs. The network visualisation uses size and colour to indicate the knowledge level of the topic. Larger and brighter nodes indicate that the learner has achieved a higher knowledge or competency level for that topic. Using a dashed line around the edge of the node could indicate whether there is uncertainty associated with that topic's assessment, and using different levels, indicated by the size (thickness) of the dashed lines, can illustrate the uncertainty level (Figure 7). Furthermore, uncertainty in the sub-topics can be inherited by the parent topic (as can also occur with skill meters).

Figure 7 includes two of the main topics (reading and writing) that are sub-topics of English language. These two sub-topics also have several sub-topics of their own. The writing topic has one sub-topic (building and supporting arguments) that shows a low level of uncertainty through a thin dashed outline, and one sub-topic (structure) that has a medium level of uncertainty, shown by a thicker dashed outline. The other two sub-topics do not contain uncertainty or conflicting data. The parent topic (writing) takes the average of the uncertainty levels that are associated with each of its sub-topics. The parent topic is therefore visualised with a low level of uncertainty, the result of the uncertainty it has inherited from its children, calculated as the average of the uncertainty weights of all sub-topics (one had low uncertainty, one had medium uncertainty, and two had no uncertainty). Instead of using dashed lines, the same information could be represented using opacity (see Figure 8).

Figure 9: Uncertainty visualisation using the size of dashed line in a tree map.

Figure 10: Uncertainty visualisation using line colour in a tree map.

Figure 11: Uncertainty visualisation using opacity of the colour in a tree map.

When a large number of topics or competencies are contained in the learner model, tree maps may be useful to allow learners to explore different levels of a hierarchically structured learner model [3,4]. Both brightness and line colour have been used as indicators of uncertainty in tree maps in the field of simulation and visualisation [16]. Following from these efforts, we propose using a dashed line around the topic border to represent uncertainty within a tree map (Figure 9), where different levels of size (thickness) of the dashed line indicate the uncertainty level. This can also be done by varying the brightness and colour of the line around the edge of a model topic (Figure 10) or through the use of opacity (Figure 11).
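The parent-topic inheritance used in the Figure 7 example (a parent's uncertainty is the average of its sub-topics' uncertainty weights) can be sketched as below. The numeric weights 0.2 ('low') and 0.5 ('medium') for the sub-topics are illustrative assumptions chosen from within the low (0-0.3) and medium (0.3-0.7) bands used in Section 4:

```python
def inherited_uncertainty(subtopic_weights):
    """A parent topic inherits the average of its sub-topics' uncertainty weights."""
    return sum(subtopic_weights) / len(subtopic_weights)

def uncertainty_level(weight):
    """Classify an uncertainty weight into the three bands used in Section 4."""
    if weight < 0.3:
        return "low"
    if weight < 0.7:
        return "medium"
    return "high"

# Writing has four sub-topics: one with low uncertainty (assumed 0.2), one with
# medium uncertainty (assumed 0.5), and two with none, so the parent topic's
# inherited uncertainty falls in the low band, as described for Figure 7.
parent = inherited_uncertainty([0.2, 0.5, 0.0, 0.0])
print(parent, uncertainty_level(parent))
```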
Figure 12: Uncertainty visualisation using blur in a word cloud.

Figure 13: Uncertainty visualisation using the size of dashed line in a radar plot.

Word clouds allow people to quickly identify stronger topics because the text is larger (and, in the case of the Next-TELL [4] and LEA's Box [6] OLMs, also the weaker competencies in the second word cloud; see Figure 1). To show uncertainty, blur could be applied to the text: the fuzzier the text, the higher the uncertainty (structure in Figure 12). Colour could also be used to help indicate the grouping of sub-topics. Figure 12 shows two groups of sub-topics where each group has its own colour (orange or blue), to allow some structuring of the domain, otherwise difficult to achieve with word clouds.

As illustrated in Figure 13, radar plots can show uncertainty by using, for example, a dashed line assigned to a topic with uncertainty in the data associated with it (as previously illustrated for the network and tree map visualisations).

This section has presented several visualisation techniques that could be used to display uncertainty within open learner models, showing different levels of uncertainty using the visual variables of arrangement, opacity, blur and size (line thickness); and two separate versions of the learner model placed side by side in the simpler skill meter visualisation. As indicated in the introduction, OLMs may facilitate learner reflection, planning and self-monitoring, which can be a powerful way to help promote effective independent learning [7]. However, for this to be effective, some understanding of the level of uncertainty in the underlying model is needed to enable learners to better understand the accuracy of that data, and so better use the learner model information when making decisions about their learning. The next section proposes a method to measure uncertainty.

4. MEASURING UNCERTAINTY USING MULTIPLE DATA SOURCES

Rather than managing and designing around uncertainty, we want to measure uncertainty in the learner model within a data set and communicate that uncertainty. To measure uncertainty, we should first understand how the data are used within the learner model, based on the modelling process that is used within a particular system. While there are many learner modelling techniques (see e.g. [17,18]), for the example in this paper we focus on measuring uncertainty in models that use a numerical weighting method. In the Next-TELL [4] and LEA's Box [6] OLMs, the data can come from several (or many) different data sources, and all evidence is used when calculating learner model values. However, each piece of evidence may influence the learner model differently, with all of the corresponding weights for each topic in the learner model summing to 1 [4].

Figure 14: Next-TELL learner model calculation evidence screen showing the calculation of a student's competency level for group roles and responsibilities [4].

In the Next-TELL and LEA's Box OLMs, teachers can configure the weight of different types of evidence. For example, the teacher may assign a higher weight to automated assessment sources than to the manually entered data that is collected through self or peer assessments. The level of influence for each data set is normalised so that the influences sum to 1.0. The value of each piece of evidence (v), where v is greater than or equal to 0.0 and less than or equal to 1.0, is then multiplied by its level of influence to show how much that piece of evidence contributes to the learner model.
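A minimal sketch of this weighted-evidence calculation, together with the outlier-based uncertainty weight described in this section, using the data from Table 1. The 1.5 x IQR fence is an illustrative stand-in, since the paper does not tie the outlier analysis to a particular test; the low/medium/high classification follows the 0-0.3 / 0.3-0.7 / 0.7-1.0 subdivision described in this section:

```python
# Sketch of the numerical weighting method and the uncertainty weight.
# The 1.5 * IQR outlier fence is an assumed, illustrative outlier test.
import statistics

def model_value(values, influences):
    """Combine evidence values (each in [0, 1]) using influence weights
    that have been normalised to sum to 1."""
    assert abs(sum(influences) - 1.0) < 1e-6
    return sum(v * w for v, w in zip(values, influences))

def outlier_flags(values, k=1.5):
    """Flag values outside Tukey-style fences Q1 - k*IQR .. Q3 + k*IQR."""
    q1, _, q3 = statistics.quantiles(values, n=4, method="inclusive")
    low, high = q1 - k * (q3 - q1), q3 + k * (q3 - q1)
    return [not (low <= v <= high) for v in values]

def uncertainty_weight(values, influences):
    """Sum the influence weights of the evidence flagged as inconsistent."""
    return sum(w for w, flagged in zip(influences, outlier_flags(values))
               if flagged)

# Data from Table 1: the self-assessment (.2) conflicts with the rest.
values = [0.2, 0.9, 0.8, 0.7, 0.9]
influences = [0.345, 0.243, 0.174, 0.152, 0.086]
u = uncertainty_weight(values, influences)
level = "low" if u < 0.3 else "medium" if u < 0.7 else "high"
print(u, level)  # the single outlier carries weight 0.345 -> medium
```

With these numbers the competency value itself is the influence-weighted sum of the evidence, while the uncertainty weight (0.345) comes only from the flagged self-assessment, matching the Table 1 example.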
Table 1: Example of uncertainty calculations and weighting when                communicates a level of precision that is not present within the system.
an outlier (shown in italics) is present in the data evidence.                 As a result, this range is subdivided into three levels of uncertainty:
                                                                               namely, low (0-0.3), medium (0.3-0.7) and high (0.7-1.0). These three
                   Initial         Calculated Influence     Uncertainty        levels can be visualised using the variables shown in Figure 2, and illus-
Information Source Value             (on knowledge)          Weight            trated in Section 3. Since there is only one outlier detected from the in-
                                                                               formation given in Table 1 and it has a weight of 0.345, a medium level
Self-Assessment           .2               .345                  .345
                                                                               of uncertainty is associated with that competency. The ability to deter-
Peer-Assessment           .9               .243                       0        mine the amount of uncertainty that is associated with a specific compe-
Teacher-Assessment        .8               .174                       0        tency allows us to show that uncertainty to users so that learners or teach-
                                                                               ers can use this information to support their planning and decision-mak-
Quiz1                     .7               .152                       0        ing tasks, facilitating some of the metacognitive benefits argued for
Quiz2                     .9               .086                       0        OLMs [7].
                        Total:             1.00                  .345
                                                                               5. UNCERTAINTY VISUALISATION FOR
than or equal to 1.0, is then multiplied by the level of influence to show     LEARNER MODELS JOINTLY MAINTAINED
how much that piece of evidence contributes to the learner model.
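The combination just described (normalised influence weights applied to evidence values in [0, 1]) can be sketched as follows. This is an illustrative reading of the description, not the Next-TELL implementation; the weighted sum and the function name are assumptions.

```python
def combine_evidence(values, influences):
    """Combine evidence values (each in [0, 1]) using influence
    weights that have been normalised to sum to 1.0."""
    if abs(sum(influences) - 1.0) > 1e-6:
        raise ValueError("influence weights must be normalised to sum to 1.0")
    # Each piece of evidence contributes its value scaled by its influence.
    return sum(v * w for v, w in zip(values, influences))

# Initial values and calculated influences from Table 1
values = [0.2, 0.9, 0.8, 0.7, 0.9]            # self, peer, teacher, Quiz1, Quiz2
influences = [0.345, 0.243, 0.174, 0.152, 0.086]
competency = combine_evidence(values, influences)   # ≈ 0.61
```

Note that the calculated influences in Table 1 already sum to 1.0, as required.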
                                                                               BY STUDENT AND SYSTEM
In the Next-TELL OLM, the learner model calculation can be viewed by
the teacher and the student (Figure 14) [4]. Like with the OLMlets ex-         Beyond supporting learner planning and decision making as argued pre-
ample (Figure 4), described in Section 3, users can see the inconsistent       viously, visualising learner model uncertainty may be useful to learners
data when viewing the screen that shows the model calculation (Figure
                                                                               when they are using interactively maintained learner models. These
14), but they only see this inconsistency if they invest additional effort
                                                                               types of OLMs include those that allow the learner to try to persuade the
to search through the data evidence. This effort requires them to look at
                                                                               teaching system to change learner model values because the learner dis-
each line in the whole calculation and compare those lines to one an-
other. Taking advantage of visual communication channels to show the           agrees with some aspect of the system’s model. This can be valid, for
uncertainty in the data upon which the learner model is based could help       example, if a student has done some reading, exercises, etc., away from
learners to identify inconsistencies without going through all of these        the teaching system; or if they had achieved correct answers through
calculations, which holds the potential to better support their self-regu-     (partial) guessing. This challenge to the system’s model can succeed by
lation and planning activities. As indicated above, this was achieved in       having learners verify their proposed change through responses to addi-
a quite simple way when extending the OLMlets skill meters to take ac-         tional questions or assessment items that are administered by the system
count of two sources of data (system and student assessments of the stu-       (e.g. [9, 35]); or by having learners negotiate a change to the learner
dent’s knowledge). We propose the following approach where there may           model through a two-way discussion of the learner model content. This
be more complex relationships between data from different activities or        discussion takes place between the learner and the system with the goal
different parts of activities or, indeed, from different data sources as in    of having both parties agree on the model (e.g., [10,14,20]), but keeping
the Next-TELL [4] and LEA’s Box [6] OLMs.                                      separate representations if agreement is not achieved. Both these ap-
In order to visualise uncertainty based on inconsistency in the underlying     proaches to interactively maintained learner models (persuadable and
data, the source and the level of the uncertainty must be measured.            negotiated), as well as aiming for a more accurate learner model, also
Knowing the source of the uncertain data helps us to indicate the level        aim to prompt reflection (as described above), through the process of
of uncertainty in the learner model by summing the influence weight for        challenging and discussing the model. In cases where students can chal-
all the sources that contribute to model uncertainty. To identify the          lenge the system’s model, as described above, an indication of the cer-
source of the inconsistent data, we apply outlier analysis to detect incon-    tainty of data could be highly beneficial, to focus updates onto topics
sistencies in the data. Outliers are based on the concept of boxplots. To      with the most uncertain or inconsistent data, therby making the learner
detect outliers, formula (1) and (2) are used to calculate the upper fence     model more accurate and improving subsequent adaptation. This is a
and lower fence. These fences are based on the data’s inter-quartile           timely topic as current projects (in the areas of persuadable [6] and ne-
range (IQR), which is the difference between the first (q1) and third          gotiated [34] learner models) strive to involve the learner more in the
quartiles (q3), with the data that are outside these fences classified as      modelling process.
outliers [36].
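As a sketch of this outlier check (not the authors' code), the fences can be computed with the standard library. Quartiles here use linear interpolation between data points (`method="inclusive"`), one of several common conventions and the one that reproduces the fences reported for Table 1.

```python
from statistics import quantiles

def tukey_fences(values, k=1.5):
    """Lower and upper Tukey fences from the inter-quartile range."""
    q1, _, q3 = quantiles(values, n=4, method="inclusive")
    iqr = q3 - q1                       # inter-quartile range
    return q1 - k * iqr, q3 + k * iqr   # (lower fence, upper fence)

def find_outliers(values, k=1.5):
    """Values falling outside the fences are classed as outliers."""
    lower, upper = tukey_fences(values, k)
    return [v for v in values if v < lower or v > upper]

# Initial values from Table 1: the fences come out at 0.4 and 1.2,
# so only the self-assessment score of .2 is flagged.
print(find_outliers([0.2, 0.9, 0.8, 0.7, 0.9]))  # [0.2]
```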

                          3      1.5                              1            6. SUMMARY
                          1      1.5                              2            Building on the work of [13], this paper proposes several approaches to
                                                                               uncertainty visualisation using different methods such as the width of a
                                                                               dashed line, opacity of OLM elements, the application of blur and ar-
Considering the example shown in Table 1, the learner model has five
                                                                               rangement of visual elements within a learner model component. The
data sources contributing to the calculation of learner knowledge or
                                                                               visual presentation of model uncertainty is based on inconsistency in the
competency, and each source has its initial score value and an associated
                                                                               underlying model’s data. The ability to see model uncertainty was inte-
weighting. Applying formula (1) and (2) to the data in Table 1 results in
                                                                               grated into the OLMlets system through two visualisations that are based
an upper fence of 1.2 and a lower fence of 0.4. In Table 1, the self-as-
                                                                               on the commonly used skill meter representation of learner knowledge.
sessment scores are outside this range (i.e., they are outliers). The uncer-
                                                                               These visualisations are being used in an ongoing study that investigates
tainty level can now be measured by detecting how much weight is as-
                                                                               the effect of uncertainty visualisation on students’ self-assessments and
signed to each outlier. Summing all the weights from all of the outliers
                                                                               learning outcomes.
provides the value for the model’s uncertainty weight, which indicates
how much these uncertain pieces of data influence the model. From Ta-          In addition to this work, a method for identifying inconsistencies in the
ble 1, this is .345.                                                           underlying learner model was developed, and was described with refer-
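The weight-summation step, together with the low/medium/high banding proposed for uncertainty values, could be sketched as below; how values falling exactly on the 0.3 and 0.7 boundaries are treated is not specified in the text and is an assumption here.

```python
def uncertainty_weight(influences, is_outlier):
    """Sum the influence weights of the evidence flagged as outlying."""
    return sum(w for w, flagged in zip(influences, is_outlier) if flagged)

def uncertainty_level(weight):
    """Map an uncertainty weight in [0, 1] onto three bands:
    low (0-0.3), medium (0.3-0.7), high (0.7-1.0).
    Boundary values are assigned to the upper band (an assumption)."""
    if weight < 0.3:
        return "low"
    if weight < 0.7:
        return "medium"
    return "high"

# Table 1: only the self-assessment (influence .345) is an outlier.
influences = [0.345, 0.243, 0.174, 0.152, 0.086]
is_outlier = [True, False, False, False, False]
w = uncertainty_weight(influences, is_outlier)   # 0.345
print(uncertainty_level(w))                      # medium
```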
                                                                               ence to the Next-TELL [4] and LEA’s Box [6 ] OLMs, which have po-
Similar to the learner modelling process, new pieces of evidence that are
                                                                               tentially many data sources. This method uses outlier analysis to identify
classified as outliers influence the level of uncertainty associated with
                                                                               data that contribute to model uncertainty. The identified data is then as-
that model attribute more than an old piece of evidence would. Like the
                                                                               signed a weight based on the underlying learner modelling formula. This
weights that are associated with topics, uncertainty values range from 0
                                                                               information is used to determine the level of uncertainty that is present
(no uncertainty) to 1 (high uncertainty). However, using real numbers
in different model attributes so that the uncertainty can be visualised as proposed. The proposed OLM visualisations use the network, tree map, word cloud and radar plot versions of the OLM from the Next-TELL and LEA’s Box OLMs. These visualisations, which rely on outlier analysis for identifying uncertainty, will be integrated into an OLM as the next step towards supporting metacognitive activities and learner model negotiation or persuasion in interactively maintained learner models.

7. ACKNOWLEDGEMENT
The first author is supported by a PhD Scholarship from the Ministry of Higher Education in Oman. The LEA’s Box project is supported by the European Commission (EC) under the Information Society Technology priority FP7 for R&D, contract 619762 LEA’s Box, building on contract 258114 Next-TELL. This document does not represent the opinion of the EC, and the EC is not responsible for any use that might be made of its contents.

8. REFERENCES
1.    Bertin, J. Semiology of Graphics: Diagrams, Networks, Maps. University of Wisconsin Press, Madison, WI, 1983.
2.    Bonneau, G.P., Hege, H.C., Johnson, C.R., Oliveira, M.M., Potter, K., Rheingans, P. and Schultz, T. Overview and state-of-the-art of uncertainty visualization. In Hansen, C.D., Chen, M., Johnson, C.R., Kaufman, A.E. and Hagen, H. (eds.), Scientific Visualization: Uncertainty, Multifield, Biomedical, and Scalable Visualization, Springer-Verlag, London, 3-27, 2014.
3.    Brusilovsky, P., Hsiao, I.-H. and Folajimi, Y. QuizMap: open social student modeling and adaptive navigation support with treemaps. In Kloos, C.D., Gillet, D., Crespo Garcia, R.M., Wild, F. and Wolpers, M. (eds.), Towards Ubiquitous Learning, EC-TEL, Springer-Verlag, Berlin, 71-82, 2011.
4.    Bull, S., Johnson, M.D., Masci, D. and Biel, C. Integrating and visualising diagnostic information for the benefit of learning. In Reimann, P., Bull, S., Kickmeier-Rust, M., Vatrapu, R.K. and Wasson, B. (eds.), Measuring and Visualizing Learning in the Information-Rich Classroom, Routledge/Taylor and Francis, 167-180, 2016.
5.    Bull, S. and Mabbott, A. 20,000 inspections of a domain-independent open learner model with individual and comparison views. In Ikeda, M., Ashley, K. and Chan, T-W. (eds.), Intelligent Tutoring Systems, Springer-Verlag, Berlin Heidelberg, 422-432, 2006.
6.    Bull, S., Ginon, B., Boscolo, C. and Johnson, M.D. Introduction of learning visualisations and metacognitive support in a persuadable open learner model. In Gasevic, D. and Lynch, G. (eds.), Learning Analytics and Knowledge 2016, ACM, in press.
7.    Bull, S. and Kay, J. Open learner models as drivers for metacognitive processes. In Azevedo, R. and Aleven, V. (eds.), International Handbook on Metacognition and Learning Technologies, Springer, New York, 349-366, 2013.
8.    Bull, S. and Kay, J. SMILI☺: a framework for interfaces to learning data in open learner models, learning analytics and related fields. International Journal of Artificial Intelligence in Education, 26 (1), 293-331, 2016.
9.    Bull, S., Mabbott, A. and Abu-Issa, A. UMPTEEN: Named and anonymous learner model access for instructors and peers. International Journal of Artificial Intelligence in Education, 17 (3), 227-253, 2007.
10.   Bull, S. and Pain, H. “Did I say what I think I said, and do you agree with me?”: Inspecting and questioning the student model. In Greer, J. (ed.), Proceedings of World Conference on Artificial Intelligence and Education, Association for the Advancement of Computing in Education, Charlottesville VA, USA, 501-508, 1995.
11.   Corbett, A.T. and Bhatnagar, A. Student modeling in the ACT programming tutor: adjusting a procedural learning model with declarative knowledge. In Jameson, A., Paris, C. and Tasso, C. (eds.), User Modeling, Springer, New York, 243-254, 1997.
12.   Correa, C.D., Chan, Y.H. and Ma, K.L. A framework for uncertainty-aware visual analytics. VAST 09 - IEEE Symposium on Visual Analytics Science and Technology, Proceedings, 51-58, 2009.
13.   Demmans Epp, C. and Bull, S. Uncertainty representation in visualizations of learning analytics for learners: Current approaches and opportunities. IEEE Transactions on Learning Technologies, 8 (3), 242-260, 2015.
14.   Dimitrova, V. StyLE-OLM: Interactive open learner modelling. International Journal of Artificial Intelligence in Education, 13 (1), 35-78, 2003.
15.   Duan, D., Mitrovic, A. and Churcher, N. Evaluating the effectiveness of multiple open student models in EER-Tutor. In Wong, S.L., Kong, S.C. and Yu, F-Y. (eds.), International Conference on Computers in Education, Asia-Pacific Society for Computers in Education, 86-88, 2010.
16.   Griethe, H. and Schumann, H. The visualization of uncertain data: methods and problems. In Proceedings of Simulation and Visualization, SCS Publishing House, Magdeburg, Germany, 143-156, 2006.
17.   Holt, P., Dubs, S., Jones, M. and Greer, J. The state of student modeling. In Greer, J. and McCalla, G. (eds.), Student Modeling: The Key to Individualized Knowledge-Based Instruction, Springer, New York, 3-39, 1994.
18.   Jameson, A. Numerical uncertainty management in user and student modeling: An overview of systems and issues. User Modeling and User-Adapted Interaction, 5 (3), 193-251, 1995.
19.   Kay, J. Learner know thyself: Student models to give learner control and responsibility. In Halim, Z., Ottmann, T. and Razak, Z. (eds.), International Conference on Computers in Education, AACE, 17-24, 1997.
20.   Kerly, A., Ellis, R. and Bull, S. CALM system: A conversational agent for learner modelling. In Ellis, R., Allen, T. and Petridis, M. (eds.), Applications and Innovations in Intelligent Systems XV, Proceedings of AI-2007, 27th SGAI International Conference on Innovative Techniques and Applications of Artificial Intelligence, Springer-Verlag, 89-102, 2007.
21.   Mathews, M., Mitrovic, A., Lin, B., Holland, J. and Churcher, N. Do your eyes give it away? Using eye tracking data to understand students’ attitudes towards open student model representations. In Cerri, S.A., Clancey, W.J., Papadourakis, G. and Panourgia, K. (eds.), Intelligent Tutoring Systems, Springer, Heidelberg, 422-427, 2012.
22.   Mabbott, A. and Bull, S. Alternative views on knowledge: Presentation of open learner models. In Lester, J.C., Vicari, R.M. and Paraguacu, F. (eds.), Intelligent Tutoring Systems, Springer-Verlag, Berlin Heidelberg, 689-698, 2004.
23.   MacEachren, A.M. Visualizing uncertain information. Cartographic Perspectives, 13, 10-19, 1992.
24.   MacEachren, A.M., Roth, R.E., O’Brien, J., Swingley, D. and Gahegan, M. Visual semiotics and uncertainty visualization: an empirical study. IEEE Transactions on Visualization and Computer Graphics, 18 (12), 2496-2505, 2012.
25.   Mazzola, L. and Mazza, R. GVIS: A facility for adaptively mashing up and presenting open learner models. In Wolpers, M., Kirschner, P.A., Scheffel, M., Lindstaedt, S. and Dimitrova, V. (eds.), EC-TEL 2010, Springer-Verlag, Berlin Heidelberg, 554-559, 2010.
26.   Mitrovic, A. and Martin, B. Evaluating the effect of open student models on self-assessment. International Journal of Artificial Intelligence in Education, 17 (2), 121-144, 2007.
27.   Mohanarajah, S., Kemp, R. and Kemp, E. Opening a fuzzy learner
      model. In Proceedings of the Workshop on Learner Modelling for
      Reflection, International Conference on Artificial Intelligence in
      Education, Amsterdam, Netherlands, 62-71, 2005.
28.   Morales, R., Van Labeke, N., Brna, P. and Chan, M. Open learner
      modelling as the keystone of the next generation of adaptive
      learning environments. In Mourlas, C. and Germanakos, P. (eds.),
      Intelligent User Interfaces: Adaptation and Personalization
      Systems and Technologies. Information Science Reference, ICT
      Global, London, 288-312, 2009.
29.   Morrison, J.L. A theoretical framework for cartographic general-
      ization with the emphasis on the process of symbolization. Inter-
      national Yearbook of Cartography, 14, 115-127, 1974. Cited in
      [24].
30.   Perez-Marin, D., Alfonseca, E., Rodriguez, P. and Pascual-Nieto,
      I. A study on the possibility of automatically estimating the con-
      fidence value of students’ knowledge in generated conceptual
      models. Journal of Computers, 2 (5), 17-26, 2007.
31.   Raybourn, E.M. and Regan, D. Exploring e-portfolios and inde-
      pendent open learner models: Toward army learning concepts
      2015. Interservice / Industry Training, Simulation and Education
      Conference Proceedings, Florida, USA, 2011.
32.   Reye, J. Student modelling based on belief networks. International
      Journal of Artificial Intelligence in Education, 14 (1), 63-96, 2004.
33.   Segers, M., Gijbels, D. and Thurlings, M. The relationship between
      students’ perceptions of portfolio assessment practice and their
      approaches to learning. Educational Studies, 34 (1), 35-44, 2008.
34.   Suleman, R.M., Mizoguchi, R. and Ikeda, M. Negotiation-driven
      learning. In Conati, C., Heffernan, N., Mitrovic, A. and Verdejo,
      M.F. (eds.), Artificial Intelligence in Education, Springer Interna-
      tional Publishing, 470-479, 2015.
35.   Thomson, D. and Mitrovic, A. Preliminary evaluation of a nego-
      tiable student model in a constraint-based ITS. Research and
      Practice in Technology Enhanced Learning, 5 (1), 19-33, 2010.
36.   Tukey, J.W. Exploratory Data Analysis. Addison-Wesley, Read-
      ing, MA, USA, 1977.
37.   Zapata-Rivera, J.D., and Greer, J.E. Interacting with inspectable
      Bayesian student models. International Journal of Artificial In-
      telligence in Education, IOS Press, 14(2), 127–163, 2004.