Sabine Rathmayer, Hans Pongratz (Hrsg.): Proceedings of DeLFI Workshops 2015,
co-located with 13th e-Learning Conference of the German Computer Society (DeLFI 2015),
München, Germany, September 1, 2015

Learning Analytics Evaluation – Beyond Usability

Vlatko Lukarov1, Mohamed Amine Chatti1, Ulrik Schroeder1



Abstract: Learning analytics tools should be useful, i.e., they should be usable and provide the functionality needed to reach the goals attributed to learning analytics. We present a short summary of the goals of learning analytics and of the importance of evaluating learning analytics while trying to attain these goals. Furthermore, we present three different case studies of learning analytics evaluation and, in the end, provide a short outlook on the necessity of a systematic approach to learning analytics evaluation.
Keywords: learning analytics, evaluation, HCI



1        Introduction
Over the past two decades learning has been extensively influenced by technology.
Learning is a dynamic activity, which should constantly be monitored, evaluated, and
adjusted to the demands of changing social contexts and needs of the different involved
stakeholders, to ensure quality and the best possible outcomes. The incorporation of
educational technologies created new prospects and opportunities to gain insight into
student learning [GDS01]. One area of these educational technologies or technology-
enhanced learning that is specifically concerned with improving the learning processes is
learning analytics.
During the past decade the field of learning analytics (LA) has been defined in several
ways; see [DV12], [El11], [JAC12], [Si10]. In the context of our research, we understand
learning analytics as the development and exploration of methods and tools for visual
analysis and pattern recognition in educational data that permit institutions, teachers,
and students to iteratively reflect on learning processes and thus, on the one hand,
support the optimization of learning designs [LD12] and, on the other, aid the
improvement of learning [CDS+12]. In our understanding, learning analytics thus subsumes
the research areas of educational data mining (methods and tools), teaching analytics,
and academic (or organizational) analytics, when all are applied to optimize learning
opportunities. In this paper we present a short summary of the learning analytics goals,
provide three case studies of the evaluation of learning analytics tools in relation to
these goals, and conclude with the challenges and an outlook for the evaluation of
learning analytics tools.


1
    Informatik 9 (Learning Technologies), RWTH Aachen University, Ahornstr. 55, 52074 Aachen,
    lukarov@cil.rwth-aachen.de, chatti@cs.rwth-aachen.de, schroeder@cil.rwth-aachen.de

2       Learning Analytics Goals
Intrinsically, learning analytics has the noble goal of improving learning and has a
pedagogical focus. It puts different analytics methods into practice in order to study
their actual effectiveness for the improvement of teaching and learning (learner-focused
analytics) [CLT+15]. As Clow [Cl13] puts it, "Learning analytics is first and foremost
concerned with learning". Table 1 presents a collection of the overall goals of LA from
the literature published in the field. The goals of LA can be divided into goals that:
        a)     explicitly inform the design of learning analytics tools
        b)     involve a behavioral reaction of the teacher
        c)     involve a behavioral reaction of the student


a. Learning analytics tools are supposed to:
   - track user activities
   - capture the interaction of students with resources / the interactions among students
   - gather data of different systems
   - provide educators / students with feedback/information on students' activities
   - provide an overview
   - highlight important aspects of data
   - provide different perspectives
   - offer possibilities for (peer) comparison
   - draw the users' attention to interesting correlations
   - pinpoint problematic issues
   - establish an early warning system
   - provide decision support

b. Educators are supposed to:
   - monitor learning process / way of learning / students' effort
   - explore student data / get to know students' strategies
   - identify difficulties
   - discover patterns
   - find early indicators for success / poor marks / drop-out
   - draw conclusions about usefulness of certain learning materials and success factors
   - become aware / reflect / self-reflect
   - better understand effectiveness of learning environments
   - intervene / supervise / advise / assist
   - improve teaching / resources / environment

c. Learners are supposed to:
   - monitor own activities / interactions / learning process
   - compare own behavior with the whole group / high performing students
   - become aware
   - reflect / self-reflect
   - improve discussion participation / learning behavior / performance
   - become better learners
   - learn

        Tab. 1: Goals of learning analytics concerning tools, educators, and students [DLM+13]
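Column (a) of Table 1 describes what LA tools themselves are supposed to do, starting with tracking user activities and capturing interactions. As a purely illustrative sketch of what such tracking can look like at the data level, the snippet below defines a minimal activity-event record and a small in-memory tracker; the field names and the ActivityTracker class are our own assumptions and do not correspond to any specific tool discussed in this paper.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List

@dataclass
class ActivityEvent:
    """One tracked interaction of a student with a learning environment."""
    student_id: str
    action: str        # e.g. "view_resource", "post_forum", "submit_quiz"
    resource_id: str
    timestamp: datetime

@dataclass
class ActivityTracker:
    """Minimal in-memory store for activity events (illustrative only)."""
    events: List[ActivityEvent] = field(default_factory=list)

    def track(self, student_id: str, action: str, resource_id: str) -> None:
        """Record one interaction with the current time."""
        self.events.append(
            ActivityEvent(student_id, action, resource_id, datetime.now())
        )

    def events_for(self, student_id: str) -> List[ActivityEvent]:
        """Return all recorded events of one student, e.g. for an overview."""
        return [e for e in self.events if e.student_id == student_id]

tracker = ActivityTracker()
tracker.track("s01", "view_resource", "lecture-03.pdf")
tracker.track("s01", "post_forum", "week-3-discussion")
print(len(tracker.events_for("s01")))  # -> 2
```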
Gašević et al. [GDS15], in a recent publication, focus on critical goals, topics, and
aspects that require immediate attention in order for LA to make a sustainable impact on
the research and practice of teaching and learning. In their work, they provide a summary
of critical points, discuss a growing set of issues that need to be addressed, and
strongly point out that learning analytics are about learning [GDS15]. Their work focuses
on specific points which encompass the LA goals provided in Table 1. One point they make
is that LA resources should be aligned with well-established research on effective
instructional practice. In support of this, they note that observations and analyses
suggested that instructors preferred tools and features which offered insights into the
learning processes and identified student gaps, rather than simple performance measures.
Additionally, teachers should be aware of what their students, i.e. learners, are doing
within a course, and should reflect and draw conclusions about the quality of the
learning content they are providing, how the students interact with the learning
materials, the pedagogical practices, and the level of collaboration and interaction
among the students, while supporting them within a course [DLM+13]. Likewise, LA can
stimulate and motivate students to self-reflect on their learning behavior and become
aware of their actions, learning practices, and processes [SDS+14]. This, in turn, could
initiate changes in learners' behavior so that they become better learners, improve their
communication skills, improve their performance, etc. [DLM+13]. In order to check whether
learning analytics tools attain and support these goals, we need conclusive evidence.
Learning analytics evaluation practice could help provide this evidence and show that the
tools are not only usable, but also useful for the teaching and learning processes.


3       Learning Analytics Evaluation
According to the research, learning analytics provides added value to both learners and
educators [DLM+13], [LS11]. However, very little research has been done to actually
confirm that LA studies and tools have the desired effect and a positive impact on the
involved parties [SDS+14]. Surprisingly, there are very few publications that report
findings related to the behavioral reactions of teachers and students, i.e. few studies
measure the impact of using LA tools. This raises several questions: What are the effects
of using LA? How do LA systems influence practical learning situations? How does an
indicator, or a set of indicators, help users reflect and change their behavior? If there
are behavioral changes, how do we observe them? [DLM+13].
In order to answer these questions (and further questions that will arise) we need to
implement evaluation techniques and carry out effective evaluation. Effective evaluation
is difficult and prone to problems, but it is essential to support LA tasks. LA tools
(like most information visualization interfaces) are, in essence, generative artefacts.
They do not have value in themselves, but they generate results in a particular context.
In essence, an LA tool is used for a particular reason, by a particular user, on a
particular dataset. Hence, the evaluation of such a tool is complicated and diverse
[CLT+15]. Ellis and Dix [ED06] argue that looking to empirical evaluation for the
validation of generative artefacts is methodologically unsound. No empirical evaluation
can, in itself, tell you that an LA tool works or does not work [CLT+15].


3.1    Evaluation Case Studies

In this section, three case studies demonstrate different evaluations carried out on LA
tools. Early on, research on the evaluation of LA tools focused on functionality and
usability issues (comprehensibility, the design of indicators, terminology) and on the
perceived usefulness of specific indicators [DLM+13]. For this purpose, well-defined
methods from the HCI field were applied in the three case studies presented here.
LOCO-Analyst Evaluation
LOCO-Analyst is a learning analytics tool that was developed to provide educators with
feedback on the learning activities and performance of students. The researchers
evaluated their tool in two iterations. The first iteration was a formative evaluation
that aimed to investigate how educators perceive the usefulness of such a tool for
helping them improve the content of their courses, and to what extent the user interface
of the tool affected this perceived value. Additionally, they used the evaluation as a
chance to elicit additional requirements for improving the tool. The study design focused
on collecting quantitative and qualitative data on perceived usefulness from a larger
sample of educators. During the study, 18 participants from different higher education
institutions received the tool and a questionnaire with guidelines on how to evaluate it.
The researchers analyzed and coded the results in three categories: Data Visualization,
GUI, and Feedback. The results of the first evaluation influenced the enhancement of the
tool's data visualization, user interface, and supported feedback types [AHD+12].
The second evaluation iteration, conducted on the improved LA tool, was a summative
evaluation. Its main goal was to reassess the perceived usefulness of the improved tool,
focusing on assessing how the changes influenced the perceived value of the LA tool and
determining the extent of interconnection between the variables that characterized this
perceived usefulness. The design of the study and the artifacts used were the same as in
the previous iteration, i.e. the participants received the tool, a questionnaire, and
guidelines on how to evaluate the tool. Additionally, the participants received video
clips which introduced the tool and described how each individual feature works. The
researchers analyzed and coded the results of the second evaluation in the same way as
in the first one [AHD+12].
The results of the second evaluation provided information on how the implemented
improvements affected the users' perception of the tool's value. In the end, the
evaluation showed that educators found the kinds of feedback implemented in the tool
informative and valued the mix of textual and graphical representations of the different
kinds of feedback provided by the tool [AHD+12].
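To illustrate what such category-based coding can yield quantitatively, the short sketch below tallies coded feedback comments per category. The comments and counts are invented for illustration and do not reproduce the LOCO-Analyst evaluation data; only the three category names are taken from the study description above.

```python
from collections import Counter

# Hypothetical coded feedback comments; each comment was assigned one of the
# three categories used in the LOCO-Analyst evaluation.
coded_comments = [
    ("Data Visualization", "Charts made weekly trends easy to spot."),
    ("GUI", "Navigation between course modules felt cumbersome."),
    ("Feedback", "Per-topic feedback helped me rework one lecture."),
    ("Data Visualization", "Too many colors in the activity graph."),
    ("GUI", "Labels on some buttons were unclear."),
]

# Count how many comments fall into each category.
category_counts = Counter(category for category, _ in coded_comments)
for category, count in category_counts.most_common():
    print(f"{category}: {count} comment(s)")
```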
Student Activity Meter (SAM) Evaluation
Student Activity Meter (SAM) is an LA tool that visualizes data collected from learning
environments. The researchers incorporated evaluation into the development of the tool,
i.e. they applied a design-based research methodology that relied on rapid prototyping in
order to evaluate ideas in frequent, short cycles. They carried out four iterations over
the course of 24 months. The results of each evaluation iteration were put into two major
groups, positive and negative, and the results and the feedback provided were
incorporated to improve the tool [GVD+12].
The methodology for the first iteration consisted of task-based interviews coupled with a
think-aloud strategy, complemented by the System Usability Scale and the Microsoft
Desirability Toolkit, applied with Computer Science students. On the negative side, the
evaluation identified usability issues and points for improvement. On the positive side,
the results revealed that learnability was high and the error rate was low; user
satisfaction and usability were decent, and preliminary usefulness was regarded as
positive. The study also revealed which LA indicators were considered most useful
[GVD+12].
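As an illustration of the quantitative part of such a usability evaluation, the sketch below computes System Usability Scale scores from questionnaire answers using the standard SUS scoring rule (odd-numbered items contribute the answer minus one, even-numbered items contribute five minus the answer, and the sum is multiplied by 2.5). The responses shown are invented for illustration and are not data from the SAM study.

```python
from statistics import mean

def sus_score(responses):
    """Compute the System Usability Scale score (0-100) for one participant.

    `responses` is a list of ten answers on a 1-5 Likert scale, ordered as
    SUS items 1-10. Odd-numbered items contribute (answer - 1), even-numbered
    items contribute (5 - answer); the sum is multiplied by 2.5.
    """
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 item responses")
    total = 0
    for item, answer in enumerate(responses, start=1):
        total += (answer - 1) if item % 2 == 1 else (5 - answer)
    return total * 2.5

# Hypothetical responses from three study participants.
participants = [
    [4, 2, 5, 1, 4, 2, 5, 2, 4, 1],
    [3, 3, 4, 2, 3, 2, 4, 3, 3, 2],
    [5, 1, 4, 2, 5, 1, 5, 1, 4, 2],
]

scores = [sus_score(r) for r in participants]
print(f"Individual SUS scores: {scores}")
print(f"Mean SUS score: {mean(scores):.1f}")
```

A mean score above the commonly cited benchmark of roughly 68 is usually read as above-average usability, which is the kind of interpretation behind the "decent usability" finding reported above.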
The methodology for the second iteration was an online survey with Likert items,
administered to teachers, in order to assess teacher needs, extract information about use
and usefulness, and determine whether SAM can assist them in their everyday work. The
most prominent negative result was that teachers did not find resource recommendation
useful. On the positive side, the results showed that SAM raised the teachers' awareness,
that all of the indicators were useful, and that 90% of the teachers wanted to continue
using SAM [GVD+12].
The third iteration was also an online survey with Likert items, but the participants
were LAK course attendees (teachers and visualization experts). The goal was similar to
that of the second iteration, namely to assess teacher needs and the use and usefulness
of the tool, extended with collecting feedback from experts in the field. The negative
result of the evaluation was the failure to determine which needs required more
attention. On the positive side, the results from the second iteration were strengthened,
with the addition that resource recommendation could be useful [GVD+12].
The fourth iteration was conducted with Computer Science teachers and teaching
assistants. The methodology consisted of structured face-to-face interviews with tasks
and open-ended questions, with the goal of assessing user satisfaction and the use and
usefulness of SAM. This iteration revealed that there were conflicting visions of which
students were doing well and which students were at risk. Furthermore, it revealed which
indicators were good and useful, provided different insights from the teachers, and
uncovered further use cases for SAM [GVD+12].
Overall, the conducted evaluation showed that the most important need SAM addresses is
helping teachers provide feedback to students. Another important contribution was the
evaluation methodology itself, which could be applied by other researchers when creating
a visualization tool [GVD+12].
Course Signals
Course Signals is an early intervention solution for higher education faculty, giving
instructors the opportunity to use analytics in order to provide real-time feedback to
students. The development team closely tracked the student experience from the
introduction of the tool (the pilot phase) and at the end of every subsequent semester.
Furthermore, they conducted an anonymous student survey to collect feedback at the end of
each semester and held focus groups. In general, students reported a positive experience
with the feedback they received from Course Signals. The students felt supported, and the
feedback provided by the system was described as motivating. Some students were concerned
that the system was overly intrusive, sending e-mails, text messages, and LMS messages
that all conveyed the same message [AP12].
In general, faculty and instructors responded positively, but still approached the tool
with caution. The main concern the development team extracted from the faculty feedback
was that students might become dependent on the system instead of developing their own
learning habits. Furthermore, the evaluation discovered that there was a clear lack of
best practices on how to use Course Signals, which was also confirmed by the students.
The most important point here is that this tool, together with its evaluation,
demonstrated actual impact on teaching and learning [AP12].


3.2    Challenges

The three evaluation case studies show that there is no standardized approach for
effectively evaluating learning analytics tools. Measuring the impact and usefulness of
LA tools is a very challenging task. LA tools try to support both learners and educators
in their respective tasks and to fulfil the goals mentioned in Section 2. Although much
work has been done on visualizing learning analytics results, typically in the form of
dashboards, their design and use are far less understood [VDK12]. So far, few tools have
conclusive evaluations and strong proof of a beneficial impact on either educators or
students. These LA tools should not only be usable and interesting to use, but also
useful in the context of the goals: awareness, self-reflection, and, above all,
improvement of learning.
In order to carry out effective evaluation, several things need to be taken into account.
First and foremost, one has to consider the purpose and the expected gains of the
evaluation. The evaluators need to carefully define the goals and attempt to meet them
with the evaluation. Once the goals are set, the next step is to think about the measures
and tasks that will be included in the evaluation. It is very important to define
appropriate indicators and metrics. Wherever possible, the evaluators should also collect
qualitative data and use qualitative methods alongside the quantitative evaluation. A
mixed-method evaluation approach that combines quantitative and qualitative evaluation
methods can be very powerful for capturing when, why, and how often a particular behavior
happens in the learning environment [CLT+15].
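To make the step from goals to concrete indicators and metrics more tangible, the sketch below computes one simple quantitative indicator, the number of activity events per student and calendar week, from raw log data. The event format, field names, and values are illustrative assumptions and are not taken from any of the tools discussed above.

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical raw activity events exported from a learning environment.
events = [
    {"student": "s01", "action": "view_resource", "timestamp": "2015-05-04T10:12:00"},
    {"student": "s01", "action": "post_forum",    "timestamp": "2015-05-06T14:30:00"},
    {"student": "s02", "action": "view_resource", "timestamp": "2015-05-05T09:05:00"},
    {"student": "s02", "action": "view_resource", "timestamp": "2015-05-12T11:45:00"},
]

def weekly_activity(events):
    """Count activity events per student and ISO calendar week."""
    counts = defaultdict(int)
    for event in events:
        ts = datetime.fromisoformat(event["timestamp"])
        year, week, _ = ts.isocalendar()
        counts[(event["student"], f"{year}-W{week:02d}")] += 1
    return dict(counts)

for (student, week), count in sorted(weekly_activity(events).items()):
    print(f"{student} {week}: {count} events")
```

Such counts cover only the quantitative half of a mixed-method design; interviews or open survey questions would still be needed to explain why a student's activity rises or falls.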
While usability is relatively easy to evaluate, the challenge is to investigate how LA
could impact learning and how this impact could be evaluated. Measuring the impact of LA
tools is a challenging task, as the process requires long periods of time as well as
considerable effort and active participation from researchers and participants [CLT+15].
Moreover, the analysis of the qualitative data from the evaluation is always prone to
personal interpretation and bias when drawing conclusions [DLM+13].


4    Research Directions
As the research field matures, there is a continuous increase in research about systematic
evaluation of learning analytics. Impact remains especially hard to determine in evalua-
tion studies and further research is also required to investigate effective mixed- method
evaluation approaches that focus on usability and usefulness of the Learning Analytics
tools [VDK12]. To enhance the work in capturing and measuring impact, collaboration
with cognitive sciences is necessary in order to develop methods how to attain this quali-
tative information. This means that asking the right questions, selecting elements of the
environment and the tool to examine, and processing, visualizing, and analyzing the data
become the challenges for researchers. There already has been literature and community
based research that empirically tries to identify quality criteria and quality indicators for
LA tools to form an evaluation framework [SDS15]. This evaluation framework has five
criteria, and each criteria contains different quality indicators. The main limitation is that
the participants who helped create this evaluation framework were more research than
practice oriented. More importantly, the researchers should strive to a common goal,
which is to unify and standardize the different evaluation methods into a structured tool
that can help researchers and developers to build better Learning Analytics tools.

5    Conclusion
In this paper we presented a short summary of the goals of learning analytics.
Furthermore, we argued that learning analytics evaluation will provide the necessary and
conclusive evidence that LA tools help both teachers and students in their work. We
provided three case studies of learning analytics evaluation in which the respective
researchers evaluated their LA tools with successful outcomes. Additionally, we presented
a summary of the challenges that are yet to be resolved by the research community in
order to carry out effective evaluation. Finally, we gave concrete directions which need
to be investigated in detail in order to help overcome the evaluation challenges. There
is still a lot of work to be done towards standardizing and structuring the evaluation of
LA tools, and hence towards providing enough evidence that LA tools are a valuable asset
for the learning and teaching processes. However, the true test for learning analytics is
demonstrating a longer-term impact on student learning and teaching practices.


References
[AHD+12] Ali, L.; Hatala, M.; Gašević, D.; Jovanović, J.: "A qualitative evaluation of evolution
         of a learning analytics tool," Computers & Education, 2012, pp. 470-489.
[AP12]     Arnold, K.; Pistilli, M.: "Course Signals at Purdue: Using Learning Analytics to In-
           crease Student Success," in Proceedings of the 2nd International Conference on Learn-
           ing Analytics and Knowledge, Vancouver, 2012.
[CDS+12] Chatti, M. A.; Dyckhoff, A. L.; Schroeder, U.; Thüs, H.: "A reference model for learn-
         ing analytics," International Journal of Technology Enhanced Learning, 2012, pp. 318-
         331.
[CLT+15] Chatti, M. A.; Lukarov, V.; Thüs, H.; Muslim, A.; Yousef, A. M. F.; Wahid, U.;
         Greven, C.; Chakrabarti, A.; Schroeder, U.: "Learning Analytics: Challenges and Fu-
         ture Research Directions," Eleed, 2014.
[Cl13]     Clow, D.: "An overview of learning analytics," Teaching in Higher Education, 2013,
           pp. 683-695.
[DLM+13] Dyckhoff, A. L.; Lukarov, V.; Muslim, A.; Chatti, M. A.; Schroeder, U.: "Supporting
         action research with learning analytics," in Proceedings of the Third International Con-
         ference on Learning Analytics and Knowledge, Leuven, 2013.
[DV12]     Duval, E.; Verbert, K.: "Learning Analytics," Eleed, no. 8, 2012.
[ED06]     Ellis, G.; Dix, A.: "An explorative analysis of user evaluation studies in infor-
           mation visualisation," in Proceedings of the 2006 AVI workshop on BEyond time and
           errors: novel evaluation methods for information visualization, Venice, 2006.

[El11]     Elias, T.: "Learning Analytics: Definitions, Processes and Potential," 2011.
[GDS15] Gašević, D.; Dawson, S.; Siemens, G.: "Let’s not forget: Learning analytics are about
         learning," In TechTrends, 2015, pp. 64-71.
[GVD+12] Govaerts, S.; Verbert, K.; Duval, E.; Pardo, A.: "The student activity meter for aware-
         ness and self-reflection," in CHI '12 Extended Abstracts on Human Factors in Compu-
         ting Systems, Austin, 2012.
[JAC12]    Johnson, L.; Adams, S.; Cummins, M.: "NMC Horizon Report: 2012 Higher Education
           Edition," 2012.
[LD12]     Lockyer, L.; Dawson S.: "Where learning analytics meets learning design," in Proceed-
           ings of the 2nd International Conference on Learning Analytics and Knowledge, Van-
           couver, 2012.
[LS11]     Long, P.; Siemens, G.: "Penetrating the Fog: Analytics in Learning and Education,"
           EDUCAUSE Review, 2011.
[Si10]     Siemens, G: "What are Learning Analytics?," 2010.
[SDS+14] Scheffel, M.; Drachsler, H.; Stoyanov, S.; Specht, M.: "Quality Indicators for Learning
          Analytics," Educational Technology & Society, 2014, pp. 117-132.
[SDS15]   Scheffel, M.; Drachsler, H.; Specht, M.: "Developing an evaluation framework of quality
          indicators for learning analytics," in Proceedings of the Fifth International Conference
          on Learning Analytics And Knowledge, Poughkeepsie, 2015.
[TAC+13] Thompson, K.; Ashe, D.; Carvalho, L.; Goodyear, P.; Kelly, N.; Parisio, M.: "Pro-
         cessing and Visualizing Data in Complex Learning Environments," American Behav-
         ioral Scientist, 2013, p. 1401–1420.
[VDK12] Verbert, K.; Duval, E.; Klerkx, J.; Govaerts, S.; Santos, J. L.: "Learning Analytics
         Dashboard Applications," American Behavioral Scientist, 2013.