       Carsten Ullrich, Martin Wessner (Eds.): Proceedings of DeLFI and GMW Workshops 2017
                                                         Chemnitz, Germany, September 5, 2017


Analytics on video-based learning. A literature review


Niels Seidel1

Abstract: This article provides a systematic literature review on Learning Analytics methods and
applications for video-based learning. For that purpose, 33 research articles have been analyzed and
described regarding aspects of capturing, measuring, and visualizing data that represent user
behavior and learning activities.

Keywords: Video Analytics; Video Usage Mining; video-based learning



1    Introduction

The advent of video in online and blended learning started at the turn of the century and
became more and more popular as lecture recordings, how-to videos, and screencasts could
be easily produced and distributed. Since 2012, video has found a wide echo in Massive Open
Online Courses (MOOCs). Because videos are mainly used in online distance learning,
teachers cannot observe the user behavior, resource usage, and learning activities in a
direct manner. Instead, methods from Learning Analytics, Educational Data Mining,
and Video Usage Mining [MBD06] are required to track and analyze the user activities.
This paper offers a systematic literature review of the state of research in the field that
could be summarized as video analytics. Since this is a work-in-progress paper, the review
focuses only on three research questions (RQ) concerning data gathering, measurements, and
visualizations, rather than providing a complete overview of video analytics:

•      RQ1: What data needs to be captured from video players and learning environments
       in order to perform analytics?
•      RQ2: What measures can be derived from the captured data?
•      RQ3: What data representations are suitable to support visual analytics?


2    Methodology

The literature review was conducted in a four-step process: i) searching the selected academic
databases using the proposed search terms; ii) selecting relevant articles based on the titles
and abstracts of the search results; iii) identifying further articles that were referenced in the
1 FernUniversität in Hagen, Chair of Cooperative Systems, Universitätsstr. 1, 58084 Hagen, Germany,
niels.seidel@fernuni-hagen.de



selected articles; iv) reviewing the papers by following the guidelines of [Ch16]. For the review we
selected seven academic databases for articles related to technology enhanced learning:
ACM Digital Library, IEEE Xplore, SpringerLink, Science Direct, Taylor & Francis, dblp,
and Wiley. Additionally, we queried Google Scholar, Research Gate, and Mendeley in order
to embrace potentially relevant “gray literature” such as technical reports or position papers.
To perform the search we used combinations of two sets of search terms: i) video, audiovisual
media, electure, lecture recording, and ii) analytics, data, user behavior, usage, mining,
watch*, click, log. Overall, 93 publications were gathered from the search and the recognized
references. After getting an overview of the field, 44 relevant articles could be identified for
deeper analysis. Most of these articles were retrieved from the ACM Digital Library. The
publication dates range from 1994 to 2017, with 14 articles published in the year 2014. 14
articles covered analytics about MOOCs. The same number of articles described studies in
a university setting. The remaining papers covered technological or methodological aspects
as well as experiments that were not directly related to educational technology.
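The crossing of the two term sets described above can be sketched in a few lines; the query syntax is illustrative, since each database uses its own:

```python
from itertools import product

# The two sets of search terms used in the review.
media_terms = ["video", "audiovisual media", "electure", "lecture recording"]
analytics_terms = ["analytics", "data", "user behavior", "usage",
                   "mining", "watch*", "click", "log"]

# Build one query string per combination (4 x 8 = 32 queries).
queries = [f'"{m}" AND {a}' for m, a in product(media_terms, analytics_terms)]

print(len(queries))  # 32 combinations
print(queries[0])    # "video" AND analytics
```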


3      Results
3.1     Data gathering (RQ1)

Analytics about video-based learning mainly gather data from log files. In terms of video-
based learning environments, dedicated logs from the video player are required to capture the
entire user interactions. Currently, there is a lack of standardization of log formats and data
structures. Only a minority of platform providers have published their log format (e.g. the edX
data API), while existing drafts (e.g. the video profiles for the xAPI) have not been recognized
by the community yet.
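In the absence of a common standard, a minimal player log entry might look as follows; all field names are hypothetical and are not taken from the edX data API or the xAPI drafts:

```python
import json
import time

def log_event(user_id, video_id, event_type, position, speed=1.0):
    """Assemble one player log entry as a dict (hypothetical schema)."""
    return {
        "timestamp": time.time(),  # wall-clock time of the event
        "user": user_id,
        "video": video_id,
        "event": event_type,       # e.g. "play", "pause", "seek", "timeupdate"
        "position": position,      # playback position in seconds
        "speed": speed,            # current playback rate
    }

entry = log_event("u42", "lecture-03", "pause", 127.5)
print(json.dumps(entry, indent=2))
```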
Watching: One of the core questions in video analytics is how to approximate the
user’s time spent on watching a video. Whereas modern web applications make use of
accurate JavaScript timeupdate events, some systems still lack the possibility to gather
fine-grained second-by-second data. However, captured playback activity does not imply
user engagement. Therefore, playback measures need to be compared with clickstream data
in order to ensure minimal engagement indicators. Furthermore, the time that a user needs
to watch a particular part of the video also depends on the playback speed [Li15b, GKR14].
As a consequence, the effective length of a video depends on the users’ watching habits. Table 1
summarizes different approaches for approximating playback durations at varying
granularities.
Although imprecise measurements may limit statistical inferences, privacy concerns should
be raised as an important argument. [HGM14, Br11] outlined how the principle of data economy
could be applied by reducing playback traces as well as click events to binary data.
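The second-by-second segment approach from Table 1, combined with the binary encoding outlined by [HGM14, Br11], can be sketched as follows (a simplified model that assumes a raw trace of timeupdate positions):

```python
def watched_vector(timeupdates, duration):
    """Reduce a trace of timeupdate positions (in seconds) to a binary
    per-second vector: 1 = the second was played, 0 = never played.
    Repeated views of a second carry no extra information (data economy)."""
    watched = [0] * int(duration)
    for pos in timeupdates:
        second = int(pos)
        if 0 <= second < len(watched):
            watched[second] = 1
    return watched

# A user plays seconds 0-4, replays second 2, then seeks ahead to second 60.
trace = [0.2, 1.1, 2.0, 3.4, 4.9, 2.3, 60.0, 61.5]
v = watched_vector(trace, duration=120)
print(sum(v))  # 7 distinct seconds watched
```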
Video: Learning videos cannot be considered a homogeneous type of media. Today we
find various technical representations, formats, and styles. A few researchers focused on

                         Tab. 1: Methods for approximating playback duration
 Timeupdate           This HTML5 event is fired when the playing position of a video has changed.
                      It returns the current position in milliseconds.
 Segments             Split the video into segments of equal size and write a log as soon as the
                      user completes playback of a segment. [KE16b] define segments of 120 seconds,
                      while in most cases fine-grained segments of one second are used (e.g. [MKB10,
                      Ki14b, Si14, Ki14a]).
 Clickstream          Approximates playback duration by comparing time differences of physical
                      time and playback time of subsequent click events (e.g. pause or timeline
                      navigation) [Se14].
 Heartbeat            Request the play head position in regular periods of time to approximate the
                      watched segments [Br11, BTG13].
 Section visits       Number of times a specified content section has been visited. The extent of a
                      section can be derived from a table of content [KE16b], quiz [WL15] or the
                      temporal boundaries of presentation slides synchronized to the video [MKB10]
 Videos assessed      Total number of assessed videos [CdBB17]
 Video visits         The number of times a video has been accessed serves as a loose estimation
                      [KE16b, HGM14, KE16a, LW10, BTG13, BLGS17].


these particularities by using video, audio, and text properties as indicators for analytical
investigations (see Tab. 2).

                                   Tab. 2: Video-related indicators
 Length                  Duration of the video [GKR14]
 Visual transitions      Determine the playback positions of visual cues such as slide changes or
                         scene breaks [Ki14c, Ki14a].
 Speaking rate           Counting the number of words (e.g. taken from a transcript or the subtitles)
                         divided by the video duration [GKR14, Ki14a] or time per sentence [AF17]
 Speech                  Analyzing transcript text for discourse analysis [Ki14a, Fi12, AF17]
 Audio                   Changes in the volume as well as the pitch frequency of speech [Ki14a].
 Type of video           Manually classify the video into categories like ’lecture’, ’tutorial’ or
                         ’documentary’ [GKR14, GC14]
 Production style        Manually classify video styles like slide, code, Khan-style, classroom,
                         studio or office desk [GKR14].
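The speaking-rate indicator from Table 2 reduces to a word count over time; a minimal sketch with an illustrative transcript:

```python
def speaking_rate(transcript, duration_seconds):
    """Words per minute, following the word-count-over-duration idea
    described for [GKR14, Ki14a]."""
    words = len(transcript.split())
    return words / (duration_seconds / 60.0)

rate = speaking_rate("welcome to the lecture on video analytics", 4.0)
print(rate)  # 7 words in 4 seconds -> 105.0 words per minute
```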


User: Surprisingly, demographic information about the students did not play a large role in
past studies on video analytics. Only [GR14] related video coverage and inter-video
navigation to demographic data (age, country of origin). Possible relations between video
usage and demographic factors remain an open research question. [RM02] used clickstream
data as a dependent variable to determine relations to the users’ personality types. Besides that,
learners can be further involved by participating in surveys during or after watching the
videos. [dBT08] tried to confirm findings from log analysis by asking students about
their viewing patterns (e.g. ”one-pass”, ”zapping”). [SMP01] requested the intentions for
browsing and watching (e.g. ”looking for something”, ”aimless browse”) at random times
during playback.
Other: Except for the research on MOOCs, the majority of the studies are based on a
small number of participants. As a consequence, particular statistical methods cannot
be applied or will not return significant results. Thus, stochastically generated data
may be a suitable alternative or addition to real log files. Methods for modeling user
behaviors, including clickstreams and video playback, on the basis of existing data are well
established. [SMP01] used hidden Markov models of user behavior to generate video
previews, whereas [MBD05, Mo07] identified clusters from user behavior data that were
modeled by non-hidden Markov models. Similar approaches have been used for making
predictions about in-video and course drop-outs [HGM14].
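A non-hidden first-order Markov model of click events, in the spirit of [MBD05, Mo07], can generate synthetic clickstreams once transition probabilities have been estimated from real logs; the probabilities below are made up for illustration:

```python
import random

# Hypothetical transition probabilities between player events; in
# practice these would be estimated from real log data.
transitions = {
    "play":  [("pause", 0.5), ("seek", 0.3), ("stop", 0.2)],
    "pause": [("play", 0.8), ("stop", 0.2)],
    "seek":  [("play", 0.7), ("pause", 0.3)],
}

def generate_clickstream(start="play", max_events=20, rng=random):
    """Walk the Markov chain until 'stop' or max_events is reached."""
    stream, state = [start], start
    while state != "stop" and len(stream) < max_events:
        events, weights = zip(*transitions[state])
        state = rng.choices(events, weights=weights)[0]
        stream.append(state)
    return stream

random.seed(1)
print(generate_clickstream())
```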


3.2   Measurements (RQ2)

Measurements are built upon the captured data described in the previous subsection.
Regarding video-based learning, measurements can be classified into three categories: i) video
watching behavior, ii) video interactions, and iii) other user input considered as learning
results.
Video watching behavior: Analyzing the users’ behavior when watching a video can
provide insight regarding in-video drop-out rates [Li15a, Ki14c, BLGS17] and the most
frequently watched segments. [KE16b] even demonstrated how to derive playback events from
the timeupdate events. Table 3 provides an overview of common indicators that can help to
describe video usage behaviors.

                   Tab. 3: Indicators that describe the video watching behaviors
 Viewing duration           Time spent on watching a video. [BET99, Ch16]
 Replay segments            Counting the number of segments that were played more than once.
                            [SJD15]
 Total watching time        Total number of seconds spent viewing all videos. [RM02, Dí15]
 Watching ratio             Relative watching time per video. [Dí15]
 Watching threshold         Minimum amount of time a video has been watched. [Br11]
 Retention rate             Number of unique users who watched a video segment / the number of
                            views for a particular moment of a video as a percentage of the total
                            number of views of the video. [Li15b, Ki14a, Dí15] / [Le17]
 Coverage                   Fraction of the video that the student visited. [GR14]
 Session length             Time span between start and end of a session. [BET99, GKR14, dBT08]
 Average session length Average duration of a viewing session. [RM02]
 Number of sessions         Number of distinct user sessions. [BET99, Ki14a]
 Session views              Number of viewings per session. [BET99]
 Length threshold           Number of sessions longer than n. [RM02]
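Several of the indicators in Table 3 follow directly from a per-second record of played segments; a simplified sketch (session handling and playback speed omitted):

```python
def coverage(played_seconds, duration):
    """Fraction of the video the student visited (cf. [GR14])."""
    return len(set(played_seconds)) / duration

def watching_ratio(played_seconds, duration):
    """Relative watching time per video (cf. [Dí15]); replayed
    segments count again, so the ratio can exceed 1."""
    return len(played_seconds) / duration

# The user played seconds 0-59 once and rewatched seconds 0-29.
played = list(range(60)) + list(range(30))
print(coverage(played, 120))        # 60 distinct seconds / 120 = 0.5
print(watching_ratio(played, 120))  # 90 played seconds / 120 = 0.75
```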

Video interactions: Gathering clickstream data is essential for analyzing, modeling, and
predicting video interactions. Typically, the frequency (total and per segment) and the
duration of clickstream events are used (see Tab. 4) to perform various analyses. The majority
of researchers focus on in-video interactions, rather than inter-video interactions (see
[GR14, HGM14, Br11]). The latter consider browsing behavior between multiple videos in
a course or database. The analysis of clickstream data has different purposes. Basically, the

                       Tab. 4: Measurements for typical video interactions
 event            frequency                                                  duration
 total events     [Ki14b, CdBB17]
 play             [GKR14, MBD06, SMP01, BET99, Si14, GC14, MD13,             [BET99]
                  Ki14a, Dí15, CdBB17]
 pause            [Li15b, KE16b, GKR14, MBD06, SMP01, BET99, Si14,           [Li15b, BET99]
                  GC14, MD13, Ki14a, Dí15, SJD15, AF17, CdBB17]
 volume           [KE16b, Ki14a]
 full screen      [Ki14a]
 show captions    [AF17]
 speed changes    [Li15b, Si14, Ki14a, Dí15, AF17, CdBB17]
 slow forward     [SMP01]
 slow reverse     [SMP01]
 fast forward     [MBD06, SMP01, De94, BET99]                                [BET99]
 fast rewind      [MBD06, SMP01, De94]                                       [BET99]
 seeks            [KE16b, SMP01, AF17, CdBB17]                               [BET99, Li15b,
                                                                             AF17]
 seek forward     [Li15b, KE16b, Si14, GC14]
 seek backward    [Li15b, KE16b, Si14, GC14, SJD15]
 seek from        [Dí15]
 seek to          [Dí15]

events are used to identify access patterns across courses [HGM14], week days, or hours of
the day [Br11, Se14]. Considering in-video interactions, [Ki14c] identified and analyzed
peaks of both frequently watched scenes and the playback controls used during those scenes.
So far, the properties of the peaks (width, height, area) have been analyzed and related
to possible explanations (e.g. visual transitions or returning to missed content) [Ki14c].
[Ki14a] aims to classify peaks automatically by considering visual transitions,
speech properties, and topic transitions derived from the video transcripts. [SJD15] found
significantly lower pause and backward-seek rates when the teacher’s gaze was superimposed
on the video. [Si14] determined clickstream profiles of students in order to predict engagement
states as well as in-video and course drop-outs. [MBD06, dBT08, Ch17, CdBB17], and [Li15b]
found different in-video viewing and interaction patterns. [Li15a] demonstrated how the
perceived difficulty of the video content correlates with some of the determined video
interaction patterns.
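The interaction peaks analyzed by [Ki14c] presuppose a per-second profile of views or events; a naive local-maximum detector over such a profile might look like this (the threshold is illustrative and not that of the original study):

```python
def find_peaks(counts, threshold):
    """Indices whose count exceeds the threshold and both neighbors:
    a naive stand-in for the peak detection discussed in [Ki14c]."""
    peaks = []
    for i in range(1, len(counts) - 1):
        if counts[i] > threshold and counts[i] > counts[i - 1] and counts[i] >= counts[i + 1]:
            peaks.append(i)
    return peaks

# Per-second view counts of a short clip with two busy moments.
views = [3, 4, 5, 12, 30, 14, 6, 5, 8, 25, 9, 4]
print(find_peaks(views, threshold=10))  # peaks at seconds 4 and 9
```

Properties such as width and area would follow from expanding each peak index to the surrounding seconds that stay above the threshold.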
Learning results: Learning results in a broader sense include student contributions such as
answers to quizzes, forum or wiki entries, as well as annotations. These contributions may
be entered either during the video playback or separately from the video.
[Li15b] distinguished strong from weak students by comparing correct answers in relation
to the number of attempts made to pass an assignment. The interaction patterns of
strong students significantly differed from those of the students considered weak. [MD13] found
correlations between quiz scores and watched portions of lecture recordings, not least
because the correct answers were given in the corresponding parts of the video. [KE16b]
found significant correlations between the quiz attempts as well as results and the watched
video segments. [GMD14] applied quantitative text analysis to evaluate large amounts of
video annotations. Measuring word counts related to linguistic and psychometric processes
aims to reduce the time for reading and scoring submissions.


3.3     Visualizations (RQ3)

Data charts are essential for visual analytics tasks. Effective visualizations are primarily
determined by the selected dimensions, rather than the type of chart. Going into detail
about the various forms of data visualization would go beyond the scope of this article,
but should be considered a future research direction. The same is true for learning
dashboards. According to [Sc16] a “learning dashboard is a single display that aggregates
different indicators about learner(s), learning process(es) and/or learning context(s) into
one or multiple visualizations.” The latest review articles on learning dashboards did
not go into detail about visualizations or dashboards representing video-based learning
activities [Sc16, Ve14]. Particular dashboards for MOOC instructors or students as reported
by [Fr16, Vi17, KE16a] stay on the surface by explaining selected data charts instead of
providing a complete overview. Some insights could be gained from edX. However, the
advances in visual analytics in terms of data visualization like rewatching graphs [BTG13],
forward-backward diagrams [Se14], or interaction peaks [Ki14c] have not been transferred to
dashboards yet. Learner-centered social navigation aids along the player timeline have been
known for many years [MKB10, Ki14c, WL15, Ch16], but have not spread beyond research
prototypes. Potential data representations for visual analytics tasks could be identified in the
works of [GKR14, Li15a, HGM14, BET99, LW10, Br13, De14, Co14].


4      Conclusions

This literature review presented the foundations, current state, and potentials of video
analytics. However, the review should be extended with regard to effective visualizations for
video learning dashboards. Furthermore, the analysis methods stated in the literature are
worth comparing with respect to the available data. The set of methods ranges from statistics
over sequence mining to natural language processing.

References
[AF17]   Atapattu, Thushari; Falkner, Katrina: Discourse Analysis to Improve the Effective Engage-
         ment of MOOC Videos. In: Proceedings of the Seventh International Learning Analytics
         & Knowledge Conference. LAK ’17, ACM, New York, NY, USA, pp. 580–581, 2017.
[BET99] Branch, P; Egan, G; Tonkin, B: Modeling interactive behaviour of a video based multi-
         media system. In: 1999 IEEE International Conference on Communications (Cat. No.
         99CH36311). volume 2, pp. 978–982 vol.2, 1999.
[BLGS17] Bote-Lorenzo, Miguel L; Gómez-Sánchez, Eduardo: Predicting the Decrease of Engage-
         ment Indicators in a MOOC. In: Proceedings of the Seventh International Learning
         Analytics & Knowledge Conference. LAK ’17, ACM, New York, NY, USA, pp. 143–147,
         2017.
[Br11]   Brooks, Christopher; Epp, Carrie Demmans; Logan, Greg; Greer, Jim: The Who, What,
         when, and Why of Lecture Capture. In: Proceedings of the 1st International Conference
         on Learning Analytics and Knowledge. LAK ’11, ACM, New York, NY, USA, pp. 86–92,
         2011.
[Br13]   Breslow, Lori; Pritchard, David E.; DeBoer, Jennifer; Stump, Glenda S.; Ho, Andrew D.;
         Seaton, Daniel T.: Studying Learning in the Worldwide Classroom Research into edX’s
         First MOOC. Research & Practice in Assessment, 8(Summer):13–25, 2013.
[BTG13] Brooks, Christopher; Thompson, Craig; Greer, Jim: Visualizing Lecture Capture Usage:
         A Learning Analytics Case Study. In: Workshop on Analytics on Video-Based Learning
         at 3rd Conference on Learning Analytics and Knowledge 2013. ACM, Leuven, pp. 9–14,
         2013.
[CdBB17] Corrin, Linda; de Barba, Paula G; Bakharia, Aneesha: Using Learning Analytics to
         Explore Help-seeking Learner Profiles in MOOCs. In: Proceedings of the Seventh
         International Learning Analytics & Knowledge Conference. LAK ’17, ACM, New York,
         NY, USA, pp. 424–428, 2017.
[Ch16]   Chatti, Mohamed Amine; Marinov, Momchil; Sabov, Oleksandr; Laksono, Ridho; Sofyan,
         Zuhra; Yousef, Ahmed Mohamed Fahmy; Schroeder, Ulrik: Video annotation and analytics
         in CourseMapper. Smart Learning Environments, 3(1):1–21, 2016.
[Ch17]   Chen, Bodong; Fan, Yizhou; Zhang, Guogang; Wang, Qiong: Examining Motivations and
         Self-regulated Learning Strategies of Returning MOOCs Learners. In: Proceedings of the
         Seventh International Learning Analytics & Knowledge Conference. LAK ’17, ACM,
         New York, NY, USA, pp. 542–543, 2017.
[Co14]   Coffrin, Carleton; Corrin, Linda; de Barba, Paula; Kennedy, Gregor: Visualizing patterns
         of student engagement and performance in MOOCs. In: LAK ’14. pp. 83–92, 2014.
[dBT08] de Boer, Jelle; Tolboom, Jos: How to interpret viewing scenarios in log files from
         streaming media servers. Int. J. Continuing Engineering Education and Life-Long
         Learning, 18(4):432–445, 2008.
[De94]   Dey-Sircar, Jayanata K; Salehi, James D; Kurose, James F; Towsley, Don: Providing
         VCR Capabilities in Large-scale Video Servers. In: Proceedings of the Second ACM
         International Conference on Multimedia. MULTIMEDIA ’94, ACM, New York, NY,
         USA, pp. 25–32, 1994.
[De14]   DeBoer, Jennifer; Ho, Andrew D.; Stump, Glenda S.; Breslow, Lori: Changing “Course”:
         Reconceptualizing Educational Variables for Massive Open Online Courses. Educational
         Researcher, 43(2):74–84, 2014.
[Dí15]   Díaz, Héctor J. Pijeira; Ruiz, Javier Santofimia; Ruipérez-Valiente, José A.; Muñoz-Merino,
         Pedro J.; Kloos, Carlos Delgado: Using Video Visualizations in Open edX to Understand
         Learning Interactions of Students. In: EC-TEL 2015. Springer, pp. 522–525, 2015.
[Fi12]   FitzGerald, Elizabeth: Analysing video and audio data: existing approaches and new
         innovations. In: Surface Learning Workshop 2012. ACM, 2012.

[Fr16]  Fredericks, Colin; Lopez, Glenn; Shnayder, Victor; Rayyan, Saif; Seaton, Daniel: Instructor
        dashboards in edX. L@S 2016 - Proceedings of the 3rd 2016 ACM Conference on Learning
        at Scale, pp. 335–336, 2016.
[GC14]  Gkonela, Chrysoula; Chorianopoulos, Konstantinos: VideoSkip: event detection in social
        web videos with an implicit user heuristic. Multimedia Tools and Applications, 69(2):383–
        396, 2014.
[GKR14] Guo, Philip J; Kim, Juho; Rubin, Rob: How Video Production Affects Student Engagement:
        An Empirical Study of MOOC Videos. In: Proceedings of the First ACM Conference on
        Learning @ Scale Conference. L@S ’14, ACM, New York, NY, USA, pp. 41–50, 2014.
[GMD14] Gašević, Dragan; Mirriahi, Negin; Dawson, Shane: Analytics of the Effects of Video Use
        and Instruction to Support Reflective Learning. In: Proceedings of the Fourth International
        Conference on Learning Analytics And Knowledge. LAK ’14, ACM, New York, NY,
        USA, pp. 123–132, 2014.
[GR14]  Guo, Philip J; Reinecke, Katharina: Demographic Differences in How Students Navigate
        Through MOOCs. In: Proceedings of the First ACM Conference on Learning @ Scale
        Conference. L@S ’14, ACM, New York, NY, USA, pp. 21–30, 2014.
[HGM14] Halawa, Sherif; Greene, Daniel; Mitchell, John: Dropout Prediction in MOOCs using
        Learner Activity Features. eLearning papers, 37, 2014.
[KE16a] Khalil, Mohammad; Ebner, Martin: When Learning Analytics Meets MOOCs - a Review
        on iMooX Case Studies. In (Fahrnberger, Günter; Eichler, Gerald; Erfurth, Christian, eds):
        Innovations for Community Services: 16th International Conference, I4CS 2016, Vienna,
        Austria, June 27-29, 2016, Revised Selected Papers. Springer International Publishing,
        Cham, pp. 3–19, 2016.
[KE16b] Kleftodimos, Alexandros; Evangelidis, Georgios: An interactive video-based learning
        environment that supports learning analytics for teaching Image Editing. In: Workshop on
        Smart Environments and Analytics in Video-Based Learning at LAK Conference. 2016.
[Ki14a] Kim, Juho; Gajos, Krzysztof Z; Li, Shang-Wen (Daniel); Miller, Robert C; Carrie J. Cai:
        Leveraging Video Interaction Data and Content Analysis to Improve Video Learning.
        In: CHI2014 Workshop - Leveraging Video Interaction Data and Content Analysis to
        Improve Video Learning. 2014.
[Ki14b] Kim, Juho; Guo, Philip J; Cai, Carrie J; Li, Shang-Wen (Daniel); Gajos, Krzysztof Z; Miller,
        Robert C: Data-driven Interaction Techniques for Improving Navigation of Educational
        Videos. In: Proceedings of the 27th Annual ACM Symposium on User Interface Software
        and Technology. UIST ’14, ACM, New York, NY, USA, pp. 563–572, 2014.
[Ki14c] Kim, Juho; Guo, Philip J; Seaton, Daniel T; Mitros, Piotr; Gajos, Krzysztof Z; Miller,
        Robert C: Understanding In-video Dropouts and Interaction Peaks in Online Lecture
        Videos. In: Proceedings of the First ACM Conference on Learning @ Scale Conference.
        L@S ’14, ACM, New York, NY, USA, pp. 31–40, 2014.
[Le17]  Lei, Chi-Un; Gonda, Donn; Hou, Xiangyu; Oh, Elizabeth; Qi, Xinyu; Kwok, Tyrone
        T O; Yeung, Yip-Chun Au; Lau, Ray: Data-assisted Instructional Video Revision via
        Course-level Exploratory Video Retention Analysis. In: Proceedings of the Seventh
        International Learning Analytics & Knowledge Conference. LAK ’17, ACM, New York,
        NY, USA, pp. 554–555, 2017.
[Li15a] Li, Nan; Kidzinski, Lukasz; Jermann, Patrick; Dillenbourg, Pierre: How Do In-video
        Interactions Reflect Perceived Video Difficulty? In: Proceedings of the European MOOC
        Stakeholder Summit 2015. 2015.
[Li15b] Li, Nan; Kidziński, Łukasz; Jermann, Patrick; Dillenbourg, Pierre: MOOC Video Interac-
        tion Patterns: What Do They Tell Us? In (Conole, Gráinne; Klobučar, Tomaž; Rensing,
        Christoph; Konert, Johannes; Lavoué, Élise, eds): Design for Teaching and Learning
        in a Networked World: 10th European Conference on Technology Enhanced Learning.
        Springer International Publishing, Cham, pp. 197–210, 2015.

[LW10]  Lai, Kunfeng; Wang, Dan: Towards understanding the external links of video sharing
        sites: measurement and analysis. In: Proceedings of the 20th international workshop on
        Network and operating systems support for digital audio and video. NOSSDAV ’10, ACM,
        New York, NY, USA, pp. 69–74, 2010.
[MBD05] Mongy, Sylvain; Bouali, Fatma; Djeraba, Chabane: Analyzing user’s behavior on a video
        database. In: Proceedings of the 6th International Workshop on Multimedia Data Mining:
        Mining Integrated Media and Complex Data. MDM ’05, ACM, New York, NY, USA, pp.
        95–100, 2005.
[MBD06] Mongy, Sylvain; Bouali, Fatma; Djeraba, Chabane: Video Usage Mining. Encyclopedia
        of Multimedia, pp. 928–935, 2006.
[MD13] Mirriahi, Negin; Dawson, Shane: The Pairing of Lecture Recording Data with Assessment
        Scores: A Method of Discovering Pedagogical Impact. In: Proceedings of the Third
        International Conference on Learning Analytics and Knowledge. LAK ’13, ACM, New
        York, NY, USA, pp. 180–184, 2013.
[MKB10] Mertens, Robert; Ketterl, Markus; Brusilovsky, Peter: Social Navigation in Web Lectures:
        A Study of virtPresenter. Interactive Technology and Smart Education, 7(3):181–196,
        2010.
[Mo07]  Mongy, Sylvain: A study on video viewing behavior: application to movie trailer miner.
        International Journal of Parallel, Emergent and Distributed Systems, 22(3):163–172, 2007.
[RM02]  Reuther, A I; Meyer, D G: The effect of personality type on the usage of a multimedia
        engineering education system. In: 32nd Annual Frontiers in Education. volume 1, pp.
        T3A–7–T3A–12 vol.1, 2002.
[Sc16]  Schwendimann, B; Rodriguez-Triana, M; Vozniuk, A; Prieto, L; Boroujeni, M; Holzer, A;
        Gillet, D; Dillenbourg, P: Perceiving learning at a glance: A systematic literature review
        of learning dashboard research. IEEE Transactions on Learning Technologies, PP(99):1,
        2016.
[Se14]  Seidel, Niels: Analyse von Nutzeraktivtäten in linearen und nicht-linearen Lernvideos.
        Zeitschrift für Hochschulentwicklung - Videos in der (Hochschul-)Lehre, 9(3):164–186,
        2014.
[Si14]  Sinha, Tanmay; Jermann, Patrick; Li, Nan; Dillenbourg, Pierre: Your click decides
        your fate: Inferring Information Processing and Attrition Behavior from MOOC Video
        Clickstream Interactions. CoRR, abs/1407.7, 2014.
[SJD15] Sharma, Kshitij; Jermann, Patrick; Dillenbourg, Pierre: Displaying Teacher’s Gaze in a
        MOOC: Effects on Students’ Video Navigation Patterns. In (Conole, G., ed.):
        EC-TEL 2015. Springer, pp. 325–338, 2015.
[SMP01] Syeda-Mahmood, Tanveer; Ponceleon, Dulce: Learning Video Browsing Behavior and
        Its Application in the Generation of Video Previews. In: Proceedings of the Ninth ACM
        International Conference on Multimedia. MULTIMEDIA ’01, ACM, New York, NY,
        USA, pp. 119–128, 2001.
[Ve14]  Verbert, Katrien; Govaerts, Sten; Duval, Erik; Santos, Jose Luis; Van Assche, Frans;
        Parra, Gonzalo; Klerkx, Joris: Learning dashboards: an overview and future research
        opportunities. Personal and Ubiquitous Computing, 18(6):1499–1514, 2014.
[Vi17]  Vigentini, Lorenzo; Clayphan, Andrew; Zhang, Xia; Chitsaz, Mahsa: Overcoming the
        MOOC Data Deluge with Learning Analytic Dashboards. In (Peña-Ayala, Alejandro, ed.):
        Learning Analytics: Fundaments, Applications, and Trends: A View of the Current State
        of the Art to Enhance e-Learning. Springer International Publishing, Cham, pp. 171–198,
        2017.
[WL15]  Wald, Mike; Li, Yunjia: Enhancing Synote with Quizzes , Polls and Analytics. In: 2015
        3rd International Conference on Information and Communication Technology (ICoICT).
        IEEE Computer Society, pp. 402–407, 2015.