                         Presentations Preserved as
                         Interactive Multi-video Objects

Caio César Viel
Department of Computer Science
Universidade Federal de São Carlos
São Carlos-SP, Brazil
caio_viel@dc.ufscar.br

Erick L. Melo
Department of Computer Science
Universidade Federal de São Carlos
São Carlos-SP, Brazil
erick_melo@dc.ufscar.br

Maria da Graça C. Pimentel
Institute of Mathematics and Computer Science
Universidade de São Paulo
São Carlos-SP, Brazil
mgp@icmc.usp.br

Cesar A. C. Teixeira
Department of Computer Science
Universidade Federal de São Carlos
São Carlos-SP, Brazil
cesar@dc.ufscar.br

Abstract
We first give an overview of a system that allows capturing a lecture to generate, as a result, a multi-video multimedia learning object composed of synchronized videos, audio, images and context information. We then discuss how a group of students interacted with a learning object captured from a problem-solving lecture: a similar approach can be used by instructors to reflect on their performance during their lectures.

Author Keywords
Student-multimedia interaction. Interactive Multimedia. E-learning. Ubiquitous Capture and Access. NCL.

ACM Classification Keywords
H.2.3 [Communications Applications]: Information browsers.

General Terms
Documentation, Measurement, Verification.

Copyright © 2013 for the individual papers by the papers' authors. Copying permitted only for private and academic purposes. This volume is published and copyrighted by its editors. WAVe 2013 workshop at LAK'13, April 8, 2013, Leuven, Belgium.

Introduction
Increasingly, universities record lectures and make them available on the web, exploiting the fact that the classroom can be viewed as a rich multimedia environment where audiovisual information is combined with annotating activities to produce complex multimedia
objects [1]. Although in some cases the web lecture may be a single video stream (e.g. Khan Academy and TED talks), more elaborate viewing alternatives are available (e.g. Opencast [2] and openEyA [3]).

Once captured lectures have been made available, being
able to analyse how the users watch them – and learn
from them – is a challenging task, as illustrated by Brooks
and colleagues [2]. In such a scenario, extracting
semantics from the captured information is a must [7].
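One concrete analysis over captured viewing logs is to count, for each moment of the lecture, how many viewing sessions had each stream selected as the main one. The sketch below assumes a hypothetical log format of (switch time, stream id) pairs per session; it is an illustration of the kind of analysis involved, not the prototype's actual schema.

```python
from collections import Counter

def main_stream_histogram(sessions, duration_s, bucket_s=10):
    """Count, per time bucket, how many sessions had each stream as main.

    `sessions` is a list of per-student logs; each log is a sorted list of
    (switch_time_in_seconds, stream_id) pairs recording when the student
    selected that stream as the main one (hypothetical log format).
    A bucket in which a switch occurs counts both streams.
    """
    n_buckets = duration_s // bucket_s + 1
    histogram = [Counter() for _ in range(n_buckets)]
    for log in sessions:
        for i, (start, stream) in enumerate(log):
            # the selection lasts until the next switch, or lecture end
            end = log[i + 1][0] if i + 1 < len(log) else duration_s
            for b in range(int(start) // bucket_s, int(end) // bucket_s + 1):
                histogram[b][stream] += 1
    return histogram
```

Plotting each stream's per-bucket counts over time yields one curve per stream, i.e. a "most selected as main stream at each moment" summary.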

We built a system prototype that allows recording a lecture: audio and video streams from the instructor, slides, writings on whiteboards, as well as contextual information – the aim is to automatically generate an interactive multimedia object [5, 6]. Given the several sources of information, students must be given a broad range of interaction alternatives when reviewing the lecture: our system generates multi-video objects in a standard for interactive multimedia, so that students have several interaction alternatives while using a standard HTML5 browser. The actual student interactions are also captured so that they can be analysed.
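Because the generated object is declarative NCL, synchronization and layout are described in the document itself. The fragment below is a minimal sketch of what such a multi-video document might look like; all ids, region sizes and file names are hypothetical, not the prototype's actual output.

```xml
<!-- Minimal NCL 3.0 sketch: two synchronized streams, one in a large
     "main" region and one as a thumbnail; ids and sources hypothetical. -->
<ncl id="lecture" xmlns="http://www.ncl.org.br/NCL3.0/EDTVProfile">
  <head>
    <regionBase>
      <region id="rgMain" left="0" top="0" width="70%" height="100%"/>
      <region id="rgThumb" left="70%" top="0" width="30%" height="25%"/>
    </regionBase>
    <descriptorBase>
      <descriptor id="dMain" region="rgMain"/>
      <descriptor id="dThumb" region="rgThumb"/>
    </descriptorBase>
  </head>
  <body>
    <!-- both media start together at document start and stay synchronized -->
    <port id="pLecturer" component="lecturerVideo"/>
    <port id="pSlides" component="slidesVideo"/>
    <media id="lecturerVideo" src="media/lecturer.mp4" descriptor="dMain"/>
    <media id="slidesVideo" src="media/slides.mp4" descriptor="dThumb"/>
  </body>
</ncl>
```

Swapping which stream is "main" then amounts to swapping the descriptors (or regions) bound to the media elements, which is how a player can let the student choose the large-window stream.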

Next, we briefly introduce our system, and outline results
from analysing the student interactions with the resulting
interactive multi-video object.

From capture to interactive multi-video
We have instrumented a classroom with cameras (Figure 1(a)), and built a prototype system whose modules (Figure 1(b)) capture several information streams from a lecture and generate an interactive multimedia object (NCL¹), which can then be played back in a player which runs on standard HTML5 browsers (Figure 1(c)).

¹ Nested Context Language - http://ncl.org.br/en

Figure 1: (a) Classroom. (b) Prototype overview. (c) Player.

The player (Figure 1(c)) is designed so that the multi-video object corresponding to the lecture may be
reconstituted and explored in dimensions not achievable in the classroom. The student may be able, for example, to obtain multiple synchronized audiovisual contents that include the slide presentation (1), the whiteboard content (2), video streams with focus on the slide (3) or on the lecturer's full body (4), or the lecturer's web browsing, among others. Moreover, the student may choose at any time which content is most appropriate to be exhibited in full screen. The student may also be able to perform semantic browsing using points of interest such as slide transitions and the position of the lecturer in the classroom. Moreover, facilities can be provided for users to annotate the captured lecture while watching it, as advocated by the Watch-and-Comment paradigm [4].

One lecture, 12 modules
Using the capture-tool prototype, one instructor captured one lecture without students in the classroom: students then had access to the multi-video object to prepare for their final exam.

The lecture was a problem-solving session for a Computer Organization course in which an instructor solved a total of 15 exercises. These exercises were related to each other, and usually a subsequent exercise used some results from the previous one. The exercises also became more difficult as the presentation progressed.

The lecture was divided into 12 modules, totalling 1 hour and 18 minutes. Module 1 presented 3 exercises, module 5 contained 2 exercises, and all the other modules presented one exercise each.

Eighteen students watched the lecture for at least 4 minutes: the average playback time was 59 minutes, with a standard deviation of 39 minutes. The average number of interactions was 118.6, also with a large standard deviation (99.6).

Checking out student interactions
Given that the multimedia object has more than one video stream and that students can choose which stream they want as the main stream (presented in the large window in the player), knowing which stream is most often selected as the main stream at each moment can be useful for instructors to reflect on their performance during the lecture.

We present next how students interacted with the several video components that make up the multi-video objects of modules 1 and 4. A detailed discussion of the students' interaction with all modules is available elsewhere [8].

Figure 2(a) and Figure 2(b) summarize which streams were most selected as the main stream, respectively, for module 1 and module 4. Each line represents how many times a stream was watched at a specific moment:
   • the blue line corresponds to the slides as presented on the instructor's notebook (Figure 1(c-1));
   • the red line corresponds to the conventional whiteboard (Figure 1(c-2));
   • the green line corresponds to the electronic whiteboard, which presented slides that could be annotated by the instructor (Figure 1(c-3));
   • the purple line corresponds to the camera giving an overview of the classroom (Figure 1(c-4)).

As shown in Figure 2(a), students watched most, as the main stream, the slides and the whiteboard. The three regions with higher values for the red line correspond to the moments in which the instructor solved the three exercises writing on the conventional whiteboard. Accordingly, the higher values for the blue line correspond to the slides with the specification of the exercises, and properly precede the higher values of the red line. A


similar behavior is shown in Figure 2(b): the difference is that this module discussed a single exercise.

Figure 2: Student interactions with modules 1 and 4.

Final Remarks
Our plans for future work include capturing more contextual information during the presentation toward providing novel navigation facilities, and the development of visualization tools for the instructors to analyse the students' multi-video object interactions.

Acknowledgements
We thank the course's instructor and the students, the WAVe13 organizers for the opportunity to present our work, and the workshop participants for their inspiring presentations.²

² http://videolectures.net/wave2013

References
[1] Abowd, G., Pimentel, M. d. G. C., Kerimbaev, B., Ishiguro, Y., and Guzdial, M. Anchoring discussions in lecture: an approach to collaboratively extending classroom digital media. In Proc. CSCL'99 (1999).
[2] Brooks, C., Thompson, C., and Greer, J. Visualizing lecture capture usage: A learning analytics case study. In Proc. WAVe'2013 (2013).
[3] Canessa, E., Fonda, C., Tenze, L., and Zennaro, M. Apps for synchronized photo-audio recordings to support students. In Proc. WAVe'2013 (2013).
[4] Cattelan, R. G., Teixeira, C., Goularte, R., and Pimentel, M. d. G. C. Watch-and-comment as a paradigm toward ubiquitous interactive video editing. ACM TOMCCAP 4, 4 (Nov. 2008), 28:1–28:24.
[5] Hürst, W., Maass, G., Müller, R., and Ottmann, T. The “authoring on the fly” system for automatic presentation recording. In CHI'01 Extended Abstracts (2001), 5–6.
[6] Pimentel, M., Abowd, G. D., and Ishiguro, Y. Linking by interacting: a paradigm for authoring hypertext. In Proc. HYPERTEXT'00 (2000), 39–48.
[7] Ronchetti, M. Videolectures ingredients that can make analytics effective. In Proc. WAVe'2013 (2013).
[8] Viel, C., Melo, E., da Graça Pimentel, M., and Teixeira, C. How are they watching me: learning from student interactions with multimedia objects captured from classroom presentations. In Proc. ICEIS'13 (2013).



