=Paper=
{{Paper
|id=Vol-3667/DS-LAK24_paper_3
|storemode=property
|title=YarnSense: Automated Data Storytelling for Multimodal Learning Analytics
|pdfUrl=https://ceur-ws.org/Vol-3667/DS-LAK24_paper_3.pdf
|volume=Vol-3667
|authors=Gloria Milena Fernández-Nieto,Vanessa Echeverria,Roberto Martinez-Maldonado,Simon Buckingham Shum
|dblpUrl=https://dblp.org/rec/conf/lak/Fernandez-Nieto24a
}}
==YarnSense: Automated Data Storytelling for Multimodal Learning Analytics==
Gloria Milena Fernández-Nieto¹, Vanessa Echeverria¹﹐³, Roberto Martinez-Maldonado¹ and Simon Buckingham Shum²
¹ Monash University
² University of Technology Sydney
³ Escuela Superior Politecnica del Litoral, Guayaquil, Ecuador
Abstract
Professional development and training often require students to reflect on their performance, especially recalling the mistakes they have made in safe training environments, but these can occur in rapidly evolving and busy environments where key actions are often missed. Promisingly, rapid improvements in wearable sensing technologies are opening up new opportunities to capture large amounts of multimodal behaviour data that can serve as evidence to support student reflection about their performance. However, while some preliminary research has highlighted the potential of analysing such data to identify interesting patterns, less work has focused on the problem of automatically communicating meaningful and contextualised data and insights to end-users. Based on the notion of data storytelling as a means of extracting actionable insights from data, this paper presents YarnSense, an architecture to automatically generate data stories with the intention of supporting student reflection and learning. YarnSense maps low-level sensor data to the pedagogical intentions of teachers, bringing human instructors into the data analysis loop. We illustrate this approach with a reference implementation of the system and an in-the-wild study in the context of immersive simulation in healthcare.
Keywords
Data Storytelling, multimodal data, data visualisation, sensor data
Joint Proceedings of LAK 2024 Workshops, co-located with the 14th International Conference on Learning Analytics and Knowledge (LAK 2024), Kyoto, Japan, March 18-22, 2024.
gloriamilena.fernandeznieto@monash.edu (G. M. Fernández-Nieto); vanessa.echeverria@monash.edu (V. Echeverria); roberto.martinezmaldonado@monash.edu (R. Martinez-Maldonado); simon.buckinghamshum@uts.edu.au (S. Buckingham Shum)
ORCID: 0000-0002-8163-2303 (G. M. Fernández-Nieto); 0000-0002-2022-9588 (V. Echeverria); 0000-0002-8375-1816 (R. Martinez-Maldonado); 0000-0002-6334-7429 (S. Buckingham Shum)
© 2024 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).
CEUR Workshop Proceedings (CEUR-WS.org), ISSN 1613-0073.
1. Introduction
Learning by doing is essential in sectors like emergency response [1], safety training [2], and healthcare [3], where professionals gain knowledge through practical experiences, including bodily interactions and emotional responses [4]. However, capturing critical events or errors during fast-paced training scenarios is challenging.
Integrating digital technologies and sensing devices in physical learning spaces offers a solution to improve teaching and learning [5]. These technologies, including infrared sensors, video and audio recorders, and wearables, capture multimodal behaviour data in real time,
supporting the understanding of processes such as teamwork [6–8] and communication [9–12],
as well as the impact of emotions on learning [13, 14]. They also assist in analysing teacher-student interactions [15–18]. However, multimodal data can be difficult to interpret when the goal is to open these data up to educational users and ultimately close the feedback loop [19]. Thus,
researchers have started to use InfoVis and visual design principles to unpack and communicate
insights coming from multimodal data to non-expert users, such as teachers and students.
Previous research has explored the use of data storytelling (DS) in learning analytics dash-
boards (LAD) for conveying insights to educational users [20–22]. These studies have shown
promising outcomes, demonstrating that DS elements effectively aid in interpreting complex
data. However, in these studies, the integration of pedagogical intentions with DS elements
has been manually conducted. Researchers typically engage in an inquiry process with edu-
cational stakeholders to identify these pedagogical intentions, which are then mapped to DS
elements in the LADs. While advancements have been notable, the field still lacks integrated,
automated solutions that are tailored to both students and teachers, incorporating educators’
instructional strategies and offering custom data interfaces suited to their specific teaching
skills and requirements [20, 23–31].
To overcome these challenges, we introduce YarnSense, a system architecture that employs
data storytelling, an approach combining data, visuals, and narrative [32, 33], to simplify and
communicate insights from multimodal behaviour data in dynamic and collocated settings.
YarnSense includes a context modeller for educators to guide analysis, an automated sensor data
capture, a multimodal modeller to translate sensor data into meaningful constructs, and a data
storytelling generator for learner-facing interfaces. We demonstrate its application through
a reference implementation in a clinical nursing healthcare setting with 254 students and six
teachers, showing how YarnSense helps define and interpret the pedagogical intentions.
2. Background and Related Work
2.1. Automated multimodal sensor-data visual interfaces to support learning
The integration of digital technologies into learning spaces has led to the use of various sensing
devices such as infrared sensors and physiological wearables to capture multimodal behaviour
data in educational settings [5, 34–36]. These data help to study key learning processes such as
effective teamwork [6–8] and communication [9–12]. However, creating effective user interfaces
for non-data experts remains a challenge. Current implementations, such as EduSense [16]
and Sensai [37], have been used to provide feedback in educational contexts, but often lack the
ability to simplify complex multimodal data for end users, such as students and teachers. Recent
efforts have aimed to address these challenges by developing tools that can elucidate complex
team dynamics by collecting audio and user interactions in online settings (BLINC [38])
and narrative visualisations for MOOCs [39]. Despite these advances, the need to automatically
generate user-friendly interfaces that can translate sensor data into meaningful insights for
educational purposes remains unmet.
2.2. Data storytelling foundations and approaches in education
Data Storytelling (DS) has emerged as an effective technique for communicating complex data
insights through a combination of data, visuals, and narrative [32, 33, 40–42]. DS transforms
data into intuitive visualisations and narratives, making it easier for non-experts to grasp
complex information. These works identify key principles of effective data storytelling: (1) focus on purposeful communication, (2) drive audience attention through
meaningful visual elements, (3) select appropriate visuals for different purposes, (4) adhere to
the basic principles of information visualisation design, such as removing unnecessary elements
and using captions, space, shape, and colour wisely, and (5) incorporate narrative structures,
as in narrative visualisation or visual narratives [20, 43]. In education, the application of the
principles of DS has shown promise in enhancing multimodal data visualisation. For instance,
Martinez-Maldonado et al. [21] used a layered storytelling approach to categorise multimodal
data into meaningful information structures.
However, most existing DS applications in education, including those by Martinez-Maldonado
et al. [21], Echeverria et al. [44], Fernández-Nieto et al. [45], are not fully automated and have
been tested primarily in high-fidelity prototypes or controlled settings. Although previous
research has investigated how to automatically generate visual outcomes using multimodal
data, there remains a gap in the generation of meaningful automated interfaces guided by
teacher’s pedagogical intentions and data storytelling principles to facilitate the communication
of complex multimodal data. This paper presents an architecture and its implementation to
automatically generate multimodal data storytelling interfaces to support students’ reflection
in a nursing simulation setting.
2.3. Architectures for Multimodal Learning Analytics (MMLA)
Recent efforts in automating Multimodal Learning Analytics (MMLA) interfaces have been
reviewed by Shankar et al. [46], focusing on nine different architectures through the Data Value
Chain (DVC) framework. This framework includes data discovery, integration, and exploitation.
In data discovery, all architectures leveraged multiple data sources, such as physiological data and
posture data. Most included data preparation steps such as pretransformation and organising
data relevant to the learning context. For data integration, over half of the architectures
incorporated mechanisms to merge data from specific modalities, with databases being a
popular choice. The literature review highlights that, in terms of data exploitation, almost all
architectures carried out analysis activities, including statistical analysis or machine learning,
and most produced visualisations like dashboards. The review also noted that three architectures
provided decision-making support, specifically targeting teachers and students. However, two
main challenges were identified: the lack of learning alignment or connections to the learning
context in MMLA architectures, and the complexity of MMLA data and its visualisation posing
challenges for stakeholders’ data literacy.
A more recent architecture by Noël et al. [47] focuses on audio and video data, using hardware
such as Raspberry Pi 4 for data collection and a server for storage and visualisation. Despite
offering five visualisations for educators to assess collaborative activities, further improvements
in design and evaluation are needed for effective use by stakeholders. This highlights the ongoing
need for MMLA architectures that provide contextualised, meaningful interfaces to support
teachers and students, underlining the importance of developing accessible and explanatory
data stories within these systems.
3. YarnSense: Automated Educational Data Storytelling Architecture
YarnSense is a multi-tiered architecture that automatically distills insights from multimodal behavioural data gathered via sensors worn by students and from human observations, translating
these into data stories that reflect teachers’ educational goals. This system architecture helps
students reflect on their learning activities. It comprises four main tiers, as shown in Figure 1:
Figure 1: YarnSense: Multi-tier architecture for automated Data Storytelling
3.1. User Interfaces: The Context Modeller
This tier provides user interfaces for experts (e.g., teachers or researchers) to input what is
known of the learning activity and the teacher’s pedagogical intentions. Knowledge of the
learning activity corresponds to the specifics of a learning activity (e.g., nursing simulations)
based on the learning design. Key features to be identified from the learning design and captured in these interfaces include:
i) Actions of interest expected during the activity, such as critical moments or milestones (e.g., a patient adverse reaction).
ii) Information on physical resources in the learning space, including the positions of manikins, trolleys, or sensors (e.g., the number of beds in a simulation ward).
iii) Meta-information, such as the roles of team members and the devices to be worn during the activity (e.g., an auxiliary nurse wearing a microphone).
In addition, this tier allows teachers to input the pedagogical intentions of the learning activity into the system. It translates the teacher's assessment criteria into rules that the system uses to interrogate the multimodal data and create stories.
Considerations for implementation. Web-based platforms are a suitable choice for
implementing this tier. Technologies such as HTML, CSS, and JavaScript, in conjunction
with frameworks like React or Angular, can be used to develop intuitive and responsive user
interfaces.
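As a sketch of what the Context Modeller might capture, the structures below encode one of the assessment rules from the nursing scenario described later in Section 4.1. The schema and all field names (`PedagogicalRule`, `max_delay_s`, etc.) are our own illustrative assumptions, not YarnSense's actual data model:

```python
from dataclasses import dataclass, field

@dataclass
class PedagogicalRule:
    """One assessment rule a teacher defines in the Context Modeller."""
    rule_id: str
    action: str       # action of interest, e.g. "administer_oxygen"
    trigger: str      # preceding event the action should follow
    max_delay_s: int  # how quickly the action should follow the trigger
    feedback: str     # teacher-authored narrative reused in the data story

@dataclass
class LearningContext:
    """Knowledge of the learning activity captured alongside the rules."""
    activity: str
    physical_resources: dict = field(default_factory=dict)
    roles: list = field(default_factory=list)
    rules: list = field(default_factory=list)

# One of the five expected actions from the deteriorating-patient scenario
context = LearningContext(
    activity="deteriorating-patient simulation",
    physical_resources={"beds": 4},
    roles=["Graduate Nurse", "Graduate Nurse", "Ward Nurse", "Ward Nurse"],
    rules=[PedagogicalRule(
        rule_id="R1",
        action="administer_oxygen",
        trigger="respiratory_depression",
        max_delay_s=120,
        feedback="Oxygen should be administered promptly after respiratory depression.",
    )],
)
```

A web form in the Context Modeller would populate such structures, which the Multimodal Modelling tier can then interrogate.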
3.2. Multimodal Sensor Data Capture
This component focuses on collecting data from both machine sensing (e.g., wearable sensors)
and human sensing (e.g., observations). For wearable sensors, in particular, this tier considers
multiple sensor data captured independently and in a loosely coupled manner. Key features for
collecting data from sensors include: i) a recommended design pattern of ‘pipe and filters’ for
independent data collection and processing. Pipe and filter patterns are commonly used in signal
processing and remote sensing applications [48]. In this pattern, filters are designed independently, typically as well-defined services or functions, and pipes are conduits of information. ii) Building on this pattern, the tier captures sensor data in parallel, with automated start/stop functions that support synchronisation and scalability. Each data modality is cleaned and stored in its most convenient format per sensor (e.g., JSON, CSV, MP4), with flexibility for real-time processing or batch collection depending on the needs of the context.
For human sensing data, this tier provides a user interface for users to log information into the system. Web and mobile applications make it increasingly accessible for users to capture additional observations during their learning activities. The data provided by users are used to label actions that would otherwise be hard to detect with sensing technology.
Considerations for implementation. For machine sensing, wearable sensor technologies
are ideal for implementing this tier. Devices like smartwatches, fitness trackers, or custom
wearables, equipped with sensors for physiological data, indoor positioning, and audio capture,
can be used. A comprehensive list of sensors used in educational data capture is detailed in the
literature review by Chango et al. [49]. To handle parallel data processing, multithreading or multiprocessing capabilities in programming languages such as Python, Java, or C++ can be employed. Additionally, distributed messaging frameworks like Apache Kafka or RabbitMQ can be used for efficient data stream management.
For human sensing, mobile applications developed for platforms like Android or iOS would
enable users to conveniently log data during their learning activities.
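The pipe-and-filter pattern recommended above can be sketched with Python generators, where each filter processes one modality independently and the pipe simply chains them. The filter names and the centimetre-to-metre conversion are hypothetical examples, not part of the YarnSense implementation:

```python
def clean(records):
    """Filter: drop malformed positioning samples."""
    for r in records:
        if r.get("x") is not None and r.get("y") is not None:
            yield r

def to_metres(records):
    """Filter: convert sensor units (here assumed centimetres) to metres."""
    for r in records:
        yield {**r, "x": r["x"] / 100.0, "y": r["y"] / 100.0}

def pipeline(source, *filters):
    """Pipe: chain filters so each stage stays independent and replaceable."""
    stream = source
    for f in filters:
        stream = f(stream)
    return stream

raw = [{"x": 250, "y": 100}, {"x": None, "y": 40}, {"x": 80, "y": 60}]
cleaned = list(pipeline(iter(raw), clean, to_metres))  # two valid samples survive
```

Because filters only consume and produce streams, the same skeleton works whether the source is a batch file or a live Kafka consumer.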
3.3. Multimodal Modelling
Considering the quantitative ethnography (QE) approach [50] and the multimodal matrix (MM)
concept [51], this tier enhances low-level data with contextual insights from the Context Modeller.
It transforms sensor data into meaningful constructs by coding multimodal observations into
a data structure for analysis against the assessment criteria. These constructs are crucial for
analysing multiple data modalities. For example, in physiological data, arousal peaks (indicative
of changes in skin conductance levels) are interpreted as stress level indicators [45]. Additionally,
for indoor positioning data, the theory of proxemics helps to identify interactional spaces and
social formations during learning activities [52]. This theory is also applied to model the
combination of modalities, such as positioning data and audio, to detect co-located speech
events [53].
To do that, this tier implements custom software scripts to filter, combine, aggregate, or
summarise the multimodal matrices according to the teacher’s pedagogical intentions previously
defined. As a result of this analysis, a Learner Model is generated: a structured representation of student performance, misconceptions, or difficulties that records whether the team achieved the pedagogical intentions defined by the teachers.
Considerations for implementation. To implement this tier, data analysis software like R
or Python, equipped with libraries such as Pandas, NumPy, and SciPy, can be used for processing
and analysing multimodal data based on specific constructs. Additionally, data visualisation
libraries like Matplotlib, Seaborn, or D3.js are useful for visualising data during the analysis
phase, which assists in refining the Learner Model.
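As a rough illustration of how the theory of proxemics might be operationalised over cleaned positioning data, the sketch below flags timestamps at which two tracked people fall within a personal-space threshold. The threshold value and the data layout are our assumptions, not taken from the paper:

```python
from math import hypot

# Illustrative proxemics threshold in metres; the paper does not fix a value.
PERSONAL_SPACE_M = 1.2

def colocated_events(track_a, track_b, threshold=PERSONAL_SPACE_M):
    """Return timestamps at which two tracked people were within `threshold`.

    track_a and track_b are lists of (t, x, y) samples aligned on the same
    timestamps, as a cleaned indoor-positioning stream might provide.
    """
    events = []
    for (t, xa, ya), (_, xb, yb) in zip(track_a, track_b):
        if hypot(xa - xb, ya - yb) <= threshold:  # Euclidean distance check
            events.append(t)
    return events

# Two nurses sampled at t = 0, 1, 2 (positions in metres)
nurse_a = [(0, 0.0, 0.0), (1, 0.0, 0.0), (2, 5.0, 5.0)]
nurse_b = [(0, 0.5, 0.0), (1, 3.0, 0.0), (2, 5.0, 5.5)]
events = colocated_events(nurse_a, nurse_b)  # close at t=0 and t=2
```

Such events, coded into the multimodal matrix, could then be checked against the teacher's rules or combined with audio to detect co-located speech.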
3.4. Data Storytelling Generator
The final tier uses the Learner Model and the teacher's pedagogical intentions to communicate insights through data, visualisations, and narratives. The key features of this tier include: i) enhancing the data visualisations with DS principles (Section 2.2), such as highlighting important elements, applying purposeful colour schemes, and removing unnecessary elements to focus on relevant aspects of the Learner Model; and ii) generating visual stories that present individual or team outcomes in an easily interpretable format for students. Narratives are captured from the teacher's pedagogical intentions, where teachers can incorporate textual feedback via the user interface. Data from the Learner Model are visualised and combined with these narratives to convey a story for an individual student or a team.
Considerations for implementation. This tier can be implemented by integrating data
visualisation tools such as Tableau, Qlik, or D3.js, which allow visual enhancements through the
DS principles. Alternatively, custom visualisation software can be developed using programming
languages such as JavaScript (with libraries like Chart.js or Three.js) or Python (with libraries like
Matplotlib or Seaborn), customised to meet the specific requirements of the teacher’s pedagogical
intentions. Another option is to use narrative generation tools, such as Natural Language Processing (NLP) libraries in Python or Large Language Models (LLMs), to partly automate the creation of narratives based on the data.
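A minimal sketch of how this tier might merge Learner Model outcomes with teacher-authored feedback into a textual story; the schema (`learner_model`, `rules`) and all names are hypothetical, not YarnSense's actual data structures:

```python
def render_story(learner_model, rules):
    """Assemble a textual data story from Learner Model outcomes.

    learner_model: dict mapping rule_id -> bool (criterion met or not).
    rules: dict mapping rule_id -> {"action": ..., "feedback": ...},
    where "feedback" is the teacher-authored narrative from the
    pedagogical intentions.
    """
    lines = []
    for rule_id, met in learner_model.items():
        rule = rules[rule_id]
        status = "achieved" if met else "missed"
        line = f"Your team {status}: {rule['action']}."
        if not met:
            # Attach the teacher's narrative where reflection is needed
            line += " " + rule["feedback"]
        lines.append(line)
    return "\n".join(lines)

rules = {
    "R1": {"action": "administer oxygen after respiratory depression",
           "feedback": "Review the escalation protocol for hypoxic patients."},
    "R2": {"action": "activate the MET call after deterioration",
           "feedback": "MET activation should not be delayed."},
}
story = render_story({"R1": True, "R2": False}, rules)
```

In a full implementation, each line of the story would be paired with the corresponding highlighted visualisation rather than rendered as plain text.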
Each tier of YarnSense plays a crucial role in transforming complex multimodal data into
insightful and accessible data stories, supporting reflective learning in educational settings.
4. Reference Implementation
Having introduced our architecture in general terms above, we now turn to an illustrative
example of how the whole architecture can be implemented in a specific learning activity. This
architecture was implemented in an authentic clinical setting in nursing healthcare and was
reported in Fernández-Nieto et al. [54]. Data stories in this clinical context were created using a
completely automated process.
4.1. Learning context and data collection
The clinical scenario provides an opportunity for students to practice teamwork, communication,
and prioritisation skills in the setting of a deteriorating patient. The clinical scenario was run in
38 classes by different instructors. A total of 254 students in their third/fourth year volunteered
to participate in the data collection. The goal of the clinical scenario was to provide care
to four patients and prioritise the care of each bed as a team. According to the assessment
criteria established by the subject coordinator, a highly effective team should have performed
the following five actions in the main bed (useful information for the context modeller tier): i)
administer oxygen after patient respiratory depression; ii) assess vital signs every 5 minutes; iii)
cease PCA (patient-controlled analgesia) after patient altered conscious state; iv) activate MET
(Medical Emergency Team) calls after patient deterioration; and v) administer Naloxone timely.
Additionally, students were expected to take care of the other three beds; they had to prioritise care.
4.2. YarnSense: Implementation of Nursing Simulations in the Wild
YarnSense was implemented and deployed in the 38 simulation classes following the learning
activity described above. Details of the implementation in the wild are presented in Table 1.
In our reference implementation, we present two different types of automated data stories.
The first type highlights errors made by students in simulations. From the positioning data
and observations, using the teachers’ pedagogical intentions, we automatically identified three
error categories, as described in Fernández-Nieto et al. [45] and Fernandez-Nieto et al. [52]:
i) Sequence Errors: Occur when a team performs a critical action in the wrong sequence.
ii) Timeliness Errors: Identified when students respond too slowly, executing actions later
than recommended by healthcare guidelines.
iii) Frequency Errors: Detected by calculating the time difference between two key logged
actions that should be performed repeatedly.
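Assuming a timestamped action log like the one human sensing provides, the three error categories could be detected along these lines. The action names, thresholds, and function signature are illustrative assumptions, not the authors' implementation:

```python
def detect_errors(log, expected_order, max_delay_s, repeated_action, max_gap_s):
    """Flag the three error categories from a timestamped action log.

    log: list of (t_seconds, action) tuples in order of occurrence.
    """
    errors = set()

    # i) Sequence errors: critical actions performed out of the expected order
    observed = [a for _, a in log if a in expected_order]
    if observed != sorted(observed, key=expected_order.index):
        errors.add("sequence")

    # ii) Timeliness errors: a critical action executed later than the guideline
    for action, deadline in max_delay_s.items():
        t = next((t for t, a in log if a == action), None)
        if t is not None and t > deadline:
            errors.add("timeliness")

    # iii) Frequency errors: the gap between repeats exceeds the guideline
    times = [t for t, a in log if a == repeated_action]
    if any(b - a > max_gap_s for a, b in zip(times, times[1:])):
        errors.add("frequency")

    return errors

# Hypothetical log from one simulated team (times in seconds): PCA ceased
# before oxygen, oxygen given late, and vital signs reassessed too slowly.
log = [(60, "cease_pca"), (200, "administer_oxygen"),
       (300, "assess_vitals"), (700, "assess_vitals")]
errors = detect_errors(
    log,
    expected_order=["administer_oxygen", "cease_pca"],
    max_delay_s={"administer_oxygen": 120},
    repeated_action="assess_vitals",
    max_gap_s=300,  # "every 5 minutes" guideline from the assessment criteria
)
```

Each flagged category would then be paired with the teacher's feedback to render an error-focused data story.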
The second type of data stories, called positioning graphs, focuses on the physical interactions
of nurses. These stories provide insights into how much time nurses spend on patients’ bedsides
and in close proximity to other nurses during the simulation. These data help to understand
spatial dynamics and collaboration patterns within the nursing team.
Table 1
YarnSense implementation in an in-the-wild nursing simulation

Tier: Users
- Implementation details: Researchers, 6 teachers, 254 students.
- Technology used: N/A.

Tier: Context Modeller
- Implementation details: Pedagogical intentions: definition of five rules according to the five actions expected from students during the learning activity (Section 4.1). Knowledge of the learning activity: roles (2 Graduate Nurses and 2 Ward Graduate Nurses); physical resources (4 beds).
- Technology used: Web-based platform using the Express Node.js framework, hosted on an Amazon Elastic Compute Cloud (Amazon EC2) instance.

Tier: Multimodal sensor data capture
- Implementation details: Machine sensing: indoor positioning data¹, physiological data (Empatica E4), audio, and video. Human sensing: actions performed by students (action log).
- Technology used: Apache Kafka for parallel data collection and custom scripts for data processing.

Tier: Multimodal Modelling
- Implementation details: QE modelling: the theory of proxemics was used to identify interpersonal spaces between nurses. Only indoor positioning data and log data were used for modelling.
- Technology used: Custom scripts in Python to create the multimodal matrix; Matplotlib to visualise bar graphs; python-igraph to visualise graphs; vis-timeline to visualise a timeline².
Figure 2: Types of automated data stories: a. Nurses' proximity to beds; b. Nursing team working in close proximity; c. Errors committed by students during the simulation
5. Discussion, Future Work and Limitations
5.1. Assessing the Efficacy of Architectural Decisions
Our architecture emphasises the involvement of educational users, including teachers, re-
searchers, and students. Accordingly, our Context Modeller tier enables users to input rules
and contextual information, guiding data analysis and the automated generation of data sto-
ries. These features align with user experience (UX) best practices, which advocate for user
input and the ability of users to exert control in human-computer interactions [55]. With this
architecture, teachers can incorporate feedback and adapt the visual representations according
to their learning designs, an approach that is an integral part of the current research agenda
in Learning Analytics (LA) as discussed in Ez-Zaouia [56]. Additionally, our approach fosters
opportunities for user-AI collaboration [57], allowing teachers to stay engaged by modifying
rules for the Multimodal Modelling tier to analyse and generate outcomes tailored to teachers' and students' needs. This approach empowers teachers and students with agency.
The architectural decision to use parallel data collection and processing gives researchers the flexibility to choose the technology that suits a given learning context and its needs. The integration of various data modalities presents unique challenges and opportunities
for deeper insight into team and individual dynamics in physical training settings. Employing
mature parallel processing frameworks, such as Apache Kafka and MapReduce, helps mitigate
the complexity associated with handling multiple data sources [58].
Finally, incorporating Data Storytelling (DS) principles to aid user interpretation is consis-
tent with the existing literature, highlighting the need for data representations that are both
comprehensive and interpretable in educational contexts [59–61].
5.2. Reflections on the Reference Implementation Process
Our reference implementation highlights the importance and complexity of effectively visu-
alising multimodal data to provide evidence for professional development and training. The
integration of a tool to generate data stories as evidence of what students achieve in their
learning activity plays a pivotal role in enhancing the learning experience and ensuring the
effectiveness of training programs. In our approach, data storytelling transcends traditional
data presentation methods by weaving complex data into coherent narratives that align with
the teacher’s pedagogical intentions. Automated data stories not only aid in the comprehension
of intricate concepts, but also foster an immersive and intuitive learning experience that sup-
ports deeper reflections on the learning activity. Such integration is particularly invaluable in
professional training, where the assimilation of practical and theoretical knowledge is critical.
We observed that while real-time data processing is not always necessary in educational
settings, near-real-time solutions can greatly benefit both teachers and students. For example,
in nursing simulations, clinical debriefs typically follow team simulations, prompting students
to reflect on their performance. These debrief sessions have been proven to be effective in
helping students identify misconceptions and errors during simulations [62]. Therefore, our
architecture aims to provide timely feedback through data stories, facilitating post-activity
discussions and reflections, and thus enhancing the overall learning experience.
5.3. Learning from the Pilot: Insights and Future Enhancements
The Multimodal Modelling tier necessitates an initial exploration of learning theories, such as
the theory of proxemics, to understand how data modalities can be effectively used for learning.
Our reference implementation drew upon prior research that explored quantitative ethnography
(QE) modelling of positioning data and human observations [52, 63]. Consequently, future
studies should aim to refine QE analysis and data visualisation for additional data modalities
like audio and video, thereby optimising their usefulness and accessibility for educators and
learners. Furthermore, more comprehensive evaluations are needed to identify the most effective
visualisations that can cohesively represent diverse data sets, enabling a thorough understanding
of learning activities, particularly in the context of data fusion [34].
While human-centred design can help address these challenges [64], there lies a promising
opportunity in employing Large Language Models (LLM) to generate explanations for complex
visualisations, thereby aiding user interpretation. The LLM image-to-text functionality, specifi-
cally in its role for data storytelling in education, is an avenue worth exploring. Moreover, the
potential of LLMs to assist in creating narratives that render visualisations more self-explanatory
deserves thorough investigation. Future research should also further explore the role of user-AI
collaboration.
One of the main limitations of our approach is the automation of certain data modalities,
particularly physiological data. There is a need for further development to fully automate
this process, ensuring seamless integration and analysis of all data types. The architecture,
originally used for Nursing Simulations, requires careful adaptation for different contexts.
Successful integration into Learning Design calls for collaboration with educators to align it
with assessment intentions and promote reflective thinking with added narratives.
6. Conclusion
In conclusion, our work makes a significant contribution to the field of educational data analysis
by detailing an architecture that automates the generation of data stories in real-world environ-
ments. Our reference implementation, conducted in a large-scale, in-the-wild setting, not only
demonstrates practical application but also acknowledges the challenges and limitations that
future research must address to further refine architectures supporting physical activities and
the provision of data storytelling.
References
[1] A. Petrosoniak, R. Almeida, L. D. Pozzobon, C. Hicks, M. Fan, K. White, M. McGowan,
P. Trbovich, Tracking workflow during high-stakes resuscitation: the application of a
novel clinician movement tracing tool during in situ trauma simulation, BMJ Simulation
and Technology Enhanced Learning 5 (2019) 78–84. doi:10.1136/bmjstel- 2017- 000300 .
[2] M. Viktorelius, C. Sellberg, The lived body and embodied instructional practices in
maritime basic safety training, Vocations and Learning 15 (2022) 87–109. doi:10.1007/
s12186- 021- 09279- z .
[3] A. B. Cooper, E. J. Tisdell, Embodied aspects of learning to be a surgeon, Medical teacher
42 (2020) 515–522. doi:10.1080/0142159X.2019.1708289 .
[4] M. Kelly, R. Ellaway, A. Scherpbier, N. King, T. Dornan, Body pedagogics: embodied
learning for the health professions, Medical education 53 (2019) 967–977. doi:10.1111/
medu.13916 .
[5] L. Eyal, E. Gil, Hybrid Learning Spaces — A Three-Fold Evolving Perspective, Springer
International Publishing, Cham, 2022, pp. 11–23. doi:10.1007/978- 3- 030- 88520- 5_2 .
[6] M. A. Rosen, A. S. Dietz, T. Yang, C. E. Priebe, P. J. Pronovost, An integrative framework for
sensor-based measurement of teamwork in healthcare, Journal of the American Medical
Informatics Association 22 (2014) 11–18. doi:10.1136/amiajnl- 2013- 002606 .
[7] G. Dafoulas, C. Cardoso Maia, A. Ali, J. Augusto, Collecting sensor-generated data for as-
sessing teamwork and individual contributions in computing student teams, EDULEARN18
Proceedings 1 (2018) 11156–11162. doi:10.21125/EDULEARN.2018.2759 .
[8] J. S. Cha, D. Athanasiadis, N. E. Anton, D. Stefanidis, D. Yu, Measurement of Nontech-
nical Skills During Robotic-Assisted Surgery Using Sensor-Based Communication and
Proximity Metrics, JAMA Network Open 4 (2021) e2132209–e2132209. doi:10.1001/
JAMANETWORKOPEN.2021.32209 .
[9] D. O. Olguín, P. A. Gloor, A. Pentland, Wearable sensors for pervasive healthcare man-
agement, in: 2009 3rd International Conference on Pervasive Computing Technolo-
gies for Healthcare - Pervasive Health 2009, PCTHealth 2009, 2009. doi:10.4108/ICST.
PERVASIVEHEALTH2009.6033 .
[10] I. Winder, D. Delaporte, S. Wanaka, K. Hiekata, Sensing teamwork during multi-objective
optimization, in: 2020 IEEE 6th World Forum on Internet of Things (WF-IoT), 2020, pp.
1–6. doi:10.1109/WF- IoT48130.2020.9221086 .
[11] E. Watanabe, T. Ozeki, T. Kohama, Analysis of interactions between lecturers and students,
in: Proceedings of the 8th International Conference on Learning Analytics and Knowledge,
ACM, New York, NY, USA, 2018, pp. 370–374. doi:10.1145/3170358.3170360 .
[12] E. Chng, M. R. Seyam, W. Yao, B. Schneider, Toward capturing divergent collaboration
in makerspaces using motion sensors, Information and Learning Sciences (2022). doi:10.1108/ILS-08-2020-0182.
[13] S. K. D’Mello, N. Bosch, H. Chen, Multimodal-multisensor affect detection, in: The
Handbook of Multimodal-Multisensor Interfaces: Signal Processing, Architectures, and
Detection of Emotion and Cognition-Volume 2, 2018, pp. 167–202. doi:10.1145/3107990.3107998.
[14] I. Villanueva, B. D. Campbell, A. C. Raikes, S. H. Jones, L. G. Putney, A multimodal explo-
ration of engineering students’ emotions and electrodermal activity in design activities,
Journal of Engineering Education 107 (2018) 414–441. doi:10.1002/jee.20225.
[15] M. Raca, L. Kidzinski, P. Dillenbourg, Translating Head Motion into Attention - Towards
Processing of Student’s Body-Language, EDM (2015). URL: https://api.semanticscholar.org/CorpusID:15798760.
[16] K. Ahuja, D. Kim, F. Xhakaj, V. Varga, A. Xie, S. Zhang, J. E. Townsend, C. Harrison,
A. Ogan, Y. Agarwal, Edusense: Practical classroom sensing at scale, Proc. ACM Interact.
Mob. Wearable Ubiquitous Technol. 3 (2019). doi:10.1145/3351229.
[17] N. Saquib, A. Bose, D. George, S. Kamvar, Sensei: sensing educational interaction, Proceed-
ings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 1 (2018)
1–27. doi:10.1145/3161172.
[18] R. Martinez-Maldonado, V. Echeverria, K. Mangaroska, A. Shibani, G. Fernandez-Nieto,
J. Schulte, S. Buckingham Shum, Moodoo the tracker: Spatial classroom analytics for char-
acterising teachers’ pedagogical approaches, International Journal of Artificial Intelligence
in Education (2021) 1–27. doi:10.1007/s40593-021-00276-w.
[19] M. Worsley, R. Martinez-Maldonado, C. D’Angelo, A new era in multimodal learning
analytics: Twelve core commitments to ground and grow MMLA, Journal of Learning
Analytics 8 (2021) 10–27. doi:10.18608/jla.2021.7361.
[20] V. Echeverria, R. Martinez-Maldonado, S. Buckingham Shum, K. Chiluiza, R. Granda,
C. Conati, Exploratory versus Explanatory Visual Learning Analytics: Driving Teachers’
Attention through Educational Data Storytelling, Journal of Learning Analytics 5 (2018)
72–97. doi:10.18608/jla.2018.53.6.
[21] R. Martinez-Maldonado, V. Echeverria, G. Fernandez-Nieto, S. B. Shum, From Data to
Insights: A Layered Storytelling Approach for Multimodal Learning Analytics, in: CHI
’20, 2020, p. 15. doi:10.1145/3313831.3376148.
[22] G. M. Fernandez-Nieto, V. Echeverria, S. Buckingham Shum, K. Mangaroska, K. Kitto,
E. Palominos, C. Axisa, R. Martinez-Maldonado, Storytelling With Learner Data: Guiding
Student Reflection on Multimodal Team Data, IEEE Transactions on Learning Technologies
(2021) 1–14. doi:10.1109/TLT.2021.3131842 .
[23] H. Khosravi, J. Kay, S. Sadiq, S. Buckingham Shum, R. Martinez-Maldonado, S. Knight,
G. Chen, Y.-S. Tsai, C. Conati, D. Gasevic, Explainable artificial intelligence in education,
Computers & Education: Artificial Intelligence 3 (2022) 1–31. doi:10.1016/j.caeai.2022.100074.
[24] A. V. Maltese, J. A. Harsh, D. Svetina, Data visualization literacy: Investigating data
interpretation along the novice—expert continuum, Journal of College Science Teaching
45 (2015) 84–90. URL: https://www.jstor.org/stable/43631889.
[25] J. E. Raffaghelli, B. Stewart, Centering complexity in ‘educators’ data literacy’ to support
future practices in faculty development: a systematic review of the literature, Teaching in
Higher Education 25 (2020) 435–455. doi:10.1080/13562517.2019.1696301.
[26] M. Worsley, Multimodal Learning Analytics’ Past, Present, and Potential Futures (2018) 1–16. URL: http://ceur-ws.org/Vol-2163/paper5.pdf.
[27] S. K. Milligan, P. Griffin, Understanding learning and learning design in MOOCs: A
measurement-based interpretation, Journal of Learning Analytics 3 (2016) 88–115.
doi:10.18608/jla.2016.32.5.
[28] V. Shute, M. Ventura, Stealth assessment: Measuring and supporting learning in video
games, MIT Press, 2013. doi:10.7551/mitpress/9589.001.0001.
[29] S. S. Alhadad, Visualizing data to support judgement, inference, and decision making in
learning analytics: Insights from cognitive psychology and visualization science, Journal
of Learning Analytics 5 (2018) 60–85. doi:10.18608/jla.2018.52.5.
[30] S. Knight, A. Gibson, A. Shibani, Implementing learning analytics for learning impact:
Taking tools to task, The Internet and Higher Education 45 (2020) 100729. doi:10.1016/j.iheduc.2020.100729.
[31] V. Echeverria, R. Martinez-Maldonado, S. Buckingham Shum, Towards Collaboration
Translucence: Giving Meaning to Multimodal Group Data, in: Proceedings of CHI 2019, pp. 39:1–39:16. doi:10.1145/3290605.3300269.
[32] B. Dykes, Data storytelling: What it is and how it can be used to effectively communicate
analysis results, Applied Marketing Analytics (2015). URL: https://hstalks.com/article/619/data-storytelling-what-it-is-and-how-it-can-be-use/.
[33] L. Ryan, The visual imperative: creating a visual culture of data discovery, Morgan Kaufmann, Massachusetts, 2016.
[34] L. Yan, L. Zhao, D. Gasevic, R. Martinez-Maldonado, Scalability, sustainability, and ethicality
of multimodal learning analytics, in: LAK22: 12th International Learning Analytics and
Knowledge Conference, 2022, pp. 13–23. doi:10.1145/3506860.3506862.
[35] K. Sharma, M. Giannakos, Multimodal data capabilities for learning: What can multimodal
data tell us about learning?, British Journal of Educational Technology 51 (2020) 1450–1484.
doi:10.1111/bjet.12993.
[36] B. Schneider, G. Sung, E. Chng, S. Yang, How can high-frequency sensors capture collab-
oration? a review of the empirical links between multimodal metrics and collaborative
constructs, Sensors 21 (2021) 8185. doi:10.3390/s21248185.
[37] K. Kurihara, M. Goto, J. Ogata, Y. Matsusaka, T. Igarashi, Presentation sensei: A presen-
tation training system using speech and image processing, in: Proceedings of the 9th
International Conference on Multimodal Interfaces, ICMI ’07, Association for Computing
Machinery, New York, NY, USA, 2007, pp. 358–365. doi:10.1145/1322192.1322256.
[38] M. Worsley, K. Anderson, N. Melo, J. Jang, Designing Analytics for Collaboration Literacy
and Student Empowerment, Journal of Learning Analytics 8 (2021) 30–48. doi:10.18608/jla.2021.7242.
[39] Q. Chen, Z. Li, T.-C. Pong, H. Qu, Designing narrative slideshows for learning analytics, in:
2019 IEEE Pacific Visualization Symposium (PacificVis), 2019, pp. 237–246. doi:10.1109/PacificVis.2019.00036.
[40] C. N. Knaflic, Storytelling with data: A data visualization guide for business professionals, 12 ed., John Wiley & Sons, New Jersey, 2017.
[41] N. Gershon, W. Page, What storytelling can do for information visualization, Commun.
ACM 44 (2001) 31–37. doi:10.1145/381641.381653.
[42] W. Wojtkowski, W. Wojtkowski, W. G. Wojtkowski, Storytelling: its role in information
visualization, Proceedings of European Systems Science Congress (2002) 1–5. doi:10.1.1.99.4771.
[43] E. Segel, J. Heer, Narrative visualization: Telling stories with data, IEEE Transactions on
Visualization and Computer Graphics 16 (2010) 1139–1148. doi:10.1109/TVCG.2010.179.
[44] V. Echeverria, R. Martinez-Maldonado, R. Granda, K. Chiluiza, C. Conati, S. Buckingham
Shum, Driving data storytelling from learning design, in: Proceedings of the 8th International
Conference on Learning Analytics and Knowledge, Association for Computing Machinery, 2018, pp. 131–140. doi:10.1145/3170358.3170380.
[45] G. M. Fernández-Nieto, S. Buckingham Shum, K. Kitto, R. Martínez-Maldonado, Beyond
the Learning Analytics Dashboard: Alternative Ways to Communicate Student Data
Insights Combining Visualisation, Narrative and Storytelling, in: LAK22: 12th International
Learning Analytics and Knowledge Conference, 2022, pp. 1–16. doi:10.1145/3506860.3506895.
[46] S. K. Shankar, L. P. Prieto, M. J. Rodríguez-Triana, A. Ruiz-Calleja, A review of multimodal
learning analytics architectures, in: 2018 IEEE 18th International Conference on Advanced
Learning Technologies (ICALT), 2018, pp. 212–214. doi:10.1109/ICALT.2018.00057 .
[47] R. Noël, D. Miranda, C. Cechinel, F. Riquelme, T. T. Primo, R. Munoz, Visualizing collabora-
tion in teamwork: A multimodal learning analytics platform for non-verbal communication,
Applied Sciences 12 (2022). doi:10.3390/app12157499 .
[48] A. B. Pillai, Software architecture with Python: design and architect highly scalable, robust,
clean, and high performance applications in Python, Packt Publishing, 2017.
[49] W. Chango, J. A. Lara, R. Cerezo, C. Romero, A review on data fusion in multimodal
learning analytics and educational data mining, Wiley Interdisciplinary Reviews: Data
Mining and Knowledge Discovery 12 (2022) e1458. doi:10.1002/widm.1458.
[50] D. W. Shaffer, Quantitative ethnography, Cathcart Press, 2017.
[51] S. Buckingham Shum, V. Echeverria, R. Martinez-Maldonado, The multimodal matrix as a
quantitative ethnography methodology, in: B. Eagan, M. Misfeldt, A. Siebert-Evenstone
(Eds.), Advances in Quantitative Ethnography, Springer International Publishing, Cham,
2019, pp. 26–40. doi:10.1007/978-3-030-33232-7_3.
[52] G. Fernandez-Nieto, R. Martinez-Maldonado, V. Echeverria, K. Kitto, P. An, S. Buckingham
Shum, What Can Analytics for Teamwork Proxemics Reveal About Positioning Dynamics
In Clinical Simulations?, Proceedings of the ACM on Human-Computer Interaction 5
(2021) 1–24. doi:10.1145/3449284.
[53] L. Zhao, L. Yan, D. Gasevic, S. Dix, H. Jaggard, R. Wotherspoon, R. Alfredo, X. Li,
R. Martinez-Maldonado, Modelling co-located team communication from voice detection
and positioning data in healthcare simulation, in: LAK22: 12th International Learn-
ing Analytics and Knowledge Conference, LAK22, Association for Computing Machin-
ery, New York, NY, USA, 2022, pp. 370–380. doi:10.1145/3506860.3506935.
[54] G. M. Fernández-Nieto, R. Martinez-Maldonado, V. Echeverria, K. Kitto, D. Gašević, S. B.
Shum, Data storytelling editor: A teacher-centred tool for customising learning analytics
dashboard narratives, in: LAK24: 14th Learning Analytics and Knowledge Conference,
ACM, 2024. doi:10.1145/3636555.3636930.
[55] J. Bergström, J. Knibbe, H. Pohl, K. Hornbæk, Sense of agency and user experience: Is there a
link?, ACM Trans. Comput.-Hum. Interact. 29 (2022). doi:10.1145/3490493.
[56] M. Ez-Zaouia, Teacher-Centered Dashboards Design Process, in: LAK20, Frankfurt,
Germany, 2020. URL: https://hal.science/hal-02516815.
[57] A. F. Wise, S. Knight, S. B. Shum, Collaborative Learning Analytics, in: U. Cress,
C. Rosé, A. F. Wise, J. Oshima (Eds.), International Handbook of Computer-Supported
Collaborative Learning, Springer International Publishing, Cham, 2021, pp. 425–443.
doi:10.1007/978-3-030-65291-3_23.
[58] L. Yao, Z. Ge, Distributed parallel deep learning of hierarchical extreme learning machine
for multimode quality prediction with big process data, Engineering Applications of
Artificial Intelligence 81 (2019) 450–465. doi:10.1016/j.engappai.2019.03.011.
[59] S. Pozdniakov, R. Martinez-Maldonado, Y.-S. Tsai, V. Echeverria, N. Srivastava, D. Gasevic,
How do teachers use dashboards enhanced with data storytelling elements according to
their data visualisation literacy skills?, in: LAK23: 13th International Learning Analytics
and Knowledge Conference, LAK2023, Association for Computing Machinery, New York,
NY, USA, 2023, pp. 89–99. doi:10.1145/3576050.3576063.
[60] D. Hooshyar, K. Tammets, T. Ley, K. Aus, K. Kollom, Learning analytics in supporting
student agency: A systematic review, Sustainability 15 (2023). doi:10.3390/su151813662.
[61] E. Villalobos, I. Hilliger, M. Pérez-Sanagustín, C. González, S. Celis, J. Broisin, Analyzing
learners’ perception of indicators in student-facing analytics: A card sorting approach, in:
O. Viberg, I. Jivet, P. J. Muñoz-Merino, M. Perifanou, T. Papathoma (Eds.), Responsive and
Sustainable Educational Futures, Springer Nature Switzerland, Cham, 2023, pp. 430–445.
doi:10.1007/978-3-031-42682-7_29.
[62] E. Palominos, T. Levett-Jones, T. Power, R. Martinez-Maldonado, ‘We learn from our
mistakes’: Nursing students’ perceptions of a productive failure simulation, Collegian 29
(2022) 708–712. doi:10.1016/j.colegn.2022.02.006.
[63] V. Echeverria, R. Martinez-Maldonado, L. Yan, L. Zhao, G. Fernandez-Nieto, D. Gašević,
S. B. Shum, HuCETA: A framework for human-centered embodied teamwork analytics,
IEEE Pervasive Computing 22 (2023) 39–49. doi:10.1109/MPRV.2022.3217454.
[64] R. Alfredo, V. Echeverria, Y. Jin, Z. Swiecki, D. Gašević, R. Martinez-Maldonado, SLADE:
A method for designing human-centred learning analytics systems, in: LAK24: 14th
International Learning Analytics and Knowledge Conference, 2024, p. 16. doi:10.1145/3636555.3636847.