<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">YarnSense: Automated Data Storytelling for Multimodal Learning Analytics</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Gloria</forename><forename type="middle">Milena</forename><surname>Fernández-Nieto</surname></persName>
							<email>gloriamilena.fernandeznieto@monash.edu</email>
							<affiliation key="aff0">
								<orgName type="institution">Monash University</orgName>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Vanessa</forename><surname>Echeverria</surname></persName>
							<email>vanessa.echeverria@monash.edu</email>
							<affiliation key="aff0">
								<orgName type="institution">Monash University</orgName>
							</affiliation>
							<affiliation key="aff2">
								<orgName type="institution">Escuela Superior Politecnica del Litoral</orgName>
								<address>
									<settlement>Guayaquil</settlement>
									<country key="EC">Ecuador</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Roberto</forename><surname>Martinez-Maldonado</surname></persName>
							<email>roberto.martinezmaldonado@monash.edu</email>
							<affiliation key="aff0">
								<orgName type="institution">Monash University</orgName>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Simon</forename><forename type="middle">Buckingham</forename><surname>Shum</surname></persName>
							<affiliation key="aff1">
								<orgName type="institution">University of Technology Sydney</orgName>
							</affiliation>
						</author>
						<title level="a" type="main">YarnSense: Automated Data Storytelling for Multimodal Learning Analytics</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">E21D6BC09FD33434313509B6D2783CFA</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2025-04-23T18:53+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Data Storytelling</term>
					<term>multimodal data</term>
					<term>data visualisation</term>
					<term>sensor data</term>
					<term>ORCID 0000-0002-8163-2303 (G. M. Fernández-Nieto)</term>
					<term>ORCID 0000-0002-2022-9588 (V. Echeverria)</term>
					<term>ORCID 0000-0002-8375-1816 (R. Martinez-Maldonado)</term>
					<term>ORCID 0000-0002-6334-7429 (S. Buckingham Shum)</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Professional development and training often require students to reflect on their performance, especially recalling the mistakes they have made in safe training environments, but these can occur in rapidly evolving and busy environments where key actions are often missed. Promisingly, rapid improvements in wearable sensing technologies are opening up new opportunities to capture large amounts of multimodal behaviour data that can serve as evidence to support student reflection about their performance. However, while some preliminary research has highlighted the potential of analysing such data to identify interesting patterns, less work has focused on the problem of automatically communicating meaningful and contextualised data and insights to end-users. Based on the notion of data storytelling as a means of extracting actionable insights from data, this paper presents YarnSense, an architecture to automatically generate data stories with the intention of supporting student reflection and learning. YarnSense maps low-level sensor data to the pedagogical intentions of teachers, bringing human instructors into the data analysis loop. We illustrate this approach with a reference implementation of the system and an in-the-wild study in the context of immersive simulation in healthcare.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>Learning by doing is essential in sectors like emergency response <ref type="bibr" target="#b0">[1]</ref>, safety training <ref type="bibr" target="#b1">[2]</ref>, and healthcare <ref type="bibr" target="#b2">[3]</ref>, where professionals gain knowledge through practical experiences, including bodily interactions and emotional responses <ref type="bibr" target="#b3">[4]</ref>. However, capturing critical events or errors during fast-paced training scenarios is challenging.</p><p>Integrating digital technologies and sensing devices into physical learning spaces offers one way to improve teaching and learning <ref type="bibr" target="#b4">[5]</ref>. These technologies, including infrared sensors, video and audio recorders, and wearables, capture multimodal behaviour data in real time, supporting the understanding of processes such as teamwork <ref type="bibr" target="#b5">[6]</ref><ref type="bibr" target="#b6">[7]</ref><ref type="bibr" target="#b7">[8]</ref> and communication <ref type="bibr" target="#b8">[9]</ref><ref type="bibr" target="#b9">[10]</ref><ref type="bibr" target="#b10">[11]</ref><ref type="bibr" target="#b11">[12]</ref>, as well as the impact of emotions on learning <ref type="bibr" target="#b12">[13,</ref><ref type="bibr" target="#b13">14]</ref>. They also assist in analysing teacher-student interactions <ref type="bibr" target="#b14">[15]</ref><ref type="bibr" target="#b15">[16]</ref><ref type="bibr" target="#b16">[17]</ref><ref type="bibr" target="#b17">[18]</ref>. However, multimodal data can be difficult to interpret when the aim is to open these data up to educational users and ultimately close the feedback loop <ref type="bibr" target="#b18">[19]</ref>. 
Thus, researchers have started to use InfoVis and visual design principles to unpack and communicate insights coming from multimodal data to non-expert users, such as teachers and students.</p><p>Previous research has explored the use of data storytelling (DS) in learning analytics dashboards (LAD) for conveying insights to educational users <ref type="bibr" target="#b19">[20]</ref><ref type="bibr" target="#b20">[21]</ref><ref type="bibr" target="#b21">[22]</ref>. These studies have shown promising outcomes, demonstrating that DS elements effectively aid in interpreting complex data. However, in these studies, the integration of pedagogical intentions with DS elements has been manually conducted. Researchers typically engage in an inquiry process with educational stakeholders to identify these pedagogical intentions, which are then mapped to DS elements in the LADs. While advancements have been notable, the field still lacks integrated, automated solutions that are tailored to both students and teachers, incorporating educators' instructional strategies and offering custom data interfaces suited to their specific teaching skills and requirements <ref type="bibr" target="#b19">[20,</ref><ref type="bibr" target="#b22">[23]</ref><ref type="bibr" target="#b23">[24]</ref><ref type="bibr" target="#b24">[25]</ref><ref type="bibr" target="#b25">[26]</ref><ref type="bibr" target="#b26">[27]</ref><ref type="bibr" target="#b27">[28]</ref><ref type="bibr" target="#b28">[29]</ref><ref type="bibr" target="#b29">[30]</ref><ref type="bibr" target="#b30">[31]</ref>.</p><p>To overcome these challenges, we introduce YarnSense, a system architecture that employs data storytelling, an approach combining data, visuals, and narrative <ref type="bibr" target="#b31">[32,</ref><ref type="bibr" target="#b32">33]</ref>, to simplify and communicate insights from multimodal behaviour data in dynamic and collocated settings. 
YarnSense includes a context modeller for educators to guide analysis, an automated sensor data capture, a multimodal modeller to translate sensor data into meaningful constructs, and a data storytelling generator for learner-facing interfaces. We demonstrate its application through a reference implementation in a clinical nursing healthcare setting with 254 students and six teachers, showing how YarnSense helps define and interpret the pedagogical intentions.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Background and Related Work</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.1.">Automated multimodal sensor-data visual interfaces to support learning</head><p>The integration of digital technologies into learning spaces has led to the use of various sensing devices such as infrared sensors and physiological wearables to capture multimodal behaviour data in educational settings <ref type="bibr" target="#b4">[5,</ref><ref type="bibr" target="#b33">[34]</ref><ref type="bibr" target="#b34">[35]</ref><ref type="bibr" target="#b35">[36]</ref>. These data help to study key learning processes such as effective teamwork <ref type="bibr" target="#b5">[6]</ref><ref type="bibr" target="#b6">[7]</ref><ref type="bibr" target="#b7">[8]</ref> and communication <ref type="bibr" target="#b8">[9]</ref><ref type="bibr" target="#b9">[10]</ref><ref type="bibr" target="#b10">[11]</ref><ref type="bibr" target="#b11">[12]</ref>. However, creating effective user interfaces for non-data experts remains a challenge. Current implementations, such as EduSense <ref type="bibr" target="#b15">[16]</ref> and Sensai <ref type="bibr" target="#b36">[37]</ref>, have been used to provide feedback in educational contexts, but often lack the ability to simplify complex multimodal data for end users, such as students and teachers. Recent efforts have aimed to address these challenges by developing tools that elucidate complex team dynamics by collecting audio and user interactions in an online setting (BLINC <ref type="bibr" target="#b37">[38]</ref>) and through narrative visualisations for MOOCs <ref type="bibr" target="#b38">[39]</ref>. Despite these advances, the need to automatically generate user-friendly interfaces that can translate sensor data into meaningful insights for educational purposes remains unmet.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.2.">Data storytelling foundations and approaches in education</head><p>Data Storytelling (DS) has emerged as an effective technique for communicating complex data insights through a combination of data, visuals, and narrative <ref type="bibr" target="#b31">[32,</ref><ref type="bibr" target="#b32">33,</ref><ref type="bibr" target="#b39">[40]</ref><ref type="bibr" target="#b40">[41]</ref><ref type="bibr" target="#b41">[42]</ref>. DS transforms data into intuitive visualisations and narratives, making it easier for non-experts to grasp complex information. This literature identifies key principles of effective data storytelling: (1) focus on purposeful communication, (2) drive audience attention through meaningful visual elements, (3) select appropriate visuals for different purposes, (4) adhere to the basic principles of information visualisation design, such as removing unnecessary elements and using captions, space, shape, and colour wisely, and (5) incorporate narrative structures, as in narrative visualisation or visual narratives <ref type="bibr" target="#b19">[20,</ref><ref type="bibr" target="#b42">43]</ref>. In education, the application of DS principles has shown promise in enhancing multimodal data visualisation. For instance, Martinez-Maldonado et al. <ref type="bibr" target="#b20">[21]</ref> used a layered storytelling approach to categorise multimodal data into meaningful information structures.</p><p>However, most existing DS applications in education, including those by Martinez-Maldonado et al. <ref type="bibr" target="#b20">[21]</ref>, Echeverria et al. <ref type="bibr" target="#b43">[44]</ref>, and Fernández-Nieto et al. <ref type="bibr" target="#b44">[45]</ref>, are not fully automated and have been tested primarily in high-fidelity prototypes or controlled settings. 
Although previous research has investigated how to automatically generate visual outcomes using multimodal data, there remains a gap in the generation of meaningful automated interfaces guided by teacher's pedagogical intentions and data storytelling principles to facilitate the communication of complex multimodal data. This paper presents an architecture and its implementation to automatically generate multimodal data storytelling interfaces to support students' reflection in a nursing simulation setting.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.3.">Architectures for Multimodal Learning Analytics -MMLA</head><p>Recent efforts in automating Multimodal Learning Analytics (MMLA) interfaces have been reviewed by Shankar et al. <ref type="bibr" target="#b45">[46]</ref>, focusing on nine different architectures through the Data Value Chain (DVC) framework. This framework includes data discovery, integration, and exploitation. In data discovery, all architectures leveraged multiple data sources, such as physiological data and posture data. Most included data preparation steps such as pre-transformation and organising data relevant to the learning context. For data integration, over half of the architectures incorporated mechanisms to merge data from specific modalities, with databases being a popular choice. The literature review highlights that, in terms of data exploitation, almost all architectures carried out analysis activities, including statistical analysis or machine learning, and most produced visualisations like dashboards. The review also noted that three architectures provided decision-making support, specifically targeting teachers and students. However, two main challenges were identified: the lack of learning alignment or connections to the learning context in MMLA architectures, and the complexity of MMLA data and its visualisation posing challenges for stakeholders' data literacy.</p><p>A more recent architecture by Noël et al. <ref type="bibr" target="#b46">[47]</ref> focuses on audio and video data, using hardware such as the Raspberry Pi 4 for data collection and a server for storage and visualisation. Despite offering five visualisations for educators to assess collaborative activities, further improvements in design and evaluation are needed for effective use by stakeholders. 
This highlights the ongoing need for MMLA architectures that provide contextualised, meaningful interfaces to support teachers and students, underlining the importance of developing accessible and explanatory data stories within these systems.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">YarnSense: Automated Educational Data Storytelling Architecture</head><p>YarnSense is a multi-tiered architecture that automatically distils insights from multimodal behavioural data, gathered via sensors worn by students and via human observations, and translates these into data stories that reflect teachers' educational goals. This system architecture helps students reflect on their learning activities. It comprises four main tiers, as shown in Figure <ref type="figure" target="#fig_0">1</ref>: </p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1.">User Interfaces: The Context Modeller</head><p>This tier provides user interfaces for experts (e.g., teachers or researchers) to input what is known of the learning activity and the teacher's pedagogical intentions. Knowledge of the learning activity corresponds to the specifics of a learning activity (e.g., nursing simulations) based on the learning design. Key features to be identified from the learning design and captured in these interfaces include: i) Actions of interest expected during the activity, such as critical moments or milestones (e.g., a patient's adverse reaction). ii) Information on physical resources in the learning space, including the positions of manikins, trolleys, or sensors (e.g., the number of beds in a simulation ward). iii) Meta-information, such as the roles of team members and the devices to be worn during the activity (e.g., an auxiliary nurse wearing a microphone). In addition, this tier allows teachers to input the pedagogical intentions of the learning activity into the system. It translates the teacher's assessment criteria into rules that the system uses to interrogate the multimodal data and create stories. Considerations for implementation. Web-based platforms are a suitable choice for implementing this tier. Technologies such as HTML, CSS, and JavaScript, in conjunction with frameworks like React or Angular, can be used to develop intuitive and responsive user interfaces.</p></div>
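As a minimal sketch of how this tier's output might look, a pedagogical intention could be encoded as a machine-readable rule linking an expected action to its triggering event and a time limit. All field names, thresholds, and feedback strings below are illustrative assumptions, not part of the YarnSense implementation itself:

```python
# Hypothetical sketch: encoding a teacher's pedagogical intentions as rules.
# Field names and thresholds are illustrative, not taken from YarnSense.
from dataclasses import dataclass

@dataclass
class PedagogicalRule:
    action: str        # action of interest from the learning design
    trigger: str       # event that should precede the action
    max_delay_s: int   # how quickly the action should follow the trigger
    feedback: str      # narrative feedback shown when the rule is violated

rules = [
    PedagogicalRule(
        action="administer_oxygen",
        trigger="respiratory_depression",
        max_delay_s=120,
        feedback="Oxygen should be administered promptly after "
                 "respiratory depression is observed.",
    ),
    PedagogicalRule(
        action="met_call",
        trigger="patient_deterioration",
        max_delay_s=300,
        feedback="A MET call is expected once the patient deteriorates.",
    ),
]
```

Rules of this shape could then be stored by the web interface and later consumed by the Multimodal Modelling tier when interrogating the data.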
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2.">Multimodal Sensor Data Capture</head><p>This component focuses on collecting data from both machine sensing (e.g., wearable sensors) and human sensing (e.g., observations). For wearable sensors in particular, this tier considers multiple sensor streams captured independently and in a loosely coupled manner. Key features for collecting data from sensors include: i) a recommended 'pipes and filters' design pattern for independent data collection and processing. Pipe-and-filter patterns are commonly used in signal processing and remote sensing applications <ref type="bibr" target="#b47">[48]</ref>. In this pattern, filters are designed independently and are typically well defined as services or functions, while pipes are conduits of information. ii) Following this architectural feature, the tier captures sensor data in parallel with automated start/stop functions, supporting the synchronisation and scalability of the data. Each data modality is then cleaned and stored in its most convenient format per sensor (e.g., JSON, CSV, MP4), with flexibility for real-time processing or batch collection depending on the needs of the context.</p><p>For human sensing data, this tier provides a user interface for users to log information into the system. Web and mobile applications are making it more accessible for users to capture additional observations during their learning activities. The data provided by the users are used to label actions that would otherwise be hard to detect using sensing technology.</p><p>Considerations for implementation. For machine sensing, wearable sensor technologies are ideal for implementing this tier. Devices like smartwatches, fitness trackers, or custom wearables, equipped with sensors for physiological data, indoor positioning, and audio capture, can be used. 
A comprehensive list of sensors used in educational data capture is detailed in the literature review by Chango et al. <ref type="bibr" target="#b48">[49]</ref>. To handle parallel data processing, multithreading or multiprocessing capabilities in programming languages such as Python, Java, or C++ can be employed. Additionally, message streaming frameworks such as Apache Kafka or RabbitMQ can be used for efficient data stream management.</p><p>For human sensing, mobile applications developed for platforms like Android or iOS would enable users to conveniently log data during their learning activities.</p></div>
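The pipe-and-filter pattern described above can be sketched with plain Python generators: each filter is an independent, well-defined function, and the pipes are simply chained iterators. The sample readings and filter names are hypothetical:

```python
# Hypothetical pipe-and-filter sketch for one loosely coupled sensor stream.
# Each filter is a generator; pipes are just chained iterators.

def clean(samples):
    """Filter: drop malformed readings (missing values)."""
    for s in samples:
        if s.get("value") is not None:
            yield s

def timestamp_align(samples, t0):
    """Filter: re-express timestamps relative to the activity start."""
    for s in samples:
        yield dict(s, t=s["t"] - t0)

# Illustrative raw readings from a single sensor.
raw = [{"t": 100.0, "value": 0.42},
       {"t": 100.5, "value": None},   # malformed reading
       {"t": 101.0, "value": 0.47}]

# Pipe: raw -> clean -> timestamp_align
pipeline = timestamp_align(clean(raw), t0=100.0)
print(list(pipeline))  # [{'t': 0.0, 'value': 0.42}, {'t': 1.0, 'value': 0.47}]
```

Because each filter only consumes an iterator, the same structure works whether readings arrive in real time or are processed as a batch, matching the flexibility the tier calls for.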
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.3.">Multimodal Modelling</head><p>Considering the quantitative ethnography (QE) approach <ref type="bibr" target="#b49">[50]</ref> and the multimodal matrix (MM) concept <ref type="bibr" target="#b50">[51]</ref>, this tier enhances low-level data with contextual insights from the Context Modeller. It transforms sensor data into meaningful constructs by coding multimodal observations into a data structure for analysis against the assessment criteria. These constructs are crucial for analysing multiple data modalities. For example, in physiological data, arousal peaks (indicative of changes in skin conductance levels) are interpreted as stress level indicators <ref type="bibr" target="#b44">[45]</ref>. Additionally, for indoor positioning data, the theory of proxemics helps to identify interactional spaces and social formations during learning activities <ref type="bibr" target="#b51">[52]</ref>. This theory is also applied to model the combination of modalities, such as positioning data and audio, to detect co-located speech events <ref type="bibr" target="#b52">[53]</ref>.</p><p>To do so, this tier implements custom software scripts to filter, combine, aggregate, or summarise the multimodal matrices according to the teacher's previously defined pedagogical intentions. As a result of this analysis, a Learner Model is generated. The Learner Model is a structured representation of student performance, misconceptions, or difficulties. It assesses whether the team achieved the pedagogical intentions defined by the teachers.</p><p>Considerations for implementation. To implement this tier, data analysis software like R or Python, equipped with libraries such as Pandas, NumPy, and SciPy, can be used for processing and analysing multimodal data based on specific constructs. 
Additionally, data visualisation libraries like Matplotlib, Seaborn, or D3.js are useful for visualising data during the analysis phase, which assists in refining the Learner Model.</p></div>
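To make the multimodal matrix idea concrete, the following sketch codes hypothetical time windows against binary constructs (one column per code, one row per window) and aggregates them into a tiny Learner Model entry. The codes, threshold, and criterion are invented for illustration only:

```python
# Hypothetical multimodal matrix: one row per time window, one binary
# column per coded construct, following the multimodal matrix idea.
rows = [
    {"window": 0, "close_to_bed4": 1, "speech_detected": 1, "arousal_peak": 0},
    {"window": 1, "close_to_bed4": 1, "speech_detected": 0, "arousal_peak": 1},
    {"window": 2, "close_to_bed4": 0, "speech_detected": 1, "arousal_peak": 0},
]

def summarise(matrix, code):
    """Aggregate a coded construct across windows, as a proportion."""
    return sum(r[code] for r in matrix) / len(matrix)

# A minimal Learner Model entry: did the team spend enough of the
# activity near the main bed? (0.5 is an illustrative threshold.)
learner_model = {
    "time_at_main_bed": summarise(rows, "close_to_bed4"),
    "criterion_met": summarise(rows, "close_to_bed4") > 0.5,
}
print(learner_model)  # criterion_met is True for this matrix (2/3 of windows)
```

In practice this aggregation would run over Pandas data frames per team and per rule, but the shape of the computation, coding then summarising against a criterion, is the same.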
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.4.">Data Storytelling Generator</head><p>The final tier uses the Learner Model and the teacher's pedagogical intentions to communicate insights through data, visualisations, and narratives. The key features of this tier include: i) enhancing the data visualisations with DS principles (Section 2.2), such as highlighting important elements, using colour schemes wisely, and removing unnecessary elements to focus on relevant aspects of the Learner Model. ii) Generating visual stories that provide individual or team outcomes in an easily interpretable format for students. Narratives are captured from the teacher's pedagogical intentions, where teachers can incorporate textual feedback via the user interface. Data from the Learner Model are visualised and combined with narratives to convey a story for an individual student or a team.</p><p>Considerations for implementation. This tier can be implemented by integrating data visualisation tools such as Tableau, Qlik, or D3.js, which allow visual enhancements through the DS principles. Alternatively, custom visualisation software can be developed using programming languages such as JavaScript (with libraries like Chart.js or Three.js) or Python (with libraries like Matplotlib or Seaborn), customised to meet the specific requirements of the teacher's pedagogical intentions. Another option is the use of narrative generation tools, such as Natural Language Processing (NLP) libraries in Python or Large Language Models (LLMs), to partly automate the creation of narratives based on the data.</p><p>Each tier of YarnSense plays a crucial role in transforming complex multimodal data into insightful and accessible data stories, supporting reflective learning in educational settings.</p></div>
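A minimal sketch of how this tier might weave a Learner Model outcome together with the teacher's textual feedback into one data-story caption is shown below. The function name, outcome fields, and template wording are all hypothetical:

```python
# Hypothetical sketch: combining a Learner Model outcome with the
# teacher's narrative feedback to render one data-story caption.

def render_story(team, rule_outcome, teacher_feedback):
    """Turn a rule outcome plus teacher feedback into a short narrative."""
    if rule_outcome["met"]:
        return f"Well done, team {team}: {rule_outcome['action']} was timely."
    return (f"Team {team}: {rule_outcome['action']} was delayed by "
            f"{rule_outcome['delay_s']} seconds. {teacher_feedback}")

story = render_story(
    team="A",
    rule_outcome={"action": "MET call", "met": False, "delay_s": 180},
    teacher_feedback="Escalate early when the patient deteriorates.",
)
print(story)
```

In a full implementation the rendered narrative would be placed alongside the corresponding visualisation (e.g., a highlighted timeline), so that narrative and visual carry the same message.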
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Reference Implementation</head><p>Having introduced our architecture in general terms above, we now turn to an illustrative example of how the whole architecture can be implemented in a specific learning activity. This architecture was implemented in an authentic clinical setting in nursing healthcare and was reported in Fernández-Nieto et al. <ref type="bibr" target="#b53">[54]</ref>. Data stories in this clinical context were created using a completely automated process.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.1.">Learning context and data collection</head><p>The clinical scenario provides an opportunity for students to practice teamwork, communication, and prioritisation skills in the setting of a deteriorating patient. The clinical scenario was run in 38 classes by different instructors. A total of 254 students in their third or fourth year volunteered to participate in the data collection. The goal of the clinical scenario was to provide care to four patients and prioritise the care of each bed as a team. According to the assessment criteria established by the subject coordinator, a highly effective team should have performed the following five actions in the main bed (useful information for the Context Modeller tier): i) administer oxygen after patient respiratory depression; ii) assess vital signs every 5 minutes; iii) cease PCA (patient-controlled analgesia) after a patient's altered conscious state; iv) activate MET (Medical Emergency Team) calls after patient deterioration; and v) administer Naloxone in a timely manner. Additionally, students were expected to take care of the other three beds, prioritising care across them.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.2.">YarnSense: Implementation of Nursing Simulations in the Wild</head><p>YarnSense was implemented and deployed in the 38 simulation classes following the learning activity described above. Details of the implementation in the wild are presented in Table <ref type="table" target="#tab_0">1</ref>.</p><p>In our reference implementation, we present two different types of automated data stories. The first type highlights errors made by students in simulations. From the positioning data and observations, using the teachers' pedagogical intentions, we automatically identified three error categories, as described in Fernández-Nieto et al. <ref type="bibr" target="#b44">[45]</ref> and Fernandez-Nieto et al. <ref type="bibr" target="#b51">[52]</ref>: i) Sequence Errors: Occur when a team performs a critical action in the wrong sequence. ii) Timeliness Errors: Identified when students respond too slowly, executing actions later than recommended by healthcare guidelines. iii) Frequency Errors: Detected by calculating the time difference between two key logged actions that should be performed repeatedly.</p><p>The second type of data stories, called positioning graphs, focuses on the physical interactions of nurses. These stories provide insights into how much time nurses spend on patients' bedsides and in close proximity to other nurses during the simulation. These data help to understand spatial dynamics and collaboration patterns within the nursing team. </p></div>
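The three error categories can be illustrated over a logged action list. The sketch below is an assumption-laden simplification of the published detection logic: action names, timestamps, and thresholds are invented, and real rules would come from the Context Modeller:

```python
# Hypothetical sketch of the three error categories over an action log.
# Timestamps are seconds from simulation start; all values are illustrative.
log = [("cease_pca", 300), ("administer_oxygen", 420),
       ("vital_signs", 500), ("vital_signs", 900)]

def sequence_error(log, first, second):
    """Sequence error: `first` should have been logged before `second`."""
    times = {a: t for a, t in log}  # duplicate actions keep the last time
    return times[first] > times[second]

def timeliness_error(log, action, deadline_s):
    """Timeliness error: action logged later than the recommended deadline."""
    times = dict(log)
    return times[action] > deadline_s

def frequency_error(log, action, max_gap_s):
    """Frequency error: gap between repetitions exceeds the allowed interval."""
    times = [t for a, t in log if a == action]
    return any(t2 - t1 > max_gap_s for t1, t2 in zip(times, times[1:]))

print(sequence_error(log, "administer_oxygen", "cease_pca"))  # True: wrong order
print(timeliness_error(log, "cease_pca", deadline_s=240))     # True: 300 s > 240 s
print(frequency_error(log, "vital_signs", max_gap_s=300))     # True: 400 s gap
```

Each detected error would then feed a data story of the first type, paired with the teacher's feedback for the violated rule.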
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Multimodal sensor data capture</head><p>Machine sensing: Indoor positioning data 1 , physiological data (Empatica e4), audio, and video.</p><p>Human sensing: actions performed by students (action log)</p><p>Apache Kafka for parallel data collection and custom scripts for data processing.</p><p>Multimodal Modelling</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>QE modelling:</head><p>The theory of Proxemics was used to identify interpersonal spaces between nurses. Only indoor positioning data and log data were used for modelling.</p><p>Custom scripts in Python to create the multimodal matrix. Matplotlib to visualise bar graphs and python-igraph to visualise network graphs. vis-timeline to visualise a timeline 2 .</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Data Storytelling Generator</head><p>Five data stories were rendered fully automatically, combining data visualisations with the feedback created by the teacher in the pedagogical intentions.</p><p>Custom scripts in Python and JavaScript to visualise the data stories. </p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.1.">Assessing the Efficacy of Architectural Decisions</head><p>Our architecture emphasises the involvement of educational users, including teachers, researchers, and students. Accordingly, our Context Modeller tier enables users to input rules and contextual information, guiding data analysis and the automated generation of data stories. These features align with user experience (UX) best practices, which advocate for user input and the ability of users to exert control in human-computer interactions <ref type="bibr" target="#b55">[55]</ref>. With this architecture, teachers can incorporate feedback and adapt the visual representations according to their learning designs, an approach that is an integral part of the current research agenda in Learning Analytics (LA) as discussed in Ez-Zaouia <ref type="bibr" target="#b56">[56]</ref>. Additionally, our approach fosters opportunities for user-AI collaboration <ref type="bibr" target="#b57">[57]</ref>, allowing teachers to stay engaged by modifying rules for the Multimodal Modelling tier to analyse and generate outcomes tailored to teachers' and students' needs. This approach empowers teachers and students with agency.</p><p>The architectural decision to use parallel data collection and processing provides flexibility for researchers to decide which technology adapts to certain learning contexts and needs. The integration of various data modalities presents unique challenges and opportunities for deeper insight into team and individual dynamics in physical training settings. 
Employing mature parallel processing frameworks, such as Apache Kafka and MapReduce, helps mitigate the complexity associated with handling multiple data sources <ref type="bibr" target="#b58">[58]</ref>.</p><p>Finally, incorporating Data Storytelling (DS) principles to aid user interpretation is consistent with the existing literature, highlighting the need for data representations that are both comprehensive and interpretable in educational contexts <ref type="bibr" target="#b59">[59]</ref><ref type="bibr" target="#b60">[60]</ref><ref type="bibr" target="#b61">[61]</ref>.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.2.">Reflections on the Reference Implementation Process</head><p>Our reference implementation highlights the importance and complexity of effectively visualising multimodal data to provide evidence for professional development and training. The integration of a tool to generate data stories as evidence of what students achieve in their learning activity plays a pivotal role in enhancing the learning experience and ensuring the effectiveness of training programs. In our approach, data storytelling transcends traditional data presentation methods by weaving complex data into coherent narratives that align with the teacher's pedagogical intentions. Automated data stories not only aid in the comprehension of intricate concepts, but also foster an immersive and intuitive learning experience that supports deeper reflections on the learning activity. Such integration is particularly invaluable in professional training, where the assimilation of practical and theoretical knowledge is critical.</p><p>We observed that while real-time data processing is not always necessary in educational settings, near-real-time solutions can greatly benefit both teachers and students. For example, in nursing simulations, clinical debriefs typically follow team simulations, prompting students to reflect on their performance. These debrief sessions have been proven to be effective in helping students identify misconceptions and errors during simulations <ref type="bibr" target="#b62">[62]</ref>. Therefore, our architecture aims to provide timely feedback through data stories, facilitating post-activity discussions and reflections, and thus enhancing the overall learning experience.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.3.">Learning from the Pilot: Insights and Future Enhancements</head><p>The Multimodal Modelling tier necessitates an initial exploration of learning theories, such as the theory of proxemics, to understand how data modalities can be used effectively for learning. Our reference implementation drew upon prior research that explored quantitative ethnography (QE) modelling of positioning data and human observations <ref type="bibr" target="#b51">[52,</ref><ref type="bibr" target="#b63">63]</ref>. Future studies should therefore aim to refine QE analysis and data visualisation for additional data modalities, such as audio and video, thereby optimising their usefulness and accessibility for educators and learners. Furthermore, more comprehensive evaluations are needed to identify the visualisations that most cohesively represent diverse data sets, enabling a thorough understanding of learning activities, particularly in the context of data fusion <ref type="bibr" target="#b33">[34]</ref>.</p><p>While human-centred design can help address these challenges <ref type="bibr" target="#b64">[64]</ref>, employing Large Language Models (LLMs) to generate explanations for complex visualisations is a promising way to aid user interpretation. LLM image-to-text functionality, specifically in its role for data storytelling in education, is an avenue worth exploring. Moreover, the potential of LLMs to assist in creating narratives that render visualisations more self-explanatory deserves thorough investigation. Future research should also further explore the role of user-AI collaboration.</p><p>One of the main limitations of our approach is the incomplete automation of certain data modalities, particularly physiological data; further development is needed to fully automate this process and ensure seamless integration and analysis of all data types. In addition, the architecture, originally built for nursing simulations, requires careful adaptation to other contexts. Successful integration into learning design calls for collaboration with educators to align the data stories with assessment intentions and to promote reflective thinking through the added narratives.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.">Conclusion</head><p>Our work contributes to the field of educational data analysis by detailing an architecture that automates the generation of data stories in real-world environments. Our reference implementation, conducted in a large-scale, in-the-wild setting, not only demonstrates the architecture's practical application but also identifies the challenges and limitations that future research must address to further refine architectures supporting physical learning activities and the provision of data storytelling.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: YarnSense: Multi-tier architecture for automated Data Storytelling</figDesc><graphic coords="4,89.29,254.00,416.69,289.40" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 2 :</head><label>2</label><figDesc>Figure 2: Types of automated data stories: a. Nurses' proximity to beds; b. Nursing team working in close proximity; c. Error committed by students during the simulation</figDesc><graphic coords="8,89.29,431.22,416.69,103.34" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>Table 1</head><label>1</label><figDesc>YarnSense implementation in an in-the-wild nursing simulation</figDesc><table><row><cell>Tier</cell><cell>Implementation Details</cell><cell>Technology Used</cell></row><row><cell>Users</cell><cell>Researchers, 6 Teachers, 254 Students</cell><cell>NA</cell></row><row><cell>Context Modeller</cell><cell>Pedagogical intentions: definition of five rules according to the five actions expected from students during the learning activity (Section 4.1). Knowledge of the learning activity: roles: 2 Graduate Nurses and 2 Ward Graduate Nurses. Physical resources: 4 beds.</cell><cell>Web-based platform using the Express Node.js framework and hosted on an Amazon Elastic Compute Cloud (Amazon EC2) instance.</cell></row></table></figure>
		</body>
		<back>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Tracking workflow during high-stakes resuscitation: the application of a novel clinician movement tracing tool during in situ trauma simulation</title>
		<author>
			<persName><forename type="first">A</forename><surname>Petrosoniak</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Almeida</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><forename type="middle">D</forename><surname>Pozzobon</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Hicks</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Fan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>White</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>McGowan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Trbovich</surname></persName>
		</author>
		<idno type="DOI">10.1136/bmjstel-2017-000300</idno>
	</analytic>
	<monogr>
		<title level="j">BMJ Simulation and Technology Enhanced Learning</title>
		<imprint>
			<biblScope unit="volume">5</biblScope>
			<biblScope unit="page" from="78" to="84" />
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">The lived body and embodied instructional practices in maritime basic safety training</title>
		<author>
			<persName><forename type="first">M</forename><surname>Viktorelius</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Sellberg</surname></persName>
		</author>
		<idno type="DOI">10.1007/s12186-021-09279-z</idno>
	</analytic>
	<monogr>
		<title level="j">Vocations and Learning</title>
		<imprint>
			<biblScope unit="volume">15</biblScope>
			<biblScope unit="page" from="87" to="109" />
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Embodied aspects of learning to be a surgeon</title>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">B</forename><surname>Cooper</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><forename type="middle">J</forename><surname>Tisdell</surname></persName>
		</author>
		<idno type="DOI">10.1080/0142159X.2019.1708289</idno>
	</analytic>
	<monogr>
		<title level="j">Medical teacher</title>
		<imprint>
			<biblScope unit="volume">42</biblScope>
			<biblScope unit="page" from="515" to="522" />
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Body pedagogics: embodied learning for the health professions</title>
		<author>
			<persName><forename type="first">M</forename><surname>Kelly</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Ellaway</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Scherpbier</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>King</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Dornan</surname></persName>
		</author>
		<idno type="DOI">10.1111/medu.13916</idno>
	</analytic>
	<monogr>
		<title level="j">Medical education</title>
		<imprint>
			<biblScope unit="volume">53</biblScope>
			<biblScope unit="page" from="967" to="977" />
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<monogr>
		<title level="m" type="main">Hybrid Learning Spaces -A Three-Fold Evolving Perspective</title>
		<author>
			<persName><forename type="first">L</forename><surname>Eyal</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Gil</surname></persName>
		</author>
		<idno type="DOI">10.1007/978-3-030-88520-5_2</idno>
		<imprint>
			<date type="published" when="2022">2022</date>
			<publisher>Springer International Publishing</publisher>
			<biblScope unit="page" from="11" to="23" />
			<pubPlace>Cham</pubPlace>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">An integrative framework for sensor-based measurement of teamwork in healthcare</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">A</forename><surname>Rosen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">S</forename><surname>Dietz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Yang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><forename type="middle">E</forename><surname>Priebe</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><forename type="middle">J</forename><surname>Pronovost</surname></persName>
		</author>
		<idno type="DOI">10.1136/amiajnl-2013-002606</idno>
	</analytic>
	<monogr>
		<title level="j">Journal of the American Medical Informatics Association</title>
		<imprint>
			<biblScope unit="volume">22</biblScope>
			<biblScope unit="page" from="11" to="18" />
			<date type="published" when="2014">2014</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Collecting sensor-generated data for assessing teamwork and individual contributions in computing student teams</title>
		<author>
			<persName><forename type="first">G</forename><surname>Dafoulas</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Cardoso Maia</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Ali</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Augusto</surname></persName>
		</author>
		<idno type="DOI">10.21125/EDULEARN.2018.2759</idno>
	</analytic>
	<monogr>
		<title level="j">EDULEARN18 Proceedings</title>
		<imprint>
			<biblScope unit="volume">1</biblScope>
			<biblScope unit="page" from="11156" to="11162" />
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Measurement of Nontechnical Skills During Robotic-Assisted Surgery Using Sensor-Based Communication and Proximity Metrics</title>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">S</forename><surname>Cha</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Athanasiadis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><forename type="middle">E</forename><surname>Anton</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Stefanidis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Yu</surname></persName>
		</author>
		<idno type="DOI">10.1001/JAMANETWORKOPEN.2021.32209</idno>
	</analytic>
	<monogr>
		<title level="j">JAMA Network Open</title>
		<imprint>
			<biblScope unit="volume">4</biblScope>
			<biblScope unit="page" from="e2132209" to="e2132209" />
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Wearable sensors for pervasive healthcare management</title>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">O</forename><surname>Olguín</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><forename type="middle">A</forename><surname>Gloor</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Pentland</surname></persName>
		</author>
		<idno type="DOI">10.4108/ICST.PERVASIVEHEALTH2009.6033</idno>
	</analytic>
	<monogr>
		<title level="m">2009 3rd International Conference on Pervasive Computing Technologies for Healthcare -Pervasive Health 2009</title>
				<meeting><address><addrLine>PCTHealth</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2009">2009</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">Sensing teamwork during multi-objective optimization</title>
		<author>
			<persName><forename type="first">I</forename><surname>Winder</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Delaporte</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Wanaka</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Hiekata</surname></persName>
		</author>
		<idno type="DOI">10.1109/WF-IoT48130.2020.9221086</idno>
	</analytic>
	<monogr>
		<title level="m">IEEE 6th World Forum on Internet of Things (WF-IoT)</title>
				<imprint>
			<date type="published" when="2020">2020</date>
			<biblScope unit="page" from="1" to="6" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Analysis of interactions between lecturers and students</title>
		<author>
			<persName><forename type="first">E</forename><surname>Watanabe</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Ozeki</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Kohama</surname></persName>
		</author>
		<idno type="DOI">10.1145/3170358.3170360</idno>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 8th International Conference on Learning Analytics and Knowledge</title>
				<meeting>the 8th International Conference on Learning Analytics and Knowledge<address><addrLine>New York, NY, USA</addrLine></address></meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2018">2018</date>
			<biblScope unit="page" from="370" to="374" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Toward capturing divergent collaboration in makerspaces using motion sensors</title>
		<author>
			<persName><forename type="first">E</forename><surname>Chng</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">R</forename><surname>Seyam</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Yao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Schneider</surname></persName>
		</author>
		<idno type="DOI">10.1108/ILS-08-2020-0182</idno>
	</analytic>
	<monogr>
		<title level="j">Information and Learning Sciences</title>
		<imprint>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">Multimodal-multisensor affect detection</title>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">K</forename><surname>D&apos;Mello</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Bosch</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Chen</surname></persName>
		</author>
		<idno type="DOI">10.1145/3107990.3107998</idno>
	</analytic>
	<monogr>
		<title level="m">The Handbook of Multimodal-Multisensor Interfaces: Signal Processing, Architectures, and Detection of Emotion and Cognition</title>
				<imprint>
			<date type="published" when="2018">2018</date>
			<biblScope unit="volume">2</biblScope>
			<biblScope unit="page" from="167" to="202" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">A multimodal exploration of engineering students&apos; emotions and electrodermal activity in design activities</title>
		<author>
			<persName><forename type="first">I</forename><surname>Villanueva</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><forename type="middle">D</forename><surname>Campbell</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">C</forename><surname>Raikes</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">H</forename><surname>Jones</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><forename type="middle">G</forename><surname>Putney</surname></persName>
		</author>
		<idno type="DOI">10.1002/jee.20225</idno>
	</analytic>
	<monogr>
		<title level="j">Journal of Engineering Education</title>
		<imprint>
			<biblScope unit="volume">107</biblScope>
			<biblScope unit="page" from="414" to="441" />
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<monogr>
		<author>
			<persName><forename type="first">M</forename><surname>Raca</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Kidzinski</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Dillenbourg</surname></persName>
		</author>
		<ptr target="https://api.semanticscholar.org/CorpusID:15798760" />
		<title level="m">Translating Head Motion into Attention -Towards Processing of Student&apos;s Body-Language</title>
				<imprint>
			<publisher>EDM</publisher>
			<date type="published" when="2015">2015</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">Edusense: Practical classroom sensing at scale</title>
		<author>
			<persName><forename type="first">K</forename><surname>Ahuja</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Kim</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Xhakaj</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Varga</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Xie</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">E</forename><surname>Townsend</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Harrison</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Ogan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Agarwal</surname></persName>
		</author>
		<idno type="DOI">10.1145/3351229</idno>
	</analytic>
	<monogr>
		<title level="m">Proc. ACM Interact. Mob. Wearable Ubiquitous Technol</title>
				<meeting>ACM Interact. Mob. Wearable Ubiquitous Technol</meeting>
		<imprint>
			<date type="published" when="2019">2019</date>
			<biblScope unit="volume">3</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">Sensei: sensing educational interaction</title>
		<author>
			<persName><forename type="first">N</forename><surname>Saquib</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Bose</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>George</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Kamvar</surname></persName>
		</author>
		<idno type="DOI">10.1145/3161172</idno>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies</title>
				<imprint>
			<date type="published" when="2018">2018</date>
			<biblScope unit="volume">1</biblScope>
			<biblScope unit="page" from="1" to="27" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<analytic>
		<title level="a" type="main">Moodoo the tracker: Spatial classroom analytics for characterising teachers&apos; pedagogical approaches</title>
		<author>
			<persName><forename type="first">R</forename><surname>Martinez-Maldonado</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Echeverria</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Mangaroska</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Shibani</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Fernandez-Nieto</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Schulte</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">Buckingham</forename><surname>Shum</surname></persName>
		</author>
		<idno type="DOI">10.1007/s40593-021-00276-w</idno>
	</analytic>
	<monogr>
		<title level="j">International Journal of Artificial Intelligence in Education</title>
		<imprint>
			<biblScope unit="page" from="1" to="27" />
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<analytic>
		<title level="a" type="main">A new era in multimodal learning analytics: Twelve core commitments to ground and grow mmla</title>
		<author>
			<persName><forename type="first">M</forename><surname>Worsley</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Martinez-Maldonado</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Angelo</surname></persName>
		</author>
		<idno type="DOI">10.18608/jla.2021.7361</idno>
	</analytic>
	<monogr>
		<title level="j">Journal of Learning Analytics</title>
		<imprint>
			<biblScope unit="volume">8</biblScope>
			<biblScope unit="page" from="10" to="27" />
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<analytic>
		<title level="a" type="main">Exploratory versus Explanatory Visual Learning Analytics: Driving Teachers&apos; Attention through Educational Data Storytelling</title>
		<author>
			<persName><forename type="first">V</forename><surname>Echeverria</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Martinez-Maldonado</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">Buckingham</forename><surname>Shum</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Chiluiza</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Granda</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Conati</surname></persName>
		</author>
		<idno type="DOI">10.18608/jla.2018.53.6</idno>
		<ptr target="https://doi.org/10.18608/jla.2018.53.6" />
	</analytic>
	<monogr>
		<title level="j">Journal of Learning Analytics</title>
		<imprint>
			<biblScope unit="volume">5</biblScope>
			<biblScope unit="page" from="72" to="97" />
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b20">
	<analytic>
		<title level="a" type="main">From Data to Insights: A Layered Storytelling Approach for Multimodal Learning Analytics</title>
		<author>
			<persName><forename type="first">R</forename><surname>Martinez-Maldonado</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Echeverria</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Fernandez-Nieto</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">B</forename><surname>Shum</surname></persName>
		</author>
		<idno type="DOI">10.1145/3313831.3376148</idno>
		<ptr target="https://doi.org/10.1145/3313831.3376148" />
	</analytic>
	<monogr>
		<title level="m">CHI &apos;20</title>
				<imprint>
			<date type="published" when="2020">2020</date>
			<biblScope unit="page">15</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b21">
	<analytic>
		<title level="a" type="main">Storytelling With Learner Data: Guiding Student Reflection on Multimodal Team Data</title>
		<author>
			<persName><forename type="first">G</forename><forename type="middle">M</forename><surname>Fernandez-Nieto</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Echeverria</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">Buckingham</forename><surname>Shum</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Mangaroska</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Kitto</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Palominos</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Axisa</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Martinez-Maldonado</surname></persName>
		</author>
		<idno type="DOI">10.1109/TLT.2021.3131842</idno>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Learning Technologies</title>
		<imprint>
			<biblScope unit="page" from="1" to="14" />
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b22">
	<analytic>
		<title level="a" type="main">Explainable artificial intelligence in education</title>
		<author>
			<persName><forename type="first">H</forename><surname>Khosravi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Kay</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Sadiq</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">Buckingham</forename><surname>Shum</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Martinez-Maldonado</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Knight</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y.-S</forename><surname>Tsai</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Conati</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Gasevic</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.caeai.2022.100074</idno>
	</analytic>
	<monogr>
		<title level="j">Computers Education: Artificial Intelligence</title>
		<imprint>
			<biblScope unit="volume">3</biblScope>
			<biblScope unit="page" from="1" to="31" />
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b23">
	<analytic>
		<title level="a" type="main">Data visualization literacy: Investigating data interpretation along the novice-expert continuum</title>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">V</forename><surname>Maltese</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">A</forename><surname>Harsh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Svetina</surname></persName>
		</author>
		<ptr target="https://www.jstor.org/stable/43631889" />
	</analytic>
	<monogr>
		<title level="j">Journal of College Science Teaching</title>
		<imprint>
			<biblScope unit="volume">45</biblScope>
			<biblScope unit="page" from="84" to="90" />
			<date type="published" when="2015">2015</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b24">
	<analytic>
		<title level="a" type="main">Centering complexity in &apos;educators&apos; data literacy&apos; to support future practices in faculty development: a systematic review of the literature</title>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">E</forename><surname>Raffaghelli</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Stewart</surname></persName>
		</author>
		<idno type="DOI">10.1080/13562517.2019.1696301</idno>
	</analytic>
	<monogr>
		<title level="j">Teaching in Higher Education</title>
		<imprint>
			<biblScope unit="volume">25</biblScope>
			<biblScope unit="page" from="435" to="455" />
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b25">
	<monogr>
		<author>
			<persName><forename type="first">M</forename><surname>Worsley</surname></persName>
		</author>
		<ptr target="http://ceur-ws.org/Vol-2163/paper5.pdf" />
		<title level="m">Multimodal Learning Analytics&apos; Past, Present, and Potential Futures</title>
				<imprint>
			<date type="published" when="2018">2018</date>
			<biblScope unit="page" from="1" to="16" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b26">
	<analytic>
		<title level="a" type="main">Understanding learning and learning design in moocs: A measurement-based interpretation</title>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">K</forename><surname>Milligan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Griffin</surname></persName>
		</author>
		<idno type="DOI">10.18608/jla.2016.32.5</idno>
	</analytic>
	<monogr>
		<title level="j">Journal of Learning Analytics</title>
		<imprint>
			<biblScope unit="volume">3</biblScope>
			<biblScope unit="page" from="88" to="115" />
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b27">
	<monogr>
		<title level="m" type="main">Stealth assessment: Measuring and supporting learning in video games</title>
		<author>
			<persName><forename type="first">V</forename><surname>Shute</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Ventura</surname></persName>
		</author>
		<idno type="DOI">10.7551/mitpress/9589.001.0001</idno>
		<imprint>
			<date type="published" when="2013">2013</date>
			<publisher>MIT press</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b28">
	<analytic>
		<title level="a" type="main">Visualizing data to support judgement, inference, and decision making in learning analytics: Insights from cognitive psychology and visualization science</title>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">S</forename><surname>Alhadad</surname></persName>
		</author>
		<idno type="DOI">10.18608/jla.2018.52.5</idno>
	</analytic>
	<monogr>
		<title level="j">Journal of Learning Analytics</title>
		<imprint>
			<biblScope unit="volume">5</biblScope>
			<biblScope unit="page" from="60" to="85" />
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b29">
	<analytic>
		<title level="a" type="main">Implementing learning analytics for learning impact: Taking tools to task</title>
		<author>
			<persName><forename type="first">S</forename><surname>Knight</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Gibson</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Shibani</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.iheduc.2020.100729</idno>
	</analytic>
	<monogr>
		<title level="j">The Internet and Higher Education</title>
		<imprint>
			<biblScope unit="volume">45</biblScope>
			<biblScope unit="page">100729</biblScope>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b30">
	<analytic>
		<title level="a" type="main">Towards Collaboration Translucence: Giving Meaning to Multimodal Group Data</title>
		<author>
			<persName><forename type="first">V</forename><surname>Echeverria</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Martinez-Maldonado</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">Buckingham</forename><surname>Shum</surname></persName>
		</author>
		<idno type="DOI">10.1145/3290605.3300269</idno>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems</title>
		<imprint>
			<biblScope unit="volume">39</biblScope>
			<biblScope unit="page">16</biblScope>
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b31">
	<analytic>
		<author>
			<persName><forename type="first">B</forename><surname>Dykes</surname></persName>
		</author>
		<ptr target="https://hstalks.com/article/619/data-storytelling-what-it-is-and-how-it-can-be-use/" />
	</analytic>
	<monogr>
		<title level="m">Data storytelling: What it is and how it can be used to effectively communicate analysis results</title>
				<imprint>
			<date type="published" when="2015">2015</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b32">
	<monogr>
		<title level="m" type="main">The visual imperative: creating a visual culture of data discovery</title>
		<author>
			<persName><forename type="first">L</forename><surname>Ryan</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2016">2016</date>
			<publisher>Morgan Kaufmann</publisher>
			<pubPlace>Massachusetts</pubPlace>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b33">
	<analytic>
		<title level="a" type="main">Scalability, sustainability, and ethicality of multimodal learning analytics</title>
		<author>
			<persName><forename type="first">L</forename><surname>Yan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Zhao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Gasevic</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Martinez-Maldonado</surname></persName>
		</author>
		<idno type="DOI">10.1145/3506860.3506862</idno>
	</analytic>
	<monogr>
		<title level="m">LAK22: 12th International Learning Analytics and Knowledge Conference</title>
				<imprint>
			<date type="published" when="2022">2022</date>
			<biblScope unit="page" from="13" to="23" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b34">
	<analytic>
		<title level="a" type="main">Multimodal data capabilities for learning: What can multimodal data tell us about learning?</title>
		<author>
			<persName><forename type="first">K</forename><surname>Sharma</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Giannakos</surname></persName>
		</author>
		<idno type="DOI">10.1111/bjet.12993</idno>
	</analytic>
	<monogr>
		<title level="j">British Journal of Educational Technology</title>
		<imprint>
			<biblScope unit="volume">51</biblScope>
			<biblScope unit="page" from="1450" to="1484" />
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b35">
	<analytic>
		<title level="a" type="main">How can high-frequency sensors capture collaboration? a review of the empirical links between multimodal metrics and collaborative constructs</title>
		<author>
			<persName><forename type="first">B</forename><surname>Schneider</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Sung</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Chng</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Yang</surname></persName>
		</author>
		<idno type="DOI">10.3390/s21248185</idno>
	</analytic>
	<monogr>
		<title level="j">Sensors</title>
		<imprint>
			<biblScope unit="volume">21</biblScope>
			<biblScope unit="page">8185</biblScope>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b36">
	<analytic>
		<title level="a" type="main">Presentation sensei: A presentation training system using speech and image processing</title>
		<author>
			<persName><forename type="first">K</forename><surname>Kurihara</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Goto</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Ogata</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Matsusaka</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Igarashi</surname></persName>
		</author>
		<idno type="DOI">10.1145/1322192.1322256</idno>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 9th International Conference on Multimodal Interfaces, ICMI &apos;07</title>
				<meeting>the 9th International Conference on Multimodal Interfaces, ICMI &apos;07<address><addrLine>New York, NY, USA</addrLine></address></meeting>
		<imprint>
			<publisher>Association for Computing Machinery</publisher>
			<date type="published" when="2007">2007</date>
			<biblScope unit="page" from="358" to="365" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b37">
	<analytic>
		<title level="a" type="main">Designing Analytics for Collaboration Literacy and Student Empowerment</title>
		<author>
			<persName><forename type="first">M</forename><surname>Worsley</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Anderson</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Melo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Jang</surname></persName>
		</author>
		<idno type="DOI">10.18608/jla.2021.7242</idno>
	</analytic>
	<monogr>
		<title level="j">Journal of Learning Analytics</title>
		<imprint>
			<biblScope unit="volume">8</biblScope>
			<biblScope unit="page" from="30" to="48" />
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b38">
	<analytic>
		<title level="a" type="main">Designing narrative slideshows for learning analytics</title>
		<author>
			<persName><forename type="first">Q</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T.-C</forename><surname>Pong</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Qu</surname></persName>
		</author>
		<idno type="DOI">10.1109/PacificVis.2019.00036</idno>
	</analytic>
	<monogr>
		<title level="m">IEEE Pacific Visualization Symposium (PacificVis)</title>
				<imprint>
			<date type="published" when="2019">2019</date>
			<biblScope unit="page" from="237" to="246" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b39">
	<monogr>
		<title level="m" type="main">Storytelling with data: A data visualization guide for business professionals</title>
		<author>
			<persName><forename type="first">C</forename><forename type="middle">N</forename><surname>Knaflic</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2017">2017</date>
			<publisher>John Wiley &amp; Sons</publisher>
			<pubPlace>New Jersey</pubPlace>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b40">
	<analytic>
		<title level="a" type="main">What storytelling can do for information visualization</title>
		<author>
			<persName><forename type="first">N</forename><surname>Gershon</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Page</surname></persName>
		</author>
		<idno type="DOI">10.1145/381641.381653</idno>
	</analytic>
	<monogr>
		<title level="j">Commun. ACM</title>
		<imprint>
			<biblScope unit="volume">44</biblScope>
			<biblScope unit="page" from="31" to="37" />
			<date type="published" when="2001">2001</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b41">
	<analytic>
		<title level="a" type="main">Storytelling: its role in information visualization</title>
		<author>
			<persName><forename type="first">W</forename><surname>Wojtkowski</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Wojtkowski</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><forename type="middle">G</forename><surname>Wojtkowski</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of European Systems Science Congress</title>
				<meeting>European Systems Science Congress</meeting>
		<imprint>
			<date type="published" when="2002">2002</date>
			<biblScope unit="page" from="1" to="5" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b42">
	<analytic>
		<title level="a" type="main">Narrative visualization: Telling stories with data</title>
		<author>
			<persName><forename type="first">E</forename><surname>Segel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Heer</surname></persName>
		</author>
		<idno type="DOI">10.1109/TVCG.2010.179</idno>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Visualization and Computer Graphics</title>
		<imprint>
			<biblScope unit="volume">16</biblScope>
			<biblScope unit="page" from="1139" to="1148" />
			<date type="published" when="2010">2010</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b43">
	<monogr>
		<title level="m" type="main">Driving data storytelling from learning design</title>
		<author>
			<persName><forename type="first">V</forename><surname>Echeverria</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Martinez-Maldonado</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Granda</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Chiluiza</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Conati</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">Buckingham</forename><surname>Shum</surname></persName>
		</author>
		<idno type="DOI">10.1145/3170358.3170380</idno>
		<imprint>
			<date type="published" when="2018">2018</date>
			<publisher>Association for Computing Machinery</publisher>
			<biblScope unit="page" from="131" to="140" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b44">
	<analytic>
		<title level="a" type="main">Beyond the Learning Analytics Dashboard: Alternative Ways to Communicate Student Data Insights Combining Visualisation, Narrative and Storytelling</title>
		<author>
			<persName><forename type="first">G</forename><forename type="middle">M</forename><surname>Fernández-Nieto</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">Buckingham</forename><surname>Shum</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Kitto</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Martínez-Maldonado</surname></persName>
		</author>
		<idno type="DOI">10.1145/3506860.3506895</idno>
	</analytic>
	<monogr>
		<title level="m">LAK22: 12th International Learning Analytics and Knowledge Conference</title>
		<imprint>
			<biblScope unit="page" from="1" to="16" />
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b45">
	<analytic>
		<title level="a" type="main">A review of multimodal learning analytics architectures</title>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">K</forename><surname>Shankar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><forename type="middle">P</forename><surname>Prieto</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">J</forename><surname>Rodríguez-Triana</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Ruiz-Calleja</surname></persName>
		</author>
		<idno type="DOI">10.1109/ICALT.2018.00057</idno>
	</analytic>
	<monogr>
		<title level="m">IEEE 18th International Conference on Advanced Learning Technologies (ICALT)</title>
				<imprint>
			<date type="published" when="2018">2018</date>
			<biblScope unit="page" from="212" to="214" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b46">
	<analytic>
		<title level="a" type="main">Visualizing collaboration in teamwork: A multimodal learning analytics platform for non-verbal communication</title>
		<author>
			<persName><forename type="first">R</forename><surname>Noël</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Miranda</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Cechinel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Riquelme</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><forename type="middle">T</forename><surname>Primo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Munoz</surname></persName>
		</author>
		<idno type="DOI">10.3390/app12157499</idno>
	</analytic>
	<monogr>
		<title level="j">Applied Sciences</title>
		<imprint>
			<biblScope unit="volume">12</biblScope>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b47">
	<monogr>
		<title level="m" type="main">Software architecture with Python : design and architect highly scalable, robust, clean, and high performance applications in Python</title>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">B</forename><surname>Pillai</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2017">2017</date>
			<publisher>Packt Publishing</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b48">
	<analytic>
		<title level="a" type="main">A review on data fusion in multimodal learning analytics and educational data mining</title>
		<author>
			<persName><forename type="first">W</forename><surname>Chango</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">A</forename><surname>Lara</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Cerezo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Romero</surname></persName>
		</author>
		<idno type="DOI">10.1002/widm.1458</idno>
	</analytic>
	<monogr>
		<title level="j">Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery</title>
		<imprint>
			<biblScope unit="volume">12</biblScope>
			<biblScope unit="page">e1458</biblScope>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b49">
	<monogr>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">W</forename><surname>Shaffer</surname></persName>
		</author>
		<title level="m">Quantitative ethnography</title>
				<imprint>
			<publisher>Cathcart Press</publisher>
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b50">
	<analytic>
		<title level="a" type="main">The multimodal matrix as a quantitative ethnography methodology</title>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">Buckingham</forename><surname>Shum</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Echeverria</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Martinez-Maldonado</surname></persName>
		</author>
		<idno type="DOI">10.1007/978-3-030-33232-7_3</idno>
	</analytic>
	<monogr>
		<title level="m">Advances in Quantitative Ethnography</title>
				<editor>
			<persName><forename type="first">B</forename><surname>Eagan</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">M</forename><surname>Misfeldt</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">A</forename><surname>Siebert-Evenstone</surname></persName>
		</editor>
		<meeting><address><addrLine>Cham</addrLine></address></meeting>
		<imprint>
			<publisher>Springer International Publishing</publisher>
			<date type="published" when="2019">2019</date>
			<biblScope unit="page" from="26" to="40" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b51">
	<analytic>
		<title level="a" type="main">What Can Analytics for Teamwork Proxemics Reveal About Positioning Dynamics In Clinical Simulations?</title>
		<author>
			<persName><forename type="first">G</forename><surname>Fernandez-Nieto</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Martinez-Maldonado</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Echeverria</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Kitto</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>An</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">Buckingham</forename><surname>Shum</surname></persName>
		</author>
		<idno type="DOI">10.1145/3449284</idno>
	</analytic>
	<monogr>
		<title level="j">Proceedings of the ACM on Human-Computer Interaction</title>
		<imprint>
			<biblScope unit="volume">5</biblScope>
			<biblScope unit="page" from="1" to="24" />
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b52">
	<analytic>
		<title level="a" type="main">Modelling co-located team communication from voice detection and positioning data in healthcare simulation</title>
		<author>
			<persName><forename type="first">L</forename><surname>Zhao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Yan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Gasevic</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Dix</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Jaggard</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Wotherspoon</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Alfredo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Martinez-Maldonado</surname></persName>
		</author>
		<idno type="DOI">10.1145/3506860.3506935</idno>
	</analytic>
	<monogr>
		<title level="m">LAK22: 12th International Learning Analytics and Knowledge Conference, LAK22</title>
				<meeting><address><addrLine>New York, NY, USA</addrLine></address></meeting>
		<imprint>
			<publisher>Association for Computing Machinery</publisher>
			<date type="published" when="2022">2022</date>
			<biblScope unit="page" from="370" to="380" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b53">
	<analytic>
		<title level="a" type="main">Data storytelling editor: A teacher-centred tool for customising learning analytics dashboard narratives</title>
		<author>
			<persName><forename type="first">G</forename><forename type="middle">M</forename><surname>Fernández-Nieto</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Martinez-Maldonado</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Echeverria</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Kitto</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Gašević</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">B</forename><surname>Shum</surname></persName>
		</author>
		<idno type="DOI">10.1145/3636555.3636930</idno>
	</analytic>
	<monogr>
		<title level="m">14th Learning Analytics and Knowledge Conference (LAK &apos;24)</title>
		<imprint>
			<date type="published" when="2024">2024</date>
			<publisher>ACM</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b55">
	<analytic>
		<title level="a" type="main">Sense of agency and user experience: Is there a link?</title>
		<author>
			<persName><forename type="first">J</forename><surname>Bergström</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Knibbe</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Pohl</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Hornbaek</surname></persName>
		</author>
		<idno type="DOI">10.1145/3490493</idno>
	</analytic>
	<monogr>
		<title level="j">ACM Trans. Comput.-Hum. Interact</title>
		<imprint>
			<biblScope unit="volume">29</biblScope>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b56">
	<monogr>
		<author>
			<persName><forename type="first">M</forename><surname>Ez-Zaouia</surname></persName>
		</author>
		<ptr target="https://hal.science/hal-02516815" />
		<title level="m">Teacher-Centered Dashboards Design Process</title>
				<meeting><address><addrLine>Frankfurt, Germany</addrLine></address></meeting>
		<imprint>
			<publisher>LAK20</publisher>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b57">
	<analytic>
		<title level="a" type="main">Collaborative Learning Analytics</title>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">F</forename><surname>Wise</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Knight</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">B</forename><surname>Shum</surname></persName>
		</author>
		<idno type="DOI">10.1007/978-3-030-65291-3_23</idno>
	</analytic>
	<monogr>
		<title level="m">International Handbook of Computer-Supported Collaborative Learning</title>
				<editor>
			<persName><forename type="first">U</forename><surname>Cress</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">C</forename><surname>Rosé</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">A</forename><forename type="middle">F</forename><surname>Wise</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">J</forename><surname>Oshima</surname></persName>
		</editor>
		<meeting><address><addrLine>Cham</addrLine></address></meeting>
		<imprint>
			<publisher>Springer International Publishing</publisher>
			<date type="published" when="2021">2021</date>
			<biblScope unit="page" from="425" to="443" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b58">
	<analytic>
		<title level="a" type="main">Distributed parallel deep learning of hierarchical extreme learning machine for multimode quality prediction with big process data</title>
		<author>
			<persName><forename type="first">L</forename><surname>Yao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Ge</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.engappai.2019.03.011</idno>
	</analytic>
	<monogr>
		<title level="j">Engineering Applications of Artificial Intelligence</title>
		<imprint>
			<biblScope unit="volume">81</biblScope>
			<biblScope unit="page" from="450" to="465" />
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b59">
	<analytic>
		<title level="a" type="main">How do teachers use dashboards enhanced with data storytelling elements according to their data visualisation literacy skills?</title>
		<author>
			<persName><forename type="first">S</forename><surname>Pozdniakov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Martinez-Maldonado</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y.-S</forename><surname>Tsai</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Echeverria</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Srivastava</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Gasevic</surname></persName>
		</author>
		<idno type="DOI">10.1145/3576050.3576063</idno>
	</analytic>
	<monogr>
		<title level="m">LAK23: 13th International Learning Analytics and Knowledge Conference, LAK2023</title>
				<meeting><address><addrLine>New York, NY, USA</addrLine></address></meeting>
		<imprint>
			<publisher>Association for Computing Machinery</publisher>
			<date type="published" when="2023">2023</date>
			<biblScope unit="page" from="89" to="99" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b60">
	<analytic>
		<title level="a" type="main">Learning analytics in supporting student agency: A systematic review</title>
		<author>
			<persName><forename type="first">D</forename><surname>Hooshyar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Tammets</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Ley</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Aus</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Kollom</surname></persName>
		</author>
		<idno type="DOI">10.3390/su151813662</idno>
	</analytic>
	<monogr>
		<title level="j">Sustainability</title>
		<imprint>
			<biblScope unit="volume">15</biblScope>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b61">
	<analytic>
		<title level="a" type="main">Analyzing learners&apos; perception of indicators in student-facing analytics: A card sorting approach</title>
		<author>
			<persName><forename type="first">E</forename><surname>Villalobos</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Hilliger</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Pérez-Sanagustín</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>González</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Celis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Broisin</surname></persName>
		</author>
		<idno type="DOI">10.1007/978-3-031-42682-7_29</idno>
	</analytic>
	<monogr>
		<title level="m">Responsive and Sustainable Educational Futures</title>
				<editor>
			<persName><forename type="first">O</forename><surname>Viberg</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">I</forename><surname>Jivet</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">P</forename><forename type="middle">J</forename><surname>Muñoz-Merino</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">M</forename><surname>Perifanou</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">T</forename><surname>Papathoma</surname></persName>
		</editor>
		<meeting><address><addrLine>Cham</addrLine></address></meeting>
		<imprint>
			<publisher>Springer Nature Switzerland</publisher>
			<date type="published" when="2023">2023</date>
			<biblScope unit="page" from="430" to="445" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b62">
	<analytic>
		<title level="a" type="main">&apos;We learn from our mistakes&apos;: Nursing students&apos; perceptions of a productive failure simulation</title>
		<author>
			<persName><forename type="first">E</forename><surname>Palominos</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Levett-Jones</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Power</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Martinez-Maldonado</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.colegn.2022.02.006</idno>
	</analytic>
	<monogr>
		<title level="j">Collegian</title>
		<imprint>
			<biblScope unit="volume">29</biblScope>
			<biblScope unit="page" from="708" to="712" />
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b63">
	<analytic>
		<title level="a" type="main">HuCETA: A framework for human-centered embodied teamwork analytics</title>
		<author>
			<persName><forename type="first">V</forename><surname>Echeverria</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Martinez-Maldonado</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Yan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Zhao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Fernandez-Nieto</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Gašević</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">B</forename><surname>Shum</surname></persName>
		</author>
		<idno type="DOI">10.1109/MPRV.2022.3217454</idno>
	</analytic>
	<monogr>
		<title level="j">IEEE Pervasive Computing</title>
		<imprint>
			<biblScope unit="volume">22</biblScope>
			<biblScope unit="page" from="39" to="49" />
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b64">
	<analytic>
		<title level="a" type="main">Slade: A method for designing human-centred learning analytics systems</title>
		<author>
			<persName><forename type="first">R</forename><surname>Alfredo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Echeverria</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Jin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Swiecki</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Gašević</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Martinez-Maldonado</surname></persName>
		</author>
		<idno type="DOI">10.1145/3636555.3636847</idno>
	</analytic>
	<monogr>
		<title level="m">LAK24: 14th International Learning Analytics and Knowledge Conference</title>
				<imprint>
			<date type="published" when="2024">2024</date>
			<biblScope unit="page">16</biblScope>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
