                         Companion Proceedings 10th International Conference on Learning Analytics & Knowledge (LAK20)




        CrossMMLA in practice: Collecting, annotating and analyzing
                    multimodal data across spaces

  Michail Giannakos, NTNU, Norway; Daniel Spikol, University of Malmö, Sweden; Inge
 Molenaar, Radboud University, The Netherlands; Daniele Di Mitri, Open University, The
Netherlands; Kshitij Sharma, NTNU, Norway; Xavier Ochoa, New York University, NY, USA;
                      Rawad Hammad, University of East London, UK

             ABSTRACT: Learning is a complex process that is associated with many aspects of interaction
             and cognition (e.g., hard mental operations, cognitive friction) and that can take place across
             diverse contexts (online, classrooms, labs, maker spaces, etc.). The complexity of this process
             and its environments means that no single data modality is likely to paint a complete picture
             of the learning experience; multiple data streams from different sources and times are needed
             to complement each other. The need to understand and improve learning that occurs in
             increasingly open, distributed, subject-specific and ubiquitous scenarios requires the
             development of multimodal and multisystem learning analytics. Following the tradition of the
             CrossMMLA workshop series, the proposed workshop aims to serve as a place to learn about
             the latest advances in the design, implementation and adoption of systems that take into
             account the different modalities of human learning and the diverse settings in which it takes
             place. Beyond the necessary interchange of ideas, the workshop also aims to foster critical
             discussion, debate and co-development of ideas for advancing the state of the art in
             CrossMMLA.

             Keywords: multimodal learning analytics, learning spaces, sensor data



1            BACKGROUND

Multimodal learning analytics (MMLA) is an emerging domain of Learning Analytics and plays an
important role in expanding the field's goal of understanding and improving learning in all the
different environments where it occurs. The challenge for research and practice in this field is how
to develop theories about the analysis of human behaviors during diverse learning processes and to
create useful tools that augment the capabilities of learners and instructors in a way that is ethical
and sustainable. The CrossMMLA workshop serves as a forum to exchange ideas on how to analyze
evidence from multimodal and multisystem data, how to extract meaning from these increasingly
fluid and complex data coming from different kinds of transformative learning situations, and how
best to feed the results of these analyses back to achieve positive transformative actions in those
learning processes. CrossMMLA aims to help learning analytics capture students' learning
experiences across diverse learning spaces. The challenge is to capture those interactions in a
meaningful way that can be translated into actionable insights (e.g., real-time formative
assessment, post-hoc reflective reviews; Di Mitri et al., 2018; Echeverria et al., 2019).

MMLA uses the advances in machine learning and affordable sensor technologies (Ochoa, 2017) to
act as a virtual observer/analyst of learning activities. Additionally, this virtual nature allows MMLA
to provide new insights into learning processes that happen across multiple contexts between
stakeholders, devices and resources (both physical and digital), which are often hard to model and
orchestrate (Scherer et al., 2012; Prieto et al., 2018). Using such technologies in combination with
machine learning, LA researchers can now perform text, speech, handwriting, sketch, gesture,
affect, or eye-gaze analysis (Donnelly et al., 2016; Blikstein & Worsley, 2016; Spikol et al., 2018),
improve the accuracy of their predictions and learned models (Giannakos et al., 2019), and provide
automated feedback to enable learner self-reflection (Ochoa et al., 2018). However, with this
increased complexity in data, new challenges also arise. Conducting the data gathering, pre-
processing, analysis, annotation and sense-making in a way that is meaningful for learning scientists
and other stakeholders (e.g., students or teachers) still poses challenges in this emergent field (Di
Mitri et al., 2018; Sharma et al., 2019).
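A recurring pre-processing challenge named above is aligning data streams that arrive at different
sampling rates before annotation and analysis can begin. The sketch below is a hypothetical
illustration of one common approach (nearest-timestamp matching), not a tool from the workshop;
the stream names, labels and the `max_gap` threshold are invented for the example.

```python
from bisect import bisect_left

def align_nearest(sensor, annotations, max_gap=0.5):
    """Match each annotation (t, label) to the nearest sensor sample (t, value).

    `sensor` and `annotations` are lists of (timestamp, payload) tuples sorted
    by timestamp. Pairs farther apart than `max_gap` seconds are dropped,
    since the two modalities were not observed close enough in time.
    """
    times = [t for t, _ in sensor]
    aligned = []
    for t, label in annotations:
        i = bisect_left(times, t)
        # Candidates: the sensor sample just before and just after t.
        candidates = [j for j in (i - 1, i) if 0 <= j < len(times)]
        j = min(candidates, key=lambda k: abs(times[k] - t))
        if abs(times[j] - t) <= max_gap:
            aligned.append((t, label, sensor[j][1]))
    return aligned

# Hypothetical streams: eye-tracker fixation targets and observer annotations.
gaze = [(0.0, "screen"), (1.0, "robot"), (2.0, "peer"), (3.0, "screen")]
notes = [(0.9, "on-task"), (2.1, "collaborating"), (5.0, "break")]
print(align_nearest(gaze, notes))
# (5.0, "break") is dropped: no gaze sample within 0.5 s of it.
```

In practice, workshop participants would apply this kind of synchronization step before feeding the
fused records to annotation tools or machine learning pipelines.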

CrossMMLA provides participants with hands-on experience in gathering data from learning
situations using wearable apparatuses (e.g., eye-tracking glasses, wristbands), non-invasive devices
(e.g., cameras) and other technologies (in the morning half of the workshop). In addition, we will
demonstrate how to analyze and annotate such data, and how machine learning algorithms can help
us obtain insights about the learning experience (in the afternoon half). CrossMMLA provides
opportunities not only to learn about exciting new technologies and methods, but also to share
participants' own practices for MMLA, and to meet and collaborate with other researchers in this area.

2            CROSSMMLA HISTORY AND DEVELOPMENTS

CrossMMLA continues a recently established but already consistent tradition of workshops on
MMLA and CrossLAK, organized at both the EC-TEL and LAK conferences. These past events have
used a variety of formats, from hands-on learning experiences and tutorials based on participant
contributions/papers, to conceptual and community-building activities (which eventually led to the
creation of a Special Interest Group within the Society of Learning Analytics Research - the SoLAR
CrossMMLA SIG1).

The CrossMMLA community aims to become the focal point of contributions coming from a variety
of fields (e.g., learning, HCI, data science, ubiquitous computing). Prior to the CrossMMLA event, we
launch a call for submissions that shapes the hands-on activities to be performed. The contributions
normally belong in one or more of the following categories:

1. Data gathering setups and prototypes (e.g., the use of the Multimodal Learning Hub and
   EEGlass).
2. Data analysis/annotation methods and tools (e.g., Visual Inspection Tool, coding schemas and
   “grey-box” analyses).
3. Learning activities/Pedagogical designs that could benefit from CrossMMLA techniques.
4. Examples of CrossMMLA research designs or case studies.




1 Multimodal Learning Analytics Across Spaces Special Interest Group (SoLAR CrossMMLA SIG):
https://www.solaresearch.org/community/sigs/crossmmla-sig/

During the CrossMMLA events, participants form teams that then engage in different CrossMMLA
projects. These teams use the aforementioned contributions to define the learning scenarios or
learning activities to be performed, the research questions to be investigated through the use of
CrossMMLA, and the data gathering, annotation and analysis to be undertaken during the
workshop.

Announcements and future CrossMMLA calls are available here: http://crossmmla.org/

3            OBJECTIVES AND INTENDED OUTCOMES

By the end of the CrossMMLA workshop, participants are expected to:
      ● Engage with state-of-the-art ideas, designs and implementations of CrossMMLA systems.
      ● Capture, analyze and report multimodal data on the spot.
      ● Contribute to and shape the research agenda and future of the CrossMMLA community.

Aside from the (intangible, but very important) learning of participants about CrossMMLA, and the
strengthening of the SoLAR Special Interest Group on CrossMMLA, the workshop also targets the
following two tangible outcomes:
    1. Based on the contributions of the participants, a catalogue of shared community
         knowledge.
    2. Based on the learning activities tested in the workshop, and the rest of the hands-on
         activities, an open “CrossMMLA dataset” made available to the community (through the
         SIG/workshop website or other European open science repositories).

All contributions and materials are made available in the LAK Companion Proceedings. The
organisers plan to create a collaborative contribution describing the consensus reached during the
workshop. Depending on the outcomes of the workshop and participants’ interest, and as with
previous editions of CrossMMLA, we will consider proposing a special issue in an international
journal (e.g., JLA, CHB, BIT).

REFERENCES

Blikstein, P., & Worsley, M. (2016). Multimodal learning analytics and education data mining: Using
         computational technologies to measure complex learning tasks. Journal of Learning
         Analytics, 3(2), 220-238.
Di Mitri, D., Schneider, J., Specht, M., & Drachsler, H. (2018). The Big Five: Addressing recurrent
         multimodal learning data challenges. In Proceedings of the 8th International Learning
         Analytics & Knowledge Conference (pp. 420-424). SoLAR.
Donnelly, P. J., et al. (2016). Automatic teacher modeling from live classroom audio. In Proceedings
         of the 2016 Conference on User Modeling, Adaptation and Personalization (pp. 45-53). ACM.
Echeverria, V., Martinez-Maldonado, R., & Buckingham Shum, S. (2019). Towards collaboration
         translucence: Giving meaning to multimodal group data. In Proceedings of the 2019 CHI
         Conference on Human Factors in Computing Systems (Paper 39). ACM.
Giannakos, M. N., Sharma, K., et al. (2019). Multimodal data as a means to understand the learning
         experience. International Journal of Information Management, 48, 108-119.
Ochoa, X. (2017). Multimodal learning analytics. Handbook of Learning Analytics, 129-141.
Ochoa, X., et al. (2018). The RAP System: Automatic feedback of oral presentation skills using
         multimodal analysis and low-cost sensors. In Proceedings of the 8th International Conference
         on Learning Analytics and Knowledge (pp. 360-364). ACM.
Prieto, L. P., Sharma, K., Kidzinski, Ł., Rodríguez-Triana, M. J., & Dillenbourg, P. (2018). Multimodal
         teaching analytics: Automated extraction of orchestration graphs from wearable sensor
         data. Journal of Computer Assisted Learning, 34(2), 193-203.
Scherer, S., Worsley, M., & Morency, L. P. (2012). 1st international workshop on multimodal learning
         analytics. In Proceedings of the 14th ACM International Conference on Multimodal
         Interaction (ICMI 2012). ACM.
Sharma, K., Papamitsiou, Z., & Giannakos, M. (2019). Building pipelines for educational data using AI
         and multimodal analytics: A “grey-box” approach. British Journal of Educational
         Technology, 50(6), 3004-3031.
Spikol, D., Ruffaldi, E., Dabisias, G., & Cukurova, M. (2018). Supervised machine learning in
         multimodal learning analytics for estimating success in project-based learning. Journal of
         Computer Assisted Learning, 34(4), 366-377.




     Creative Commons License, Attribution - NonCommercial-NoDerivs 3.0 Unported (CC BY-NC-ND 3.0)
Copyright © 2020 for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).