<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <issn pub-type="ppub">1613-0073</issn>
    </journal-meta>
    <article-meta>
      <contrib-group>
        <aff id="aff0">
          <label>0</label>
          <institution>Daniele Di Mitri, DIPF</institution>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Leibniz Institute for Research and Information in Education</institution>
          <addr-line>Rostocker Str. 6, 60323 Frankfurt am Main</addr-line>
          ,
          <country country="DE">Germany</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2023</year>
      </pub-date>
      <volume>3439</volume>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Preface</title>
      <p>© 2023 for the individual papers by the papers’ authors. Copying permitted for private and
academic purposes. Re-publication of material from this volume requires permission by the
copyright owners.</p>
      <p>Over the last several years, Multimodal Learning Analytics (MMLA) has brought together
diverse fields that combine educational, computational, psychological, and related research
into how people learn and how this complex process can be supported with multichannel and
multimodal technology.</p>
      <p>The MMLA community strives to untangle complex learning dynamics by analysing
multimodal data from multiple devices, such as sensors, cameras, and Internet of Things tools, as
well as expert and self-reported data. Similarly, the MMLA community uses multimodal
interfaces to explore new ways of learning that allow learners to interact with learning material
and stimulate them to use the psychomotor and affective domains of learning besides the
purely cognitive domain. The MMLA community promotes research that tries to make sense
of complex educational data that involve multiple interaction modalities, people, and learning
spaces. Understanding and optimising learning traces from the real world requires renewed
connections between technology, learning, and design, and builds upon ongoing and
previous work from the Learning Analytics &amp; Knowledge and related data-driven communities.</p>
      <p>The work of the MMLA community is far from complete. MMLA needs to develop theories
about analysing human behaviours during diverse learning processes across spaces and to
create valuable tools that could augment the capabilities of learners and instructors. These tools
and practices must be designed and implemented ethically to provide value and equity for all
learners.</p>
      <p>
        The COVID-19 pandemic and the rapid global shift to online learning have posed major
challenges to research and educational practice in this field. SoLAR’s Special
Interest Group on Multimodal Learning Analytics Across Spaces (CROSSMMLA SIG) has tried to
foster the exchange of knowledge and peer learning activities by organising the CROSSMMLA
workshop series at LAK conferences, editing the MMLA Handbook to be published soon with
Springer [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], curating special issues in journals such as BJET and MDPI Sensors, and
organising satellite events at other conferences (the MAIED workshop at AIED 2021, and the
MILeS workshops at EC-TEL 2021 and 2022).
      </p>
      <p>
        The CROSSMMLA workshop explored how multimodal learning analytics can effectively
capture students’ learning experiences across diverse learning spaces (online, in presence, in
the field) and diverse learning domains (psychomotor, cognitive, and emotional). The core
challenge of MMLA is to capture these interactions meaningfully so that they can be translated
as part of formative assessment in real time and as post-hoc reflective reviews [
        <xref ref-type="bibr" rid="ref2 ref3">2, 3</xref>
        ].
      </p>
      <p>In this edition of the CROSSMMLA workshop, special attention was paid to “leveraging
multimodal data for generating meaningful feedback”. We invited prospective authors
to reflect on how their MMLA solutions and approaches can generate meaningful
feedback for teachers and learners. The workshop served as a forum to exchange ideas on how
we, as a community, can use our knowledge and experience from CROSSMMLA to design
new tools for analysing evidence from multimodal and multichannel data: How can we extract
meaning from these increasingly fluid and complex data generated in various transformative
learning situations, and how can feedback on the results of these analyses best support the
learning process?</p>
      <p>
        The dimensions and contexts of MMLA are complex and layered, providing researchers with
multiple challenges [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. In the current world, research and practice are further complicated
by the necessity of remote learning that includes mixed scenarios with virtual co-located and
face-to-face learning activities. The MMLA community urgently needs to find ways to research,
design, and further develop our tools and methods to investigate this new landscape. The
workshop aims to provide a venue for actively discussing the following: How can we extract
meaning and communicate insights from multimodal data to support and provide feedback
on learning across physical and digital spaces? Researchers and education providers have
been adapting to local regulations because of the COVID-19 disruption; education has been
re-invented in several places around the globe. Therefore, the larger aim of our workshop is
to investigate what role MMLA as a community can have in supporting this adaptation in the
short term and how we can join efforts to prepare ourselves for the next disruption (in the
mid-long term).
      </p>
    </sec>
    <sec id="sec-3">
      <title>Accepted contributions</title>
      <p>In the first paper, Kawashima explores the potential of using machine learning methods for
multimodal learning analytics and feedback generation. By integrating behavioural
measurements and content analysis of learning materials, the aim of MMLA should be to
estimate learners’ states and generate adaptive feedback. The paper discusses the recent
trend of representation learning and its potential for integrating multimodal behavioural and
contextual data. Kawashima asserts that the collaboration between learning analytics and
machine learning can lead to a new framework of feedback loops in MMLA.</p>
      <p>The second paper, from Li, Majumdar, Yang and Ogata, proposes a learner-model-based
feedback model that leverages students’ daily-life activity data to provide multi-dimensional feedback
on contextual activity, self-direction management, and skill assessment. The feedback model is
implemented in two learning dashboards for English learning and physical activity contexts, and
the potential effects of the feedback model on student engagement and skill improvement are
demonstrated through two case studies in K-12 settings. The results indicate that, with the
support of the learning dashboards, K-12 students can continuously engage in learning and
physical activities and regularly receive feedback. Future research directions include investigating
the transferability of the self-direction skills (SDS) model to other contexts and improving the
feedback model from a learner’s lifestyle perspective.</p>
      <p>The third paper from Kwon et al. proposes an advanced ontological knowledge structure
called Knowledge Objects (KOs) to enhance the process of real-time knowledge sharing. KOs
consist of metadata linked to the multiple data streams examined for a specific task. They allow
multiple data streams to be combined and analysed in real time. The authors present the
development of KOs as a solution to adopting modern technology and techniques for real-time
knowledge sharing. The proposed KOs were tested on a model that identifies moments in
which a refrigerator is opened, and a sink faucet is turned on, achieving 91.7% recall, 58.3%
precision, and 80% average precision.</p>
      <p>The fourth paper, by Wang, Ruis and Shaffer, proposes a method called Qualitative Parameter
Triangulation (QPT) to address the challenges of data fusion and parameterization in
multimodal learning analytics. QPT generates optimized parameter values for event-based,
process-oriented, and connection-structured multimodal learning models. Its key concept,
keeping qualitative researchers in the loop, ensures interpretive alignment and offers potential
for closing feedback loops with other stakeholders in a multimodal study. Future work
includes testing the efficacy of QPT on empirical data and exploring its applicability to
fields beyond learning analytics.</p>
      <p>The fifth paper, from Chejara et al., discusses the challenges and potential solutions of
conducting MMLA research in physical settings. MMLA has enabled researchers to understand
learning through a new perspective by using sensors to collect data such as audio, video,
eye-gaze, and physiological indicators. The paper presents some open challenges, such as noisy
data from the classroom, and proposes potential solutions. The authors also mention that their
solutions are the result of general engineering and not MMLA-specific. Overall, this paper
provides insights into the challenges faced by MMLA researchers and how they have addressed
them.</p>
      <p>The sixth and last paper, from Schneider et al., presents a novel approach to assessing human
performance and expertise levels based on sensor data. The study collected accelerometer
data from a domain expert and a novice performing tasks in semi-constrained settings. The
results showed that expert performances are smoother, contain fewer irregularities, and exhibit
more consistently uniform patterns than novice performances. This approach can be used in
various fields, such as sports or dance, to quickly distinguish between experts and novices.</p>
      <sec id="sec-3-1">
        <title>Machine Learning for Multimodal Learning Analytics and Feedback</title>
        <p>Hiroaki Kawashima (p. 6)</p>
        <p>Modeling Feedback for Self-Direction Skills in K-12 Educational Settings with Learning
and Physical Activity Data</p>
        <p>Huiyong Li, Rwitajit Majumdar, Yuanyuan Yang, and Hiroaki Ogata (p. 12)</p>
        <p>Qualitative Parameter Triangulation: A Formulated Approach to Parameterize
Multimodal Models</p>
        <p>Yeyu Wang, Andrew R. Ruis, and David Williamson Shaffer (p. 30)</p>
        <p>Multimodal Learning Analytics research in the wild: challenges and their potential
solutions</p>
      </sec>
      <sec id="sec-3-2">
        <title>Novices make more noise! The D&amp;K effect 2.0?</title>
      </sec>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>M.</given-names>
            <surname>Giannakos</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Spikol</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Di Mitri</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Sharma</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Ochoa</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Hammad</surname>
          </string-name>
          ,
          <source>The Multimodal Learning Analytics Handbook</source>
          , Springer Nature,
          <year>2022</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>D.</given-names>
            <surname>Di Mitri</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Schneider</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Specht</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Drachsler</surname>
          </string-name>
          ,
          <article-title>From signals to knowledge: A conceptual model for multimodal learning analytics</article-title>
          ,
          <source>Journal of Computer Assisted Learning</source>
          <volume>34</volume>
          (
          <year>2018</year>
          )
          <fpage>338</fpage>
          -
          <lpage>349</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>V.</given-names>
            <surname>Echeverria</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Martinez-Maldonado</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Buckingham Shum</surname>
          </string-name>
          ,
          <article-title>Towards collaboration translucence: Giving meaning to multimodal group data</article-title>
          ,
          <source>Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems</source>
          ,
          <year>2019</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>16</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>M.</given-names>
            <surname>Cukurova</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Giannakos</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Martinez-Maldonado</surname>
          </string-name>
          ,
          <article-title>The promise and challenges of multimodal learning analytics</article-title>
          ,
          <source>British Journal of Educational Technology</source>
          <volume>51</volume>
          (
          <year>2020</year>
          )
          <fpage>1441</fpage>
          -
          <lpage>1449</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>