<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <issn pub-type="ppub">1613-0073</issn>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Preface to the Proceedings of the Human-Centric eXplainable AI in Education Workshop (HEXED 2024)</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Juan D. Pinto</string-name>
          <email>jdpinto2@illinois.edu</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Luc Paquette</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Vinitra Swamy</string-name>
          <email>vinitra.swamy@epfl.ch</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Tanja Käser</string-name>
          <email>tanja.kaeser@epfl.ch</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Qianhui Liu</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Lea Cohausz</string-name>
          <email>lea.cohausz@uni-mannheim.de</email>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>École Polytechnique Fédérale de Lausanne</institution>
          ,
          <addr-line>Lausanne</addr-line>
          ,
          <country country="CH">Switzerland</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>University of Illinois Urbana-Champaign</institution>
          ,
          <addr-line>Urbana, IL</addr-line>
          ,
          <country country="US">USA</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>University of Mannheim</institution>
          ,
          <addr-line>Mannheim</addr-line>
          ,
          <country country="DE">Germany</country>
        </aff>
      </contrib-group>
      <abstract>
<p>The HEXED 2024 workshop attracted researchers with diverse backgrounds who share the common goal of achieving greater algorithmic explainability and transparency in education research. The format of the workshop was hybrid, with attendees participating both in person and online.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
<p>The HEXED 2024 workshop (https://hexed-workshop.github.io) is the first
workshop entirely dedicated to the advancement of eXplainable AI (XAI) in
the field of education. It was held in conjunction with the 17th
International Conference on Educational Data Mining (EDM 2024).</p>
    </sec>
    <sec id="sec-2">
<title>2. Overview of the program</title>
      <p>The workshop program consisted of a poster session, a
keynote presentation, a panel discussion, and three working
sessions towards a cohesive vision for XAI in education.</p>
      <sec id="sec-2-1">
        <title>2.1. Poster session</title>
        <p>Eight papers were presented at the workshop’s poster
session: four accepted papers (out of nine submissions received)
and four encore papers. The accepted papers consisted of
three research papers and one position paper, all of which
can be found within these proceedings. The encore papers
were specially invited presentations of work that has been
or is being presented elsewhere—dealing with the theme of
XAI in education—and which the organizers agreed would
contribute to the aims of the workshop. Links to the original
encore papers can be found on the workshop website.</p>
      </sec>
      <sec id="sec-2-2">
        <title>2.2. Keynote</title>
<p>Cristina Conati, Professor of Computer Science at the
University of British Columbia, delivered a keynote address
titled “Personalized XAI”. In her presentation, she discussed
the importance of explanations in AI-driven systems, noting
that while they can be useful, they are not always wanted or
effective. She emphasized the need for carefully designed
explanations that meet the needs of specific learners. Through
examples from her research, she demonstrated how a system
that provides explanations that are personalized to learners’
personality traits and cognitive skills can have a positive
impact on explanation effectiveness, which can in turn affect
learning outcomes and learner perceptions.</p>
      </sec>
      <sec id="sec-2-3">
        <title>2.3. Working sessions</title>
        <p>During the workshop’s three working sessions, we intended
to provoke researchers’ thoughts and discussions about XAI
in education through guiding questions such as “What kind
of experience do you have with XAI?”, “What barriers exist to
successfully using XAI in education?”, and “What challenges
does the XAI in education community need to address?”. We
briefly summarize the outcome here.</p>
        <p>Workshop participants agreed that XAI can help improve
model performance, address model biases, help students
learn better, and support student metacognition. The target
audience of explanations was a repeated theme throughout
the working sessions, and can consist of developers of the
model, users of the system (such as teachers and students),
and other stakeholders (such as school administrators, legal
teams, and parents).</p>
        <p>Participants also brought up the need to understand XAI
as part of a larger pipeline rather than as a standalone goal.
This suggests that the effective design, use, and explanation
of AI models in education needs to involve humans as part
of the process. There was also discussion about the
possibility of triangulating explanations with multiple sources of
knowledge (e.g. checking that they are plausible based on the
nature of the data and human intuition) to ensure accuracy
and intelligibility.</p>
        <p>In terms of moving the field forward, the participants
agreed that it is necessary for the community to use a
common vocabulary of XAI as well as to advance explainability
evaluation metrics in their studies—though sound
evaluation approaches will depend heavily on context. Participants
also discussed the possibility of using simulated students in
XAI studies in education. This thread was further explored
with a discussion on the need for robust ways to validate
simulations and the challenge of designing simulated
interventions. Finally, there was some discussion about the
potential for using large language models (LLMs) to further
the goals of explainability, though there was some
apprehension due to the inherent opaqueness of LLMs themselves.</p>
      </sec>
      <sec id="sec-2-5">
        <title>2.4. Panel</title>
        <p>The HEXED 2024 workshop included a panel discussion
with three researchers: Lea Cohausz (University of Mannheim),
Jakub Kuzilek (Humboldt University of Berlin), and Juan Pinto
(University of Illinois Urbana-Champaign). This format provided
the opportunity to learn more about the work and views of these
specific researchers, with questions from a moderator and the
audience. The discussion included topics such as successful
uses of XAI in education, open challenges for the field, and
existing barriers to broader adoption of XAI.</p>
      </sec>
      <sec id="sec-2-4">
        <title>2.5. Working diagram</title>
<p>With the goal of better understanding the current state of the
field and some future directions, participants contributed
to a collaborative visual map of the things discussed during
the workshop. This working diagram (created using the
Miro platform) allowed participants to make connections
between concepts, questions, and prior studies. We hope
to continue building on this preliminary ontology so that it
can serve as a foundation for future work.</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. Program committee</title>
    </sec>
    <sec id="sec-4">
      <title>Acknowledgments</title>
<p>We would like to acknowledge the effort that went into
making this workshop a success. Thank you to all the
members of the program committee, contributors, our keynote
speaker, and authors. We would also like to thank all those
who attended the workshop, in person or remotely, for the
wonderful discussions and insights they provided.</p>
    </sec>
  </body>
  <back>
    <ref-list />
  </back>
</article>