<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <issn pub-type="ppub">1613-0073</issn>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Multimodal, Affective and Interactive eXplainable AI Workshop (MAI-XAI 2025), co-located with the 28th European Conference on Artificial Intelligence (ECAI 2025), Bologna, Italy, 25-30 October 2025</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Philipp Cimiano</string-name>
          <email>cimiano@cit-ec.uni-bielefeld.de</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Fosca Giannotti</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Tim Miller</string-name>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Barbara Hammer</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Alejandro Catalá Bolos</string-name>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Peter Flach</string-name>
          <xref ref-type="aff" rid="aff3">3</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Jose M. Alonso-Moral</string-name>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Bielefeld University</institution>
          ,
          <country country="DE">Germany</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Scuola Normale Superiore</institution>
          ,
          <addr-line>Pisa</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>The University of Queensland</institution>
          ,
          <country country="AU">Australia</country>
        </aff>
        <aff id="aff3">
          <label>3</label>
          <institution>University of Bristol</institution>
          ,
          <country country="UK">UK</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>This collection comprises the papers presented at the 2nd Workshop on Multimodal, Affective and Interactive eXplainable AI (MAI-XAI), co-located with the European Conference on Artificial Intelligence (ECAI). The workshop offers researchers and practitioners the opportunity to identify promising new research directions in XAI along the above-mentioned lines, focusing on how to provide “natural explanations”. The workshop explores three pillars toward creating more natural explanations: i) Multimodal XAI, ii) Affective XAI and iii) Interactive XAI.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Preface</title>
    </sec>
    <sec id="sec-2">
      <title>1. Introduction</title>
      <p>The field of eXplainable Artificial Intelligence (XAI) is concerned with developing methods that make
the decisions and predictions of machine-learned models accessible and understandable to different
stakeholders, ranging from machine learning experts to lay users. An important goal is to design systems
in a human-centered manner, ensuring that explanations are effective in enhancing human users’
understanding of the model and in empowering them to take appropriate action.</p>
      <p>Yet, the current state of the art in XAI is limited in this respect. Many studies in the field of
XAI evaluate technology in an intrinsic fashion using measures such as
validity, proximity, etc., which tell us little about the actual effectiveness of explanations from an
end-user perspective. Further, there is a lack of methods that allow for interactively tailoring
explanations to the (evolving) needs of explainees, as well as for measuring the effectiveness of the provided
explanations in terms of enhancing user understanding. The MAI-XAI workshop focuses on improving
the effectiveness of explanations by moving to “natural” explanations that are more accessible to a
non-technical audience. Natural explanations leverage multiple modalities (e.g., text, speech, visual,
tabular, etc.) to select the form of presentation of an explanation that best suits the context and the
explanatory needs of an explainee. XAI systems providing natural explanations might react to affective
aspects and emotions to, for example, identify dissatisfaction with an explanation and react accordingly.
Finally, XAI systems should be able to effectively interact with the user to move from one-shot static
explanations to dynamically adapted explanations that can be informed by the reactions or feedback of
a user during the interaction.</p>
      <p>We aim to offer researchers and practitioners the opportunity to identify promising new research
directions in XAI along the above-mentioned lines, focusing on how to provide “natural explanations”.
Attendees are encouraged to present case studies of real-world applications where XAI has been
successfully applied, emphasizing the practical benefits and challenges encountered. The workshop
explores three pillars toward creating more natural explanations: i) Multimodal XAI, ii) Affective XAI
and iii) Interactive XAI.</p>
      <p>Multimodal XAI Multi-modality is demanded at the level of both data and models. Multi-modality
requires dealing properly with structured and unstructured heterogeneous data (i.e., tabular data,
text, images, sound, video, etc.). Multi-modal explanations must be customizable and easy to adapt,
not only to user preferences and needs but also to different communication channels,
in the form of natural, multi-lingual human–machine interactions. Nonetheless, most
existing resources are developed ad hoc for specific applications, usually considering only one or two
modalities, and are hard to combine, reuse and recycle in a human-centred and sustainable way.</p>
      <p>Affective XAI The extent to which XAI systems should be equipped with abilities to detect and
express human emotions remains an open question. Some researchers have hypothesized that including
an affective component might increase the predictability of systems and help users reason about
the causality of systems and their predictions. The technical challenges for systems developed within the
affective computing spectrum relate to multimodal natural language processing, such as sentiment
analysis tools that combine natural language processing and text analysis with emotion detection
from signals and modalities, including gestures, posture, facial information, heart rate, electrodermal
activity, voice, speech rate, pitch, and intensity.</p>
      <p>Interactive XAI Beyond regarding an explainee as a passive receiver of an (adapted) explanation,
previous research has proposed that explainees should take a more active role, being able to actively
co-shape the explanation in an interactive manner. However, there has been little emphasis so far on
methods that adapt the explanation dynamically to a user’s needs by evaluating whether the user has
understood it. We therefore need novel methods to better identify the information needs
of a user, as well as novel methods to measure the degree to which a user has understood the explanation,
both to adapt the explanation further and to determine whether the explanation has been
successful.</p>
    </sec>
    <sec id="sec-3">
      <title>2. Submission, Reviewing and Selection Process</title>
      <p>The workshop received 18 submissions covering the three main topics mentioned in the call for papers.
All the submissions received at least two reviews. Out of these papers, 9 were selected for presentation
at the workshop, yielding an acceptance rate of 50%.</p>
    </sec>
    <sec id="sec-4">
      <title>3. Invited Speakers</title>
      <p>The workshop will feature two high-profile invited speakers: Anna Monreale (University of Pisa) and
Francesca Toni (Imperial College London).</p>
    </sec>
    <sec id="sec-5">
      <title>4. Program Committee</title>
      <p>We would like to express our gratitude to the following program committee members for their help in
reviewing papers and in compiling an exciting program:</p>
    </sec>
    <sec id="sec-6">
      <title>Proceedings Chair:</title>
      <p>Olivia Sánchez-Graillet, Bielefeld University</p>
    </sec>
    <sec id="sec-7">
      <title>5. Acknowledgments</title>
      <p>The workshop is sponsored by the SAIL network, the Cognitive Interaction Technology Center (CITEC)
at Bielefeld University, and the XAI4SOC project (PID2021-123152OB-C21 funded by MCIN/AEI/
10.13039/501100011033 and by “ESF Investing in your future”).</p>
      <p>The co-organizers of Bielefeld University acknowledge funding from the SAIL project, which is funded
by the state of North Rhine-Westphalia, as well as from the TRR 318 “Constructing Explainability”
project, which is funded by the Deutsche Forschungsgemeinschaft (DFG).</p>
      <p>USC´s co-organizers acknowledge the support of the Galician Ministry of Culture, Education,
Professional Training and University (grants ED431G2023/04 and ED431C2022/19) and the European Regional
Development Fund (ERDF). This work has received funding from the European Union’s Horizon Europe
research and innovation programme under Grant Agreement No. 101134894 (SOSFood) and is also
supported by the Spanish Ministry of Science and Innovation (MCIN/AEI/10.13039/501100011033/) with
grants PID2021-123152OB-C21, PID2024-157680NB-I00 and CNS2024-154915.</p>
    </sec>
  </body>
</article>