<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Timotheus Kampik</string-name>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Antonio Rago</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Kristijonas Čyras</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Oana Cocarascu</string-name>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Independent Researcher</institution>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>King's College London</institution>
          ,
          <country country="UK">United Kingdom</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>Umeå University</institution>
          ,
          <country country="SE">Sweden</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>Research on intelligent systems that can explain their inferences and decisions to (human and machine) users has emerged as an important subfield of Artificial Intelligence (AI). In this context, interest in symbolic and hybrid approaches to AI and their ability to facilitate explainable and trustworthy reasoning and decision-making - often in combination with machine learning algorithms - is increasing. Computational argumentation is considered a particularly promising paradigm for facilitating explainable AI (XAI). This trend is reflected by the fact that many researchers who study argumentation have started to i) apply argumentation as a method of explainable reasoning; ii) combine argumentation with other subfields of AI, such as knowledge representation and reasoning (KR) and machine learning (ML), to facilitate their explainability; iii) study explainability properties of argumentation; iv) move from explainable AI to contestable AI, by facilitating not only explainability but also human intervention. Given the substantial interest in these different facets of argumentative XAI, this workshop aims at providing a forum for focused discussions of the recent developments on the topic. These workshop proceedings feature six peer-reviewed papers (out of eight submitted), covering diverse perspectives on argumentative explainability. The works' topics range from the semantics of structured bipolar argumentation, via applications of argumentative explainability to, e.g., recommendation systems, to argumentation and conversational AI. In addition, the proceedings feature a brief write-up of challenges in applied ArgXAI research, based on a discussion session from the previous edition of ArgXAI. We thank all the authors of the submitted papers, our keynote speaker Srdjan Vesic, as well as the reviewers of the submitted papers (listed below) for making the 3rd edition of the workshop a success.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Program Committee</title>
      <p>Timotheus Kampik et al. CEUR Workshop Proceedings</p>
      <p>Kristijonas Čyras used Claude Sonnet 4.5 to prepare the proceedings HTML page.</p>
    </sec>
  </body>
  <back>
    <ref-list />
  </back>
</article>