<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Timotheus Kampik</string-name>
          <xref ref-type="aff" rid="aff3">3</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Kristijonas Čyras</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Antonio Rago</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Oana Cocarascu</string-name>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Ericsson Inc.</institution>
          ,
          <country country="US">USA</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Imperial</institution>
          ,
          <country country="GB">UK</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>King's College London</institution>
          ,
          <country country="GB">UK</country>
        </aff>
        <aff id="aff3">
          <label>3</label>
          <institution>Umeå University, Sweden &amp; SAP Signavio</institution>
          ,
          <country country="DE">Germany</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>Research on intelligent systems that can explain their inferences and decisions to (human and machine) users has emerged as an important subfield of Artificial Intelligence (AI). In this context, interest in symbolic and hybrid approaches to AI, and in their ability to facilitate explainable and trustworthy reasoning and decision-making, often in combination with machine learning algorithms, is increasing. Computational argumentation is considered a particularly promising paradigm for facilitating explainable AI (XAI). This trend is reflected by the fact that many researchers who study argumentation have started to i) apply argumentation as a method of explainable reasoning; ii) combine argumentation with other subfields of AI, such as knowledge representation and reasoning (KR) and machine learning (ML), to facilitate the explainability of the latter; iii) study the explainability properties of argumentation; iv) move from explainable AI to contestable AI, by facilitating not only explainability but also human intervention. Given the substantial interest in these different facets of argumentative XAI, this workshop aims to provide a forum for focused discussions of recent developments on the topic. These workshop proceedings feature five papers (out of eight submitted), covering diverse perspectives on argumentative explainability. The works' topics range from formal argumentation dialogues, via applications of argumentative explainability to, e.g., image classification, to human factors. We thank all the authors of the submitted papers, our keynote speaker AnneMarie Borg, and the reviewers of the submitted papers (listed below) for making the second edition of the workshop a success.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Program Committee</title>
      <p>Timotheus Kampik et al. CEUR Workshop Proceedings</p>
      <list list-type="bullet">
        <list-item><p>Gianvincenzo Alfano</p></list-item>
        <list-item><p>Lars Bengel</p></list-item>
        <list-item><p>Elfia Bezou Vrakatseli</p></list-item>
        <list-item><p>Lydia Blümel</p></list-item>
        <list-item><p>Davide Ceolin</p></list-item>
        <list-item><p>Jesse Heyninck</p></list-item>
        <list-item><p>Loan Ho</p></list-item>
        <list-item><p>Jieting Luo</p></list-item>
        <list-item><p>Mariela Morveli Espinoza</p></list-item>
        <list-item><p>Alison R. Panisson</p></list-item>
        <list-item><p>Theodore Patkos</p></list-item>
        <list-item><p>Guilherme Paulino-Passos</p></list-item>
        <list-item><p>Nico Potyka</p></list-item>
        <list-item><p>Nikolaos Spanoudakis</p></list-item>
        <list-item><p>Srdjan Vesic</p></list-item>
        <list-item><p>Madeleine Waller</p></list-item>
        <list-item><p>Andreas Xydis</p></list-item>
        <list-item><p>Xiang Yin</p></list-item>
        <list-item><p>Yun Zhou</p></list-item>
        <list-item><p>Timotheus Kampik</p></list-item>
        <list-item><p>Kristijonas Čyras</p></list-item>
        <list-item><p>Antonio Rago</p></list-item>
        <list-item><p>Oana Cocarascu</p></list-item>
      </list>
    </sec>
  </body>
  <back>
    <ref-list />
  </back>
</article>