<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Interactive Explanations for Contestable AI</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Francesca Toni</string-name>
          <aff>Imperial College London</aff>
        </contrib>
      </contrib-group>
      <abstract>
        <p>AI has become pervasive in recent years, but state-of-the-art approaches mostly neglect the need for AI systems to be contestable. Contestability is advocated by AI guidelines (e.g. by the OECD) and by regulation of automated decision-making (e.g. the GDPR). In contrast, little attention has been paid in AI to how contestability requirements can be met computationally. Contestability requires dynamic (human-machine or machine-machine) decision-making processes, whereas much of the current AI landscape is tailored to static AIs; accommodating contestability will therefore require a radical rethinking. In this talk I will argue that computational forms of contestable AI require forms of explainability whereby machines and humans can interact, and that computational argumentation can support the interactive explainability needed for contestability.</p>
      </abstract>
    </article-meta>
  </front>
  <body />
  <back>
    <ref-list />
  </back>
</article>