<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>Workshops, Los Angeles, USA, March</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Margaret Burnett</string-name>
          <email>burnett@eecs.oregonstate.edu</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Oregon State University</institution>
          ,
          <addr-line>Corvallis, Oregon</addr-line>
          <country country="US">USA</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2019</year>
      </pub-date>
      <volume>20</volume>
      <issue>2019</issue>
      <abstract>
        <p />
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>ABSTRACT</title>
      <p>How can the field of Explainable AI (XAI) get from where we are
now, explaining some aspects of AI fairly well, to where we need
to be—explaining AI fairly and well? In this keynote, I’ll talk about
three critical challenges to our field, focusing especially on the
third of these: explaining AI fairly.</p>
    </sec>
    <sec id="sec-2">
      <title>CCS CONCEPTS</title>
      <p>• Computing methodologies → Intelligent agents; • Human-centered computing → Human-Computer Interaction (HCI).</p>
      <p>Keywords: Explainable AI; explaining to diverse populations; biased explanations; XAI challenges.</p>
      <p>Explainable AI (XAI) has started experiencing explosive growth, echoing the earlier explosive growth of AI itself into practical uses that impact the general public. This spread of AI into the world outside of research labs brings with it pressures and requirements that many of us have perhaps not thought about deeply enough.</p>
      <p>In this keynote address, I will explain why I think we have a
long way to go before we’ll be able to achieve our long-term goal:
to explain AI well.</p>
      <p>One way to characterize our current state is that we’re doing
“fairly well”, doing some explaining of some things. In a sense,
this is reasonable: the field is young, and still finding its way.</p>
      <p>However, moving forward demands progress in (at least) three
areas.</p>
      <p>(1) How we go about XAI research: Explainable AI cannot
succeed if the only research foundations brought to bear on it are
AI foundations. Likewise, it cannot succeed if the only
foundations used are from psychology, education, and related
disciplines. Thus, a challenge for our emerging field is how to
conduct XAI research in a truly effective multi-disciplinary
fashion, one based not only on what we can make algorithms do,
but also on solid, well-founded principles of explaining the
complex ideas behind the algorithms to real people. Fortunately,
a few researchers have started to build such foundations.</p>
      <p>(2) What we can succeed at explaining: So far, we as a field
have been doing a certain amount of cherry-picking as to what we
explain. We tend to choose what to explain by what we can figure
out how to explain—but we are leaving too much out. One urgent
case in point is the societal and legal need to explain fairness
properties of AI systems.</p>
      <p>The above challenges are important, but the field is already
becoming aware of them. Thus, this keynote will focus mostly on
the third challenge, namely:</p>
      <p>(3) Who we can explain to. Who are the people we’ve even
tried to explain AI to, so far? What are the societal implications
of who we explain to well and who we do not?</p>
      <p>Our field has not even begun to consider this question. In this
keynote I’ll discuss why we have to explain to populations to
whom we’ve given little thought—diverse people in many
dimensions, including gender diversity, cognitive diversity, and
age diversity.</p>
      <p>Addressing all of these challenges is necessary before we can
claim to explain AI fairly and well.</p>
    </sec>
    <sec id="sec-3">
      <title>ACKNOWLEDGMENTS</title>
      <p>This work has been supported in part by DARPA
#N66001-17-24030 and NSF #1528061. Any opinions, findings and conclusions
or recommendations expressed are the authors’ and do not
necessarily reflect the views of NSF, DARPA, the Army Research
Office, or the US government.</p>
    </sec>
  </body>
  <back>
    <ref-list />
  </back>
</article>