<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>From Interactive Machine Learning to Explainable Artificial Intelligence (ex-AI)</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Andreas Holzinger</string-name>
          <email>andreas.holzinger@medunigraz.at</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Medical University of Graz</institution>
          ,
          <addr-line>Graz</addr-line>
        </aff>
      </contrib-group>
      <abstract>
        <p>In this part, we focus on the role of humans in state-of-the-art decision systems. Thereby, we go beyond interactive machine learning to explainable artificial intelligence. How can this be realized? How can we include humans in the automated decision process, and how can we measure their intelligence? To answer these questions, we discuss terms such as interaction and reflection, along with the underlying principles of intelligence and cognition. In the next part, we provide fundamentals for measuring and evaluating human intelligence with biometric technologies, sensor arrays and affective computing to measure emotion and stress. The tutorial concludes with a discussion of the ethical, legal and social issues of explainable AI systems.</p>
      </abstract>
    </article-meta>
  </front>
  <body />
  <back>
    <ref-list />
  </back>
</article>