<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Recent Trends in XAI: A Broad Overview on Current Approaches, Methodologies and Interactions</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Jakob M. Schoenborn</string-name>
          <email>schoenborn@uni-hildesheim.de</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Klaus-Dieter Althoff</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>German Research Center for Artificial Intelligence (DFKI)</institution>
          <addr-line>Trippstadter Str. 122, 67663 Kaiserslautern</addr-line>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>University of Hildesheim</institution>
          <addr-line>Samelsonplatz 1, 31141 Hildesheim</addr-line>
        </aff>
      </contrib-group>
      <abstract>
        <p>The definition of an explainable artificial intelligence heavily depends on the use-case, whether one focuses on the technical knowledge-management component [30, 33, 37, 43] or rather on the more social interaction including speech acts and conversations [27, 31, 33]. Given the ongoing debate about the unknown outcome of the development of AI in general using Deep Learning [4, 34, 35, 44] and recent legal restrictions (for example the GDPR [19]), the need for developing an explainable AI has been increasing rapidly, especially over the last two years. Additionally, the goal of increasing the users' trust in AI has yet to be achieved. Thus, this contribution aims to provide an overview of the current topics, especially since 2018 and up until today, with a focus on case-based explanations.</p>
      </abstract>
      <kwd-group>
        <kwd>Explanation</kwd>
        <kwd>XAI</kwd>
        <kwd>Framework</kwd>
        <kwd>Case-Based Explanation</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>
        Seemingly any general discussion of artificial intelligence contains at least some
statement that explainable artificial intelligence (XAI) will be a crucial
component of future systems [
        <xref ref-type="bibr" rid="ref15 ref17 ref21 ref23 ref8 ref9">8, 9, 15, 17, 21, 23</xref>
        ]. These mentions usually remain on
a rather general level, without becoming too specific about how an explanation can
actually be generated automatically by any kind of algorithm or methodology
[
        <xref ref-type="bibr" rid="ref13 ref27">13, 27, 33, 43</xref>
        ]. This is also reflected by the very small number of practically used
and evaluated systems. However, some approaches seem promising, e. g.:
Black Box Explanations through Transparent Approximations (BETA) [28],
Local Interpretable Model-Agnostic Explanations (LIME) [38], and generalized additive models
with pairwise interactions (GAMs) [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] (see [
        <xref ref-type="bibr" rid="ref23">23</xref>
        ]). For surveys before 2018, we refer the interested reader to [
        <xref ref-type="bibr" rid="ref1 ref10 ref2 ref24">1, 2, 10, 24, 46</xref>
        ]. Depending on the point of view,
different systems can be considered the “first” XAI. One among those, in terms
of “making sense” as defined by Schank [40], is SWALE [41]. Others might argue
that explainability has always been a part of developing an AI, thus
referring to expert systems in general, ranging back to Weizenbaum's ELIZA in 1966
[45]. However, the common goal has not changed: to create a component that
understands and makes sense of the underlying data in a certain context [
        <xref ref-type="bibr" rid="ref23">23</xref>
        ].
This goal is financially supported by the European Commission, which is investing an
additional 1.5 billion EUR (a total of 20 billion EUR by 2020) “and more than
20 billion euro per year from public and private investments over the following
decade” [
        <xref ref-type="bibr" rid="ref12 ref23">12, 23</xref>
        ]. For this survey we define XAI in the following way:
“An explainable artificial intelligence enables a user to obtain transparent,
relevant, and justified information at the right time and in an appropriate size.”
Each approach dealing with explanations has two common tasks as its
explanation foundation: a knowledge management and maintenance task to solve, and
an appropriate interface for social interaction with the user, even on a one-sided
level, by providing an explanation. Whichever methodology is used to solve these
tasks, the cited works have shown that each of them can be applied in multiple
domains and thus can learn from each other.
      </p>
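<p>
To make the intuition behind such local surrogate methods concrete, the following minimal sketch (our own illustration, not the cited implementations; the function name, toy model, and parameters are invented) approximates a black-box model around a single instance with a locally weighted linear model, which is the core idea behind LIME [38]:

```python
import math
import random

def local_linear_explanation(black_box, x0, n_samples=1000, width=0.5, seed=42):
    """Fit a locally weighted linear surrogate around x0 and return its
    slope as the local feature importance of the black-box model."""
    rng = random.Random(seed)
    xs = [x0 + rng.gauss(0.0, width) for _ in range(n_samples)]
    ys = [black_box(x) for x in xs]
    # Proximity kernel: samples close to x0 dominate the fit.
    ws = [math.exp(-((x - x0) ** 2) / (width ** 2)) for x in xs]
    wsum = sum(ws)
    x_mean = sum(w * x for w, x in zip(ws, xs)) / wsum
    y_mean = sum(w * y for w, y in zip(ws, ys)) / wsum
    cov = sum(w * (x - x_mean) * (y - y_mean) for w, x, y in zip(ws, xs, ys))
    var = sum(w * (x - x_mean) ** 2 for w, x in zip(ws, xs))
    return cov / var

# Toy black box f(x) = x^2; near x0 = 3 the local slope should be about 6.
slope = local_linear_explanation(lambda x: x * x, 3.0)
```

The surrogate itself is transparent (a single slope), even though the explained model may be arbitrarily complex.
</p>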
    </sec>
    <sec id="sec-2">
      <title>Related Work</title>
      <sec id="sec-2-1">
        <title>Results of previous surveys</title>
        <p>
          The movement from the different fields of AI to XAI is also evidenced by
the rising number of surveys on XAI. Holzinger [
          <xref ref-type="bibr" rid="ref24">24</xref>
          ], Došilović [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ], and Adadi
[
          <xref ref-type="bibr" rid="ref2">2</xref>
          ] surveyed (among others, e. g. [
          <xref ref-type="bibr" rid="ref1 ref20">1, 20, 46</xref>
          ]) in 2018 the current trends of XAI
and how to move from black-box machine learning to glass-box XAI. Adadi et
al. provided a comprehensive overview of key concepts related to XAI, together with a
schematic view of these concepts [
          <xref ref-type="bibr" rid="ref2">2</xref>
          ]. The authors elaborate on different
explanation goals and how they are used in certain domains: to control, to
improve, to discover, to justify.
        </p>
        <p>
          Holzinger motivates different trend indicators, i. e. multiple global industrial
companies using AI, especially in recommendation systems, since the overall
goal still remains to convince the user to buy another suitable product or to
remain longer on a website by recommending other similar series to watch
[
          <xref ref-type="bibr" rid="ref24">24</xref>
          ]. Other indicators are funding (as motivated and cited in the introduction) as
well as conferences. The Neural Information Processing Systems
(NIPS, now renamed NeurIPS) conference kept up its projected success in terms of
the expected number of participants by selling out all available tickets within
11 minutes [32]. The 35th International Conference on Machine Learning (ICML)
achieved a similar success by selling out before the end of the submission
deadline. As mentioned by Holzinger [
          <xref ref-type="bibr" rid="ref24">24</xref>
          ] and to some
extent also by Došilović [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ] (with the addition of ethical and quality-of-life
implications), trust, privacy, and security remain the core problems of XAI as of today.
The problems arising from the General Data Protection Regulation (GDPR) are
well discussed (i. e. [
          <xref ref-type="bibr" rid="ref19 ref24">19, 24</xref>
          ]). Additionally, there is no known technical way
to decide whether an uploaded file, e.g., a video, image, or piece of
literature, is copyright-protected, which has recently been discussed in Germany
[39, 42]; here, XAI could decrease the false-positive rate and provide a first step in
the right direction.
        </p>
      </sec>
      <sec id="sec-2-2">
        <title>Current research by domain</title>
        <p>
          Fig. 1 depicts a list of recent publications since 2018 with a size of at least five
pages. Most publications are applicable to multiple domains (e. g. machine
learning in the medical domain), but are listed only in one category, depending on
their focus. This choice was made to illustrate the manifold different domains in
which XAI is currently being developed. The spike in Machine Learning is
not surprising (Deep Learning is considered part of Machine Learning), due
to its recent success and popularity as illustrated in Fig. 2. Especially this year, in
2019, almost every conference dealing with AI is either advertising XAI as its
main theme or has at least one workshop attached to it [
          <xref ref-type="bibr" rid="ref14 ref25 ref26 ref7">7, 14, 25, 26, 36</xref>
          ]. Fig.
1 is expected to change drastically at the end of this year, after the proceedings
of the XAI-centered conferences and workshops have been published, possibly
also with the addition of novel domains.
        </p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>Machine Learning</title>
      <p>
        The success of the probabilistic and statistical approaches of machine learning
during the last five years is undeniable [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] and has led to an increased interest in
machine learning in general (see Fig. 2). Still, the decisions made by these
algorithms are mainly black-box approaches, which is critical regarding the trust
of the user in how a decision has been made, especially in the medical
domain and for decision support systems in general [
        <xref ref-type="bibr" rid="ref18 ref24 ref5">5, 18, 24</xref>
        ]. This is one of the
key challenges recently targeted by machine learning researchers, as
illustrated in the following by a few exemplary suggestions (without claim
of completeness).
      </p>
      <p>
        Ghosal et al. combined their deep convolutional neural network for leaf
image classification with an additional explanation phase [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ]. An image of a leaf is
used as input and is further analyzed throughout multiple feature maps to
process the image. These identifications and classifications (of a diagnosis, i. e.
herbicide injury or septoria brown spot) are then used in combination
to present the given solution to the user, based on the identified pairs of
features and diagnosis. This is basically the same argumentation structure an
expert would use as well (explaining the diagnosis based on the identified visual
features). The authors envision that this approach could be extended to animal
and human diseases [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ], but its acceptance has not yet been proven by a user-centered study,
even though the approach seems very promising. Visual explanations
can also be found in the recent case-based reasoning approach
by Lamy et al. [29] in the medical domain (identifying breast cancer). If the proposed
methodology is extended to the medical domain, combining these approaches should be
investigated further, since the pairs of features and diagnosis can easily be treated
as cases for knowledge reuse.
      </p>
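<p>
Ghosal et al.'s feature-map-based explanation is not reproduced here, but the general flavor of such visual explanations can be sketched with a generic occlusion-sensitivity probe (a hypothetical toy example of ours; the scorer, image, and names are invented):

```python
def occlusion_sensitivity(score_fn, image, patch=2, baseline=0.0):
    """For each patch, replace it with a baseline value and record how much
    the class score drops; large drops mark regions driving the decision."""
    h, w = len(image), len(image[0])
    base_score = score_fn(image)
    heatmap = []
    for r in range(0, h, patch):
        row = []
        for c in range(0, w, patch):
            occluded = [line[:] for line in image]
            for rr in range(r, min(r + patch, h)):
                for cc in range(c, min(c + patch, w)):
                    occluded[rr][cc] = baseline
            row.append(base_score - score_fn(occluded))
        heatmap.append(row)
    return heatmap

# Toy "lesion detector": the score is the total intensity in the top-left quadrant.
def toy_score(img):
    return sum(img[r][c] for r in range(2) for c in range(2))

img = [[1, 1, 0, 0],
       [1, 1, 0, 0],
       [0, 0, 0, 0],
       [0, 0, 0, 0]]
heatmap = occlusion_sensitivity(toy_score, img)
```

The resulting heatmap can then be overlaid on the input image, mirroring how identified visual features justify a diagnosis.
</p>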
      <p>Another explainable machine learning application in the medical domain is
provided by Lundberg et al. to support anesthesiologists in predicting the
possibility of hypoxaemia during surgery [30]. To achieve this, 20+ static features
(age, BMI, ...) and 45 dynamic features are used in real-time to build a predictive
model of hypoxaemia events. These features are color-encoded (pink for increased
risk, green for decreased risk) and are further combined mathematically to
calculate the size of the prediction window, which helps the anesthesiologist
to know which attributes of the patient and procedure contributed to the
current risk [30]. Each feature carries a certain range of impact (i. e. a weight)
and is treated accordingly while building the explanation. The
explanation itself is a real-time graph containing each relevant feature and its impact.
One might argue that this cannot be considered an explanation, but during
surgery there is no time for reading or listening to a textual representation of
an explanation; rather, the relevant information needs to be understandable
at first sight. The approach has been evaluated against practicing
anesthesiologists and achieves superior performance when predicting hypoxaemia risk from
electronically recorded intraoperative data [30].</p>
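<p>
Lundberg et al.'s exact attribution method is not reproduced here; the following sketch merely illustrates the principle of per-feature risk contributions with color encoding, using an invented additive toy model (all feature names and weights are hypothetical):

```python
def feature_contributions(model, instance, baseline):
    """Attribute the difference between model(instance) and model(baseline)
    by switching one feature at a time from its baseline to its actual
    value. Exact for additive models; an approximation otherwise."""
    contribs = {}
    for name in instance:
        probe = dict(baseline)
        probe[name] = instance[name]
        contribs[name] = model(probe) - model(baseline)
    return contribs

# Toy additive risk score (weights invented for illustration only).
def risk(p):
    return 0.02 * p["age"] + 0.03 * p["bmi"] - 0.05 * p["spo2_trend"]

patient  = {"age": 70, "bmi": 23, "spo2_trend": -2}
baseline = {"age": 50, "bmi": 25, "spo2_trend": 0}
contribs = feature_contributions(risk, patient, baseline)
# Color-encode as in the described interface: pink = risk up, green = risk down.
colors = {k: ("pink" if v > 0 else "green") for k, v in contribs.items()}
```

For an additive model the contributions sum exactly to the total risk change, which is what makes the real-time graph of per-feature impacts faithful.
</p>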
      <p>
        The last approach presented here results from the further development of the
aforementioned GAMs [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] and was used by Hohman et al. to understand how data scientists interpret machine
learning models, since they “...have different reasons to interpret
models and tailor explanations for specific audiences...” [
        <xref ref-type="bibr" rid="ref22">22</xref>
        ]. Here, explanations
are divided into six classes: local instance explanations, instance explanation
comparisons, counterfactuals, nearest neighbors, regions of error, and feature
importance. The distinction is important since, depending on the actual use-case,
one class might be a better fit than another. To decide which class is used,
GAMs (generalized additive models [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]) are used and smoothed by shape
functions f<sub>i</sub>. The generated explanation itself is presented as a waterfall chart for two
data instances and reflects the impact of each attribute in its current domain
and use-case (here: housing). The approach has been evaluated by 12 selected
professional data scientists (out of 33 replies to 200 invitations). It needs to
be highlighted that the target audience is proficient in terms of artificial and
explainable AI; thus, they are more likely to understand a model's domain and
how the model works within this domain. Nevertheless, the participants agreed
on an enjoyable and easy-to-use experience [
        <xref ref-type="bibr" rid="ref22">22</xref>
        ].
      </p>
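<p>
As a rough illustration of why GAMs are considered interpretable, the following sketch (our own toy example; the shape functions, features, and intercept are invented, not those of [6] or [22]) shows how per-feature terms can be read off directly for a waterfall chart:

```python
# Shape functions f_i; in a fitted GAM these would be learned from data.
shape_functions = {
    "sqft":      lambda x: 0.0005 * x,
    "age_years": lambda x: -0.01 * x,
    "rooms":     lambda x: 0.05 * x,
}
INTERCEPT = 10.0

def gam_predict(house):
    """GAM prediction: intercept plus the sum of per-feature shape functions."""
    return INTERCEPT + sum(f(house[k]) for k, f in shape_functions.items())

def waterfall(house):
    """Per-feature terms for a waterfall chart: each f_i(x_i) is read off
    directly, which is what makes GAMs interpretable by construction."""
    return [(k, f(house[k])) for k, f in shape_functions.items()]

house = {"sqft": 2000, "age_years": 30, "rooms": 4}
terms = waterfall(house)
```

Unlike post-hoc attributions for black boxes, these terms are the model: no approximation step separates the explanation from the prediction.
</p>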
    </sec>
    <sec id="sec-4">
      <title>Case-Based Explanation</title>
      <p>Case-Based Reasoning (CBR) is a methodology that reuses knowledge of
previously encountered situations. In the medical domain, a case can consist of the
symptoms of a sickness and the corresponding solution to cure the sickness of a patient.
For another patient with similar symptoms, the CBR cycle proposes the most
similar case, and the proposed solution can thus be justified for the new patient. Nevertheless, some
patients might argue that even the most similar case is not similar enough, due
to the individuality of a human being and the high domain complexity.
Still, the possibility to use CBR as a baseline approach and build explanations
upon it has received new attention due to the general interest in XAI.</p>
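<p>
The retrieve step described above can be sketched as a weighted nearest-neighbor lookup (a minimal toy example of ours; the similarity measure, weights, and cases are invented for illustration):

```python
def similarity(query, case_symptoms, weights):
    """Weighted global similarity: the normalized sum of matching symptoms."""
    total = sum(weights.values())
    score = sum(w for s, w in weights.items()
                if query.get(s) == case_symptoms.get(s))
    return score / total

def retrieve(query, case_base, weights):
    """Retrieve step of the CBR cycle: return the most similar stored case,
    whose solution can then be reused and justified by the matching symptoms."""
    return max(case_base, key=lambda c: similarity(query, c["symptoms"], weights))

case_base = [
    {"symptoms": {"fever": True, "cough": True, "rash": False},
     "solution": "treatment A"},
    {"symptoms": {"fever": False, "cough": False, "rash": True},
     "solution": "treatment B"},
]
weights = {"fever": 2.0, "cough": 1.0, "rash": 1.0}
query = {"fever": True, "cough": True, "rash": False}
best = retrieve(query, case_base, weights)
```

The explanation then falls out of the retrieval itself: the matching symptoms and their weights justify why this case, and thus its solution, was proposed.
</p>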
      <p>
        Due to the lack of reasoning about why the proposed case is the most similar one,
Lamy et al. added a visual interface that explains which decision (and
thus which therapy) is better for a breast cancer patient [29]. In the healthcare
domain, after most of the symptoms have been gathered, usually only a few
reasonable options are left to be considered. For each option, a dimension is
opened, and the symptoms and their values are weighted accordingly. Whenever
a query has been issued to the system, the most similar cases are retrieved. The
visual interface uses these data and chooses the most relevant attributes. These
are displayed and ordered in such a way that the user can comprehend their
influence on the proposed solutions (the case number next to the query), as
proposed in Fig. 3. As mentioned earlier, this presentation is very similar to the
machine learning approach by Ghosal et al. [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ]. The approach has been tested
on three public datasets, and the size of the case base has been limited to 315
cases. It remains to be surveyed how the approach would fare on real datasets
and, especially, on a larger case base to increase its proficiency.
      </p>
      <p>
        Eisenstadt et al. deployed explanation patterns using an agent-based system
module within a case-based assistance framework to provide human-understandable
insights into the system behavior in the architectural domain [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ]. The
underlying data sets are semantic fingerprints of MetisCBR, which represent, for example, the room
count as unconnected vertices in a graph, so that the positions of the rooms are
also known. To model the accessibility of two rooms, these vertices can be
connected via a corresponding edge, where the edge also represents a possibility to
move from one room to another (e. g. through a door). These fingerprints are then
used to create an explanation using explanation patterns (see Fig. 4) [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ].
      </p>
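<p>
A toy version of such a room-graph fingerprint might look as follows (our own hypothetical sketch, not the MetisCBR data structure; all names are invented):

```python
# Rooms are vertices; an edge marks direct accessibility (e.g. a door).
class FloorPlanGraph:
    def __init__(self):
        self.rooms = {}

    def add_room(self, name):
        self.rooms.setdefault(name, set())

    def add_door(self, a, b):
        # Accessibility is symmetric: a door connects both rooms.
        self.rooms[a].add(b)
        self.rooms[b].add(a)

    def accessible(self, a, b):
        return b in self.rooms[a]

    def room_count(self):
        return len(self.rooms)

plan = FloorPlanGraph()
for room in ("hall", "kitchen", "bath"):
    plan.add_room(room)
plan.add_door("hall", "kitchen")
plan.add_door("hall", "bath")
# An explanation pattern could verbalize facts read off the graph, e.g.:
fact = (f"The plan has {plan.room_count()} rooms; kitchen and bath are "
        f"{'directly' if plan.accessible('kitchen', 'bath') else 'not directly'} connected.")
```

Explanation patterns can then be filled from such graph facts, turning structural properties of a retrieved design into natural-language justifications.
</p>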
      <p>These two works have been picked as examples of the two current situations
which most case-based explanation systems are facing. The first is to manage
and combine manifold known and measured attributes in such a way that the
user gains trust and can cognitively understand the system's choice of one of
many possible medical treatments. Whilst most companies do have the ability
to measure attributes, the most natural approach is to use fossilized explanations
or canned explanations as suggested back then by R. Schank [41]. Nevertheless,
the challenge remains to loosen up the rather static nature of predefined
templates and to generate an explanation on the fly, which is a difficult problem to solve,
given the huge number of valid and invalid combinations which have to be
identified. Most approaches identified in this survey as using CBE focus
on using CBR with an emphasis on learning from the user's feedback as an explanation to
the given problem. The other challenge which current approaches rightfully
face is to implement explanations in areas where no explanations have been
given before. As stated before, the cold start problem can to some extent be
avoided by using the knowledge about how to structure the foundation of an
explanation-aware system (knowledge management, social interaction), but the
domain knowledge still needs to be connected to the explanation component,
which might be difficult depending on which architecture has been used.</p>
    </sec>
    <sec id="sec-5">
      <title>Conclusion</title>
      <p>As the literature review hints, there are manifold distinct domains in
which XAI can be used. Most of these domains have interfaces through which they benefit from
each other. It remains open to find a generally valid formulation of what an
explanation actually is, despite the efforts of defining it. Instead, this definition changes
with each specific situation a possible user is currently in. But the lack of
formalism is probably not even the problem. The main goal is still to explain a given
decision to the user individually, and the individuality of each user increases the
complexity by a large margin. A lot of recent approaches and implementations
seem promising, yet actual user-centered results on conducted case studies
are missing. These would be very interesting for measuring the acceptance of users
and adjusting the direction of developing an XAI accordingly. However, moving
from black-box decision making to glass-box decision making is a step in the
right direction to support the acceptance of AI in everyday life, including
the introduction of IT in schools and other social areas to enable a larger group
of people to use the advantages of (X)AI.</p>
      <p>28. Lakkaraju, H., Kamar, E., Caruana, R., Leskovec, J.: Interpretable &amp; Explorable Approximations of Black Box Models. arXiv:1707.01154 [cs]. (2017).</p>
      <p>29. Lamy, J.-B., Sekar, B., Guezennec, G., Bouaud, J., Séroussi, B.: Explainable artificial intelligence for breast cancer: A visual case-based reasoning approach. Artificial Intelligence in Medicine. 94, 42-53 (2019).</p>
      <p>30. Lundberg, S.M., Nair, B., Vavilala, M.S., Horibe, M., Eisses, M.J., Adams, T., Liston, D.E., Low, D.K.-W., Newman, S.-F., Kim, J., Lee, S.-I.: Explainable machine-learning predictions for the prevention of hypoxaemia during surgery. Nature Biomedical Engineering. 2, 749-760 (2018).</p>
      <p>31. Madumal, P., Miller, T., Vetere, F., Sonenberg, L.: Towards a Grounded Dialog Model for Explainable Artificial Intelligence. arXiv:1806.08055 [cs]. (2018).</p>
      <p>32. Synced: NIPS Tickets Sell Out in Less Than 12 Minutes. https://medium.com/syncedreview/nips-tickets-sell-out-in-less-than-12-minutes-e3aab37ab36a. Last access: 04/23/2018. 2018.</p>
      <p>33. Mittelstadt, B., Russell, C., Wachter, S.: Explaining Explanations in AI. Proceedings of the Conference on Fairness, Accountability, and Transparency - FAT* '19. 279-288 (2019).</p>
      <p>34. Musk, E., Isaacson, W.: Elon Musk: Artificial Intelligence Could Wipe Out Humanity. URL: https://sagaciousnewsnetwork.com/elon-musk-artificial-intelligence-could-wipe-out-humanty/. Last access: 04/22/2019. 2014.</p>
      <p>35. Musk, E.: AI “vastly more risky than North Korea”. URL: https://twitter.com/elonmusk/status/896166762361704450. Last access: 04/22/2019. 2017.</p>
      <p>36. NeurIPS-19: NeurIPS 2019 Expo Workshop: Fairness and Explainability: From ideation to implementation. https://nips.cc/Expo/Conferences/2018/Schedule?workshop_id=5. Last access: 04/24/2019.</p>
      <p>37. Nushi, B., Kamar, E., Horvitz, E.: Towards Accountable AI: Hybrid Human-Machine Analyses for Characterizing System Failure. 10. 2018.</p>
      <p>38. Peltola, T.: Local Interpretable Model-agnostic Explanations of Bayesian Predictive Models via Kullback-Leibler Projections. arXiv:1810.02678 [cs, stat]. (2018).</p>
      <p>39. Reda, J.: Unofficial consolidated version: trialogue outcome. Article 13 + related definition. 2019.</p>
      <p>40. Schank, R.C.: Explanation Patterns: Understanding Mechanically and Creatively. L. Erlbaum Assoc. Inc., Hillsdale, NJ, USA. 1986.</p>
      <p>41. Schank, R.C., Leake, D.B.: Creativity and learning in a case-based explainer. Artificial Intelligence. 40, 353-385 (1989).</p>
      <p>Investigating the solution space for online iterative explanation in goal reasoning agents. AI Commun. 31(2): 213-233 (2018).</p>
      <p>42. Vincent, J.: Europe's controversial overhaul of online copyright receives final approval. https://www.theverge.com/2019/3/26/18280726/europe-copyright-directive. Last access: 04/22/2019. 2019.</p>
      <p>43. Wang, D., Yang, Q., Abdul, A., Lim, B.Y.: Designing Theory-Driven User-Centric Explainable AI. 15. 2019.</p>
      <p>44. Waltl, B., Vogl, R.: Explainable Artificial Intelligence - The new frontier in legal informatics. 10. 2018.</p>
      <p>45. Weizenbaum, J.: ELIZA - A Computer Program For the Study of Natural Language Communication Between Man and Machine. Communications of the ACM. 9, 36-45 (1966).</p>
      <p>46. Zhang, Y., Chen, X.: Explainable Recommendation: A Survey and New Perspectives. arXiv:1804.11192 [cs]. (2018).</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>Abdul</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Vermeulen</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wang</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lim</surname>
            ,
            <given-names>B.Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kankanhalli</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          :
          <article-title>Trends and Trajectories for Explainable, Accountable and Intelligible Systems: An HCI Research Agenda</article-title>
          .
          <source>In: Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems - CHI 18</source>
          . pp.
          <fpage>1</fpage>-<lpage>18</lpage>
          . ACM Press, Montreal QC,
          <string-name>
            <surname>Canada</surname>
          </string-name>
          (
          <year>2018</year>
          ). https://doi.org/10.1145/3173574.3174156.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>Adadi</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Berrada</surname>
            ,
            <given-names>M.:</given-names>
          </string-name>
          <article-title>Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)</article-title>
          .
          <source>IEEE Access</source>
          .
          <volume>6</volume>
          ,
          <fpage>52138</fpage>-<lpage>52160</lpage>
          (
          <year>2018</year>
          ). https://doi.org/10.1109/ACCESS.2018.2870052.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3. The AlphaStar team:
          <article-title>AlphaStar: Mastering the Real-Time Strategy Game StarCraft II</article-title>
          . https://deepmind.com/blog/alphastar-mastering-real-time-strategy-game-starcraft-ii/. Last access: 04/22/2019. 24 January
          <year>2019</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Cellan-Jones</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          :
          <article-title>Stephen Hawking warns artificial intelligence could end mankind</article-title>
          . URL: https://www.bbc.com/news/technology-30290540. Last access: 04/22/2019.
          <year>2014</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>Binder</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bach</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Montavon</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          , Müller, K.-R.,
          <string-name>
            <surname>Samek</surname>
            ,
            <given-names>W.</given-names>
          </string-name>
          :
          <article-title>Layer-Wise Relevance Propagation for Deep Neural Network Architectures</article-title>
          . In: Kim,
          <string-name>
            <given-names>K.J.</given-names>
            and
            <surname>Joukov</surname>
          </string-name>
          , N. (eds.) Information Science and Applications (ICISA)
          <year>2016</year>
          . pp.
          <fpage>913</fpage>-<lpage>922</lpage>
          . Springer Singapore, Singapore (
          <year>2016</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <surname>Caruana</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lou</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gehrke</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Koch</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sturm</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Elhadad</surname>
          </string-name>
          , N.:
          <article-title>Intelligible Models for HealthCare: Predicting Pneumonia Risk and Hospital 30-day Readmission</article-title>
          .
          <source>In: Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining - KDD '15</source>
          . pp.
          <fpage>1721</fpage>-<lpage>1730</lpage>
          . ACM Press, Sydney,
          <string-name>
            <given-names>NSW</given-names>
            ,
            <surname>Australia</surname>
          </string-name>
          (
          <year>2015</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <surname>CD-MAKE-</surname>
          </string-name>
          19
          <source>: Cross Domain Conference for Machine Learning and Knowledge Extraction. CD-MAKE 2019 Workshop on explainable Artificial Intelligence</source>
          . https://cd-make.net/special-sessions/make-explainable-ai/. Last access:
          04/24/<year>2019</year>.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <surname>Choo</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          , Liu,
          <string-name>
            <surname>S.</surname>
          </string-name>
          :
          <article-title>Visual Analytics for Explainable Deep Learning</article-title>
          .
          <source>IEEE Computer Graphics and Applications</source>
          .
          <volume>38</volume>
          ,
          <issue>8492</issue>
          (
          <year>2018</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <surname>Conati</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Porayska-Pomsta</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Mavrikis</surname>
            ,
            <given-names>M.:</given-names>
          </string-name>
          <article-title>AI in Education needs interpretable machine learning: Lessons from Open Learner Modelling</article-title>
          . arXiv:1807.00154 [cs]. (
          <year>2018</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10. Došilović,
          <string-name>
            <surname>F. K.</surname>
          </string-name>
          , Brčić,
          <string-name>
            <surname>M.</surname>
          </string-name>
          , Hlupić, N.:
          <article-title>Explainable artificial intelligence: A survey</article-title>
          .
          <source>41st International Convention on Information and Communication Technology, Electronics and Microelectronics (MIPRO)</source>
          ,
          <year>Opatija</year>
          ,
          <year>2018</year>
          , pp.
          <fpage>0210</fpage>
          -
          <lpage>0215</lpage>
          .
          <year>2018</year>
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11. <string-name><surname>Eisenstadt</surname>, <given-names>V.</given-names></string-name>,
          <string-name><surname>Espinoza-Stapelfeld</surname>, <given-names>C.</given-names></string-name>,
          <string-name><surname>Mikyas</surname>, <given-names>A.</given-names></string-name>,
          <string-name><surname>Althoff</surname>, <given-names>K.-D.</given-names></string-name>:
          <article-title>Explainable Distributed Case-Based Support Systems: Patterns for Enhancement and Validation of Design Recommendations</article-title>.
          In: <string-name><surname>Cox</surname>, <given-names>M.T.</given-names></string-name>,
          <string-name><surname>Funk</surname>, <given-names>P.</given-names></string-name>, and
          <string-name><surname>Begum</surname>, <given-names>S.</given-names></string-name> (eds.)
          <source>Case-Based Reasoning Research and Development</source>.
          pp. <fpage>78</fpage>-<lpage>94</lpage>. Springer International Publishing, Cham (<year>2018</year>).
          https://doi.org/10.1007/978-3-030-01081-2_6.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12. European Commission:
          <article-title>Artificial intelligence</article-title>.
          https://ec.europa.eu/commission/news/artificial-intelligence-2018-dec-07_en.
          Published 7 December <year>2018</year>. Last access: 04/22/2019.
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13. <string-name><surname>Escalante</surname>, <given-names>H.J.</given-names></string-name>,
          <string-name><surname>Guyon</surname>, <given-names>I.</given-names></string-name>,
          <string-name><surname>Escalera</surname>, <given-names>S.</given-names></string-name>,
          <string-name><surname>Jacques</surname>, <given-names>J.</given-names></string-name>,
          <string-name><surname>Madadi</surname>, <given-names>M.</given-names></string-name>,
          <string-name><surname>Baro</surname>, <given-names>X.</given-names></string-name>,
          <string-name><surname>Ayache</surname>, <given-names>S.</given-names></string-name>,
          <string-name><surname>Viegas</surname>, <given-names>E.</given-names></string-name>,
          <string-name><surname>Gucluturk</surname>, <given-names>Y.</given-names></string-name>,
          <string-name><surname>Guclu</surname>, <given-names>U.</given-names></string-name>,
          <string-name><surname>van Gerven</surname>, <given-names>M.A.J.</given-names></string-name>,
          <string-name><surname>van Lier</surname>, <given-names>R.</given-names></string-name>:
          <article-title>Design of an explainable machine learning challenge for video interviews</article-title>.
          In: <source>2017 International Joint Conference on Neural Networks (IJCNN)</source>.
          pp. <fpage>3688</fpage>-<lpage>3695</lpage>. IEEE, Anchorage, AK, USA (<year>2017</year>).
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          14. EXTRAAMAS-19: EXplainable TRansparent Autonomous Agents and Multi-Agent Systems.
          https://extraamas.ehealth.hevs.ch/index.html. Last access: 04/24/2019.
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          15. <string-name><surname>Gandhi</surname>, <given-names>P.</given-names></string-name>:
          <article-title>Explainable Artificial Intelligence</article-title>.
          https://www.kdnuggets.com/2019/01/explainable-ai.html. Last access: 04/22/2019 (<year>2019</year>).
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          16. <string-name><surname>Ghosal</surname>, <given-names>S.</given-names></string-name>,
          <string-name><surname>Blystone</surname>, <given-names>D.</given-names></string-name>,
          <string-name><surname>Singh</surname>, <given-names>A.K.</given-names></string-name>,
          <string-name><surname>Ganapathysubramanian</surname>, <given-names>B.</given-names></string-name>,
          <string-name><surname>Singh</surname>, <given-names>A.</given-names></string-name>,
          <string-name><surname>Sarkar</surname>, <given-names>S.</given-names></string-name>:
          <article-title>An explainable deep machine vision framework for plant stress phenotyping</article-title>.
          <source>Proceedings of the National Academy of Sciences</source>.
          <volume>115</volume>, pp. <fpage>4613</fpage>-<lpage>4618</lpage> (<year>2018</year>).
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          17. <string-name><surname>Pagel</surname>, <given-names>P.</given-names></string-name>,
          <string-name><surname>Portmann</surname>, <given-names>E.</given-names></string-name>,
          <string-name><surname>Vey</surname>, <given-names>K.</given-names></string-name>:
          <article-title>Cognitive Computing - Teil 2</article-title>.
          <source>Informatik Spektrum</source>: Vol. <volume>41</volume>, No. 2,
          pp. <fpage>81</fpage>-<lpage>84</lpage>. Springer-Verlag, Berlin Heidelberg (<year>2018</year>).
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          18. <string-name><surname>Goebel</surname>, <given-names>R.</given-names></string-name>,
          <string-name><surname>Chander</surname>, <given-names>A.</given-names></string-name>,
          <string-name><surname>Holzinger</surname>, <given-names>K.</given-names></string-name>,
          <string-name><surname>Lecue</surname>, <given-names>F.</given-names></string-name>,
          <string-name><surname>Akata</surname>, <given-names>Z.</given-names></string-name>,
          <string-name><surname>Stumpf</surname>, <given-names>S.</given-names></string-name>,
          <string-name><surname>Kieseberg</surname>, <given-names>P.</given-names></string-name>,
          <string-name><surname>Holzinger</surname>, <given-names>A.</given-names></string-name>:
          <article-title>Explainable AI: The New 42?</article-title>
          In: <string-name><surname>Holzinger</surname>, <given-names>A.</given-names></string-name>,
          <string-name><surname>Kieseberg</surname>, <given-names>P.</given-names></string-name>,
          <string-name><surname>Tjoa</surname>, <given-names>A.M.</given-names></string-name>, and
          <string-name><surname>Weippl</surname>, <given-names>E.</given-names></string-name> (eds.)
          <source>Machine Learning and Knowledge Extraction</source>.
          pp. <fpage>295</fpage>-<lpage>303</lpage>. Springer International Publishing, Cham (<year>2018</year>).
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          19. <string-name><surname>Goodman</surname>, <given-names>B.</given-names></string-name>,
          <string-name><surname>Flaxman</surname>, <given-names>S.</given-names></string-name>:
          <article-title>European Union Regulations on Algorithmic Decision-Making and a “Right to Explanation”</article-title>.
          <source>AI Magazine</source>.
          <volume>38</volume>, <fpage>50</fpage> (<year>2017</year>).
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          20. <string-name><surname>Guidotti</surname>, <given-names>R.</given-names></string-name>,
          <string-name><surname>Monreale</surname>, <given-names>A.</given-names></string-name>,
          <string-name><surname>Ruggieri</surname>, <given-names>S.</given-names></string-name>,
          <string-name><surname>Turini</surname>, <given-names>F.</given-names></string-name>,
          <string-name><surname>Pedreschi</surname>, <given-names>D.</given-names></string-name>,
          <string-name><surname>Giannotti</surname>, <given-names>F.</given-names></string-name>:
          <article-title>A Survey Of Methods For Explaining Black Box Models</article-title>.
          arXiv:1802.01933 [cs] (<year>2018</year>).
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          21. <string-name><surname>Gunning</surname>, <given-names>D.</given-names></string-name>:
          <source>Explainable Artificial Intelligence (XAI)</source>.
          <volume>36</volume> (<year>2017</year>).
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          22. <string-name><surname>Hohman</surname>, <given-names>F.</given-names></string-name>,
          <string-name><surname>Head</surname>, <given-names>A.</given-names></string-name>,
          <string-name><surname>Caruana</surname>, <given-names>R.</given-names></string-name>,
          <string-name><surname>DeLine</surname>, <given-names>R.</given-names></string-name>,
          <string-name><surname>Drucker</surname>, <given-names>S.M.</given-names></string-name>:
          <article-title>Gamut: A Design Probe to Understand How Data Scientists Understand Machine Learning Models</article-title>.
          <volume>13</volume> (<year>2019</year>).
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          23. <string-name><surname>Holzinger</surname>, <given-names>A.</given-names></string-name>:
          <article-title>Explainable AI (ex-AI)</article-title>.
          <source>Informatik Spektrum</source>: Vol. <volume>41</volume>, No. 2,
          pp. <fpage>138</fpage>-<lpage>143</lpage>. Springer-Verlag, Berlin Heidelberg (<year>2018</year>).
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          24. <string-name><surname>Holzinger</surname>, <given-names>A.</given-names></string-name>,
          <string-name><surname>Kieseberg</surname>, <given-names>P.</given-names></string-name>,
          <string-name><surname>Weippl</surname>, <given-names>E.</given-names></string-name>,
          <string-name><surname>Tjoa</surname>, <given-names>A.M.</given-names></string-name>:
          <article-title>Current Advances, Trends and Challenges of Machine Learning and Knowledge Extraction: From Machine Learning to Explainable AI</article-title>.
          In: <string-name><surname>Holzinger</surname>, <given-names>A.</given-names></string-name>,
          <string-name><surname>Kieseberg</surname>, <given-names>P.</given-names></string-name>,
          <string-name><surname>Tjoa</surname>, <given-names>A.M.</given-names></string-name>, and
          <string-name><surname>Weippl</surname>, <given-names>E.</given-names></string-name> (eds.)
          <source>Machine Learning and Knowledge Extraction</source>.
          pp. <fpage>1</fpage>-<lpage>8</lpage>. Springer International Publishing, Cham (<year>2018</year>).
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          25. ICCBR-19:
          <source>27th International Conference on Case-Based Reasoning</source>,
          September 8-12, 2019, Otzenhausen, Germany. The theme for ICCBR 2019 is Explainable AI.
          http://iccbr2019.com/. Last access: 04/24/2019.
          XCBR:
          <article-title>First Workshop on case-based reasoning for the explanation of intelligent systems</article-title>,
          at IJCAI-ECAI-18, Stockholm, Sweden, July 13-19, <year>2018</year>.
          http://gaia.fdi.ucm.es/events/xcbr/. Last access: 04/24/2019.
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          26. ISWC-19:
          <source>SEMEX 2019: 1st Workshop on Semantic Explainability, co-located with the 18th International Semantic Web Conference (ISWC 2019)</source>.
          https://scdemo.techfak.uni-bielefeld.de/semex2019/. Last access: 04/24/2019.
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          27. <string-name><surname>Kirsch</surname>, <given-names>A.</given-names></string-name>:
          <article-title>Explain to whom? Putting the User in the Center of Explainable AI</article-title>.
          <source>Proceedings of the First International Workshop on Comprehensibility and Explanation in AI and ML 2017, co-located with the 16th International Conference of the Italian Association for Artificial Intelligence (AI*IA 2017)</source>,
          Bari, Italy (<year>2017</year>).
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>