<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>What Does Explainable AI Really Mean? A New Conceptualization of Perspectives</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Derek Doran</string-name>
          <email>derek.doran@wright.edu</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Sarah Schulz</string-name>
          <email>sarah.schulz@ims.uni-stuttgart.de</email>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Tarek R. Besold</string-name>
          <email>tarek-r.besold@city.ac.uk</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Department of Computer Science City, University of London</institution>
          ,
          <addr-line>London</addr-line>
          ,
          <country country="UK">UK</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Dept. of Computer Science &amp; Engineering, Kno.e.sis Research Center Wright State University</institution>
          ,
          <addr-line>Dayton, Ohio</addr-line>
          ,
          <country country="US">USA</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>Institute for Natural Language Processing University of Stuttgart</institution>
          ,
          <addr-line>Stuttgart</addr-line>
          ,
          <country country="DE">Germany</country>
        </aff>
      </contrib-group>
      <abstract>
<p>We characterize three notions of explainable AI that cut across research fields: opaque systems that offer no insight into their algorithmic mechanisms; interpretable systems where users can mathematically analyze the algorithmic mechanisms; and comprehensible systems that emit symbols enabling user-driven explanations of how a conclusion is reached. The paper is motivated by a corpus analysis of NIPS, ACL, COGSCI, and ICCV/ECCV paper titles showing differences in how work on explainable AI is positioned in various fields. We close by introducing a fourth notion: truly explainable systems, where automated reasoning is central to outputting crafted explanations without requiring human post-processing as the final step of the generative process.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1 Introduction</title>
      <p>
        If you were held accountable for the decision of a machine in contexts that have
financial, safety, security, or personal ramifications to an individual, would you
blindly trust its decision? How can we hold accountable Artificial Intelligence
(AI) systems that make decisions on possibly unethical grounds, e.g. when they
predict a person's weight and health by their social media images [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] or the
world region they are from [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] as part of a downstream determination about
their future, like when they will quit their job [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ], commit a crime [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ], or could
be radicalized into terrorism [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]? It is hard to imagine a person who would feel
comfortable blindly agreeing with a system's decision in such highly
consequential and ethical situations without a deep understanding of the
decision-making rationale of the system. To achieve complete trustworthiness and an
evaluation of the ethical and moral standards of a machine [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ], detailed
"explanations" of AI decisions seem necessary. Such explanations should provide
insight into the rationale the AI uses to draw a conclusion. Yet many analysts
indeed blindly `accept' the outcome of an AI, whether by necessity or by choice.
Copyright © 2018 for this paper by its authors. Copying permitted for private and academic purposes.
      </p>
      <p>To overcome this dangerous practice, it is prudent for an AI to provide not only
an output, but also a human-understandable explanation that expresses the
rationale of the machine. Analysts can turn to such explanations to evaluate whether a
decision is reached by rational arguments and does not incorporate reasoning
steps conflicting with ethical or legal norms.</p>
      <p>
        But what constitutes an explanation? The Oxford English Dictionary has
no entry for the term `explainable', but has one for explanation: a statement
or account that makes something clear; a reason or justification given for an
action or belief. Do present systems that claim to make `explainable' decisions
really provide explanations? Those who argue yes may point to Machine Learning
(ML) algorithms that produce rules about data features to establish a
classification decision, such as those learned by decision trees [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ]. Others suggest that
rich visualizations or text supplied along with a decision, as is often done in
deep learning for computer vision [
        <xref ref-type="bibr" rid="ref16 ref5 ref6">16,5,6</xref>
        ], offer sufficient information to draw
an explanation of why a particular decision was reached. Yet "rules" merely
shed light on how, not why, decisions are made, and supplementary artifacts
of learning systems (e.g. annotations and visualizations) require human-driven
post-processing under their own line of reasoning. The variety of ways
"explanations" are currently handled is well summarized by Lipton [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ] when he states
that "the term interpretability holds no agreed upon meaning, and yet machine
learning conferences frequently publish papers which wield the term in a
quasi-mathematical way". He goes on to call for engagement in the formulation of
problems and their definitions to organize and advance explainable AI research.
In this position paper, we respond to Lipton's call by proposing various "types"
of explainable AI that cut across many fields of AI.
      </p>
    </sec>
    <sec id="sec-2">
      <title>2 Existing Perspectives in Explainable AI</title>
      <p>As stated by Lipton, terms like interpretability are used in research
papers despite the lack of a clear and widely shared definition. In order to
quantify this observation, we suggest a corpus-based analysis of relevant terms
across research communities which strongly rely on ML-driven methodologies.
The goal of this analysis is to gain insight into the evidence of explainability
across AI-related research communities and to detect how each field defines
notions of explainability. (Fig. 1: Normalized corpus frequency of "explain" or
"explanation" in the ACL, NIPS, COGSCI, and ICCV/ECCV proceedings, 2007 to 2016.)
We carried out an
experiment over corpora of papers from the computer vision, NLP, connectionist,
and symbolic reasoning communities. We base our analysis on corpus statistics
compiled from the proceedings of conferences where researchers employ, inter
alia, ML techniques to approach their research objectives: the Annual Meeting
of the Association for Computational Linguistics (ACL), the Annual Conference
on Neural Information Processing Systems (NIPS), the International/European
Conference on Computer Vision (ICCV/ECCV), and the Annual Conference of
the Cognitive Science Society (COGSCI). The corpora include proceedings from
2007 to 2016. This allows us to observe trends regarding the use of words related
to various concepts and the scope these concepts take. We perform a shallow
linguistic search for what we henceforth call "explanation terms". We apply
simple substring matches such as "explain" or "explanation" for explainability,
"interpret" for interpretability, and "compreh" for comprehensibility.
"Explanation terms" serve as an approximation to aspects of explainability. Frequencies
are normalized, making them comparable between years and conferences.</p>
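      <p>A minimal sketch of the shallow substring search described above (an illustrative reconstruction, not the authors' code; the toy corpus string and the token-based normalization are assumptions):</p>

```python
import re
from collections import Counter

# Substring families from the text: "explain"/"explanation" for
# explainability, "interpret" for interpretability, "compreh" for
# comprehensibility. (Matching "explain" also covers "explanation".)
TERM_FAMILIES = {
    "explainability": "explain",
    "interpretability": "interpret",
    "comprehensibility": "compreh",
}

def normalized_frequencies(corpus_text):
    """Count substring matches per family, normalized by token count
    so that numbers are comparable between years and conferences."""
    tokens = re.findall(r"[a-z]+", corpus_text.lower())
    counts = Counter()
    for token in tokens:
        for family, substring in TERM_FAMILIES.items():
            if substring in token:
                counts[family] += 1
    total = max(len(tokens), 1)
    return {family: counts[family] / total for family in TERM_FAMILIES}

freqs = normalized_frequencies(
    "We explain the model; interpretable models aid comprehensibility."
)
```

      <p>Because frequencies are divided by the corpus token count, the same search can be run per conference and per year to produce curves like those in Figure 1.</p>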
      <p>The normalized frequencies for explanation terms are plotted in Figure 1.
We omit frequency plots for interpretation and comprehension terms because
they exhibit a similar pattern. The frequency level of explainability concepts for
COGSCI is significantly above those of the other corpora. This could be due
to the fact that Cognitive Science explicitly aims to explain the mind and its
processes, in many cases leading to qualitatively different research questions,
corresponding terminology, and actively used vocabularies. The NIPS corpus
hints at an emphasis on explainability in 2008 and a slight increase in interest
in this concept in 2016 in the connectionist community. To better understand
how consistent topics and ideas around explainability are across fields, we also
analyze the context of its mentions. Word clouds shown in Figure 2 are a simple
method to gain an intuition about the composition of a concept and its semantic
contents by highlighting the important words related to it. Important words are
defined as words that appear within a 20-word window of a mention of an
explanation term with a frequency highly above average4.</p>
      <p>All communities focus on the explainability of a model, but there is a
difference between the nature of models in Cognitive Science and the other fields.
The COGSCI corpus mentions a participant, a task, and an effect, whereas the
other communities focus on a model and what constitutes data in their fields. It
is not surprising that the neural modeling and NLP communities show a large
overlap in their usage of explainability, since there is an overlap in the research
communities as well. We note further differences across the three ML
communities compared to COGSCI. In the ACL corpus, explainability is often paired
with words like features, examples, and words, which could suggest an emphasis
on using examples to demonstrate the decision making of NLP systems and
the relevance of particular features. In the NIPS corpus, explainability is more
closely tied to methods, algorithms, and results, suggesting a desire to
establish explanations about how neural systems translate inputs to outputs. The
ICCV/ECCV corpus falls between the ACL and NIPS corpora in the sense that it pairs
explainability with data (images) and features (objects) like ACL, but may also
tie the notion to how algorithms use (using) images to generate outputs.</p>
      <p>The corpus analysis establishes some differences in how various AI
communities approach the concept of explainability. In particular, we note that the term
is sometimes used to help probe the mechanisms of ML systems (e.g. we seek an
interpretation of how the system works), and other times to relate explanations
to particular inputs and examples (e.g. we want to comprehend how an input
was mapped to an output). We use these observations to develop the following
notions, also illustrated in Figure 3:</p>
      <p>4 Word clouds are generated with the word-cloud package (http://amueller.github.io/word_cloud/index.html).</p>
      <p>Opaque systems. A system where the mechanisms mapping inputs to
outputs are invisible to the user. It can be seen as an oracle that makes predictions
over an input without indicating how and why predictions are made. Opaque
systems emerge, for instance, when closed-source AI is licensed by an
organization, where the licensor does not want to reveal the workings of its proprietary
AI. Similarly, systems relying on genuine "black box" approaches, for which
inspection of the algorithm or implementation does not give insight into the
system's actual reasoning from inputs to corresponding outputs, are classified as
opaque.</p>
      <p>Interpretable systems. A system where a user can not only see, but also
study and understand how inputs are mathematically mapped to outputs. This
implies model transparency, and requires a level of understanding of the
technical details of the mapping. A regression model can be interpreted by comparing
covariate weights to realize the relative importance of each feature to the
mapping. SVMs and other linear classifiers are interpretable insofar as data classes
are defined by their location relative to decision boundaries. But the action of
deep neural networks, where input features may be automatically learned and
transformed through non-linearities, is unlikely to be interpretable by most users.</p>
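      <p>The weight-comparison reading of a regression model can be sketched as follows (a hypothetical example assuming NumPy; the data, coefficients, and feature names are invented):</p>

```python
import numpy as np

# Synthetic data: two covariates on a common (standard normal) scale,
# so the fitted weights are directly comparable.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = 3.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=200)

# Ordinary least squares fit.
w, *_ = np.linalg.lstsq(X, y, rcond=None)

# Interpreting the model: rank covariates by weight magnitude to see
# the relative importance of each feature to the input-output mapping.
importance = sorted(zip(["feature_0", "feature_1"], w),
                    key=lambda t: -abs(t[1]))
```

      <p>Such a reading is only meaningful when the covariates share a scale; for deep networks, whose learned features pass through non-linearities, no analogous weight comparison is available to most users.</p>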
      <p>
        Comprehensible systems. A comprehensible system emits symbols along
with its output (echoing Michie's strong and ultra-strong machine learning [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ]).
These symbols (most often words, but also visualizations, etc.) allow the user
to relate properties of the inputs to their output. The user is responsible for
compiling and comprehending the symbols, relying on her own implicit form of
knowledge and reasoning about them. This makes comprehensibility a graded
notion, with the degree of a system's comprehensibility corresponding to the
relative ease or difficulty of the compilation and comprehension. The required
implicit form of knowledge on the side of the user is often an implicit
cognitive "intuition" about how the input, the symbols, and the output relate to each
other. Taking the image in Figure 3 as an example, it is intuitive to think that users
will comprehend the symbols by noting that they represent objects observed in
the image, and that the objects may be related to each other as items often seen
in a factory. Different users may have different tolerances in their
comprehension: some may be willing to draw arbitrary relationships between objects while
others would only be satisfied under a highly constrained set of assumptions.
      </p>
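      <p>As a caricature of a comprehensible system in the sense above, consider a scorer that emits, alongside its output, the input properties ("symbols") that influenced it; the vocabulary and weights below are entirely hypothetical:</p>

```python
# Hypothetical symbol-emitting classifier: the user, not the system,
# turns the emitted symbols into an explanation.
WEIGHTS = {"conveyor": 2.0, "helmet": 1.5, "forklift": 1.8, "tree": -1.0}

def classify_with_symbols(tokens):
    """Return a label plus the symbols that contributed to it."""
    score = sum(WEIGHTS.get(t, 0.0) for t in tokens)
    label = "factory" if score > 0 else "other"
    symbols = [t for t in tokens if WEIGHTS.get(t, 0.0) != 0.0]
    return label, symbols

label, symbols = classify_with_symbols(["conveyor", "helmet", "sky"])
```

      <p>The system never states why "conveyor" and "helmet" imply a factory; relating those symbols to the output remains the user's own implicit reasoning, which is what makes comprehensibility a graded notion.</p>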
    </sec>
    <sec id="sec-3">
      <title>3 Defining Notions of Explainability</title>
      <p>
        The arrows in Figure 3 suggest that comprehensible and interpretable systems
are each improvements over opaque systems. The notions of comprehension and
interpretation are separate: while interpretation requires transparency in the
underlying mechanisms of a system, a comprehensible one can be opaque while
emitting symbols a user can reason over. Regression models, support vector
machines, decision trees, ANOVAs, and data clustering (assuming a kernel that
is itself interpretable) are common examples of interpretable models. High-dimensional
data visualizations like t-SNE [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] and receptive field visualization on
convolutional neural networks [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] are examples of comprehensible models.
      </p>
      <p>It is important that research in both interpretable and comprehensible
systems continues. This is because, depending on the user's background and
her purpose in employing an AI model, one type is preferable to the other. As a
real-life example of this, most people think of a doctor as a kind of black box
that transforms symptoms and test results into a diagnosis. Without providing
information about the way medical tests and evaluations work, doctors deliver
a diagnosis to a patient by explaining high-level indicators revealed in the tests
(i.e. system symbols). Thus, when facing a patient, the physician should be like
a comprehensible model. When interacting with other doctors and medical staff,
however, the doctor may be like an interpretable model: she can sketch a technical
line of reasoning connecting patient symptoms and test results to a particular diagnosis.
Other doctors and staff can interpret a diagnosis in the same way that an analyst
can interpret an ML model, ensuring that the conclusions drawn are supported
by reasonable evaluation functions and weight values for the evidence presented.</p>
      <p>
        Explainable system traits. Often discussed alongside explainable AI are
the external traits such systems should exhibit. These traits are seen as so
important that some authors argue an AI system is not `explainable' if it does not
support them [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]. Figure 4 presents such traits and conveys their dependence on
not only the learning model but also the user. For example, explainable AI should
instill confidence and trust that the model operates accurately. Yet the
perception of trust is moderated by a user's internal bias for or against AI systems, and
their past experiences with their use. Safety, ethicality, and fairness are traits
that can only be evaluated by a user's understanding of societal standards and
by her ability to reason about emitted symbols or mathematical actions. Present-day
systems fortunately leave this reasoning to the user, keeping a person as a
stopgap preventing unethical or unfair recommendations from being acted upon.
      </p>
      <p>We also note that "completeness" is not an explicit trait, and might not even
be desirable as such. Continuing with the doctor example from above, it may
be desirable for a system to present a simplified (in the sense of incomplete,
as opposed to abstracted) `explanation', similar to a doctor using a patient's
incomplete and possibly not entirely accurate preconceptions in explaining a
complex diagnosis, or even sparing the patient especially worrisome details which
might not be relevant for the subsequent treatment.</p>
    </sec>
    <sec id="sec-4">
      <title>4 Truly Explainable AI Should Integrate Reasoning</title>
      <p>Interpretable and comprehensible models encompass much of the present work
in explainable AI. Yet we argue that both approaches are lacking in their ability
to formulate, for the user, a line of reasoning that explains the decision-making
process of a model using human-understandable features of the input data.
Reasoning is a critical step in formulating an explanation about why or how some
event has occurred (see, e.g., Figure 5). Leaving explanation generation to
human analysts can be dangerous since, depending on their background knowledge
about the data and its domain, different explanations about why a model makes
a decision may be deduced. Interpretable and comprehensible models thus enable
explanations of decisions, but do not yield explanations themselves.</p>
      <p>
        Efforts in neural-symbolic integration [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] aim to develop methods which might
enable explicit automated reasoning over model properties and decision factors
by extracting symbolic rules from connectionist models. Combining their results
with work investigating factors influencing the human comprehensibility of
representation formats and reasoning approaches [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ] might pave the way towards
systems effectively providing full explanations of their own to their users.
Acknowledgement. The authors thank the Schloss Dagstuhl – Leibniz
Center for Informatics and the organizers and participants of Dagstuhl Seminar 17192
on Human-Like Neural-Symbolic Computing for providing the environment to
develop the ideas in this paper. This work is partially supported by a Schloss
Dagstuhl travel grant and by the Ohio Federal Research Network. Parts of the
work have been carried out at the Digital Media Lab of the University of Bremen.
      </p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <given-names>Al</given-names>
            <surname>Hasan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            ,
            <surname>Chaoji</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            ,
            <surname>Salem</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            ,
            <surname>Zaki</surname>
          </string-name>
          ,
          <string-name>
            <surname>M.</surname>
          </string-name>
          :
          <article-title>Link prediction using supervised learning</article-title>
          . In: Wkshp.
          <article-title>on link analysis, counter-terrorism and security (</article-title>
          <year>2006</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>Bau</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zhou</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Khosla</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Oliva</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Torralba</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>Network Dissection: Quantifying Interpretability of Deep Visual Representations</article-title>
          .
          <source>In: Proc. of Computer Vision and Pattern Recognition</source>
          (
          <year>2017</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Garcez</surname>
          </string-name>
          , A.d.,
          <string-name>
            <surname>Besold</surname>
            ,
            <given-names>T.R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>De Raedt</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          , Foldiak,
          <string-name>
            <given-names>P.</given-names>
            ,
            <surname>Hitzler</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            ,
            <surname>Icard</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            , Kuhnberger, K.U.,
            <surname>Lamb</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.C.</given-names>
            ,
            <surname>Miikkulainen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            ,
            <surname>Silver</surname>
          </string-name>
          ,
          <string-name>
            <surname>D.L.</surname>
          </string-name>
          :
          <article-title>Neural-symbolic learning and reasoning: contributions and challenges</article-title>
          .
          <source>In: Proceedings of the AAAI Spring Symposium on Knowledge Representation and Reasoning: Integrating Symbolic and Neural Approaches</source>
          , Stanford (
          <year>2015</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Gerber</surname>
            ,
            <given-names>M.S.:</given-names>
          </string-name>
          <article-title>Predicting crime using Twitter and kernel density estimation</article-title>
          .
          <source>Decision Support Systems</source>
          <volume>61</volume>
          ,
          <fpage>115</fpage>-<lpage>125</lpage>
          (
          <year>2014</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>Johnson</surname>
          </string-name>
          , J.,
          <string-name>
            <surname>Karpathy</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Fei-Fei</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          :
          <article-title>Densecap: Fully convolutional localization networks for dense captioning</article-title>
          .
          <source>In: Proc. of Computer Vision and Pattern Recognition</source>
          . pp.
          <fpage>4565</fpage>-<lpage>4574</lpage>
          (
          <year>2016</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <surname>Karpathy</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Fei-Fei</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          :
          <article-title>Deep visual-semantic alignments for generating image descriptions</article-title>
          .
          <source>In: Proc. of Conference on Computer Vision and Pattern Recognition</source>
          . pp.
          <fpage>3128</fpage>-<lpage>3137</lpage>
          (
          <year>2015</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <surname>Katti</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Arun</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          :
          <article-title>Can you tell where in India I am from? Comparing humans and computers on fine-grained race face classification</article-title>
          .
          <source>arXiv preprint arXiv:1703.07595</source>
          (
          <year>2017</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <surname>Kocabey</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Camurcu</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            Ofli,
            <given-names>F.</given-names>
            ,
            <surname>Aytar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            ,
            <surname>Marin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            ,
            <surname>Torralba</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            ,
            <surname>Weber</surname>
          </string-name>
          ,
          <string-name>
            <surname>I.</surname>
          </string-name>
          :
          <article-title>Face-to-BMI: Using computer vision to infer body mass index on social media</article-title>
          .
          <source>arXiv preprint arXiv:1703.03156</source>
          (
          <year>2017</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <surname>Lipton</surname>
            ,
            <given-names>Z.C.</given-names>
          </string-name>
          :
          <article-title>The mythos of model interpretability</article-title>
          .
          <source>Workshop on Human Interpretability in Machine Learning</source>
          (
          <year>2016</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <surname>Maaten</surname>
          </string-name>
          , L.v.d.,
          <string-name>
            <surname>Hinton</surname>
          </string-name>
          , G.:
          <article-title>Visualizing data using t-SNE</article-title>
          .
          <source>Journal of Machine Learning Research</source>
          <volume>9</volume>
          ,
          <fpage>2579</fpage>-<lpage>2605</lpage>
          (
          <year>2008</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <surname>Michie</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          <article-title>: Machine learning in the next five years</article-title>
          .
          <source>In: Proc. of the Third European Working Session on Learning</source>
          . pp.
          <fpage>107</fpage>-<lpage>122</lpage>
          .
          <string-name>
            <surname>Pitman</surname>
          </string-name>
          (
          <year>1988</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <surname>Saradhi</surname>
            ,
            <given-names>V.V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Palshikar</surname>
            ,
            <given-names>G.K.</given-names>
          </string-name>
          :
          <article-title>Employee churn prediction</article-title>
          .
          <source>Expert Systems with Applications</source>
          <volume>38</volume>
          (
          <issue>3</issue>
          ),
          <fpage>1999</fpage>-<lpage>2006</lpage>
          (
          <year>2011</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13.
          <string-name>
            <surname>Schmid</surname>
            ,
            <given-names>U.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zeller</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Besold</surname>
            ,
            <given-names>T.R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tamaddoni-Nezhad</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Muggleton</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          :
          <article-title>How Does Predicate Invention Affect Human Comprehensibility? Inductive Logic Programming: ILP 2016 Revised Selected Papers pp</article-title>
          .
          <fpage>52</fpage>-<lpage>67</lpage>
          (
          <year>2017</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          14.
          <string-name>
            <surname>Shafiq</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.Z.</given-names>
            ,
            <surname>Erman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            ,
            <surname>Ji</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            ,
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.X.</given-names>
            ,
            <surname>Pang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            ,
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <surname>J.</surname>
          </string-name>
          :
          <article-title>Understanding the impact of network dynamics on mobile video user engagement</article-title>
          .
          <source>In: ACM SIGMETRICS Performance Evaluation Review</source>
          . pp.
          <fpage>367</fpage>-<lpage>379</lpage>
          (
          <year>2014</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          15.
          <string-name>
            <surname>Skirpan</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Yeh</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          :
          <article-title>Designing a Moral Compass for the Future of Computer Vision using Speculative Analysis</article-title>
          .
          <source>In: Proc. of Computer Vision and Pattern Recognition</source>
          (
          <year>2017</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          16.
          <string-name>
            <surname>You</surname>
            ,
            <given-names>Q.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Jin</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wang</surname>
            ,
            <given-names>Z.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Fang</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Luo</surname>
          </string-name>
          , J.:
          <article-title>Image captioning with semantic attention</article-title>
          .
          <source>In: Proc. of Computer Vision and Pattern Recognition</source>
          . pp.
          <fpage>4651</fpage>-<lpage>4659</lpage>
          (
          <year>2016</year>
          )
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>