<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Position: The Case Against Case-Based Explanation</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Jonathan Dodge</string-name>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Oregon State University, 1148 Kelley Engineering Center</institution>
          ,
          <addr-line>Corvallis, OR</addr-line>
          ,
          <country country="US">USA</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>Case-based explanation (CBE) goes by many names, and in this paper I will argue that we should tend towards alternative choices when designing XAI systems. My argumentation for the stated claim rests on four broad points: (1) people seem to dislike CBE; (2) CBE relies on weak semantic linkage; (3) CBE is epistemically outmatched; (4) CBE is restrictive. This paper expounds on these arguments and concludes with thoughts about characteristics of possible alternatives.</p>
      </abstract>
      <kwd-group>
        <kwd>Explainable AI</kwd>
        <kwd>Social Aspects of AI</kwd>
        <kwd>Social Aspects of Explanation</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>What is case-based explanation? Broadly speaking, case-based explanation is the notion of providing example(s) from the training data which are “similar” to the instance being explained, for some definition of similar. I have deployed case-based explanation [<xref ref-type="bibr" rid="ref3">1</xref>], as have other researchers (e.g., [<xref ref-type="bibr" rid="ref4">2</xref>]). Another name for the same strategy is example-based explanation (e.g., [<xref ref-type="bibr" rid="ref5 ref6">3, 4</xref>]). For the rest of this paper, we will use CBE to refer to case-based explanation and its aliases.</p>
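      <p>To make the retrieval step concrete, here is a minimal sketch of CBE over tabular data. This is my own illustration, not code from any cited system; the feature names, data, and choice of Euclidean distance are hypothetical stand-ins. It simply returns the k training cases nearest to the instance being explained:</p>
      <preformat>
import numpy as np

def case_based_explanation(X_train, y_train, x, k=3):
    """Retrieve the k training cases most 'similar' to x,
    here using Euclidean distance as the notion of similar."""
    dists = np.linalg.norm(X_train - x, axis=1)
    nearest = np.argsort(dists)[:k]
    return [(X_train[i], y_train[i], dists[i]) for i in nearest]

# Hypothetical tabular training data: [age, priors_count]
X_train = np.array([[25, 0], [31, 2], [38, 1], [45, 7]], dtype=float)
y_train = np.array(["did not reoffend", "reoffended",
                    "did not reoffend", "reoffended"])

# "Explain" a new instance by presenting its most similar cases.
for case, label, d in case_based_explanation(X_train, y_train,
                                             np.array([30.0, 1.0])):
    print(f"similar case {case} (distance {d:.2f}): {label}")
      </preformat>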
      <p>To ground our discussion, here are examples of CBE:</p>
      <sec id="sec-1-1">
        <title>Ex.#1: “The training set contained 10 individuals</title>
        <p>
          identical to Iliana; 6 of them reofended (60%)” [
          <xref ref-type="bibr" rid="ref3">1</xref>
          ]
Ex.#2: “This decision was based on thousands of
similar cases.For example, a similar case to yours is
a previous customer, Claire. She was 38 years old
with 18 years of driving experience, drove 850 miles
per month, occasionally exceeded the speed limit,
and 25% of her trips took place at night. Claire was
involved in one accident in the following year.” [
          <xref ref-type="bibr" rid="ref4">2</xref>
          ]
        </p>
        <sec id="sec-1-1-1">
          <title>Linear or Logistic Regression</title>
        </sec>
        <sec id="sec-1-1-2">
          <title>Decision Trees or Random Forests</title>
          <p>Gradient Boosting Machines
59.5</p>
          <p>80.3
74.1
39.6 Convolutional NNs
28.7 Bayesian Things
27.6 Dense NNs
26.7 Recurrent NNs
17.1 Transformers</p>
        </sec>
        <sec id="sec-1-1-3">
          <title>7.6 Generative Adversarial Networks</title>
        </sec>
        <sec id="sec-1-1-4">
          <title>5.8 Evolutionary Approaches 0 3.3 2.4</title>
        </sec>
        <sec id="sec-1-1-5">
          <title>Other</title>
        </sec>
        <sec id="sec-1-1-6">
          <title>None 25 50 75</title>
          <p>≫</p>
      <p>With examples in hand, I’d like to draw a distinction between CBE and case-based reasoning, a more general process. Case-based reasoning comes very naturally to people; for example, Sarkar, et al. [<xref ref-type="bibr" rid="ref7">5</xref>] report participants informally describing it when asked how the system worked. Aamodt and Plaza [<xref ref-type="bibr" rid="ref8">6</xref>] describe case-based reasoning as being based on a cycle of four processes: Retrieve, Reuse, Revise, Retain. They write:</p>
      <p>“What we refer to as typical case-based methods also has another characteristic property: They are able to modify, or adapt, a retrieved solution when applied in a different problem solving context.”</p>
      <p>Thus, explanations by perturbation (e.g., Sensitivity from Binns [<xref ref-type="bibr" rid="ref4">2</xref>]) are a form of case-based reasoning. However, perturbation is not a form of CBE, which consists only of Retrieval and Reuse.</p>
      <p>k-Nearest-Neighbors (k-NN, see [<xref ref-type="bibr" rid="ref9">7</xref>], Chapter 5) is not a commonly used classifier, as Figure 1 shows, but it is one of the few circumstances in which CBE is sound1. Kulesza et al. [<xref ref-type="bibr" rid="ref1">8</xref>] proposed the taxonomy of soundness and completeness, reflecting whether an explanation tells “the whole truth (completeness) and nothing but the truth (soundness).” Those authors describe a possible consequence of low soundness to be “reduced trust in explanations.”</p>
      <p>Conjecture: It is a bad idea to deploy Case-Based Explanations when not using k-Nearest-Neighbors, or similar.</p>
      <p>1 Disclaimer: other criticisms raised in the paper may still apply when using CBE for k-NN—but at least it is sound then.</p>
      <p>Low soundness alone might be enough to earn a “bad idea” label. One consequence of such labelling is that sometimes bad ideas should remain unused, as Correll [<xref ref-type="bibr" rid="ref2">9</xref>] ably argues in the context of glyphs. The argumentation there can be applied to explanations, and would start with the assumption that there is a set of N explanations that are the established standards. Now, introducing the (N+1)th with appropriate comparison to standard techniques costs increasingly large amounts of experimentation as N increases. Thus, all researchers benefit if the community occasionally weeds out “bad ideas.”</p>
      <p>Some readers may ask, “Why do you care enough to write this paper?” I was presenting an invited talk and during Q&amp;A had the following exchange while answering a question on the explanation types just presented:</p>
      <p>Guest (me): “...I think case-based explanation is generally a very bad idea. Part of the reason I say this is, first of all, Binns, when they researched case-based explanation, they found it to be the least preferred. We also found that [result]. And then also it has these issues you kind of saw, with that self-refuting explanation. The reason that is happening is because it is not sound. The classifier isn’t a K-means clusterer or something, and so when your explanation is assuming that structure to the classifier, you are lying to people.”</p>
      <p>Interviewer (Ali El-Sharif): “You just shocked me, by the way. I always thought case-based explanations were more intuitive, that they presented something logical2.”</p>
      <p>2 https://www.youtube.com/watch?v=bD5_q2t-4S8, timestamp ≈ 37:00</p>
      <p>Afterwards, I decided that perhaps these thoughts were worth developing further and writing about. As negative about CBE as I was at the time of this exchange, the more I investigated, the more problems I found. As an example, CBE is one of the suggested explanation strategies found in Table 1 in the Royal Society of London’s policy briefing3; meaning that briefers have been, and will be, instructing policymakers that CBE is satisfactory. I do not think it is satisfactory, so here are my four reasons for the stated conjecture; let’s see if I can shock you too.</p>
      <p>3 https://royalsociety.org/-/media/policy/projects/explainable-ai/AI-and-interpretability-policy-briefing.pdf</p>
    </sec>
    <sec id="sec-2">
      <title>1. Users seem to dislike CBE</title>
      <p>As mentioned, Binns, et al. [<xref ref-type="bibr" rid="ref4">2</xref>] found CBE to be the worst of their four proposed templates for textual explanation (Demographic, Sensitivity, Input-Influence, and Case). More specifically, in those authors’ words:</p>
      <p>“Tukey’s post-hoc paired tests showed that case-based explanations result in lower perceptions of appropriateness, fair process perception, and (in the loans case) deservedness, consistently compared to sensitivity based styles and occasionally compared to other styles. This is an effect primarily observed, like most effects in the quantitative part of our study, in the within subject study design, indicating that the act of comparison in a particular scenario is important for these differences to become apparent. Case-based explanations seem to have the most consistent negative impact on justice perceptions when presented alongside alternative explanation styles.” [<xref ref-type="bibr" rid="ref4">2</xref>]</p>
      <p>When we created a program to generate explanations following the same four templates, we observed the same result—CBE was the worst of the bunch [<xref ref-type="bibr" rid="ref3">1</xref>]. More specifically, in those authors’ words:</p>
      <p>“As we found in the quantitative results, case-based explanation was judged to be the least fair—and the qualitative results provided reasons. First, some found it to provide little information about how the algorithm arrives at a conclusion. Second, the number of identical cases and the percentage of cases supporting the decision are often considered too small to justify the decision—‘It was unfair for the defendant because she was compared to only 22 other identical individuals... not to mention that only a little over 50% reoffended.’ (CR-61). This observation is consistent with Binns et al. [<xref ref-type="bibr" rid="ref4">2</xref>], however, our work is based on the actual output of a ML model trained on a real dataset—allowing us to empirically show a limitation of case-based explanation4.” [<xref ref-type="bibr" rid="ref3">1</xref>]</p>
      <p>More recently, van der Waa, et al. [<xref ref-type="bibr" rid="ref6">4</xref>] also found negative results for CBE, in those authors’ words:</p>
      <p>“Our results show that rule-based explanations have a small positive effect on system understanding, whereas both rule- and example-based explanations seem to persuade users in following the advice even when incorrect. Neither explanation improves task performance compared to no explanation. This can be explained by the fact that both explanation styles only provide details relevant for a single decision, not the underlying rationale or causality. These results show the importance of user evaluations in assessing the current assumptions and intuitions on effective explanations.” [<xref ref-type="bibr" rid="ref6">4</xref>]</p>
      <p>Here we see a study that found, essentially, no effect from CBE—other than misleading users. Their final point about the importance of user evaluation is well taken, but not widely applied.</p>
      <p>4 “We found that 16% of the test data exhibited the failure mode of contradicting the claim (&lt;50% of individuals with identical features share label).” [<xref ref-type="bibr" rid="ref3">1</xref>]</p>
      <p>As evidence that few researchers conduct user evaluations, consider that Keane and Kenny [10] surveyed 1,102 papers on “post-hoc explanation by example” before concluding:</p>
      <p>“In all the papers we examined we found less than a handful (i.e., &lt;5) that performed any adequate user testing of the proposal that cases improved the interpretability of models; this gap needs to be rectified.” [10]</p>
      <p>Thus, if an XAI system designer faces a choice between CBE and an alternative, to the extent we have evidence at all, most of it seems to suggest that users will prefer the alternative.</p>
    </sec>
    <sec id="sec-2b">
      <title>2. CBE Relies on Weak Semantic Linkage</title>
      <p>Under the hood, CBE relies on some notion of distance, as Sarkar et al. [11] explain:</p>
      <p>“Since in order to explain the k-NN algorithm’s behavior it suffices to represent proximity, rather than variation along any particular dimension, we sacrifice concrete interpretations of the spatial axes in favor of expressing ‘nearness’ and ‘farness’.”</p>
      <p>That distance, in turn, defines a neighborhood around the instance being explained, e.g., by expanding the neighborhood until it contains k items, regardless of label, or until it contains k items of a particular label (as in [<xref ref-type="bibr" rid="ref5">3</xref>]); a minimal sketch of the latter strategy follows.</p>
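      <p>The sketch below is my own illustration, not code from [3]; the numpy array-per-row data layout is an assumption. It grows the neighborhood outward, nearest case first, until it holds k cases of the requested label:</p>
      <preformat>
import numpy as np

def neighborhood_until_k_of_label(X_train, y_train, x, label, k):
    """Expand the neighborhood around x, nearest case first,
    stopping once it contains k cases carrying the given label."""
    order = np.argsort(np.linalg.norm(X_train - x, axis=1))
    neighborhood, found = [], 0
    for i in order:
        neighborhood.append(i)
        if y_train[i] == label:
            found += 1
            if found == k:
                break
    # Note: if the label is rare near x, the neighborhood can grow
    # very large, stretching any semantic meaning of "similar".
    return neighborhood
      </preformat>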
      <p>So, here the reader asks: what’s the problem with distance? Well, distance only means something up until it doesn’t. Figure 2 illustrates that the same unit distance has drastically different semantic meaning depending on where in feature space one considers the displacement. Concretely, in one case moving a distance of the neighborhood radius will never change the decision, while in the other case, it may.</p>
      <p>The distance problem only gets worse if we assume the feature space is impoverished, which is typically true of decisions involving humans and many other applications which violate the closed-world assumption5. It seems probable that every explanation style will suffer when deployed under violation of this assumption. However, it also seems that CBE will fare particularly badly—due to reliance on distance. Suppose my current representation uses n-dimensional features, and a (possibly theoretical) perfect representation uses n + m dimensions. Converting between the representations can be viewed as a vector space projection. However, projections can mangle distances when not done carefully6, as Figure 3 illustrates.</p>
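      <p>The following small sketch (mine; random data stands in for the hypothetical n- and (n + m)-dimensional representations) shows the effect: after projecting away dimensions, a query point’s nearest neighbors are typically a different set of cases.</p>
      <preformat>
import numpy as np

rng = np.random.default_rng(seed=0)
X = rng.normal(size=(200, 5))  # stand-in for the richer (n + m)-dim space
x = rng.normal(size=5)         # the instance to be explained

def nearest(points, query, k=5):
    """Indices of the k points closest to the query."""
    return np.argsort(np.linalg.norm(points - query, axis=1))[:k]

full_space = nearest(X, x)            # neighbors using all 5 dimensions
projected = nearest(X[:, :3], x[:3])  # neighbors after keeping only 3

# The two sets usually differ, so which "similar cases" a CBE shows
# depends on which dimensions the representation happened to keep.
print("nearest in full space:   ", full_space)
print("nearest after projection:", projected)
      </preformat>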
      <p>To be concrete, consider using CBE in a domain like chess. This might make perfect sense, because the domain is fully captured by the representation. As a result, one might expect nearby points in feature space to have a strong semantic linkage, e.g., the same board has the same good actions available, and a similar board might be a successor, predecessor, or sibling.</p>
      <p>To contrast, consider Example #2 and the implications of the closed-world assumption. The explanation contains only some information about driving history, while one could imagine many other features being useful to predict insurance costs, such as risk tolerance, alcohol consumption, etc. This is not to say that such information should be obtained, merely that we should be wary of over-relying on distance when we know in advance that the features are incomplete. The reason is that the strength of the semantic linkage of distance decreases as the feature set gets further from a total representation.</p>
    </sec>
    <sec id="sec-3">
      <title>3. CBE is Epistemically</title>
    </sec>
    <sec id="sec-4">
      <title>Outmatched</title>
      <p>[Figure 4: “Burdens of Proof,” the spectrum of certainty: Mere Speculation; Reasonable Suspicion; Probable Cause; Preponderance of the Evidence; Clear and Convincing Evidence; Beyond a Reasonable Doubt; Absolute Certainty.]</p>
      <p>Due to unsoundness, CBE can generate self-refuting explanations—a form of epistemic mismatch. Footnote 4 hints at this effect; Dodge and Burnett [12] previously argued it more thoroughly (see Figure 2 in that paper). To briefly restate that argument: weak evidence occurs when the neighborhood around the instance to be explained contains mixed examples (“I labelled this an X; 50% of nearby things are Xs.”); contradictions occur when it contains few or no examples of the same label (“I labelled this an X; 10% of nearby things are Xs.”).</p>
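      <p>Both failure modes are mechanically detectable. Here is a sketch of such a check (my own, not the method of [12]; the labels and the 75% cutoff for “weak” are illustrative assumptions), grading a CBE by how many retrieved neighbors agree with the model’s prediction:</p>
      <preformat>
def cbe_diagnosis(prediction, neighbor_labels):
    """Grade a case-based explanation by neighbor agreement:
    self-refuting, weak evidence, or supporting evidence."""
    matches = sum(1 for lbl in neighbor_labels if lbl == prediction)
    agreement = matches / len(neighbor_labels)
    if agreement &lt; 0.5:
        return "self-refuting: most similar cases contradict the prediction"
    if agreement &lt; 0.75:  # illustrative threshold for a "mixed" neighborhood
        return "weak evidence: mixed neighborhood"
    return "supporting evidence"

# Hypothetical: the model predicts "reoffend", yet 3 of 4 neighbors did not.
print(cbe_diagnosis("reoffend",
                    ["no reoffend", "no reoffend", "reoffend", "no reoffend"]))
      </preformat>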
      <p>By epistemic outmatch, I refer to the strength of the claim far exceeding the “burden of proof.” An assessor consuming explanations might be trying to determine if the system “is fit for the purpose” [13]. Here, this is a fairly strong claim, and so the legal regime might want a high burden of proof, as depicted in Figure 4. Does consuming CBE confer “absolute certainty”, akin to a mathematical proof?</p>
      <p>CBE is definitely not proof. To clarify, CBE is essentially a form of “proof-by-example,” which is a known logical fallacy7. Instead of “proof-by-example”, could CBE be considered disproof-by-counterexample, which is a valid proof technique8? Occasionally, yes; however, many ML/AI systems do not support such formalism, as they are statistical machines. In particular, even a 99.9% accurate classifier will mishandle specific instances. Even the existence of many such examples does not disprove anything—if there are appropriately many more correctly handled instances.</p>
      <p>If CBE isn’t proof, and instead is merely evidence, what kind of evidence is it? It describes the decision, providing potentially important context; but it does not justify the decision, as stated by van der Waa et al. [<xref ref-type="bibr" rid="ref6">4</xref>]. Because these words have fairly broad meanings, let me clarify my intended meanings with an example, based on a tale about bank robber Willie Sutton [14], and informed by definitions found in Sørmo, et al. [15]:</p>
      <p>Willie Sutton, a bank robber, was asked why he robbed banks, and his response was “That’s where the money is.” This description just offers a fact about the context of banks—notably, the story does not include the explainer. An example transparent answer might be: “I wanted the money in the bank.” The latter example contextualizes the action with respect to why the actor performed it, explaining how the system reached an answer [15]. Note that to justify that action, the explanation would need to offer a reason as to why it is good for Willie to want to rob banks.</p>
      <p>Some explanations do not change based on the instance to be explained, and so are called global explanations. As a result of being static in this way, it seems impossible for a single global explanation to offer transparency into every decision. However, while this can be taken as evidence that being a local explanation is a necessary condition for transparency, it is not sufficient. In particular, CBE is a local explanation, since each instance will have different neighbors.</p>
      <p>Thus, since CBE merely describes the decision, offering little transparency to afford the assessor introspection on the system, it amounts to weak evidence attempting to support a strong claim. Case dismissed!</p>
      <p>7 https://en.wikipedia.org/wiki/Proof_by_example</p>
      <p>8 https://en.wikipedia.org/wiki/Counterexample</p>
    </sec>
    <sec id="sec-4">
      <title>4. CBE Is Restrictive</title>
      <p>CBE requires two properties to be true of the training data. First, the training data must still be accessible; second, the explainer must be allowed to present training data to the user, possibly in an anonymized form. ML/AI techniques make predictions in many domains where one or the other criterion is violated, which means there are constraints on when researchers can deploy CBE responsibly.</p>
      <p>Accessing training data is easiest when the training data is tabular. Tabular data stands in contrast with unstructured data:</p>
      <p>“A very naive definition for unstructured data is anything that cannot be put up into traditional row-column or tabular database. The common examples of unstructured data are text or document based data, network or graph data, image data, video, audio, web-based logs, sensor data, etc.9”</p>
      <p>9 https://medium.com/analytics-vidhya/sql-to-nosql-4dd15ab121b0</p>
      <p>Concretely, when OpenAI trained GPT-3 [16], they input a vast quantity of unstructured text data—and so the training data would not be tabular. In situations like these, XAI system designers cannot always rely on the training data being cheaply accessible, if at all.</p>
      <p>Our second criterion, presenting training data to assessors, is easiest when the training data is not private. For example, suppose a robot is delivering objects and that Alice is the current package recipient, but the robot botches the delivery somehow. Should Alice be able to request information about previous deliveries? How about if Alice were instead a developer? The main goal of privacy-preserving machine learning (see e.g., [17]) is to prevent leakage of private information. CBE stands in tension with that, because even if the explanation is anonymized, it may provide enough features (e.g., “a combination of gender, race, birth date, geographic indicator, and other descriptors”10) to allow “indirect identification”.</p>
      <p>10 https://www.dol.gov/general/ppii</p>
    </sec>
    <sec id="sec-5">
      <title>What Should We Do Instead?</title>
      <p>Having concluded my argumentation for the conjecture, the reader may ask what they should use instead. Personally, I like the visualization strategies (e.g., search trees, charts, saliency maps), but recognize that not everyone does. Returning to my geometric framing, there are essentially three entities available for explanation:
1. the decision boundary;
2. the instance to be explained;
3. the training data.</p>
      <p>Some explanations use the entities directly, or characterize the relationship between them, or some mixture. CBE relies heavily on the relationship between (2) and (3), and essentially does not use (1). In the introduction we saw three other alternative textual explanation templates from Binns, et al. [<xref ref-type="bibr" rid="ref4">2</xref>] which appear superior to CBE: Demographic, Input-Influence, and Sensitivity.</p>
      <p>Demographic explanation attempts to use so much of (3) as to insinuate the location of (1). As an example of a fragment of this style:</p>
      <p>54% of those with 0 juvenile priors did NOT reoffend
25% of those with &gt;0 juvenile priors did NOT reoffend</p>
      <p>Meanwhile, Input-Influence explanations directly characterize (1), as we see in the following fragment (features positively correlated with reoffending are shown with +, negatively with -, and the strength of the correlation is given by the number of symbols; 0 means no effect):</p>
      <p>priors count:
0 (------)
1 to 3 (---)
4 to 6 (0)
7 to 10 (+++)
&gt;10 (++++)</p>
      <p>Last, as shown in Figure 5, Sensitivity focuses on the relationship between (1) and (2):</p>
      <p>If the individual was age ‘‘30 to 39’’ they would have been predicted as likely to reoffend
If the individual had priors count ‘‘7 to 10’’ they would have been predicted as likely to reoffend</p>
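      <p>As a sketch of how such Sensitivity fragments might be generated (my own illustration, not Binns et al.’s implementation; the toy model, feature names, and candidate values are hypothetical), one can perturb each feature and report the perturbations that flip the prediction:</p>
      <preformat>
def sensitivity_fragments(model, instance, candidate_values):
    """For each feature, report counterfactual values that would
    change the model's prediction for this instance."""
    base = model(instance)
    lines = []
    for feature, values in candidate_values.items():
        for value in values:
            flipped = model({**instance, feature: value})
            if flipped != base:
                lines.append(f"If the individual had {feature} "
                             f"''{value}'' they would have been "
                             f"predicted as {flipped}")
    return lines

# Toy stand-in model: predicts reoffense from priors count alone.
def toy_model(d):
    return ("likely to reoffend" if d["priors_count"] in ("7 to 10", "&gt;10")
            else "unlikely to reoffend")

instance = {"age": "20 to 29", "priors_count": "1 to 3"}
print("\n".join(sensitivity_fragments(
    toy_model, instance, {"priors_count": ["4 to 6", "7 to 10", "&gt;10"]})))
      </preformat>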
      <p>In conclusion, decision boundaries may be hard to characterize, but neglecting boundaries in explanation seems to expose consumers to the risk of falling prey to logical fallacy.</p>
    </sec>
    <sec id="sec-6">
      <title>Acknowledgments</title>
      <p>This material is based upon work supported by DARPA #N66001-17-2-4030 and joint support by NSF and USDA-NIFA under #2021-67021-35344.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>T.</given-names>
            <surname>Kulesza</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Stumpf</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Burnett</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Yang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Kwan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W. K.</given-names>
            <surname>Wong</surname>
          </string-name>
          ,
          <article-title>Too much, too little, or just right? ways explanations impact end users' mental models</article-title>
          ,
          <source>in: 2013 IEEE Symposium on Visual Languages and Human Centric Computing</source>
          ,
          <year>2013</year>
          , pp.
          <fpage>3</fpage>
          -
          <lpage>10</lpage>
          . doi:
          <volume>10</volume>
          . 1109/VLHCC.
          <year>2013</year>
          .
          <volume>6645235</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>M.</given-names>
            <surname>Correll</surname>
          </string-name>
          ,
          <article-title>Ross-chernof glyphs or: How do we kill bad ideas in visualization?</article-title>
          ,
          <source>in: Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems, CHI EA '18</source>
          ,
          <string-name>
            <surname>ACM</surname>
          </string-name>
          , New York, NY, USA,
          <year>2018</year>
          , pp.
          <year>alt05</year>
          :
          <fpage>1</fpage>
          -
          <lpage>alt05</lpage>
          :
          <fpage>10</fpage>
          . URL: http://doi.acm.
          <source>org/10</source>
          .1145/3170427.3188398. doi:
          <volume>10</volume>
          .1145/3170427.3188398.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>J.</given-names>
            <surname>Dodge</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Q. V.</given-names>
            <surname>Liao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. K. E.</given-names>
            <surname>Bellamy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Dugan</surname>
          </string-name>
          ,
          <article-title>Explaining models: An empirical study of how explanations impact fairness judgment</article-title>
          ,
          <source>in: Proceedings of the 24th International Conference on Intelligent User Interfaces</source>
          ,
          <source>IUI '19</source>
          ,
          <string-name>
            <surname>ACM</surname>
          </string-name>
          , New York, NY, USA,
          <year>2019</year>
          , pp.
          <fpage>275</fpage>
          -
          <lpage>285</lpage>
          . [10]
          <string-name>
            <given-names>M. T.</given-names>
            <surname>Keane</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E. M.</given-names>
            <surname>Kenny</surname>
          </string-name>
          ,
          <source>How case-based reasonURL:</source>
          http://doi.acm.
          <source>org/10.1145/3301275.3302310. ing explains neural networks: A theoretical analydoi:10.1145/3301275</source>
          .3302310.
          <article-title>sis of xai using post-hoc explanation-by-example</article-title>
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>R.</given-names>
            <surname>Binns</surname>
          </string-name>
          ,
          <string-name>
            <surname>M. Van Kleek</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          <string-name>
            <surname>Veale</surname>
            ,
            <given-names>U.</given-names>
          </string-name>
          <string-name>
            <surname>Lyngs</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          <article-title>Zhao, from a survey of ann-cbr twin-systems</article-title>
          , in: K. Bach, N. Shadbolt, '
          <article-title>it's reducing a human being to a per- C</article-title>
          . Marling (Eds.),
          <source>Case-Based Reasoning Research</source>
          centage':
          <article-title>Perceptions of justice in algorithmic deci-</article-title>
          and
          <string-name>
            <surname>Development</surname>
          </string-name>
          , Springer International Publishsions,
          <source>in: Proceedings of the 2018 CHI Conference ing, Cham</source>
          ,
          <year>2019</year>
          , pp.
          <fpage>155</fpage>
          -
          <lpage>171</lpage>
          .
          <article-title>on Human Factors in Computing Systems</article-title>
          , CHI '
          <volume>18</volume>
          , [11]
          <string-name>
            <given-names>A.</given-names>
            <surname>Sarkar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Jamnik</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. F.</given-names>
            <surname>Blackwell</surname>
          </string-name>
          , M. Spott, InACM, New York, NY, USA,
          <year>2018</year>
          , pp.
          <volume>377</volume>
          :
          <fpage>1</fpage>
          -
          <lpage>377</lpage>
          :
          <fpage>14</fpage>
          .
          <article-title>teractive visual machine learning in spreadsheets</article-title>
          , URL: http://doi.acm.
          <source>org/10</source>
          .1145/3173574.3173951. in: Visual Languages and
          <string-name>
            <surname>Human-Centric Comdoi</surname>
          </string-name>
          :
          <volume>10</volume>
          .1145/3173574.3173951.
          <string-name>
            <surname>puting</surname>
          </string-name>
          (VL/HCC),
          <source>2015 IEEE Symposium on, IEEE,</source>
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>C. J.</given-names>
            <surname>Cai</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Jongejan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Holbrook</surname>
          </string-name>
          ,
          <source>The efects of 2015</source>
          , pp.
          <fpage>159</fpage>
          -
          <lpage>163</lpage>
          . URL: http://ieeexplore.ieee.org
          <article-title>/ example-based explanations in a machine learn- xpl/articleDetails</article-title>
          .jsp?arnumber=7357211. doi:10. ing interface,
          <source>in: Proceedings of the 24th Interna- 1109/VLHCC</source>
          .
          <year>2015</year>
          .
          <volume>7357211</volume>
          . tional Conference on
          <article-title>Intelligent User Interfaces</article-title>
          ,
          <source>IUI</source>
          [12]
          <string-name>
            <given-names>J.</given-names>
            <surname>Dodge</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Burnett</surname>
          </string-name>
          , Position: We Can Measure '
          <fpage>19</fpage>
          ,
          <string-name>
            <surname>ACM</surname>
          </string-name>
          , New York, NY, USA,
          <year>2019</year>
          , pp.
          <fpage>258</fpage>
          -
          <lpage>262</lpage>
          .
          <article-title>XAI Explanations Better with “Templates”</article-title>
          , in: IUI URL: http://doi.acm.
          <source>org/10</source>
          .1145/3301275.3302289.
          <string-name>
            <surname>Workshops</surname>
          </string-name>
          ,
          <year>2020</year>
          . doi:
          <volume>10</volume>
          .1145/3301275.3302289. [13]
          <string-name>
            <given-names>B.</given-names>
            <surname>Hambling</surname>
          </string-name>
          ,
          <string-name>
            <surname>P. van Goethem</surname>
          </string-name>
          ,
          <article-title>User acceptance test-</article-title>
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>J. van der</given-names>
            <surname>Waa</surname>
          </string-name>
          , E. Nieuwburg,
          <string-name>
            <surname>A.</surname>
          </string-name>
          <article-title>Cremers, ing: a step-by-step guide, BCS Learning and DeM</article-title>
          . Neerincx,
          <article-title>Evaluating xai: A comparison velopment</article-title>
          ,
          <source>Swindon</source>
          ,
          <year>2013</year>
          . URL: http://cds.cern.ch/
          <article-title>of rule-based and example-based explanations</article-title>
          ,
          <source>record/1619552. Artificial Intelligence</source>
          <volume>291</volume>
          (
          <year>2021</year>
          )
          <article-title>103404</article-title>
          . URL: [14]
          <string-name>
            <given-names>D.</given-names>
            <surname>Temple</surname>
          </string-name>
          ,
          <article-title>The contrast theory of why-questions</article-title>
          , https://www.sciencedirect.com/science/article/pii/ Philosophy of Science 55 (
          <year>1988</year>
          )
          <fpage>141</fpage>
          -
          <lpage>151</lpage>
          . URL: http: S0004370220301533. doi:https://doi.org/10. //www.jstor.org/stable/187825. 1016/j.artint.
          <year>2020</year>
          .
          <volume>103404</volume>
          . [15]
          <string-name>
            <given-names>F.</given-names>
            <surname>Sørmo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Cassens</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Aamodt</surname>
          </string-name>
          , Explanation in
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>A.</given-names>
            <surname>Sarkar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. F.</given-names>
            <surname>Blackwell</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Jamnik</surname>
          </string-name>
          ,
          <string-name>
            <surname>M.</surname>
          </string-name>
          <article-title>Spott, case-based reasoning-perspectives and goals, ArtiTeach and try: A simple interaction tech-</article-title>
          ifcial
          <source>Intelligence Review</source>
          <volume>24</volume>
          (
          <year>2005</year>
          )
          <fpage>109</fpage>
          -
          <lpage>143</lpage>
          .
          <article-title>nique for exploratory data modelling by end [16] OpenAI</article-title>
          , Openai api faq,
          <year>2020</year>
          . URL: https://openai. users, in: Visual Languages and Human- com/blog/openai-api/.
          <source>Centric Computing (VL/HCC)</source>
          ,
          <year>2014</year>
          IEEE [17]
          <string-name>
            <given-names>R.</given-names>
            <surname>Xu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Baracaldo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Joshi</surname>
          </string-name>
          ,
          <article-title>Privacy-preserving Symposium on</article-title>
          , IEEE,
          <year>2014</year>
          , pp.
          <fpage>53</fpage>
          -
          <lpage>56</lpage>
          . URL:
          <article-title>machine learning: Methods, challenges</article-title>
          and direchttps://www.cl.cam.ac.uk/~as2006/files/sarkar_ tions,
          <source>CoRR abs/2108</source>
          .04417 (
          <year>2021</year>
          ).
          <article-title>URL: https: 2014_teach_try</article-title>
          .pdfhttp://ieeexplore.ieee.org/ //arxiv.org/abs/2108.04417. lpdocs/epic03/wrapper.htm?arnumber=6883022. doi:
          <volume>10</volume>
          .1109/VLHCC.
          <year>2014</year>
          .
          <volume>6883022</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>A.</given-names>
            <surname>Aamodt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Plaza</surname>
          </string-name>
          ,
          <article-title>Case-based reasoning: Foundational issues, methodological variations, and system approaches</article-title>
          ,
          <source>AI</source>
          communications
          <volume>7</volume>
          (
          <year>1994</year>
          )
          <fpage>39</fpage>
          -
          <lpage>59</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>S. J.</given-names>
            <surname>Russell</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Norvig</surname>
          </string-name>
          ,
          <article-title>Artificial intelligence - a modern approach, 2nd edition</article-title>
          , in: Prentice Hall series in artificial intelligence,
          <year>2003</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>