<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Testing Most Distant Neighbor (MDN) Variants for Semi-Factual Explanations in XAI</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Saugat Aryal</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Mark T. Keane</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Insight Centre for Data Analytics</institution>
          ,
          <addr-line>Dublin</addr-line>
          ,
          <country country="IE">Ireland</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>School of Computer Science, University College Dublin</institution>
          ,
          <addr-line>Dublin</addr-line>
          ,
          <country country="IE">Ireland</country>
        </aff>
      </contrib-group>
      <fpage>40</fpage>
      <lpage>48</lpage>
      <abstract>
        <p>Recently, semi-factual explanations have gained popularity in the eXplainable AI (XAI) community. They provide “even if” justifications to indicate which key input features could change without changing the outcome. Although several methods have now been proposed to compute semi-factuals, the instance-based Most Distant Neighbor (MDN) method has emerged from recent comprehensive tests as quite competitive, even though it was originally proposed as a naive benchmark. However, on some metrics MDN comes bottom of the class (e.g., sparsity). In this paper, we explore nine variants of the MDN method, performing comprehensive tests on key metrics to determine whether its performance on those metrics can be improved relative to older methods. The results show that there are MDN variants that perform better on some key metrics, but that some of the historical methods still do better on others.</p>
      </abstract>
      <kwd-group>
        <kwd>XAI</kwd>
        <kwd>XCBR</kwd>
        <kwd>Semi-Factual</kwd>
        <kwd>Most Distant Neighbor</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Kenny &amp; Keane [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] recently advanced a counterfactual model that happened, as a side-effect, to generate
useful semi-factuals as well. They pointed out that they were merely echoing very early XAI research
in CBR, a literature that advanced a suite of models such as the Local Region model [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ], and KLEOR
[
        <xref ref-type="bibr" rid="ref12">12</xref>
        ] which had several variants using different similarity measures. Aryal &amp; Keane [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] reviewed the
history of this work and identified a rapid expansion of semi-factual work in both Machine Learning
(e.g., augmentation, reject options) and XAI (e.g., explanation, interpretability). They also pointed out
that semi-factuals lacked a naive model, by analogy to Nearest Unlike Neighbors (NUNs) for counterfactuals;
NUNs are known data-points in a contrasting class to the query's class that are, perhaps, the simplest
counterfactual model one can design. Aryal &amp; Keane argued that the equivalent entity to NUNs for
semi-factuals would be Most Distant Neighbors (MDNs); namely, known data-points in the query class
that are the furthest instances from a given query along its dimensions.
      </p>
      <p>
        The underlying principle behind MDN is to balance a trade-off between two components: one that
maximizes key feature-value differences and another that minimizes the number of changed non-key
features (i.e., promotes sparsity). As such, MDNs inherently realize most of the computational desiderata for a “good”
semi-factual explanation [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. Moreover, although it was proposed as a naive benchmark against which more
sophisticated models could be compared, this model turned out to be very competitive compared to
historical CBR methods [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. These findings position MDNs as a promising direction for further research.
The model, however, has its own caveats that warrant further exploration.
      </p>
      <p>The mean ranks of the benchmark semi-factual methods reported by Aryal &amp; Keane, shown in Figure 2,
indicate the overall success of these methods. The methods were evaluated on key evaluation metrics
that attempt to assess semi-factuals for their computational characteristics (detailed in Section 4.1). The
results suggest that MDN is able to find semi-factuals that are farthest away from the query in both
feature space (Query-to-SF Distance) and instance space (Query-to-SF kNN). However, it falls behind on three
measures: distance to the query’s class distribution (SF-to-Query-Class Distance), distance to the NUN
(SF-to-NUN Distance) and Sparsity. It is desirable that the semi-factual lie close to the data manifold of the
query class. However, since MDN finds the most distant semi-factual, it may well be selecting
outliers lying on the edge of the distribution, far from the manifold. Similarly, semi-factuals
are expected to lie close to the decision boundary and, hence, close to the NUN. However, since MDN
could be identifying outliers, these could lie farther from the decision boundary and the NUN. Finally, fewer feature
differences between query and semi-factual are desired. However, MDN could internally be prioritising
the maximization of the feature-value difference at the cost of more feature differences, yielding poor
sparsity results. Moreover, when measuring sparsity, continuous features are considered "same"
if the values fall within the range obtained by using a threshold of "20% of standard deviation". This
threshold is a hyperparameter that was selected empirically and has not been examined in sufficient experiments.</p>
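      <p>For concreteness, this “same under a threshold” test for continuous features can be sketched as a minimal Python check; the function name and signature are ours (not the original implementation) and assume the threshold is expressed as a fraction of each feature's standard deviation.</p>
      <preformat>
def same_feature(q_val, x_val, feature_std, threshold=0.20):
    """Treat two continuous values as 'same' when they differ by no more
    than threshold * standard deviation of that feature (20% by default)."""
    return abs(q_val - x_val) &lt;= threshold * feature_std
      </preformat>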
      <p>These shortcomings motivate the present study, which aims to address the limitations of MDN and
improve its overall performance. As such, this work advances the research on MDNs for semi-factuals
and makes several novel contributions:
• Introduction of two new MDN variants optimized to overcome the original method's limitations.
• A comprehensive analysis of MDN variants under different threshold settings along with the historic
CBR methods.
• A thorough discussion on the use of MDNs for semi-factual explanations and directions for future
development in this area.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Background</title>
      <p>
        MDNs are instance-based semi-factual explanation methods, in that they use an existing data-point
to explain the query’s prediction. They were recently compared with XCBR methods that are also
inherently instance-based (the KLEOR variants [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ] and Local-Region [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ]) in the benchmarking study by
Aryal &amp; Keane [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. In this section, we formally introduce the original MDN, KLEOR and Local-Region
methods used in this study.
      </p>
      <sec id="sec-2-1">
        <title>2.1. Most Distant Neighbor (MDN)</title>
        <p>
          Aryal &amp; Keane [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ] proposed MDN, a novel method that reflects many of the desiderata for a
semi-factual explanation. MDN works by finding an instance in the query class that is farther away from
the query on some key feature(s) while also being similar to it on the other, non-key features. As such, it finds the
most distant neighbor of the query, which is deemed to be the semi-factual.
        </p>
        <p>The algorithm first partitions the query-class instances into two sets, a Higher Set and a Lower Set, based
on whether their feature-values are higher or lower than the query's along the selected feature-dimension. It
then uses a custom distance function, Semi-Factual Scoring (sfs), which scores the candidate instances
based on two components: the extremity of the feature-value on the selected key dimension and the sparsity across
the remaining dimensions, as follows:</p>
        <p>sfs(q, x, f) = dif(q_f, x_f) / dif_max(f) + same(q, x) / F   (1)</p>
        <p>where S is the Higher/Lower Set and x ∈ S, dif(q_f, x_f) gives the feature-value difference on the key feature f,
dif_max(f) is the maximum feature-value difference for that key feature in the Higher/Lower Set, same(q, x)
measures the number of similar non-key features between q and x, and F is the total number of features.
The instance with the highest sfs score along each dimension is independently selected as the best-feature-MDN.
Finally, the best of the best-feature-MDNs across all dimensions is selected to be the
semi-factual for the query, as shown in Fig. 3:</p>
        <p>SF_MDN(q, S) = arg max_{x ∈ S} sfs(q, x, f)   (2)</p>
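        <p>To make the procedure concrete, the following is a minimal Python sketch of MDNv1 following Equations (1) and (2); the function names, the normalization of same() by the total number of features, and the same_fn helper (a user-supplied count of matching non-key features) are our assumptions rather than the authors' released code.</p>
        <preformat>
import numpy as np

def sfs(q, x, f, max_dif, same_fn, n_features):
    """Semi-Factual Scoring (Eq. 1): normalized key-feature difference
    plus the normalized count of 'same' non-key features."""
    key_term = abs(q[f] - x[f]) / max_dif if max_dif &gt; 0 else 0.0
    sparsity_term = same_fn(q, x, exclude=f) / n_features
    return key_term + sparsity_term

def mdn_v1(q, query_class_X, same_fn):
    """Pick the best-feature-MDN per dimension, then the best overall (Eq. 2)."""
    n_features = q.shape[0]
    best_score, best_sf = -np.inf, None
    for f in range(n_features):
        # Partition the query-class candidates into Higher/Lower sets along feature f.
        for subset in (query_class_X[query_class_X[:, f] &gt; q[f]],
                       query_class_X[query_class_X[:, f] &lt; q[f]]):
            if len(subset) == 0:
                continue
            max_dif = np.abs(subset[:, f] - q[f]).max()
            for x in subset:
                score = sfs(q, x, f, max_dif, same_fn, n_features)
                if score &gt; best_score:
                    best_score, best_sf = score, x
    return best_sf
        </preformat>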
      </sec>
      <sec id="sec-2-2">
        <title>2.2. Knowledge-Light based Explanation-Oriented Retrieval (KLEOR)</title>
        <p>
          In the CBR era of a fortiori reasoning, Cummins &amp; Bridge [
          <xref ref-type="bibr" rid="ref12">12</xref>
          ] proposed a similarity-based approach to
retrieving such explanations. They compute the similarity between the instances in the query class and the
Nearest Unlike Neighbor (NUN) to find the best semi-factual for a given query. The method relies on
the intuition that a semi-factual for a query is more similar to its NUN and hence uses the NUN as a guide.
KLEOR has three different variants based on how the decision boundary partitions the feature space.
        </p>
        <p>The first variant, Sim-Miss, selects the query-class instance most similar to the NUN as the
semi-factual:</p>
        <p>SF_Sim-Miss(q, nun, D_q) = arg max_{x ∈ D_q} Sim(x, nun)   (3)</p>
        <p>where q is the query, x is a candidate instance, D_q represents the set of all instances in the same class as
the query, nun is the NUN, and Sim computes the Euclidean similarity in the feature space. It is the
most naive variant, as it assumes that the decision boundary neatly divides the feature space. The
second variant, Global-Sim, considers complex decision boundaries which induce discontinuous
feature spaces. Hence, it imposes an additional similarity constraint, requiring the semi-factual to
lie between q and nun:</p>
        <p>SF_Global-Sim(q, nun, D_q) = arg max_{x ∈ D_q} Sim(x, nun) + [Sim(q, x) &gt; Sim(q, nun)]   (4)</p>
        <p>Finally, the third variant, Attr-Sim, is a more sophisticated version of Global-Sim in that it considers all the
feature-dimensions. It computes similarities across each feature-attribute, ensuring that the semi-factual
lies between q and nun across the majority of features:</p>
        <p>SF_Attr-Sim(q, nun, D_q) = arg max_{x ∈ D_q} Sim(x, nun) + Σ_{f ∈ F} [Sim(q_f, x_f) &gt; Sim(q_f, nun_f)]   (5)</p>
        <p>where F is the feature-dimension set and f is a feature-attribute.</p>
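        <p>The three retrieval rules can be sketched in Python as below; this is a minimal illustration of Equations (3)-(5), with a simple inverse-distance form of Euclidean similarity and a majority-of-features test for Attr-Sim, neither of which is taken from the original implementation.</p>
        <preformat>
import numpy as np

def sim(a, b):
    # Euclidean similarity: larger means more similar.
    return 1.0 / (1.0 + np.linalg.norm(a - b))

def kleor(q, nun, query_class_X, variant="sim-miss"):
    """Return the query-class instance chosen by the selected KLEOR variant."""
    best_score, best_sf = -np.inf, None
    for x in query_class_X:
        if variant == "sim-miss":                        # Eq. 3
            score = sim(x, nun)
        elif variant == "global-sim":                    # Eq. 4: x must lie between q and nun
            if sim(q, x) &lt;= sim(q, nun):
                continue
            score = sim(x, nun)
        else:                                            # "attr-sim", Eq. 5
            between = sum(abs(q[f] - x[f]) &lt; abs(q[f] - nun[f])
                          for f in range(len(q)))
            if between &lt;= len(q) / 2:                    # require a majority of features
                continue
            score = sim(x, nun) + between
        if score &gt; best_score:
            best_score, best_sf = score, x
    return best_sf
        </preformat>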
      </sec>
      <sec id="sec-2-2">
        <title>2.3. Local-Region Model</title>
        <p>
          Nugent et al. [
          <xref ref-type="bibr" rid="ref13">13</xref>
          ] proposed another method for obtaining such explanations, called Local-Region.
This method analyses the local region around the query using a surrogate model, specifically a logistic
regression model. The surrogate model is built using a subset of instances surrounding the query and
hence essentially captures the local decision-space around it (akin to LIME [
          <xref ref-type="bibr" rid="ref14">14</xref>
          ]). The locally-trained
logistic regression model is then used to select the nearest neighbor with the most marginal probability, which is
identified as the semi-factual explanation:
        </p>
        <p>SF_Local-Region(q, N) = arg min_{x ∈ N} LR(x)   (6)</p>
        <p>where N is the set of candidate neighbors and LR() is the local logistic regression model that provides the
probability score.</p>
        <p>The intuition behind this approach is that a good semi-factual explanation should lie in the local region around the
query but also be as distant from it as possible, close to its local decision boundary.</p>
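        <p>A minimal sketch of this procedure with scikit-learn is given below; note that the benchmark implementation samples a minimum number of instances from each class (see Section 4.2), whereas this illustration simply takes the nearest local neighbors regardless of class, so the helper and its parameters are assumptions for exposition only.</p>
        <preformat>
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

def local_region_sf(q, X, y, query_label, n_local=200):
    """Fit a local logistic-regression surrogate around q and return the
    same-class neighbor with the most marginal predicted probability (Eq. 6)."""
    # Gather a local sample around the query (both classes).
    nn = NearestNeighbors(n_neighbors=min(n_local, len(X))).fit(X)
    idx = nn.kneighbors(q.reshape(1, -1), return_distance=False)[0]
    lr = LogisticRegression(max_iter=1000).fit(X[idx], y[idx])

    # Candidates: local neighbors that share the query's class.
    cand = idx[y[idx] == query_label]
    cls = list(lr.classes_).index(query_label)
    probs = lr.predict_proba(X[cand])[:, cls]
    return X[cand[np.argmin(probs)]]    # closest to the local decision boundary
        </preformat>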
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. MDN Variants</title>
      <p>The original naive MDN (henceforth, MDNv1) had a few limitations even though it showed competitive
results on some measures. The model uses a custom sfs() function to score each candidate instance
based on its relative distance from, as well as its closeness to, the query. The scoring function, however, can be
modified to fine-tune the interplay between these two components to optimize the behaviour of MDNv1.
Along these lines, we introduce two new MDN variants which use novel scoring functions.</p>
      <sec id="sec-3-1">
        <title>3.1. Sparse-MDNs</title>
        <p>
          The baseline MDNv1 consists of two components aiming to find the instance furthest from the query
on some key-feature value while also being similar on the other features. The two objectives are equally
weighted such that the scoring function balances both criteria to find the best semi-factual. However,
since MDNv1 performed relatively poorly on the sparsity metric (see Figure 5 in [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ] and Figure 2), we
propose Sparse-MDNs (MDNv2), which prioritises the similarity between non-key features. Essentially,
we introduce a regularizer in the original sfs() function, which penalizes the algorithm for finding
semi-factuals with higher feature-differences, thus promoting sparse explanations. We modify the
scoring function by weighting it with the proportion of features that are "not same" between the query
and the instance. Hence, instances with a higher number of similar features are assigned higher scores,
yielding sparse MDNs:
        </p>
        <p>sfs2(q, x, f) = (1 - notsame(q, x) / F) * ( dif(q_f, x_f) / dif_max(f) + same(q, x) / F )   (7)</p>
        <p>SF_MDNv2(q, S) = arg max_{x ∈ S} sfs2(q, x, f)   (8)</p>
        <p>where notsame(q, x) counts the non-key features that differ between q and x, and the remaining terms are as in Equation (1).</p>
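        <p>Reusing the sfs() sketch from Section 2.1, the sparsity-weighted score under our reading of Equation (7) can be written as follows; the notsame() weighting shown here is our interpretation of the regularizer, not the authors' released code.</p>
        <preformat>
def sfs2(q, x, f, max_dif, same_fn, n_features):
    """Sparse-MDN score: the MDNv1 score down-weighted by the proportion
    of features that are 'not same' between q and x (Eq. 7, our reading)."""
    base = sfs(q, x, f, max_dif, same_fn, n_features)        # MDNv1 score (Eq. 1)
    not_same = 1.0 - same_fn(q, x, exclude=f) / n_features   # proportion of differing features
    return (1.0 - not_same) * base
        </preformat>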
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Dist-MDNs</title>
        <p>In both MDNv1 and MDNv2, the similarity of the non-key features between the query and candidate instances,
computed using same(), involves a direct comparison of their values. Specifically, the function checks whether the values
are identical in the case of categorical features, while continuous features are considered the same if they
fall within a predefined threshold range. However, it is not always straightforward to determine the
optimal threshold, and it may vary across different features. Hence, we propose Dist-MDNs (MDNv3),
where we modify the scoring function to compute similarity directly in the feature space as:</p>
        <p>sfs3(q, x, f) = dif(q_f, x_f) / ( dif_max(f) * dist(q_¬f, x_¬f) )   (9)</p>
        <p>where dist() computes the 2-norm distance and q_¬f and x_¬f represent the query and the instance
restricted to the non-key features (i.e., excluding the key feature under consideration). The final
semi-factual is selected as the candidate with the highest score:</p>
        <p>SF_MDNv3(q, S) = arg max_{x ∈ S} sfs3(q, x, f)   (10)</p>
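        <p>A corresponding sketch of the Dist-MDN score, under our reading of Equation (9), is shown below; the epsilon guard against division by zero is ours.</p>
        <preformat>
import numpy as np

def sfs3(q, x, f, max_dif):
    """Dist-MDN score (Eq. 9, our reading): reward a large normalized key-feature
    difference and penalize the 2-norm distance over the remaining features."""
    key_term = abs(q[f] - x[f]) / max_dif if max_dif &gt; 0 else 0.0
    rest = np.delete(np.arange(len(q)), f)           # indices of the non-key features
    dist_rest = np.linalg.norm(q[rest] - x[rest])
    return key_term / (dist_rest + 1e-12)            # epsilon avoids division by zero
        </preformat>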
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Experimental Setup</title>
      <sec id="sec-4-1">
        <title>4.1. Evaluation Measures</title>
        <p>
          We conduct a comprehensive analysis of the MDN variants (MDNv1, MDNv2 and MDNv3) along
with the historic CBR methods (KLEOR and Local-Region) on benchmark metrics and datasets.
We adopt the metrics proposed by Aryal &amp; Keane [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ] to evaluate the relative performance of the
semi-factual methods; a brief sketch of how these measures can be computed follows the list below.
        </p>
        <p>
          • Query-to-SF Distance. The 2-norm distance from the query to the semi-factual, where higher
scores are better, as the semi-factual is preferred to be farther from the query.
• Query-to-SF kNN (%). The number of instances lying between the query and the semi-factual,
expressed as a percentage of the total instances. Higher scores are again preferred,
as the semi-factual is expected to be the furthest instance from the query.
• SF-to-Query-Class Distance. A distributional measure which leverages the Mahalanobis
distance [
          <xref ref-type="bibr" rid="ref15">15</xref>
          ] to compute the closeness of the semi-factual to the query’s class distribution. The Mahalanobis
distance considers the variances and correlations between features to provide a more accurate
measure of distance in the multi-dimensional space. Lower values are preferred, indicating
that the semi-factual lies close to the query-class manifold.
• SF-to-NUN Distance. The 2-norm distance from the semi-factual to the NUN, where lower scores
are better, as the semi-factual is then closer to the class boundary.
• Sparsity (%). The percentage of semi-factuals obtained by a method with a
single feature-difference from the query. Higher scores are desired so that semi-factuals are easily
comprehended [
          <xref ref-type="bibr" rid="ref16">16</xref>
          ].
        </p>
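        <p>The following is a minimal sketch of how three of these measures can be computed for a single query/semi-factual pair; it assumes normalized numeric arrays, and the helper names, the pseudo-inverse covariance and the 20% sparsity threshold are illustrative choices rather than the exact benchmark code.</p>
        <preformat>
import numpy as np
from scipy.spatial.distance import mahalanobis

def query_to_sf_distance(q, sf):
    return np.linalg.norm(q - sf)                        # higher is better

def sf_to_query_class_distance(sf, query_class_X):
    # Mahalanobis distance to the query-class distribution (lower is better).
    mu = query_class_X.mean(axis=0)
    cov_inv = np.linalg.pinv(np.cov(query_class_X, rowvar=False))
    return mahalanobis(sf, mu, cov_inv)

def is_sparse(q, sf, stds, threshold=0.20):
    # A semi-factual counts towards Sparsity (%) if it changes exactly one feature,
    # treating continuous values within threshold * std as unchanged.
    changed = np.sum(np.abs(q - sf) &gt; threshold * stds)
    return changed == 1
        </preformat>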
      </sec>
      <sec id="sec-4-2">
        <title>4.2. Method</title>
        <p>We implement MDNv1 and MDNv2 with different threshold values of 20, 40, 60 and 80 (% of standard deviation) to
comprehensively analyze their impact. The KLEOR variants were implemented using a 3-NN model. The surrogate
model in the Local-Region method was built using a minimum of 200 instances from each class. All 13
methods were evaluated on 5 metrics across 7 benchmark tabular datasets commonly used in XAI research;
namely, Adult Income (D1), Blood Alcohol (D2), Default Credit Card (D3), Diabetes (D4), German Credit
(D5), HELOC (D6) and Lending Club (D7). We performed leave-one-out cross-validation to evaluate
each method on each dataset. All the experiments were performed in a Python 3.9 environment on an
Ubuntu 23.10 server with an AMD EPYC 7443 24-core processor.</p>
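        <p>The leave-one-out protocol can be sketched as a simple loop; the method and metric callables and their signatures here are placeholders for the semi-factual methods and measures described above, not the benchmark's actual interfaces.</p>
        <preformat>
import numpy as np

def evaluate_method(X, y, method_fn, metric_fns):
    """Leave-one-out evaluation: each instance in turn is the query, and the
    remaining data serve as the case base from which the semi-factual is retrieved."""
    scores = {name: [] for name in metric_fns}
    for i in range(len(X)):
        q, label = X[i], y[i]
        mask = np.arange(len(X)) != i                    # hold out the query
        sf = method_fn(q, X[mask], y[mask], label)       # retrieve a semi-factual
        for name, fn in metric_fns.items():
            scores[name].append(fn(q, sf, X[mask], y[mask], label))
    return {name: float(np.mean(vals)) for name, vals in scores.items()}
        </preformat>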
      </sec>
      <sec id="sec-4-3">
        <title>4.3. Results &amp; Discussion</title>
        <p>The scores for the 9 MDN variants, the 3 KLEOR methods and Local-Region on 5 metrics across 7 datasets are
shown in Tables 1 and 2. Figure 4 summarizes the overall performance of the methods as mean ranks.
The results indicate that the new MDN variants (MDNv2 and MDNv3) show improved performance
on the initial limitations of MDNv1. Specifically, they achieve better scores on the SF-to-Query-Class
Distance, SF-to-NUN Distance and Sparsity measures. However, this improvement comes at the expense
of diminished performance on the Query-to-SF Distance and Query-to-SF kNN metrics. Nevertheless,
MDNv2_20 achieved the highest rank overall, aggregated across all the metrics and datasets, closely
followed by MDNv1_20. The older Attr-Sim emerged as the third best overall and the best among the
CBR methods.</p>
        <p>MDNv2 and MDNv3 did not show any significant improvements on the Query-to-SF Distance and
Query-to-SF kNN (%) measures, where MDNv1 still scored the best. However, they outperformed the KLEOR variants
and showed competitive results on par with Local-Region. The KLEOR methods scored the best on the
SF-to-Query-Class Distance and SF-to-NUN Distance metrics. MDNv2 and MDNv3, however, showed
improved results on these measures relative to the original MDNv1. In a similar vein, the new variants,
specifically MDNv2_20 and MDNv3, again performed best on the Sparsity metric, with improved results
compared to MDNv1 while also outscoring the KLEOR methods. The thresholds exhibited a subtle impact
across all the measures: lower thresholds (20 and 40) tend to optimize the
relative scores, whereas higher thresholds (60 and 80) have a diminishing effect.</p>
        <p>In summary, MDN and its variants show the best results on the majority of the metrics, while the
historical methods still perform better on others.</p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Conclusion</title>
      <p>Semi-factual explanations are gaining considerable attention as a new paradigm in XAI research.
The recently proposed MDNs have shown promising results as a strong candidate, although they
have their own limitations. In this work, we thoroughly analyzed nine different variants of MDN along
with four historic CBR-based methods on key evaluation metrics. The results suggest that MDNs can
be optimized to improve on their original performance, though they still fall behind the older methods on some
metrics. Nevertheless, MDNs for semi-factual explanations remain a promising area for further research.</p>
    </sec>
    <sec id="sec-6">
      <title>Acknowledgments</title>
      <p>This work has emerged from research conducted with the financial support of Science Foundation
Ireland (SFI) to the Insight Centre for Data Analytics under Grant Number 12/RC/2289 P2.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>S.</given-names>
            <surname>Aryal</surname>
          </string-name>
          , M. T. Keane,
          <article-title>Even if explanations: Prior work, desiderata &amp; benchmarks for semi-factual xai</article-title>
          ,
          <source>in: IJCAI-23</source>
          ,
          <year>2023</year>
          , pp.
          <fpage>6526</fpage>
          -
          <lpage>6535</lpage>
          . URL: https://doi.org/10.24963/ijcai.2023/732. doi:10.24963/ijcai.2023/732.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>E.</given-names>
            <surname>Kenny</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Huang</surname>
          </string-name>
          ,
          <article-title>The utility of “even if” semifactual explanation to optimise positive outcomes</article-title>
          ,
          <source>Advances in Neural Information Processing Systems</source>
          <volume>36</volume>
          (
          <year>2024</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>E. M.</given-names>
            <surname>Kenny</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. T.</given-names>
            <surname>Keane</surname>
          </string-name>
          ,
          <article-title>On generating plausible counterfactual and semi-factual explanations for deep learning</article-title>
          ,
          <source>in: Proceedings of the 35th AAAI Conference on Artificial Intelligence (AAAI-21)</source>
          ,
          <year>2021</year>
          , pp.
          <fpage>11575</fpage>
          -
          <lpage>11585</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>T.</given-names>
            <surname>Miller</surname>
          </string-name>
          ,
          <article-title>Explanation in artificial intelligence: Insights from the social sciences</article-title>
          ,
          <source>Artificial Intelligence</source>
          <volume>267</volume>
          (
          <year>2019</year>
          )
          <fpage>1</fpage>
          -
          <lpage>38</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>M. T.</given-names>
            <surname>Keane</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E. M.</given-names>
            <surname>Kenny</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Delaney</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Smyth</surname>
          </string-name>
          ,
          <article-title>If only we had better counterfactual explanations</article-title>
          ,
          <source>in: Proceedings of the 30th International Joint Conference on Artificial Intelligence (IJCAI-21)</source>
          ,
          <year>2021</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>S.</given-names>
            <surname>Verma</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Boonsanong</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Hoang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K. E.</given-names>
            <surname>Hines</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. P.</given-names>
            <surname>Dickerson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Shah</surname>
          </string-name>
          ,
          <article-title>Counterfactual explanations and algorithmic recourses for machine learning: a review</article-title>
          ,
          <source>arXiv preprint arXiv:2010.10596</source>
          (
          <year>2020</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>A.</given-names>
            <surname>Vats</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Mohammed</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Pedersen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Wiratunga</surname>
          </string-name>
          ,
          <article-title>This changes to that: Combining causal and non-causal explanations to generate disease progression in capsule endoscopy</article-title>
          ,
          <source>arXiv preprint arXiv:2212.02506</source>
          (
          <year>2022</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>A.</given-names>
            <surname>Artelt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Hammer</surname>
          </string-name>
          ,
          <article-title>"even if</article-title>
          ...
          <article-title>"-diverse semifactual explanations of reject</article-title>
          ,
          <source>arXiv preprint arXiv:2207</source>
          .
          <year>01898</year>
          (
          <year>2022</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>S.</given-names>
            <surname>Mertes</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Karle</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Huber</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Weitz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Schlagowski</surname>
          </string-name>
          , E. André,
          <article-title>Alterfactual explanations-the relevance of irrelevance for explaining ai systems</article-title>
          ,
          <source>arXiv preprint arXiv:2207.09374</source>
          (
          <year>2022</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>J.</given-names>
            <surname>Lu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Yang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B. Mac</given-names>
            <surname>Namee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <article-title>A rationale-centric framework for human-in-the-loop machine learning</article-title>
          ,
          <source>arXiv preprint arXiv:2203.12918</source>
          (
          <year>2022</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>C.</given-names>
            <surname>Nugent</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Cunningham</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Doyle</surname>
          </string-name>
          ,
          <article-title>The best way to instil confidence is by being right</article-title>
          ,
          <source>in: International Conference on Case-Based Reasoning</source>
          , Springer,
          <year>2005</year>
          , pp.
          <fpage>368</fpage>
          -
          <lpage>381</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>L.</given-names>
            <surname>Cummins</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Bridge</surname>
          </string-name>
          ,
          <article-title>Kleor: A knowledge lite approach to explanation oriented retrieval</article-title>
          ,
          <source>Computing and Informatics</source>
          <volume>25</volume>
          (
          <year>2006</year>
          )
          <fpage>173</fpage>
          -
          <lpage>193</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>C.</given-names>
            <surname>Nugent</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Doyle</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Cunningham</surname>
          </string-name>
          ,
          <article-title>Gaining insight through case-based explanation</article-title>
          ,
          <source>Journal of Intelligent Info Systems</source>
          <volume>32</volume>
          (
          <year>2009</year>
          )
          <fpage>267</fpage>
          -
          <lpage>295</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>M. T.</given-names>
            <surname>Ribeiro</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Singh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Guestrin</surname>
          </string-name>
          ,
          <article-title>" why should i trust you?" explaining the predictions of any classifier</article-title>
          ,
          <source>in: Proceedings of the 22nd ACM SIGKDD-16</source>
          ,
          <year>2016</year>
          , pp.
          <fpage>1135</fpage>
          -
          <lpage>1144</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>M. P.</given-names>
            <surname>Chandra</surname>
          </string-name>
          , et al.,
          <article-title>On the generalised distance in statistics</article-title>
          ,
          <source>in: Proceedings of the National Institute of Sciences of India</source>
          , volume
          <volume>2</volume>
          ,
          <year>1936</year>
          , pp.
          <fpage>49</fpage>
          -
          <lpage>55</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>M. T.</given-names>
            <surname>Keane</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Smyth</surname>
          </string-name>
          ,
          <article-title>Good counterfactuals and where to find them: A case-based technique for generating counterfactuals for explainable ai (xai)</article-title>
          ,
          <source>in: Proceedings of the 28th International Conference on Case-Based Reasoning (ICCBR-20)</source>
          , Springer,
          <year>2020</year>
          , pp.
          <fpage>163</fpage>
          -
          <lpage>178</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>