<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
<article-title>A Two-Stage Algorithm for Cost-Efficient Multi-instance Counterfactual Explanations</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>André Artelt</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Andreas Gregoriades</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Department of Management, Entrepreneurship and Digital Business, Cyprus University of Technology</institution>
          ,
          <addr-line>30 Arch. Kyprianos Str., 3036 Limassol</addr-line>
          ,
          <country country="CY">Cyprus</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Faculty of Technology, Bielefeld University</institution>
          ,
          <addr-line>Inspiration 1, 33615 Bielefeld</addr-line>
          ,
          <country country="DE">Germany</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>University of Cyprus</institution>
          ,
          <addr-line>Panepistimiou 1, 2109 Aglantzia, Nicosia</addr-line>
          ,
          <country country="CY">Cyprus</country>
        </aff>
      </contrib-group>
      <abstract>
<p>Counterfactual explanations are among the most popular methods for analyzing black-box systems, since they can recommend cost-efficient and actionable changes to the input of a system to obtain the desired system output. While most of the existing counterfactual methods explain a single instance, several real-world problems, such as customer satisfaction, require the identification of a single counterfactual that can satisfy multiple instances (e.g. customers) simultaneously. To address this limitation, in this work we propose a flexible two-stage algorithm for finding groups of instances and computing cost-efficient multi-instance counterfactual explanations. The paper presents the algorithm and evaluates its performance against popular alternatives through a comparative evaluation.</p>
      </abstract>
      <kwd-group>
<kwd>XAI</kwd>
        <kwd>Counterfactual Explanations</kwd>
        <kwd>Multi-instance Counterfactuals</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Recently an increasing number of Artificial Intelligence (AI) systems have been applied to
important problems, such as medical image classification [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] and many more. Although these
systems show impressive performance when used on experimental data, they are still imperfect
when applied to real-world problems, and in some cases can cause harm to humans due to
biases embedded in their logic [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. Therefore, transparency of such systems is of paramount
importance, since it assists in understanding their logic and thus allows decision-makers to
decide where and how it is safe to deploy them [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. The importance of transparency is also
stressed at EU level, with recent regulations such as the AI act [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] making explicit reference
to the need for explainability. The field of explainability is not new and focuses on answering
"why" a system behaves in a certain way. Recently the term eXplainable AI (XAI) [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] has
been coined, which boosted the popularity of the field and led to the introduction of many
different XAI methods in different domains [
        <xref ref-type="bibr" rid="ref5 ref6">5, 6</xref>
        ]. One of the most popular types of explanation
methods are counterfactual explanations [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ], which mimic the way humans seek explanations [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ].
By definition, a counterfactual explanation provides actionable recommendations on how to
change a predictive system’s output in some desired way – e.g. how to change a rejected loan
application into an accepted one. In many business problems where AI systems are deployed
such as customer repurchase prediction (how to make customers buy again from a firm), and
employee attrition (how to prevent employees from leaving the organisation), the decision
maker is not only interested in explaining a single instance of the predictive system but a group
of instances – e.g. how to prevent many employees from quitting, instead of only one. To
address such use cases, the concept of multi-instance counterfactual explanations has been
recently introduced [9, 10]. Here, the aim is to identify a single explanation of how to change
the system’s output for a group of instances simultaneously. Because of the novelty of this
concept, many issues still exist – in particular, how to identify groups of instances for which
cost-efficient multi-instance counterfactual explanations can be computed. Our contributions:
In this work, we formalize and investigate the problem of finding cost-efficient counterfactual
explanations for groups of instances (multi-instance). Based on our formal analysis, we propose
a model- and data-agnostic two-stage algorithm for computing such multi-instance counterfactual
explanations.
      </p>
    </sec>
    <sec id="sec-2">
      <title>2. Foundations</title>
      <p>
A counterfactual explanation (often just called a counterfactual) proposes cost-effective and
actionable changes to the features of a given input instance of a model such that its prediction
changes to the desired output. Because counterfactuals mimic human explanations [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ], they
are among the most popular explanation methods and a favorable choice in many practical
problems [11]. The computation of counterfactual explanations involves the consideration
of two important aspects [
        <xref ref-type="bibr" rid="ref7">7, 12</xref>
        ]: 1) the contrasting property, which requires that the stated
changes indeed alter the output of the system, and 2) the cost of the counterfactual, which
defines the difficulty and effort it takes to realise the explanation (i.e. its recommendations) in the
real world.
      </p>
      <p>Definition 1 (Counterfactual Explanation). Assume a prediction function h : X → Y is given.
Computing a counterfactual explanation δ⃗cf ∈ X for a given instance x⃗orig ∈ X is phrased as the
following optimization problem:
argmin_{δ⃗cf ∈ X} ℓ(h(x⃗orig ⊕ δ⃗cf), y_cf) + C · θ(δ⃗cf)
where ℓ(·) denotes a loss function that penalizes deviation of the output h(x⃗orig ⊕ δ⃗cf) from the requested output y_cf,
θ(·) states the cost of δ⃗cf, and C &gt; 0 denotes the regularization strength.</p>
      <p>The symbol ⊕ denotes the application/execution of the counterfactual δ⃗cf to the original
instance x⃗orig – i.e. for X = R^d this reduces to (x⃗cf)_i = (x⃗orig)_i + (δ⃗cf)_i, and to (x⃗cf)_i = (δ⃗cf)_i
in the case of categorical features. Also note that the cost of the counterfactual, here modeled
by θ(·), is highly domain and often use-case specific [13], with a p-norm being the default choice.
Furthermore, there exist additional relevant aspects such as plausibility [14], robustness [15, 16],
fairness [17], etc. The basic formalization in Definition 1, however, is still very popular and widely
used in practice [11, 13].</p>
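      <p>To make Definition 1 concrete, the following is a minimal, hypothetical sketch (not the paper's implementation): it minimizes the regularized objective ℓ(h(x⃗orig ⊕ δ⃗cf), y_cf) + C · θ(δ⃗cf) by simple random search, with a 0/1 loss and θ(·) = ‖·‖₁, for a toy linear classifier. All names are illustrative.</p>
      <preformat>
```python
import numpy as np

def fit_counterfactual(h, x_orig, y_cf, C=0.2, n_iter=2000, step=0.5, seed=0):
    """Random-search sketch of: argmin ℓ(h(x_orig ⊕ δ), y_cf) + C·θ(δ)."""
    rng = np.random.default_rng(seed)
    best_delta, best_obj = np.zeros_like(x_orig), np.inf
    for _ in range(n_iter):
        delta = best_delta + step * rng.standard_normal(x_orig.shape)
        loss = 0.0 if h(x_orig + delta) == y_cf else 1.0   # 0/1 loss ℓ(·)
        cost = np.abs(delta).sum()                         # θ(δ) = ‖δ‖₁
        obj = loss + C * cost
        if best_obj > obj:                                 # keep improvements only
            best_obj, best_delta = obj, delta
    return best_delta

# toy linear classifier: h(x) = 1 iff x_1 + x_2 exceeds 1
h = lambda x: int(x[0] + x[1] > 1.0)
x_orig = np.array([0.2, 0.3])                              # currently classified as 0
delta = fit_counterfactual(h, x_orig, y_cf=1)
assert h(x_orig + delta) == 1                              # contrasting property holds
```
      </preformat>
      <p>Any dedicated counterfactual solver can replace the random search; the sketch only illustrates the interplay between the contrasting property (the loss term) and the cost term.</p>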
      <p>Most existing methods do not address the case where we have to assign the same actions
to multiple instances simultaneously [9] – i.e. explain more than a single instance with a
single counterfactual [18, 9]. In contrast to Definition 1, a multi-instance counterfactual states a
single change δ⃗cf that alters the output of a classifier h : X → Y for many instances x⃗_i ∈ D
simultaneously. While multi-instance counterfactuals are formalized slightly differently in the
literature [10, 18, 9, 19], related work agrees that the same two properties as in Definition 1
must be considered: 1) the cost of the explanation δ⃗cf, and 2) an extension of the contrasting
property from Definition 1: the explanation δ⃗cf should be valid for all/many instances in a given
set of instances D. In this work, we formalize a multi-instance counterfactual explanation as follows:
Definition 2 (Multi-instance Counterfactual Explanation). Let h : X → Y denote a prediction
function, and let D be a set of labeled instances with the same prediction y ∈ Y under h(·) – i.e.
h(x⃗) = y ∀x⃗ ∈ D. We call all Pareto-optimal solutions δ⃗cf of the following multi-objective
optimization problem "multi-instance counterfactuals":
min_{δ⃗cf ∈ X} ( θ(δ⃗cf), ℓ(h(x⃗_1 ⊕ δ⃗cf), y_cf), …, ℓ(h(x⃗_|D| ⊕ δ⃗cf), y_cf) )</p>
      <p>Note that the main difference to Definition 1 is that the contrasting property leads to multiple
objectives (i.e. one objective for each instance in D) – i.e. the change δ⃗cf must be valid for all (or as
many as possible) instances in D.</p>
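      <p>The multi-objective problem of Definition 2 can be made tangible with a small, hypothetical sketch (function names are our own): compute the objective vector (θ(δ⃗cf), ℓ_1, …, ℓ_|D|) of a candidate change, using ‖·‖₁ as the cost and a 0/1 loss, and compare candidates by Pareto dominance.</p>
      <preformat>
```python
import numpy as np

def objectives(h, D, delta, y_cf):
    """Objective vector of Definition 2: (θ(δ), ℓ(h(x_1 ⊕ δ), y_cf), …)."""
    cost = float(np.abs(delta).sum())                       # θ(δ) = ‖δ‖₁
    losses = [0.0 if h(x + delta) == y_cf else 1.0 for x in D]
    return np.array([cost] + losses)

def dominates(a, b):
    """True iff objective vector a Pareto-dominates b (no worse anywhere, better somewhere)."""
    return bool(np.all(b >= a) and np.any(b > a))

h = lambda x: int(x[0] + x[1] > 1.0)                        # toy linear classifier
D = [np.array([0.2, 0.3]), np.array([0.4, 0.1])]            # both classified as 0
a = objectives(h, D, np.array([0.5, 0.5]), y_cf=1)          # flips both instances
b = objectives(h, D, np.array([0.9, 0.9]), y_cf=1)          # flips both at higher cost
assert dominates(a, b)                                      # a is Pareto-better than b
```
      </preformat>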
      <p>Related Work One of the earliest works that addresses this problem proposes a counterfactual
explanation tree [9], which assigns counterfactuals to the instances at the leaves of a decision
tree, derived from  – i.e. each leaf is interpreted as a group. This method groups instances and
computes multi-instance counterfactuals in a single step. While this might be beneficial in some
scenarios, it also constitutes a limitation, since the user cannot customize the groupings, and the
method lacks formal guarantees due to the use of a heuristic (local search) in its implementation.
In general, a large part of existing work for multi-instance counterfactuals can be interpreted
as summarizing or aggregating individual counterfactual explanations [10, 18, 20, 21]. For
instance, in [10], multi-instance counterfactuals are generated by first computing individual
counterfactuals and then applying a sampling strategy to select the one that satisfies most
instances from a given set, for which a multi-instance counterfactual is requested. In previous
work [19], multi-instance counterfactuals are implemented utilizing convex programming for
linear classifiers only. However, those methods assume that a grouping is already given and
also often suffer from poor performance (e.g. low coverage and correctness). A related branch
of research is counterfactual robustness with respect to input changes [22, 23, 15]. Robust
counterfactuals [22] should not change for similar instances – i.e. those robust counterfactuals
would constitute multi-instance counterfactuals for their local neighborhood in data space.
However, if instances are too different from each other, robust counterfactuals do not provide a
solution to the multi-instance counterfactual explanation problem.</p>
    </sec>
    <sec id="sec-3">
      <title>3. A Two-Stage Algorithm for Multi-instance Counterfactuals</title>
      <p>As stated in Definition 2, a multi-instance counterfactual states changes δ⃗cf that are valid for a
set of instances D. While in some scenarios the set D might be given a priori, other scenarios
are more flexible and require finding groups along with cost-efficient multi-instance
counterfactuals. For instance, business owners might be interested in identifying groups of
customers along with recommendations on how to improve their repurchase intentions. In
these cases, it is important to identify large groups of instances for which cost-efficient
multi-instance counterfactuals (Definition 2) can be computed. We formalize this as a multi-objective
optimization problem as stated in Problem 1.</p>
      <p>Problem 1. For a classifier h : X → Y and a set of instances D ⊂ X with h(x⃗) = y ∀x⃗ ∈ D,
y ∈ Y, we are looking for k partitions D_i of the instances such that cost-efficient multi-instance
counterfactuals (Definition 2) exist:
min k, max(|D_1|, …, |D_k|)  s.t. ⋃_i D_i = D, D_i ∩ D_j = ∅ ∀ i ≠ j  (1a)
min (θ(δ⃗cf_1), …, θ(δ⃗cf_k)), min (ℓ(h(x⃗ ⊕ δ⃗cf_i), y_cf) : x⃗ ∈ D_i, i = 1, …, k)  (1b)</p>
      <p>In this work we study Problem 1 and propose the following two-stage process for computing
multi-instance counterfactuals: Stage 1) finding a grouping of the instances, and then Stage 2) computing
a multi-instance counterfactual explanation for each of those groups – by this, we aim to reduce
the effect of outliers on the cost of the final multi-instance counterfactuals.</p>
      <p>Stage 1 – Grouping of Instances. For this task, a naive approach would be to group
the instances based on their spatial similarity/distances – e.g. by using a clustering method
such as k-means. However, because counterfactuals are known not to be robust with respect to
large changes in the input [15], this approach is likely to fail. Furthermore, such an approach
does not take into account any knowledge about the cost θ(·), which is necessary to compute
cost-efficient counterfactuals – we empirically confirm this in the experiments in Section 4.
Lemma 1. Assume a linear binary classifier h : R^d → {0, 1} and θ(·) = ‖·‖₂. Furthermore, for
a given set of instances x⃗_i ∈ R^d we denote their counterfactual explanations (Definition 1) as δ⃗cf_i.
If ∀ i ≠ j : δ⃗cf_i⊤ δ⃗cf_j = ‖δ⃗cf_i‖₂ · ‖δ⃗cf_j‖₂, then the cost θ(·) of the multi-instance counterfactual δ⃗cf
(Definition 2) is given as θ(δ⃗cf) = max_i θ(δ⃗cf_i).</p>
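      <p>A quick numeric illustration of Lemma 1 (a hypothetical sketch, not the formal proof): for a linear classifier, the minimal-‖·‖₂ counterfactuals are projections onto the decision boundary and therefore all point along the weight vector, so the most expensive individual counterfactual already serves as a multi-instance counterfactual.</p>
      <preformat>
```python
import numpy as np

w, b = np.array([1.0, 1.0]), -1.0             # linear classifier h(x) = 1 iff w·x + b is positive
h = lambda x: int(w @ x + b > 0)

def cf_delta(x, eps=1e-6):
    """Minimal-L2 change pushing x just across the hyperplane w·x + b = 0."""
    return ((eps - (w @ x + b)) / (w @ w)) * w

X = [np.array([0.2, 0.3]), np.array([0.1, 0.1])]   # both classified as 0
deltas = [cf_delta(x) for x in X]                  # all parallel to w (same direction)
costs = [np.linalg.norm(d) for d in deltas]        # θ(δcf_i) = ‖δcf_i‖₂
delta_multi = deltas[int(np.argmax(costs))]        # take the most expensive one
assert all(h(x + delta_multi) == 1 for x in X)     # valid for every instance
assert np.isclose(np.linalg.norm(delta_multi), max(costs))  # cost = max_i θ(δcf_i)
```
      </preformat>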
      <p>Lemma 1 states that if the individual counterfactuals all have the same direction, then a
multi-instance counterfactual not only exists but we can also state a tight upper bound on
its cost. Although Lemma 1 is stated for a linear classifier, it can also be applied to arbitrary
classifiers that can be approximated locally by a linear classifier. This suggests that groups of
instances whose individual counterfactuals (Definition 1) point in similar directions are good
candidates for which cost-efficient multi-instance counterfactuals (Definition 2) might exist.
We therefore propose to 1) compute a single counterfactual (Definition 1) for each instance, and
then 2) cluster those counterfactuals into groups based on their direction (i.e. based on their cosine similarity)
– optionally, one could also sub-cluster in a second step according to their amount of
change. In the remainder of this work, we limit ourselves to minimizing the number of changes
– i.e. we cluster only based on the direction of the individual counterfactuals. The number of
groups (i.e. clusters) might be given by the user or might be determined automatically, e.g.
using the Elbow method. The complete procedure is described in Algorithm 1.
Algorithm 1 Grouping of Instances for Cost-Efficient Multi-instance Counterfactuals
Input: Instances x⃗_i with the same prediction h(x⃗_i) = y, counterfactual generation method CF_h(·)
Output: Grouping of the instances
1: {δ⃗cf_i = CF_h(x⃗_i)} ◁ Compute a counterfactual δ⃗cf_i for each instance x⃗_i
2: for different numbers of clusters do ◁ Optimize the number of clusters if requested/needed</p>
      <p>3: Cluster with d(δ⃗cf_i, δ⃗cf_j) = δ⃗cf_i⊤ δ⃗cf_j / (‖δ⃗cf_i‖₂ ‖δ⃗cf_j‖₂) ◁ Cluster based on the directions of δ⃗cf_i
4: Sub-cluster with d(δ⃗cf_i, δ⃗cf_j) = ‖θ(δ⃗cf_i) − θ(δ⃗cf_j)‖₂ ◁ Sub-cluster based on the cost θ(δ⃗cf_i)
5: end for
Stage 2 – Computing Multi-instance Counterfactuals. Given a group of instances, we
can use any existing method from the literature for computing multi-instance counterfactuals
(Definition 2). However, because existing model/domain-agnostic methods are limited and often
show sub-optimal performance with respect to correctness, we propose an evolutionary method
for solving Definition 2. This not only constitutes a model/domain-agnostic method but also a
very flexible solution, since additional constraints can be easily introduced. Our evolutionary
method is an instance of the classic (μ + λ) genetic algorithm [24] and can handle all types of
variables. In order to guarantee the feasibility of the final multi-instance counterfactual δ⃗cf for
the given problem domain, we construct the set of feasible changes for each numerical feature
as follows – assuming non-negativity, which can be achieved by adding a constant:
u_i = x_i^max − min_j {(x⃗_j)_i} and l_i = x_i^min − max_j {(x⃗_j)_i}, where x_i^max and x_i^min denote the maximum and
minimum feasible value of the i-th feature; the final set of feasible changes is then given as
[l_i, u_i]. These sets are used when computing mutations of existing individuals in our
evolutionary algorithm during the optimization. Furthermore, we merge the objectives in Eq. (1b) into a
single objective as follows: argmin_{δ⃗cf ∈ X} θ(δ⃗cf) + C · ∑_{x⃗ ∈ D_i} ℓ(h(x⃗ ⊕ δ⃗cf), y_cf), where the cost θ(δ⃗cf)
is defined as θ(δ⃗cf) = ∑_j θ((δ⃗cf)_j) with θ((δ⃗cf)_j) = |(δ⃗cf)_j|, or 1 if the j-th feature is categorical.</p>
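      <p>The Stage 2 optimization can be sketched as follows – a minimal, hypothetical (μ + λ) evolutionary loop for numerical features only (the method described above also handles categorical variables), minimizing the merged objective θ(δ⃗cf) + C · Σ ℓ(h(x⃗ ⊕ δ⃗cf), y_cf) while clipping mutations to feasible ranges [l_i, u_i]. All names and parameter values are illustrative.</p>
      <preformat>
```python
import numpy as np

def evolve_multi_cf(h, D, y_cf, lower, upper, C=2.0, mu=10, lam=40,
                    n_gen=60, sigma=0.2, seed=0):
    """(mu + lam) EA sketch for the merged objective θ(δ) + C · Σ ℓ(h(x ⊕ δ), y_cf)."""
    rng = np.random.default_rng(seed)

    def fitness(delta):
        cost = np.abs(delta).sum()                            # θ(δ) = Σ|δ_j|
        losses = sum(0.0 if h(x + delta) == y_cf else 1.0 for x in D)
        return cost + C * losses

    pop = rng.uniform(lower, upper, size=(mu, len(lower)))    # feasible initial parents
    for _ in range(n_gen):
        parents = pop[rng.integers(0, mu, size=lam)]          # pick lam parents
        children = parents + sigma * rng.standard_normal(parents.shape)
        children = np.clip(children, lower, upper)            # stay inside [l_i, u_i]
        both = np.vstack([pop, children])                     # (mu + lam) selection pool
        both = both[np.argsort([fitness(z) for z in both])]
        pop = both[:mu]                                       # keep the mu fittest
    return pop[0]

h = lambda x: int(x[0] + x[1] > 1.0)                          # toy linear classifier
D = [np.array([0.2, 0.3]), np.array([0.4, 0.1])]              # group, all classified 0
delta = evolve_multi_cf(h, D, y_cf=1, lower=np.array([0.0, 0.0]),
                        upper=np.array([2.0, 2.0]))
assert all(h(x + delta) == 1 for x in D)                      # one change flips the whole group
```
      </preformat>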
    </sec>
    <sec id="sec-4">
      <title>4. Experiments</title>
      <p>The following experiments are conducted to showcase the application and merits of the proposed
method. All experiments and the proof of Lemma 1 are publicly available on GitHub.</p>
      <p>We consider two datasets that have been used in other work on multi-instance
counterfactuals [19, 9]: the IBM human resource attrition dataset [25] (Attr.), containing 35 features for 1467
unique employees, and the Law school dataset [26] (Law), containing 20798 law school admission
records, each described by 12 attributes.</p>
      <p>Setup. We empirically compare our proposed evolutionary algorithm (denoted by EA) from
Section 3 against two methods [10, 9], which, similarly to our method, are also model- and
data-agnostic. In this context, we evaluate two properties of the computed multi-instance
counterfactual explanations; the results are shown in Table 1 and Table 2. The code is available
at https://github.com/andreArtelt/TwoStageMultiinstCFs.</p>
      <sec id="sec-4-1">
        <title>Correctness (in percentage) of the generated multi-instance counterfactuals. We report the mean and</title>
        <p>variance (over all folds) rounded to two decimal points – larger numbers are better.</p>
      </sec>
      <sec id="sec-4-2">
        <title>Data</title>
      </sec>
      <sec id="sec-4-3">
        <title>Attr. Law</title>
      </sec>
      <sec id="sec-4-4">
        <title>Method</title>
      </sec>
      <sec id="sec-4-5">
        <title>EA [Ours]</title>
      </sec>
      <sec id="sec-4-6">
        <title>Warren et al. [10]</title>
      </sec>
      <sec id="sec-4-7">
        <title>Kanamori et al. [9]</title>
      </sec>
      <sec id="sec-4-8">
        <title>EA [Ours]</title>
      </sec>
      <sec id="sec-4-9">
        <title>Warren et al. [10]</title>
      </sec>
      <sec id="sec-4-10">
        <title>Kanamori et al. [9]</title>
        <p>The properties are: 1) the correctness, i.e. evaluating for how many samples the explanation is correct:
∑_{x⃗ ∈ D} 1(h(x⃗ ⊕ δ⃗cf) = y_cf) – Table 1; and 2) the cost θ(·), expressed as the number of changed
features, i.e. θ(δ⃗cf) = ∑_j 1((δ⃗cf)_j ≠ 0) – Table 2. All experiments were done using a five-fold
cross-validation and we report the mean and variance of the results over all folds. An XGBoost
classifier is fitted to the training set, and all negatively classified instances (i.e. h(x⃗) = 0) from
the test set define the set D used by the multi-instance counterfactual method. We compute a
multi-instance counterfactual for the entire set of selected instances D, and also cluster (using
DBSCAN) the set D in two different ways: 1) clustering based on the individual counterfactuals
as proposed in Algorithm 1 using the cosine similarity, and for comparison 2) clustering based
on the individual instances x⃗_i using the Euclidean distance.</p>
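        <p>The direction-based grouping used in this setup can be sketched as follows – a hypothetical illustration using scikit-learn's DBSCAN on a precomputed cosine-distance matrix over the individual counterfactuals (function and parameter names are our own):</p>
        <preformat>
```python
import numpy as np
from sklearn.cluster import DBSCAN

def group_by_direction(deltas, eps=0.1, min_samples=2):
    """Cluster counterfactuals δcf_i by direction (cosine distance), as in Algorithm 1."""
    X = np.asarray(deltas, dtype=float)
    unit = X / np.linalg.norm(X, axis=1, keepdims=True)   # normalize to unit directions
    dist = 1.0 - unit @ unit.T                            # cosine-distance matrix
    dist = np.clip(dist, 0.0, 2.0)                        # guard against round-off
    return DBSCAN(eps=eps, min_samples=min_samples,
                  metric="precomputed").fit_predict(dist)

deltas = [[1.0, 0.0], [0.9, 0.05],    # roughly along the first axis
          [0.0, 1.0], [0.1, 1.2]]     # roughly along the second axis
labels = group_by_direction(deltas)
assert labels[0] == labels[1] and labels[2] == labels[3]  # two direction groups
assert labels[0] != labels[2]
```
        </preformat>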
        <p>Results &amp; Discussion. From the results, we observe that our proposed method achieves
excellent performance (with respect to correctness and cost) across all settings. The method by
Warren et al. [10] often struggles to find correct multi-instance counterfactuals (i.e.
counterfactuals that cover as many instances as possible); moreover, their method almost always produces
multi-instance counterfactuals that use all available features and therefore have a higher cost
if implemented in practice. The method by Kanamori et al. [9] often achieves competitive
performance and yields the most cost-efficient solutions while sacrificing correctness – however,
this method automatically creates additional sub-groups and is therefore difficult to compare
to the other methods. Furthermore, we observe that our proposed clustering of individual
counterfactuals in Algorithm 1 often significantly improves the correctness and the cost of the
computed multi-instance counterfactuals. The results demonstrate the merits of our proposed
two-stage algorithm.</p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Conclusion &amp; Summary</title>
      <p>In this work, we proposed a flexible two-stage algorithm for finding groups of instances for
which we can compute cost-efficient multi-instance counterfactual explanations. Our proposed
algorithm groups instances so that the single multi-instance counterfactual for each group
is as simple as possible (i.e. cost-efficient). From the empirical evaluation of the method, we
conclude that our proposed algorithm (the grouping and the proposed evolutionary method)
has either superior or competitive performance compared to existing methods for computing
multi-instance counterfactual explanations. The main limitation of our method is that it suffers
from the necessity of computing a single counterfactual for each instance, which impacts its
computational performance. Therefore, as part of our future work, we will investigate how
to improve computational performance through the use of approximations of counterfactuals,
such as gradients.</p>
    </sec>
    <sec id="sec-6">
      <title>Acknowledgments</title>
      <p>This research was supported by the Ministry of Culture and Science NRW (Germany) as part of
the Lamarr Fellow Network. This publication reflects the views of the authors only.</p>
    </sec>
    <sec id="sec-7">
      <title>References</title>
      <p>[9] K. Kanamori, T. Takagi, K. Kobayashi, Y. Ike, Counterfactual explanation trees: Transparent
and consistent actionable recourse with decision trees, in: AISTATS 2022, 2022. URL:
https://proceedings.mlr.press/v151/kanamori22a.html.
[10] G. Warren, M. T. Keane, C. Gueret, E. Delaney, Explaining groups of instances
counterfactually for xai: A use case, algorithm and user study for group-counterfactuals,
arXiv:2303.09297 (2023).
[11] S. Verma, J. Dickerson, K. Hines, Counterfactual explanations for machine learning: A
review, 2020. arXiv:2010.10596.
[12] M. T. Keane, B. Smyth, Good counterfactuals and where to find them: A case-based
technique for generating counterfactuals for explainable ai (xai), in: ICCBR, 2020.
[13] R. Guidotti, Counterfactual explanations and how to find them: literature review and
benchmarking, Data Mining and Knowledge Discovery (2022) 1–55. doi:10.1007/
s10618-022-00831-6.
[14] R. Poyiadzi, K. Sokol, R. Santos-Rodriguez, T. De Bie, P. Flach, Face: Feasible and actionable
counterfactual explanations, Association for Computing Machinery, New York, NY, USA,
2020. doi:10.1145/3375627.3375850.
[15] A. Artelt, V. Vaquet, R. Velioglu, F. Hinder, J. Brinkrolf, M. Schilling, B. Hammer, Evaluating
robustness of counterfactual explanations, in: IEEE SSCI, 2021. doi:10.1109/SSCI50451.
2021.9660058.
[16] D. Slack, A. Hilgard, H. Lakkaraju, S. Singh, Counterfactual explanations can be
manipulated, Advances in Neural Information Processing Systems 34 (2021) 62–75.
[17] A. Artelt, B. Hammer, "explain it in the same way!" – model-agnostic group fairness of
counterfactual explanations, in: IJCAI Workshop on XAI, 2023. URL: https://sites.google.
com/view/xai2023.
[18] D. Ley, S. Mishra, D. Magazzeni, GLOBE-CE: A translation based approach for global
counterfactual explanations 202 (2023) 19315–19342. URL: https://proceedings.mlr.press/
v202/ley23a.html.
[19] A. Artelt, A. Gregoriades, "how to make them stay?": Diverse counterfactual explanations
of employee attrition, in: ICEIS, 2023. doi:10.5220/0011961300003467.
[20] K. Rawal, H. Lakkaraju, Beyond individualized recourse: Interpretable and interactive
summaries of actionable recourses, in: NeurIPS, 2020. URL: https://proceedings.neurips.
cc/paper/2020/hash/8ee7730e97c67473a424ccfef49ab20-Abstract.html.
[21] G. Plumb, J. Terhorst, S. Sankararaman, A. Talwalkar, Explaining groups of points in
low-dimensional representations, in: ICML 2020, volume 119, PMLR, 2020, pp. 7762–7771.</p>
      <p>URL: http://proceedings.mlr.press/v119/plumb20a.html.
[22] F. Leofante, N. Potyka, Promoting counterfactual robustness through diversity, in:
Proceedings of the AAAI Conference on Artificial Intelligence, volume 38, 2024, pp. 21322–21330.
[23] R. Dominguez-Olmedo, A. H. Karimi, B. Schölkopf, On the adversarial robustness of causal
algorithmic recourse, in: ICML, 2022.
[24] C. R. Reeves, Genetic algorithms, Handbook of metaheuristics (2010) 109–139.
[25] IBM, Ibm hr analytics employee, https://www.kaggle.com/pavansubhasht/
ibm-hr-analytics-attrition-dataset, 2020.
[26] L. F. Wightman, Lsac national longitudinal bar passage study. lsac research report series.
(1998).</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>M.</given-names>
            <surname>Goyal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Knackstedt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Yan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Hassanpour</surname>
          </string-name>
          ,
          <article-title>Artificial intelligence-based image classification methods for diagnosis of skin cancer: Challenges and opportunities</article-title>
          ,
          <source>Computers in biology and medicine 127</source>
          (
          <year>2020</year>
          )
          <fpage>104065</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>X.</given-names>
            <surname>Ferrer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T. Van</given-names>
            <surname>Nuenen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. M.</given-names>
            <surname>Such</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Coté</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Criado</surname>
          </string-name>
          ,
          <article-title>Bias and discrimination in ai: a cross-disciplinary perspective</article-title>
          ,
          <source>IEEE Technology and Society Magazine</source>
          <volume>40</volume>
          (
          <year>2021</year>
          )
          <fpage>72</fpage>
          -
          <lpage>80</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>S.</given-names>
            <surname>Larsson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Heintz</surname>
          </string-name>
          , Transparency in artificial intelligence,
          <source>Internet Policy Review</source>
          (
          <year>2020</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <collab>European Commission</collab>
          ,
          <article-title>Proposal for a regulation laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts</article-title>
          ,
          <year>2021</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>R.</given-names>
            <surname>Dwivedi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Dave</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Naik</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Singhal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Omer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Patel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Qian</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Wen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Shah</surname>
          </string-name>
          , G. Morgan, et al.,
          <article-title>Explainable ai (xai): Core ideas, techniques, and solutions</article-title>
          ,
          <source>ACM Computing Surveys</source>
          <volume>55</volume>
          (
          <year>2023</year>
          )
          <fpage>1</fpage>
          -
          <lpage>33</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>A.</given-names>
            <surname>Rawal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Mccoy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. B.</given-names>
            <surname>Rawat</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Sadler</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Amant</surname>
          </string-name>
          ,
          <article-title>Recent advances in trustworthy explainable artificial intelligence: Status, challenges and perspectives</article-title>
          ,
          <source>IEEE Transactions on Artificial Intelligence</source>
          <volume>1</volume>
          (
          <year>2021</year>
          )
          <fpage>1</fpage>
          -
          <lpage>1</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>S.</given-names>
            <surname>Wachter</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Mittelstadt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Russell</surname>
          </string-name>
          ,
          <article-title>Counterfactual explanations without opening the black box: Automated decisions and the gdpr</article-title>
          ,
          <source>Harv. JL &amp; Tech. 31</source>
          (
          <year>2017</year>
          )
          <fpage>841</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>R. M. J.</given-names>
            <surname>Byrne</surname>
          </string-name>
          ,
          <article-title>Counterfactuals in explainable artificial intelligence (xai): Evidence from human reasoning</article-title>
          ,
          <source>in: IJCAI-19</source>
          ,
          <year>2019</year>
          , pp.
          <fpage>6276</fpage>
          -
          <lpage>6282</lpage>
          . doi:10.24963/IJCAI.2019/876.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>