<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Ensemble approaches for Graph Counterfactual Explanations</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Mario Alfonso Prado-Romero</string-name>
          <email>marioalfonso.prado@gssi.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Bardh Prenkaj</string-name>
          <email>prenkaj@di.uniroma1.it</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Giovanni Stilo</string-name>
          <email>giovanni.stilo@univaq.it</email>
          <xref ref-type="aff" rid="aff3">3</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Alessandro Celi</string-name>
          <email>alessandro.celi@univaq.it</email>
          <xref ref-type="aff" rid="aff3">3</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Ernesto Estevanell-Valladares</string-name>
          <email>ernesto.estevanell@matcom.uh.cu</email>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Daniel Alejandro Valdés-Pérez</string-name>
          <email>daniel.valdes@matcom.uh.cu</email>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Gran Sasso Science Institute</institution>
          ,
          <addr-line>L'Aquila</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Sapienza University of Rome</institution>
          ,
          <addr-line>Rome</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>University of Havana</institution>
          ,
          <addr-line>Havana</addr-line>
          ,
          <country country="CU">Cuba</country>
        </aff>
        <aff id="aff3">
          <label>3</label>
          <institution>University of L'Aquila</institution>
          ,
          <addr-line>L'Aquila</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>In recent years, Graph Neural Networks have reported outstanding performances in tasks like community detection, molecule classification and link prediction. However, the black-box nature of these models prevents their application in domains like health and finance, where understanding the model's decisions is essential. Explainable AI, or Explainable Machine Learning, refers to artificial intelligence whose decisions or predictions can be understood by humans. A special case is counterfactual examples, which provide suggestions on the steps the system needs to take to change its decision. Historically, ensemble learning and explainability have been jointly exploited to explain the decisions of ensemble models. Conversely, in this work, we focus on ensemble mechanisms for the explainers themselves in order to improve the quality of explanations. We thus explore the possible ensemble mechanisms that can be adopted in several explainability scenarios. Furthermore, we introduce and discuss a new explainability problem where a single coherent counterfactual explanation must be provided for a set of input instances and their predictions.</p>
      </abstract>
      <kwd-group>
        <kwd>Explainable AI</kwd>
        <kwd>Counterfactual Explanations</kwd>
        <kwd>Ensemble</kwd>
        <kwd>Machine Learning</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Nowadays, Machine Learning (ML) methods are fundamental to several tools in different
application domains. In domains like health or finance, understanding the decision process is
of paramount importance. In contrast, the predictions made by black-box systems are, by
their nature, hardly understandable, preventing their broad adoption. To overcome this
limitation, explanation methods were developed to give insight into how the ML model has
taken a specific decision for a given case/instance [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ].
      </p>
      <p>
        Since their creation, Graph Neural Networks (GNNs) [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] have attracted the interest of the ML
community because they allow leveraging the advantages of Deep Neural Networks (DNNs) on
graph data. However, this also means that GNNs behave as black boxes. Given the particularities
of graph data, many explanation techniques have had to be developed specifically for GNNs.
In particular, Graph Counterfactual Explanations (GCE) are one of the possible explanation
types in the Graph Learning domain. A counterfactual explanation answers the question: what
changes should I make to the input to obtain a different output? GCE techniques can be helpful to
discover, for example, i) molecular compounds similar in specific desired properties [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] or ii)
new insights into the interplay of different brain regions for certain diseases [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. Existing works
on GCE diverge mainly in the problem definition, application domain, test data, and evaluation
metrics [5]. Most of them do not compare against other counterfactual explanation techniques
in the literature, making it challenging to promote the advancement of this research field.
      </p>
      <p>According to the works proposed in the literature, ensemble learning and explainability
have been jointly exploited to explain the decisions of ensemble models. Conversely, in this
work we focus on ensemble mechanisms applied to the explainers, thus improving the quality of
explanations. Moreover, the aggregation of the explanations produced in the ensemble might
lead to new explanations encapsulating the ones before the fusion. Additionally, we extend
the classic multi-instance learning explanation of several instances to multiple single-instance
explanations. Here, the predictor is trained on single instances, and the explainer takes as input
a set of graphs coupled with their predictions to provide a single common explanation.</p>
      <p>Overall, multiple single-instance explanation strategies provide a final counterfactual
explanation valid for all the input instances and their corresponding predictions. In this way, the
produced counterfactual explanation can be used to identify substructures shared by the input
graph instances. Moreover, one can pinpoint the specialized structures of the counterfactual
and input graph instances that produce the predicted outcomes.</p>
      <p>We can summarise the contributions of this work as follows:
• we provide a formal discussion of ensembles for Single-Instance Explanations;
• we provide a formal discussion of ensembles for Multiple-Instance Explanations;
• we introduce and formalise a new Multiple Single-Instance Explainability problem;
• we provide a formal discussion of ensembles for Multiple Single-Instance Explainability.</p>
      <p>The rest of this paper is organised as follows. Section 2 briefly discusses background concepts
and the related work. In Section 3, we discuss single-instance ensemble approaches and present
our multiple single-instance explanations in detail. Section 4 concludes the paper.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Background and related work</title>
      <p>Before delving into the details of ensembles for GCE methods, we provide the reader with a
thorough discussion of the background of ensemble learning, graph neural networks (GNNs),
and model interpretability via counterfactual explanations.</p>
      <p>Ensemble learning (EL) [6] refers to algorithms that rely on the aggregation of base models to
obtain more accurate prediction models. EL comes in three variants: i.e. bagging [7], boosting
[8], and stacking [9]. Briefly, bagging consists of sampling with replacement 𝑘 different views
of the same dataset to train 𝑘 base models. The predictions of the base models get aggregated
via a majority-voting consensus function [10]. Boosting consists of iterative algorithms that
train the next base model according to the instances misclassified by the previous model.
Stacking considers heterogeneous weak learners and learns to combine them using a meta-model,
differently from the deterministic combination used in bagging/boosting.</p>
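      <p>As an illustrative aside, the following Python sketch shows the bagging mechanism described above under simple assumptions (a scikit-learn decision tree as the base learner and plain Python lists as data); the function names are ours and are not taken from the cited works.</p>
      <preformat>
# Minimal sketch of bagging with a majority-voting consensus function.
# The base learner and the data representation are illustrative assumptions.
import random
from collections import Counter
from sklearn.tree import DecisionTreeClassifier

def bagging_fit(X, y, k=10):
    """Train k base models on k bootstrap views of the same dataset."""
    models = []
    n = len(X)
    for _ in range(k):
        idx = [random.randrange(n) for _ in range(n)]   # sampling with replacement
        Xb = [X[i] for i in idx]
        yb = [y[i] for i in idx]
        models.append(DecisionTreeClassifier().fit(Xb, yb))
    return models

def bagging_predict(models, x):
    """Aggregate the base models' predictions via majority voting."""
    votes = [m.predict([x])[0] for m in models]
    return Counter(votes).most_common(1)[0][0]
      </preformat>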
      <p>Generally, a neural network has interconnected layers of neurons that propagate information
to the next layer [11]. Similarly, a GNN learns to transform the attributes of a specific graph 𝐺
into an embedding that typically maintains its proximity characteristics. Connectivity is essential in graph
structures because it induces the embedding function to map similar nodes together. To exploit
the connectivity in a graph 𝐺, GNNs can rely on three main strategies: i.e. random walks [12],
message passing [13, 14], and graph convolutional networks (GCNs) [15].</p>
      <p>Considering the black-box nature of deep learning systems, non-specialists are interested in
understanding what is happening under the hood. The European AI regulation [16] suggests that
interpretability creates safer digital environments and encourages privacy, trustworthiness,
and fairness. Guidotti [17] addressed counterfactual explainability (CE). Consider an automatic
system rejecting a bank loan request. We need to answer the question, "what should change
such that the loan request gets accepted?". Counterfactual examples provide suggestions on the
steps to take for the system to change its decision. As a specialisation of CE, graph counterfactual
explanation (GCE) methods answer the question, "how should the input graph or its components
(e.g. vertices, edges) change to obtain a different outcome?".</p>
      <p>
        Connectivity is crucial in many graph-related problems. In biochemistry, neurobiology, ecology,
and engineering, graph substructures are highly related to their functionalities [18]. Additionally,
the neighborhood of a specific vertex is essential to determine its classification. Therefore,
most explanation methods designed for vectors, tables, and images cannot be applied to graphs.
Instead, specific strategies have been devised. Wu et al. [19] train a learnable soft-mask matrix
to mask the features of vertices/edges in the input graph while keeping the same class. The
unmasked features are the counterfactual explanations. Lucic et al. [20] explore a binary
perturbation matrix to sparsify the input graph’s adjacency matrix. The authors generate the
counterfactual example with the smallest distance from the input graph. Wellawatte et
al. [21] identify similar counterfactual molecules by selecting a small number of them using
clustering and Tanimoto similarity. Similarly, Numeroso et al. [
        <xref ref-type="bibr" rid="ref3">3, 22</xref>
        ] use reinforcement learning
to generate counterfactual examples given an input molecule. Bajaj et al. [23] find decision
regions for each class. Then, based on the boundaries of the regions, they produce subsets of
the input graph edges as counterfactuals. Abrate and Bonchi [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] rely on a bidirectional search heuristic
such that the resulting counterfactual has minimal changes from the original graph in brain
networks.
      </p>
      <p>Ensembles of counterfactual explanations have been explored in [24] to boost weak explainers
and combine them into a single, more robust one explaining a single input instance. To the best
of our knowledge, ensembles for graph counterfactual explainability have not received any
attention in the literature of eXplainable AI (XAI). Moreover, differently from what is discussed in
Section 3.2, the literature focuses on ensembles for single-instance explainability.</p>
    </sec>
    <sec id="sec-3">
      <title>3. Ensembles in Graph Counterfactual Explanations</title>
      <p>The discussion of diverse theories to explain a phenomenon is paramount in science. The main
goal is to analyse the strengths and weaknesses of each theory explaining the phenomenon
and reach a consensus on the best explanation. Likewise, the AI community has successfully
used ensembles of ML models for many years. However, existing works on the intersection
of EL and Explainability are mainly focused on explaining the decisions of ensemble models
[25]. Conversely, in this work, we propose to use ensemble mechanisms on the explainers
with the aim of improving the explanations’ quality and/or providing a brand new explanation.
Recall that a GCE is usually a new graph or a set of actions to transform the input graph into the
counterfactual graph. Let 𝐺′ be an explanation that belongs to an explanation space (here coinciding with 𝒢,
the set of all possible graphs), and let ℳ be the set of all GNN predictors. Below, we formally define what an
explanation is in two different explainability scenarios, namely, single-instance (see Definition
3.1) and multi-instance (see Definition 3.2).</p>
      <p>Definition 3.1. Let 𝐶 be the set of classes, 𝐺 ∈ 𝒢 be the input graph, Φ ∈ ℳ : 𝒢 → 𝐶
the prediction model, and ℰ : 𝒢 × ℳ → 𝒢 an explanation method. Then 𝐺′ = ℰ(𝐺, Φ) is a
single-instance explanation if Φ(𝐺) ≠ Φ(𝐺′).</p>
      <p>Definition 3.2. Let 𝐶 be the set of classes, 𝒮 = {𝐺₁, 𝐺₂, . . . , 𝐺ₖ} the input set of graphs,
with 𝐺ᵢ ∈ 𝒢, Φ ∈ ℳ : 2^𝒢 → 𝐶 a prediction model, and ℰ : 2^𝒢 × ℳ → 𝒢 an explanation
method. Then 𝐺′ = ℰ(𝒮, Φ) is considered a multi-instance explanation if Φ(𝒮) ≠ Φ({𝐺′}).</p>
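      <p>To make the two definitions concrete, the following Python sketch expresses their validity conditions; phi and phi_set are placeholders for Φ, and all names are illustrative rather than part of any existing library.</p>
      <preformat>
# Sketch of the counterfactual validity conditions in Definitions 3.1 and 3.2.
# The prediction models are user-supplied callables; nothing here is a
# concrete implementation from the cited works.

def is_single_instance_cf(G, G_prime, phi):
    """Definition 3.1: G' is a counterfactual of G iff Φ(G) ≠ Φ(G')."""
    return phi(G) != phi(G_prime)

def is_multi_instance_cf(S, G_prime, phi_set):
    """Definition 3.2: Φ is defined on sets of graphs; the explanation G'
    is valid iff the class predicted for the set differs from that of {G'}."""
    return phi_set(S) != phi_set([G_prime])
      </preformat>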
      <p>Hereafter, considering a graph classification task (the provided definitions can be easily
adapted to the other tasks of the GCE domain, i.e. vertex and edge classification), we discuss
ensembles of explainers in the two aforementioned scenarios (i.e. single-instance and
multi-instance), and then introduce (in Section 3.2.2) a new one that was not considered before in the
literature.</p>
      <sec id="sec-3-1">
        <title>3.1. Ensembles for Single-Instance Explanations</title>
        <p>As in other learning tasks (e.g. clustering, anomaly detection), ensembles of explainers can
be successfully used to improve the performance of the explanations [24]. In Figure 1, we
present a bagging ensemble pipeline for the single-instance explanation scenario. The single
instance and the trained ML model are the input to several base explainers (these may vary in the
algorithm and/or the hyper-parameter settings). The produced explanations are then combined
in an aggregation phase to output the final explanation.</p>
        <p>According to [24], the counterfactual explanations produced in this way are more robust than those
produced by a single explainer, since the ensemble increases the stability of the method and reduces the
variability of its quality. In the graph domain, the aggregation phase might leverage quantitative
metrics (like Graph Edit Distance, Fidelity and Sparsity; see [5] for a more complete discussion)
to drive the selection or the generation of the best counterfactual explanation. Additionally, it
can adopt voting-like mechanisms to promote the most common actions (i.e. adding/removing
vertices/edges from the original graph) that lead towards the generation of the counterfactual
graph.</p>
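        <p>A minimal sketch of such an aggregation phase, assuming the base explainers are callables that return candidate counterfactual graphs and using networkx's graph edit distance purely as an example selection metric, could look as follows.</p>
        <preformat>
# One possible aggregation phase for an ensemble of single-instance explainers:
# keep only valid counterfactuals and select the one closest to the input graph.
# networkx's graph_edit_distance is used here only as an example metric and can
# be expensive on large graphs; the explainer interface is our own assumption.
import networkx as nx

def aggregate_by_distance(G, phi, base_explainers):
    candidates = [explainer(G, phi) for explainer in base_explainers]
    valid = [c for c in candidates if phi(c) != phi(G)]   # Definition 3.1 check
    if not valid:
        return None                  # no base explainer found a counterfactual
    return min(valid, key=lambda c: nx.graph_edit_distance(G, c))
        </preformat>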
        <p>[Figures 1 and 2: pipeline diagrams. (b) Multiple single-instance explanations of a set of 𝑘 instances
and their predictions. The ensemble mechanism is highlighted in the zoom.]</p>
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Multiple Instance Explainability vs Multiple Single-Instance</title>
      </sec>
      <sec id="sec-3-3">
        <title>Explainability</title>
        <p>In this Section, we discuss a new explainability scenario, namely Multiple Single-Instance
Explainability (MSIE). First, we introduce Multiple Instance Explainability and then provide
more details on MSIE. For both scenarios, we guide the reader on how to incorporate
ensemble mechanisms.</p>
        <sec id="sec-3-3-1">
          <title>3.2.1. Multiple Instance Explainability</title>
          <p>As introduced in [26], Multiple Instance Learning (MIL) generalizes the traditional data
representation by allowing individual data samples ℬ₁, ℬ₂, · · · (called bags) to be represented as sets
of multiple 𝑑-dimensional feature vectors ℬᵢ = {𝑥₁, 𝑥₂, · · ·} s.t. 𝑥ⱼ ∈ ℝ^𝑑 (called instances). In
supervised classification, each bag is associated with a single value (e.g. 𝑦ᵢ ∈ {−1, +1} in the
binary classification case). Thus, the derived MIL Explainability field must be seen as a special
case of single-instance explainability where the input is a bag of graphs with a corresponding
class (or classes in the multi-label setting) for the entire bag. Thus, the main difference with the
single-instance scenario resides in how the predictor is trained, and in the characteristics of the
explainer.</p>
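          <p>As an illustration of this representation adapted to graphs, the following sketch defines a bag as a set of graph instances with a single bag-level label; the class name and its fields are our own assumptions, not taken from [26] or [27].</p>
          <preformat>
# Minimal data structure for the MIL setting adapted to graphs: a bag holds
# several graph instances and a single label for the whole bag (illustrative).
from dataclasses import dataclass
from typing import List
import networkx as nx

@dataclass
class GraphBag:
    instances: List[nx.Graph]   # the graphs forming the bag
    label: int                  # e.g. -1 / +1 in the binary case

bag = GraphBag(instances=[nx.path_graph(4), nx.cycle_graph(5)], label=+1)
          </preformat>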
          <p>Figure 2a clarifies the explainability of multi-instance learning as proposed in the literature.
Notice that the bag of  graphs is passed as input to the explainer and the prediction model.
The prediction model, in this scenario, is trained to predict the class of the entire bag. Finally,
the explainer produces an explanation of the whole bag considering its prediction.</p>
          <p>The literature has attempted to tackle the multi-instance factual explanation problem in [27]
by identifying instances responsible for positive bag predictions. In general, the problem has
received less attention than the single-instance explanation problem. Besides, generating
multi-instance counterfactual explanations still remains a completely unexplored field, especially in
the graph domain. Therefore, being an extension of single-instance explainability, MIL can
exploit ensembles to produce several explanations for the bag and, afterwards, combine them
into a single one, as shown in Section 3.1.</p>
        </sec>
        <sec id="sec-3-3-2">
          <title>3.2.2. Multiple Single-Instance Explainability</title>
          <p>Keeping in mind the MIL scenario, we notice that a new explainability problem can be identified,
namely multiple single-instance explainability. In the multiple single-instance
explainability scenario, the predictor is trained on single instances (unlike what happens in MIL).
Additionally, the explainer takes as input a set of graphs coupled with their predictions to
provide a common explanation. Multiple single-instance explanation strategies can be used, for
example, in the chemical domain. Consider the following setting: we have two different
drugs 𝐺₁ = (𝑉₁, 𝐸₁) and 𝐺₂ = (𝑉₂, 𝐸₂) that cure diseases 𝑌₁ and 𝑌₂, respectively. Suppose that
𝐺₁ and 𝐺₂ share a common substructure ∆ (that we highlight here for the discussion but that is
not part of the input of the problem). Now, say that we want to get an overall counterfactual
explanation for both 𝐺₁, 𝐺₂ and 𝑌₁, 𝑌₂. Thus, the counterfactual example 𝐺₃ might be a drug
which shares the same substructure ∆, although it cures disease 𝑌₃. In particular, Figure 3
illustrates three different drugs¹ of the same class (i.e. biguanide). Here, 𝐺₁ is the compound
corresponding to buformin, which is a metabolic antiviral that inhibits the mTOR pathway
used by influenza [28] and Middle East respiratory syndrome-related coronavirus [29].
𝐺₂ corresponds to phenformin², which helped improve glycemic control by regulating insulin
sensitivity, aiding in treating diabetes. Lastly, 𝐺₃ corresponds to metformin, which is applicable
to polycystic ovary syndrome treatments [30]. Notice that the highlighted substructure ∆ is
shared among the three drugs described above. Hence, the counterfactual explanation 𝐺₃ is
useful for a pharmacologist/chemist to identify the chemical bonds that specialize the three drugs
𝐺₁, 𝐺₂, and 𝐺₃ in being effective when treating the previously stated complications. Moreover,
notice that metformin has the least amount of changes w.r.t. buformin and phenformin when
considering the core substructure ∆. The common substructure ∆, in this scenario, represents
the biguanide class that encapsulates 𝐺₁, 𝐺₂, and 𝐺₃ as oral antihyperglycemic drugs used for
diabetes mellitus or prediabetes. The other attached chemical structures in each drug create
additional curative properties and characterising side effects.</p>
          <p>¹We hereby declare that the description of these drugs is entirely for illustration purposes, and we do not claim
to be experts in pharmacology.</p>
          <p>²This drug is currently out of market production due to recurrent side effects of lactic acidosis in human trials.</p>
          <p>Figure 2b summarizes the explanation task for multiple single-instance explainability, where each input
graph and its prediction are color-coded to convey their coupling. There, the single-instance
predictor takes each graph to produce a different prediction. The entire set of graphs, their
predictions, and the used model are the input of a multiple single-instance explainer. The main
idea is that the explainer needs to provide a counterfactual explanation which is valid for all
the instances and their predictions. Intuitively, a counterfactual explanation, in this task, is a
graph similar to the input graphs while having a different prediction with respect to the ones of the
input set. Formally:</p>
          <p>Definition 3.3. Let 𝐶 be the set of classes, 𝒮 = {𝐺₁, 𝐺₂, . . . , 𝐺ₖ} the input set of graphs,
with 𝐺ᵢ ∈ 𝒢, Φ ∈ ℳ : 𝒢 → 𝐶 the prediction model, and ℰ : 2^𝒢 × ℳ → 𝒢 the
explanation method. Then 𝐺′ = ℰ(𝒮, Φ) is considered a multiple single-instance explanation if
Φ(𝐺′) ∉ 𝛾({Φ(𝐺ᵢ) | 𝐺ᵢ ∈ 𝒮}), where 𝛾 : 2^𝐶 → 2^𝐶.</p>
          <p>The definition above does not provide a specific implementation of the function 𝛾, which can
be customised according to the application domain and the "strictness" of counterfactuality
one needs to have. For example, 𝛾 can be the identity function over the original set of predictions.
In this way, Φ(𝐺′) needs to be different from all the predicted classes ({Φ(𝐺ᵢ) | 𝐺ᵢ ∈ 𝒮}).</p>
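          <p>A small Python sketch of this validity check, using the identity function over the predicted classes as the default choice of 𝛾, could look as follows; all names are illustrative and not tied to any existing library.</p>
          <preformat>
# Sketch of the validity condition in Definition 3.3. The function gamma (γ)
# is left configurable; the identity over the set of predicted classes,
# discussed above, is used as the default.

def identity_gamma(classes):
    return set(classes)

def is_msie_cf(S, G_prime, phi, gamma=identity_gamma):
    """G' is a multiple single-instance counterfactual for the set S iff
    Φ(G') does not fall in γ({Φ(G_i) | G_i ∈ S})."""
    predicted = set(phi(G) for G in S)
    return phi(G_prime) not in gamma(predicted)
          </preformat>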
          <p>Similarly to what is discussed for MIL, multiple single-instance explainability can rely on
ensembles to generate explanations for the graphs in 𝒮. As shown in Figure 2b, we want to
have an explanation of all the instances considered individually and not as a group, as happens in
MIL. In this regard, in the ensemble we might have at least the same number (𝑘) of explanations
as the number of elements in 𝒮. These explanations then need to be summarized together to
produce a single one that explains the entire set 𝒮, as shown before. Notice that the ensemble
mechanism might also follow a multi-level fashion where several summarization phases take
place between two consecutive levels, as in [31].</p>
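          <p>The following sketch outlines one possible single-level version of this ensemble, assuming base explainers that return one candidate counterfactual per input graph and a user-supplied distance function; the summarization step shown here simply selects the valid candidate with the smallest average distance to the inputs, and is only an illustration of the pipeline described above.</p>
          <preformat>
# Possible ensemble pipeline for multiple single-instance explainability:
# each base explainer produces one counterfactual per input graph, and a
# summarization step reduces the pool to a single explanation that is valid
# for the whole set S in the sense of Definition 3.3 (all names illustrative).

def msie_ensemble(S, phi, base_explainers, distance, gamma=set):
    # pool of candidates: one per (base explainer, input graph) pair
    pool = [explainer(G, phi) for explainer in base_explainers for G in S]
    predicted = set(phi(G) for G in S)
    # keep only candidates satisfying the condition of Definition 3.3
    valid = [c for c in pool if phi(c) not in gamma(predicted)]
    if not valid:
        return None
    # summarization phase: pick the candidate closest, on average, to the inputs
    return min(valid, key=lambda c: sum(distance(G, c) for G in S) / len(S))
          </preformat>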
        </sec>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Conclusion</title>
      <p>Counterfactual explanation techniques can suffer from issues such as a lack of stability of
the results with respect to the input, and undesired variance in the results. Ensembles of
counterfactual explainers have been proposed to tackle these problems [24]. However, no
similar approach has been used for counterfactual explanations in the graph domain. In this
work, we proposed to use ensembles of GCEs to provide new types of explanations and improve
existing ones.</p>
      <p>We provided a generic pipeline for producing single-instance GCEs using an ensemble of
explainers. In addition to the classic voting approach for selecting the best base explainer,
we introduced the idea of using an aggregation function that combines multiple base GCEs
into a final one. Furthermore, we analyzed the multi-instance GCE problem for the first time.
There, we devised two approaches: Multi-Instance explainability and Multiple Single-Instance
explainability. Moreover, we proposed an ensemble-based pipeline to implement the second
approach and tackle the multi-instance GCE problem.</p>
      <p>In general, our position is that GCE ensembles can be used to solve many of the issues faced by
the existing explanation methods. In future works, we will develop and test ensemble-based GCE
methods to tackle the different problems presented in this work. Furthermore, we will analyze
in more detail the most convenient way of designing the aggregation functions presented in
our proposed pipelines.</p>
    </sec>
    <sec id="sec-5">
      <title>Acknowledgments</title>
      <p>This work is partially supported by Territori Aperti, a project funded by Fondo Territori Lavoro
e Conoscenza CGIL CISL UIL, and by the SoBigData-PlusPlus H2020-INFRAIA-2019-1 EU project,
contract number 871042.</p>
    </sec>
    <sec id="sec-6">
      <title>References</title>
      <p>[5] M. A. Prado-Romero, G. Stilo, GRETEL: Graph counterfactual explanation evaluation framework, in: Proc. of the 31st ACM Int. Conf. on Information and Knowledge Management, Association for Computing Machinery, 2022.</p>
      <p>[6] O. Sagi, L. Rokach, Ensemble learning: A survey, Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery 8 (2018) e1249.</p>
      <p>[7] L. Breiman, Bagging predictors, Machine Learning 24 (1996) 123–140.</p>
      <p>[8] R. E. Schapire, The strength of weak learnability, Machine Learning 5 (1990) 197–227.</p>
      <p>[9] D. H. Wolpert, Stacked generalization, Neural Networks 5 (1992) 241–259.</p>
      <p>[10] A. Campagner, D. Ciucci, F. Cabitza, Aggregation models in ensemble learning: A large-scale comparison, Information Fusion (2022).</p>
      <p>[11] E. Fiesler, Neural network classification and formalization, Computer Standards &amp; Interfaces 16 (1994) 231–239.</p>
      <p>[12] D. S. Fisher, Random walks in random environments, Physical Review A 30 (1984) 960.</p>
      <p>[13] J. Gilmer, S. S. Schoenholz, P. F. Riley, O. Vinyals, G. E. Dahl, Neural message passing for quantum chemistry, in: D. Precup, Y. W. Teh (Eds.), Proceedings of the 34th International Conference on Machine Learning, ICML 2017, Sydney, NSW, Australia, 6-11 August 2017, volume 70 of Proceedings of Machine Learning Research, PMLR, 2017, pp. 1263–1272.</p>
      <p>[14] T. N. Kipf, M. Welling, Semi-supervised classification with graph convolutional networks, in: 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings, OpenReview.net, 2017.</p>
      <p>[15] S. Zhang, H. Tong, J. Xu, R. Maciejewski, Graph convolutional networks: a comprehensive review, Computational Social Networks 6 (2019) 1–23.</p>
      <p>[16] European Commission, On artificial intelligence—a European approach to excellence and trust, 2020.</p>
      <p>[17] R. Guidotti, Counterfactual explanations and how to find them: literature review and benchmarking, Data Mining and Knowledge Discovery (2022) 1–55.</p>
      <p>[18] U. Alon, Network motifs: theory and experimental approaches, Nature Reviews Genetics 8 (2007) 450–461.</p>
      <p>[19] H. Wu, W. Chen, S. Xu, B. Xu, Counterfactual supporting facts extraction for explainable medical record based diagnosis with graph network, in: Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2021, pp. 1942–1955.</p>
      <p>[20] A. Lucic, M. A. Ter Hoeve, G. Tolomei, M. De Rijke, F. Silvestri, CF-GNNExplainer: Counterfactual explanations for graph neural networks, in: International Conference on Artificial Intelligence and Statistics, PMLR, 2022, pp. 4499–4511.</p>
      <p>[21] G. P. Wellawatte, A. Seshadri, A. D. White, Model agnostic generation of counterfactual explanations for molecules, Chemical Science 13 (2022) 3697–3705.</p>
      <p>[22] D. Numeroso, D. Bacciu, Explaining deep graph networks with molecular counterfactuals, arXiv preprint arXiv:2011.05134 (2020).</p>
      <p>[23] M. Bajaj, L. Chu, Z. Y. Xue, J. Pei, L. Wang, P. C.-H. Lam, Y. Zhang, Robust counterfactual explanations on graph neural networks, Advances in Neural Information Processing Systems 34 (2021).</p>
      <p>[24] R. Guidotti, S. Ruggieri, Ensemble of counterfactual explainers, in: International Conference on Discovery Science, Springer, 2021, pp. 358–368.</p>
      <p>[25] S. Bobek, P. Bałaga, G. J. Nalepa, Towards model-agnostic ensemble explanations, in: International Conference on Computational Science, Springer, 2021, pp. 39–51.</p>
      <p>[26] T. G. Dietterich, R. H. Lathrop, T. Lozano-Pérez, Solving the multiple instance problem with axis-parallel rectangles, Artificial Intelligence 89 (1997) 31–71.</p>
      <p>[27] T. Komárek, J. Brabec, P. Somol, Explainable multiple instance learning with instance selection randomized trees, in: Joint European Conference on Machine Learning and Knowledge Discovery in Databases, Springer, 2021, pp. 715–730.</p>
      <p>[28] S. Lehrer, Inhaled biguanides and mTOR inhibition for influenza and coronavirus, World Academy of Sciences Journal 2 (2020) 1–1.</p>
      <p>[29] J. Kindrachuk, B. Ork, S. Mazur, M. R. Holbrook, M. B. Frieman, D. Traynor, R. F. Johnson, J. Dyall, J. H. Kuhn, G. G. Olinger, et al., Antiviral potential of ERK/MAPK and PI3K/AKT/mTOR signaling modulation for Middle East respiratory syndrome coronavirus infection as identified by temporal kinome analysis, Antimicrobial Agents and Chemotherapy 59 (2015) 1088–1099.</p>
      <p>[30] R. Mathur, C. J. Alexander, J. Yano, B. Trivax, R. Azziz, Use of metformin in polycystic ovary syndrome, American Journal of Obstetrics and Gynecology 199 (2008) 596–609.</p>
      <p>[31] U. K. Singh, M. Jamei, M. Karbasi, A. Malik, M. Pandey, Application of a modern multi-level ensemble approach for the estimation of critical shear stress in cohesive sediment mixture, Journal of Hydrology 607 (2022) 127549.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>R.</given-names>
            <surname>Guidotti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Monreale</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Ruggieri</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Turini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Giannotti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Pedreschi</surname>
          </string-name>
          ,
          <article-title>A survey of methods for explaining black box models</article-title>
          ,
          <source>ACM Computing Surveys (CSUR) 51</source>
          (
          <year>2018</year>
          )
          <fpage>1</fpage>
          -
          <lpage>42</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>F.</given-names>
            <surname>Scarselli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Gori</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. C.</given-names>
            <surname>Tsoi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Hagenbuchner</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Monfardini</surname>
          </string-name>
          ,
          <article-title>The graph neural network model</article-title>
          ,
          <source>IEEE transactions on neural networks 20</source>
          (
          <year>2008</year>
          )
          <fpage>61</fpage>
          -
          <lpage>80</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>D.</given-names>
            <surname>Numeroso</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Bacciu</surname>
          </string-name>
          ,
          <article-title>MEG: Generating molecular counterfactual explanations for deep graph networks</article-title>
          ,
          <source>in: 2021 International Joint Conference on Neural Networks (IJCNN)</source>
          , IEEE,
          <year>2021</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>8</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>C.</given-names>
            <surname>Abrate</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Bonchi</surname>
          </string-name>
          ,
          <article-title>Counterfactual graphs for explainable classification of brain networks</article-title>
          ,
          <source>in: Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery &amp; Data Mining</source>
          ,
          <year>2021</year>
          , pp.
          <fpage>2495</fpage>
          -
          <lpage>2504</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>