<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>X (M. Fontanesi);</journal-title>
      </journal-title-group>
      <issn pub-type="ppub">1613-0073</issn>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>through Explainable AI</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Michele Fontanesi</string-name>
          <email>michele.fontanesi@phd.unipi.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Alessio Micheli</string-name>
          <email>alessio.micheli@unipi.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Marco Podda</string-name>
          <email>marco.podda@unipi.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>University of Pisa, Department of Computer Science</institution>
          ,
          <addr-line>Largo B. Pontecorvo 3, 56127 Pisa</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2024</year>
      </pub-date>
      <volume>000</volume>
      <fpage>0</fpage>
      <lpage>0001</lpage>
      <abstract>
<p>The field of Explainable Artificial Intelligence (XAI) for Deep Graph Networks (DGNs) collects methods to study the learned correlation between the input graphs and their labels. The extracted information is then provided as an explanation to increase the user's trust in the system's response. However, the purpose of these techniques extends beyond the search for explanations. In this short abstract, we provide an overview of some research directions that stem from the field of XAI for DGNs, contextualizing their relevance for the fields of XAI and DGNs and their pertinence to the Ph.D. program. Then, we provide further details on the main concepts behind a methodological approach, based on XAI techniques, to study the inductive biases of diverse DGN variants performing graph classification tasks, while offering a synopsis of the acquired findings.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
Graphs are complex data structures comprising entities, or vertices, associated pairwise through
relationships that are modeled as edges. As vertices and edges may carry any type of semantics,
graphs are a very flexible modeling approach, but their non-Euclidean structure makes them hard to
process and study. Deep Graph Networks [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], pioneered by [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] and [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ], are currently the most powerful,
versatile, and promising approach to solve classification as well as regression tasks on graph data [
        <xref ref-type="bibr" rid="ref4 ref5 ref6">4, 5, 6</xref>
        ].
However, DGNs are still far from being a human-centered approach as the logic behind their responses
is hidden in the learned parameters. This lack of transparency hinders their trustworthiness [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ], and
consequently, their adoption, as understanding an autonomous system’s response has become imperative
[
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. To this end, the field of Explainable AI (XAI) [
        <xref ref-type="bibr" rid="ref9">9</xref>
] has been founded, and a huge research effort
has been put into developing techniques able to highlight the correlations learned by Neural Networks,
including DGNs [
        <xref ref-type="bibr" rid="ref10">10</xref>
], between the input data and the target labels. The retrieved information is then
mainly used to craft an explanation of the reasons behind the model’s outcome. However, the
importance of XAI techniques for DGNs goes beyond the sole objective of explaining to the final user, as
it is possible to identify further purposes for these methods and, consequently, diverse research directions
across the fields of XAI and DGNs. These directions are all aligned with the objectives of the National
Ph.D. in Artificial Intelligence for Society, as any advancement in the fields of XAI or DGNs would either
close the gap between humans and AI or provide humans with better tools to address and understand
complex problems. We outline these research directions in Section 2, highlighting their relevance
and impact in the fields of XAI and DGNs. Further details are provided for the research direction
that is currently under active investigation. For this latter one, the background to the methodological
approach is outlined in Section 3, while the methodology itself is summarised in Section 4. In Section 5,
we introduce the preliminary results, while in Section 6 we discuss future research activities.
      </p>
    </sec>
    <sec id="sec-2">
<title>2. Research Directions</title>
<p>
        XAI for knowledge extraction. A DGN able to successfully solve a task can be seen as a possible
source of information from which to derive new knowledge concerning the problem it has learned to solve [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ].
To this end, multiple and different XAI techniques can be applied with the purpose of retrieving
the meaningful patterns that a DGN has learned to associate with a particular class [11]. Among
these patterns, some may reveal novel insights into the problem at hand. This line of work addresses
the following research questions: are different XAI techniques converging to the same or similar
explanations? Are the retrieved explanations meaningful for acquiring new knowledge?
      </p>
      <p>
        XAI-based architectures. Most XAI techniques for DGNs are model-independent approaches capable
of analyzing a trained DGN at test time (post-hoc) [12, 13]. However, an interesting, promising, and
challenging research direction is to find architectural design principles that make DGNs easier to
analyze for different tasks and at different levels of granularity: input-output-wise, layer-wise, and
unit-wise. A direct consequence of such a design is a direct coupling between the
architecture and the extracted information, which increases the reliability of the analysis and therefore
the trust in the retrieved explanation. This line of work addresses the following research question:
can we retrieve more meaningful explanations by introducing the explainability requirement into the
design of a DGN approach?
      </p>
      <p>
        XAI for model analysis and improvement. Post-hoc, model-agnostic techniques [12, 13] may be
used to study the behavior of different DGN models on a given task and to identify differences and
potential shortcomings of each architecture. The acquired knowledge could be exploited to understand
which models are better at solving a given task and to learn why some architectures achieve better
performance. This line of work addresses the following research questions: are feedforward [14],
constructive [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], and recursive [15] architectures solving tasks based on the same input patterns? Can
we understand which DGN variants are better suited to solve a task based on the input graph properties?
      </p>
<p>This research direction stems from the observation that XAI techniques can be used as model
inspection tools to analyze the diverse inductive biases characterizing different types of DGNs. Inductive
biases are the set of assumptions used by a DGN to perform predictions on unknown inputs;
consequently, their characterization is of the utmost importance to select the model that best aligns
with the particular learning task to solve. In this regard, we have demonstrated that XAI methodologies
can be utilized to discern the class assignment policy induced by the inductive biases and learned by each
type of DGN to associate a graph with its target class. Specifically, we investigated the inductive biases
of recursive and convolutional DGNs in graph classification tasks. This was achieved by comparing the
explanations generated by XAI techniques with the ground truth (GT) explanations associated with
each valid policy. Results highlighted (i) the existence of diverse class assignment policies for three
XAI graph classification benchmarks [16, 17, 18], (ii) the capability of recursive and convolutional
DGNs to learn different policies [18], (iii) the effect of using multiple layers on the inductive bias of
convolutional DGNs [19], and (iv) the alignment of recursive and convolutional DGN explanations with
the values of Katz centrality and of the Fiedler eigenvector, respectively [20]. From the XAI side, characterizing
the inductive biases of diverse DGNs may increase the trust in these systems, as we identify the problem
specifics that diverse DGNs can leverage to solve a task. From an ML point of view, linking the aspects
learned by diverse DGN variants with their specific formulation may be beneficial to developing more
performant and efficient models.</p>
    </sec>
    <sec id="sec-3">
      <title>3. Background</title>
<p>
        Deep Graph Networks. A DGN is a parameterized function capable of learning a mapping between
input graphs g ∈ 𝒢 and their associated classes y ∈ 𝒴 following the message passing (MP) paradigm: a
procedure that iteratively updates the node embeddings h_v ∈ ℝ^d (the vectorial information associated with
each node), starting from the initial node feature vectors x_v ∈ ℝ^f. MP is a blueprint defined at the node
level as follows:
h_v^(ℓ+1) = Upd(h_v^(ℓ), Agg({Msg(h_v^(ℓ), h_u^(ℓ)) ∣ u ∈ 𝒩(v)})), (1)
where the Msg function computes a message between every node and each of its neighbors; Agg summarizes
all the messages received by each node in a permutation-invariant fashion; and Upd combines the
current node embedding and the aggregated messages to generate a novel embedding for each
node. DGN characteristics are determined by their specific implementation of the MP blueprint.
Across the set of experiments, we studied the convolutional DGN variants GIN (Graph Isomorphism
Network) [21], GC (GraphConv) [22], and PNA (Principal Neighbourhood Aggregation) [23] and,
as a recursive variant, GESN (Graph Echo State Network) [15]. Each variant features a pooling
operator that generates a single graph vector, based on which each model outputs the target class probabilities.
XAI attribution methods. Across the experiments summarized in this short abstract, we used a local
post-hoc XAI technique able to associate an importance score with each node in a graph in the form of a
mask m̂ ∈ ℝ^{|V_g|}, with |V_g| the cardinality of the set of nodes of graph g. Among the many methods that
exist in the literature [12, 13], we employed CAM [24], as we observed that it was able to compute more
stable explanations than GNNExplainer [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] or Integrated Gradients [25].
      </p>
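<p>As a minimal illustration of the MP blueprint of Eq. (1), the following sketch implements one update step with an assumed sum-based Msg/Agg and an averaging Upd; it is a didactic example under these assumptions, not the exact update rule of any of the cited DGN variants:</p>

```python
import numpy as np

def message_passing_step(H, adj):
    """One MP step: Msg = the neighbor's embedding, Agg = permutation-invariant
    sum over neighbors, Upd = average of current embedding and aggregate."""
    # Agg: row v of adj @ H holds the sum of embeddings of v's neighbors
    M = adj @ H
    # Upd: combine each node's current embedding with its aggregated messages
    return 0.5 * (H + M)

# Toy graph: a path 0 - 1 - 2 with 2-dimensional node features
adj = np.array([[0, 1, 0],
                [1, 0, 1],
                [0, 1, 0]], dtype=float)
H0 = np.eye(3, 2)  # initial node feature vectors x_v in R^2
H1 = message_passing_step(H0, adj)
```

<p>Stacking ℓ such steps gives each node an ℓ-hop receptive field; this architectural depth is the quantity whose effect on the inductive bias is discussed in Section 5.</p>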
<p>Graph centrality and connectivity notions. The Katz centrality [26] values are higher for nodes
that have many other well-connected nodes in their neighborhood. As a consequence, the notion of
Katz centrality is well suited to detect an inductive bias that leads DGNs to solve graph classification
tasks based on low-order graph structures, such as isolated nodes with a high degree. However, a DGN
may also base its predictions on higher-order structural information, such as the presence of certain
subgraphs. To identify this second type of inductive bias through node scores, we used the values of
the Fiedler eigenvector [27], whose signs are usually used to cut the graph into two communities.</p>
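<p>Both node-score notions can be computed with standard tooling; the sketch below uses networkx on an assumed toy graph (a hub with three leaves plus a short tail) to show how Katz centrality singles out the high-degree hub, while the signs of the Fiedler eigenvector induce a two-community cut:</p>

```python
import networkx as nx
import numpy as np

# Assumed toy graph: hub node 0 with three neighbors, plus a short tail 3-4-5
G = nx.Graph([(0, 1), (0, 2), (0, 3), (3, 4), (4, 5)])

# Katz centrality: higher for nodes whose neighborhoods contain many
# well-connected nodes (suited to detect low-order, degree-based biases)
katz = nx.katz_centrality(G, alpha=0.1)
hub = max(katz, key=katz.get)  # the high-degree hub dominates the scores

# Fiedler eigenvector: eigenvector of the second-smallest eigenvalue of the
# graph Laplacian; its signs are commonly used to split the graph in two
fiedler = np.asarray(nx.fiedler_vector(G))
communities = np.sign(fiedler) > 0  # boolean two-community assignment
```

<p>Correlating a DGN's node importance scores with either node-score vector is what distinguishes the low-order from the higher-order inductive bias in our analysis.</p>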
    </sec>
    <sec id="sec-4">
      <title>4. Method</title>
<p>Our methodology is based on multiple XAI graph classification datasets of the form
𝒟 = {(g, y_g, E_g) ∣ g ∈ 𝒢, y_g ∈ 𝒴}, where graphs g are associated to target classes y_g as well as to sets
of ground truth explanations E_g = {e_g^π ∈ {0, 1}^{|V_g|} ∣ π ∈ Π}, collecting a diverse ground truth (GT)
for each class assignment policy π in the set Π. In particular, a GT explanation is a binary vector that
encodes the relevance (1) or irrelevance (0) of each node to the graph prediction, depending on the
associated class assignment policy π. To identify the policy learned by each DGN variant (trained with
cross-validation), we computed the explanations for each test sample and quantified their adherence
to the available GTs with the plausibility score [28] (AUROC). Then, we identified the policy learned
by a DGN as the one associated with the GT that maximizes the average plausibility score across the
test set samples. Last, we computed the average Pearson Correlation Coefficient between the
explanation importance scores and the Katz centrality and Fiedler values to identify whether the
inductive biases of diverse DGNs focus on low- or high-order graph structures, respectively.</p>
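<p>The two quantitative steps of this procedure can be sketched as follows; the node importance scores, GT mask, and centrality values below are hypothetical toy numbers, with scikit-learn and scipy providing the AUROC and Pearson computations:</p>

```python
import numpy as np
from sklearn.metrics import roc_auc_score
from scipy.stats import pearsonr

# Hypothetical explanation scores for the 5 nodes of one test graph
scores = np.array([0.9, 0.8, 0.1, 0.2, 0.7])

# Binary GT explanation encoding node (ir)relevance under one policy
gt_policy = np.array([1, 1, 0, 0, 1])

# Plausibility of the explanation w.r.t. this policy's GT (AUROC);
# the policy whose GT maximizes the average score is the one the DGN learned
plausibility = roc_auc_score(gt_policy, scores)

# Alignment with a node-score notion (hypothetical Katz centrality values):
# a high Pearson correlation suggests a low-order, degree-driven bias
katz_values = np.array([0.95, 0.85, 0.15, 0.25, 0.65])
corr, _ = pearsonr(scores, katz_values)
```

<p>In the full procedure, both quantities are averaged over all test-set graphs before comparing candidate policies and node-score notions.</p>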
    </sec>
    <sec id="sec-5">
      <title>5. Preliminary Results</title>
<p>First, we discovered the existence of diverse class assignment policies for the XAI graph classification
datasets BA2Motif [17], BA2grid [16], and GridHouse [16]. In particular, we found that the correct
graph class could be predicted either by looking at the presence of a motif or by identifying the nodes
with a degree greater than or equal to three. In Figure 1 we provide as an example the GT explanations
associated with the motif-based assignment policy and the degree-based assignment policy.</p>
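<p>The degree-based GT explanation admits a direct construction; the sketch below assumes, as in the text, that relevant nodes are exactly those with degree greater than or equal to three (the house-plus-tail graph is a hypothetical example in the spirit of these benchmarks):</p>

```python
import networkx as nx
import numpy as np

# Hypothetical graph: a 5-node house motif (5-cycle plus one chord)
# attached to a 2-edge tail
G = nx.Graph([(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (0, 3),  # house
              (4, 5), (5, 6)])                                 # tail

# Degree-based GT explanation: node v is relevant (1) iff deg(v) >= 3
gt_degree = np.array([1 if G.degree(v) >= 3 else 0
                      for v in sorted(G.nodes)])
# Nodes 0, 3, and 4 (each of degree 3) are marked relevant
```

<p>Comparing such per-policy GT vectors against an explainer's node scores via the plausibility metric is how the learned policy is identified in Section 4.</p>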
<p>Then, through the computation of the plausibility metric, we found that GESN and PNA are
characterized by a strong inductive bias that leads them to always learn the degree-based policy. GC and
GIN, instead, were capable of learning a different policy depending on the number of layers of their
architecture and the local minima reached by the optimization procedure, as shown in Figure 2.</p>
<p>Last, we computed the average Pearson correlation coefficients between the explanation scores of
the various types of DGNs and the Katz centrality and Fiedler values. The obtained results highlighted
the better alignment of the Katz centrality and Fiedler values with the explanation scores computed
for the recursive and convolutional DGN variants, respectively.</p>
    </sec>
    <sec id="sec-6">
      <title>6. Discussion and future research activities</title>
      <p>
        In this short abstract, we introduced some research directions related to the fields of XAI and DGNs
while focusing on the one that is currently under investigation. For this latter research direction, we
summarized our contributions to the DGN and XAI fields. In particular, we found that (i) simple
graph classification tasks can feature multiple class assignment policies as viable solutions, (ii) recursive
and convolutional DGNs feature diverse inductive biases that lead them to learn a preferred class
assignment policy, (iii) the learned policy is influenced by the number of layers of the architecture and,
in some cases, by the training procedure, and (iv) inductive biases may be grounded in known
concepts of graph theory such as the Katz centrality and the Fiedler values. Studying and characterizing
the inductive biases of DGNs impacts both the fields of XAI and DGNs. From the XAI perspective,
increasing the knowledge about the inductive biases and, consequently, the generalization capabilities
of different DGN variants leads to a more conscious and trusting application of these methods to
different tasks. Moreover, the discovery of multiple sound GTs raises warnings about the benchmarking
processes of XAI attribution methods, as lower performance may be due to the usage of the wrong
GT. From the DGN perspective, instead, understanding the association between the inductive biases
and the MP variants may uncover opportunities to create novel models. In addition, from a practical
perspective, our results may be of use to practitioners in selecting the DGN variant that best aligns with
the characteristics of the task they want to solve. As future research directions, we plan to perform
extensive experiments by (i) increasing the number of tested DGNs including spectral [29], constructive
[
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], and other convolutional variants [30], (ii) increasing the number of tested explainers including
generative [31] and factual/counterfactual approaches [32], and (iii) increasing the number of tested
datasets possibly featuring non-synthetic graphs. In particular, extending results to more DGN variants
would facilitate the discovery and characterization of additional opportunities to solve and generalize
on graph-related tasks. Increasing the number of explainers would help to explore and compare
different explanations. Finally, adopting real-world datasets would help in understanding the DGNs’
and explainers’ behaviors outside controlled synthetic environments. However, the required datasets
should feature ground truth explanations to check whether a DGN coupled with a particular explainer
is aligned with the problem characteristics. We plan to find graphs with associated GTs by exploiting
the knowledge already developed in bioinformatics and chemistry. Alternatively, the field of business
process optimization can provide graphs modeling concurrent and interacting procedures, with GTs
retrieved from the knowledge developed in the field. We also expect that achievements along this
research direction may become opportunities to start investigating the direction of “XAI for knowledge
extraction” in the fields of bioinformatics and chemistry and the direction of “XAI-based architecture”
by exploiting the knowledge acquired on the tested explainers and inductive biases of DGNs.
      </p>
    </sec>
    <sec id="sec-7">
      <title>Acknowledgments</title>
<p>Research partly funded by PNRR - M4C2 - Investimento 1.3, Partenariato Esteso PE00000013 - “FAIR
- Future Artificial Intelligence Research” - Spoke 1 “Human-centered AI”, funded by the European
Commission under the NextGenerationEU programme.</p>
    </sec>
    <sec id="sec-8">
      <title>References</title>
      <p>[11] L. C. Magister, D. Kazhdan, V. Singh, P. Liò, Gcexplainer: Human-in-the-loop concept-based
explanations for graph neural networks, arXiv preprint arXiv:2107.11889 (2021).</p>
      <p>[12] H. Yuan, H. Yu, S. Gui, S. Ji, Explainability in graph neural networks: A taxonomic survey, IEEE
Transactions on Pattern Analysis &amp; Machine Intelligence 45 (2023) 5782–5799. doi:10.1109/TPAMI.2022.3204236.</p>
      <p>[13] J. Kakkad, J. Jannu, K. Sharma, C. Aggarwal, S. Medya, A survey on explainability of graph neural
networks, arXiv preprint arXiv:2306.01958 (2023).</p>
      <p>[14] Z. Wu, S. Pan, F. Chen, G. Long, C. Zhang, P. S. Yu, A comprehensive survey on graph neural
networks, IEEE Transactions on Neural Networks and Learning Systems 32 (2021) 4–24. doi:10.1109/TNNLS.2020.2978386.</p>
      <p>[15] C. Gallicchio, A. Micheli, Graph echo state networks, in: The 2010 International Joint Conference
on Neural Networks (IJCNN), 2010, pp. 1–8. doi:10.1109/IJCNN.2010.5596796.</p>
      <p>[16] A. Longa, S. Azzolin, G. Santin, G. Cencetti, P. Liò, B. Lepri, A. Passerini, Explaining the explainers
in graph neural networks: a comparative study, arXiv preprint arXiv:2210.15304 (2022).</p>
      <p>[17] D. Luo, W. Cheng, D. Xu, W. Yu, B. Zong, H. Chen, X. Zhang, Parameterized explainer for graph
neural network, Advances in Neural Information Processing Systems 33 (2020) 19620–19631.</p>
      <p>[18] M. Fontanesi, A. Micheli, M. Podda, XAI and bias of deep graph networks, in: Proceedings of the
32nd European Symposium on Artificial Neural Networks, Computational Intelligence and Machine
Learning (ESANN 2024), 2024. URL: https://doi.org/10.14428/esann/2024.ES2024-85.</p>
      <p>[19] M. Fontanesi, A. Micheli, M. Podda, Relating explanations with the inductive biases of deep graph
networks, in: Accepted at AIxIA 2024 and under publication in the AIxIA Springer LNAI, 2024.</p>
      <p>[20] M. Fontanesi, A. Micheli, M. Podda, D. Tortorella, Analyzing explanations of DGNs through node
centrality and connectivity, in: Accepted at Discovery Science 2024 and under publication in the
Discovery Science Springer LNCS, 2024.</p>
      <p>[21] K. Xu, W. Hu, J. Leskovec, S. Jegelka, How powerful are graph neural networks?, arXiv preprint
arXiv:1810.00826 (2018).</p>
      <p>[22] C. Morris, M. Ritzert, M. Fey, W. L. Hamilton, J. E. Lenssen, et al., Weisfeiler and Leman go neural:
Higher-order graph neural networks, Proceedings of the AAAI Conference on Artificial Intelligence 33
(2019). doi:10.1609/aaai.v33i01.33014602.</p>
      <p>[23] G. Corso, L. Cavalleri, D. Beaini, P. Liò, P. Veličković, Principal neighbourhood aggregation for
graph nets, Advances in Neural Information Processing Systems 33 (2020).</p>
      <p>[24] P. E. Pope, S. Kolouri, M. Rostami, C. E. Martin, H. Hoffmann, Explainability methods for graph
convolutional neural networks, in: 2019 IEEE/CVF Conference on Computer Vision and Pattern
Recognition (CVPR), 2019, pp. 10764–10773. doi:10.1109/CVPR.2019.01103.</p>
      <p>[25] M. Sundararajan, A. Taly, Q. Yan, Axiomatic attribution for deep networks, in: International
Conference on Machine Learning, PMLR, 2017, pp. 3319–3328.</p>
      <p>[26] L. Katz, A new status index derived from sociometric analysis, Psychometrika 18 (1953) 39–43.
doi:10.1007/BF02289026.</p>
      <p>[27] M. Fiedler, Algebraic connectivity of graphs, Czechoslovak Mathematical Journal 23 (1973)
298–305. doi:10.21136/CMJ.1973.101168.</p>
      <p>[28] M. Rathee, T. Funke, A. Anand, M. Khosla, Bagel: A benchmark for assessing graph neural network
explanations, arXiv preprint arXiv:2206.13983 (2022) 1–20.</p>
      <p>[29] M. Defferrard, X. Bresson, P. Vandergheynst, Convolutional neural networks on graphs with fast
localized spectral filtering, Advances in Neural Information Processing Systems 29 (2016).</p>
      <p>[30] P. Veličković, G. Cucurull, A. Casanova, A. Romero, P. Liò, Y. Bengio, et al., Graph attention
networks, arXiv preprint arXiv:1710.10903 (2017).</p>
      <p>[31] W. Lin, H. Lan, B. Li, Generative causal explanations for graph neural networks, in: International
Conference on Machine Learning, PMLR, 2021, pp. 6666–6679.</p>
      <p>[32] J. Tan, S. Geng, Z. Fu, Y. Ge, S. Xu, Y. Li, Y. Zhang, Learning and evaluating graph neural network
explanations based on counterfactual and factual reasoning, in: Proceedings of the ACM Web
Conference 2022, 2022, pp. 1018–1027.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>D.</given-names>
            <surname>Bacciu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Errica</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Micheli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Podda</surname>
          </string-name>
          ,
          <article-title>A gentle introduction to deep learning for graphs</article-title>
          ,
          <source>Neural Networks</source>
          <volume>129</volume>
          (
          <year>2020</year>
          )
          <fpage>203</fpage>
          -
          <lpage>221</lpage>
          . URL: https://www.sciencedirect.com/science/article/pii/S0893608020302197. doi:10.1016/j.neunet.2020.06.006.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>A.</given-names>
            <surname>Micheli</surname>
          </string-name>
          ,
          <article-title>Neural network for graphs: A contextual constructive approach</article-title>
          ,
          <source>IEEE Transactions on Neural Networks</source>
          <volume>20</volume>
          (
          <year>2009</year>
          )
          <fpage>498</fpage>
          -
          <lpage>511</lpage>
          . doi:10.1109/TNN.2008.2010350.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>F.</given-names>
            <surname>Scarselli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Gori</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. C.</given-names>
            <surname>Tsoi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Hagenbuchner</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Monfardini</surname>
          </string-name>
          ,
          <article-title>The graph neural network model</article-title>
          ,
          <source>IEEE Transactions on Neural Networks</source>
          <volume>20</volume>
          (
          <year>2009</year>
          )
          <fpage>61</fpage>
          -
          <lpage>80</lpage>
          . doi:10.1109/TNN.2008.2005605.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>S.</given-names>
            <surname>Yan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Xiong</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Lin</surname>
          </string-name>
          ,
          <article-title>Spatial temporal graph convolutional networks for skeleton-based action recognition</article-title>
          ,
          <source>in: Proceedings of the AAAI conference on artificial intelligence</source>
          , volume
          <volume>32</volume>
          ,
          <year>2018</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>A.</given-names>
            <surname>Derrow-Pinion</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>She</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Wong</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Lange</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Hester</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Perez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Nunkesser</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Guo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Wiltshire</surname>
          </string-name>
          , et al.,
          <article-title>Eta prediction with graph neural networks in google maps</article-title>
          ,
          <source>in: Proceedings of the 30th ACM International Conference on Information &amp; Knowledge Management</source>
          ,
          <year>2021</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>M.</given-names>
            <surname>Fontanesi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Micheli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Milazzo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Podda</surname>
          </string-name>
          ,
          <article-title>Exploiting the structure of biochemical pathways to investigate dynamical properties with neural networks for graphs</article-title>
          ,
          <source>Bioinformatics</source>
          <volume>39</volume>
          (
          <year>2023</year>
          )
          <fpage>btad678</fpage>
          . doi:10.1093/bioinformatics/btad678.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>L.</given-names>
            <surname>Oneto</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Navarin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Biggio</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Errica</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Micheli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Scarselli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Bianchini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Demetrio</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Bongini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Tacchella</surname>
          </string-name>
          , et al.,
          <article-title>Towards learning trustworthily, automatically, and with guarantees on graphs: An overview</article-title>
          ,
          <source>Neurocomputing</source>
          <volume>493</volume>
          (
          <year>2022</year>
          )
          <fpage>217</fpage>
          -
          <lpage>243</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>F.</given-names>
            <surname>Bodria</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Giannotti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Guidotti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Naretto</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Pedreschi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Rinzivillo</surname>
          </string-name>
          ,
          <article-title>Benchmarking and survey of explanation methods for black box models</article-title>
          ,
          <source>Data Mining and Knowledge Discovery</source>
          (
          <year>2023</year>
          )
          <fpage>1</fpage>
          -
          <lpage>60</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>D.</given-names>
            <surname>Gunning</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Aha</surname>
          </string-name>
          ,
          <article-title>Darpa's explainable artificial intelligence (xai) program</article-title>
          ,
          <source>AI Magazine</source>
          <volume>40</volume>
          (
          <year>2019</year>
          )
          <fpage>44</fpage>
          -
          <lpage>58</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>Z.</given-names>
            <surname>Ying</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Bourgeois</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>You</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Zitnik</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Leskovec</surname>
          </string-name>
          ,
          <article-title>Gnnexplainer: Generating explanations for graph neural networks</article-title>
          ,
          <source>Advances in neural information processing systems</source>
          <volume>32</volume>
          (
          <year>2019</year>
          ).
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>