Privacy and transparency in graph machine learning: A
unified perspective
Megha Khosla
Delft University of Technology, Delft, The Netherlands


                                          Abstract
                                          Graph Machine Learning (GraphML), whereby classical machine learning is generalized to irregular graph domains, has
                                          enjoyed a recent renaissance, leading to a dizzying array of models and their applications in several domains. With its
                                          growing applicability to sensitive domains and regulations by governmental agencies for trustworthy AI systems, researchers
                                          have started looking into the issues of transparency and privacy of graph learning. However, these topics have been
                                          mainly investigated independently. In this position paper, we provide a unified perspective on the interplay of privacy and
                                          transparency in GraphML. In particular, we describe the challenges and possible research directions for a formal investigation
                                          of privacy-transparency tradeoffs in GraphML.

                                          Keywords
                                          Graph machine learning, Graph neural networks, Privacy-preserving machine learning, Interpretability/Explainability in
                                          machine learning, Post-hoc explainability, Privacy-transparency tradeoffs



1. Introduction

Graphs are a highly informative, flexible, and natural way to represent data. Graph-based machine learning (GraphML), whereby classical machine learning is generalized to irregular graph domains, has enjoyed a recent renaissance, leading to a dizzying array of models and their applications in several fields [1, 2, 3, 4, 5]. GraphML models owe much of their success to their ability to flexibly learn from the complex interplay of graph structure and node attributes/features. This ability, however, comes with a compromise in privacy and transparency, two indispensable ingredients of trustworthy ML [6].

Deep models trained on graph data are inherently black box, and their decisions are difficult for humans to understand and interpret. The growing application of these models in sensitive areas like healthcare and finance, and the regulations issued by various AI governance frameworks, necessitate transparency in their decision-making process. Meanwhile, recent research [7, 8, 9, 10] has highlighted the privacy risks of deploying models trained on graph data. It has been suggested that these models are even more vulnerable to privacy leakage than models trained on non-graph data due to the additional encoding of relational structure in the model itself [7].

Consequently, an increasing number of works focus on explaining the decisions of black box GraphML models in a post-hoc manner [11, 12, 13, 14], on designing interpretable models [15, 16, 17], and on privacy-preserving techniques for real-world deployments of graph models [18, 19, 20].

Despite the growing research interest, the current state of the art considers privacy and transparency in GraphML independently. While transparency provides insight into the model's working, privacy aims to protect sensitive information about the training data¹. The seemingly conflicting goals of privacy and transparency call for a joint investigation. To date, any gain in privacy or transparency is usually weighed only against the drop in model performance. However, questions like "what effect would releasing post-hoc explanations have on the privacy of the training data?" or "how well can we interpret the decisions of privacy-preserving graph models?" have so far received little attention [21, 22].

In this position paper, we provide a unified perspective on the inextricable link between privacy and transparency in GraphML. In addition, we sketch possible research directions towards formally exploring privacy-transparency tradeoffs in GraphML.

¹ Here we are only concerned with data privacy. Model privacy, i.e., protecting the model itself against, for example, the stealing of model parameters, is out of the scope of this paper.


2. Background

2.1. Graph Machine Learning

The key idea in graph machine learning is to encode the discrete graph structure into low-dimensional continuous vector representations using non-linear dimensionality reduction techniques.
[Figure 1 here. The figure contrasts privacy attacks against a GraphML model (membership inference: was Bob part of the training data?; link inference: who are Bob's friends?; attribute inference: does Bob smoke?) with transparency, i.e., explaining a decision such as Bob's loan denial in terms of important features and connections, and with model effectiveness on the unseen test set.]

Figure 1: Privacy and transparency are usually studied together with their effect on model performance, but the trade-offs between privacy and transparency have so far been ignored. Can transparency increase the risk of privacy leakage? How transparent are privacy-preserving models?



Popular classes of GraphML methods include random walk based strategies [23, 24], which encode the structural similarity of nodes as exposed by their co-occurrence in random walks; matrix-factorization based methods [25], which rely on a low-rank factorization of some node similarity matrix; and the most popular class, graph neural networks (GNNs) [26, 27], which learn node representations by recursive aggregation and transformation of neighborhood features. These methods are usually non-transparent and have been shown to be prone to privacy leakage.
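To make the last class concrete, a generic message-passing GNN layer can be written as follows; this is a simplified template of the recursive neighborhood aggregation described above, not the exact update rule of any particular model cited here:

\[
\mathbf{h}_v^{(k)} = \sigma\Big(\mathbf{W}^{(k)} \cdot \mathrm{AGG}\big(\{\mathbf{h}_u^{(k-1)} : u \in \mathcal{N}(v) \cup \{v\}\}\big)\Big),
\]

where \(\mathbf{h}_v^{(0)}\) is the feature vector of node \(v\), \(\mathcal{N}(v)\) its neighborhood, \(\mathrm{AGG}\) a permutation-invariant aggregator (e.g., mean or sum), \(\mathbf{W}^{(k)}\) a learnable weight matrix, and \(\sigma\) a non-linearity. After \(K\) layers, \(\mathbf{h}_v^{(K)}\) serves as the representation of node \(v\).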
Towards improving the adoption of these methods in sensitive applications like healthcare and medicine, the community has started paying attention to the aspects of transparency and privacy. However, these aspects have so far been studied independently (see also Figure 1 for an illustration). A formal investigation into the linked role of transparency and privacy in achieving trustworthy GraphML is missing.

2.2. Transparency for GraphML Models

Transparency for deep models, including GraphML models, is usually achieved by providing explanations corresponding to the decisions of an already trained model or by building interpretable-by-design, self-explaining models. Numerous approaches have been proposed in the literature for explaining general machine learning models [28, 29, 30, 31]; however, models learned over graph-structured data pose some unique challenges.

Specifically, predictions on graphs are induced by a complex combination of nodes and paths of edges between them, in addition to the node features. A trivial application of existing explainability methods to graph models cannot account for the role of the graph structure in the model decision. Consequently, several graph-specific explainability approaches have recently been developed, which focus primarily on explaining the decisions of graph neural networks for node and graph classification [32, 33].

Explanations usually include importance scores for the nodes/edges in a subgraph (or in the node's neighborhood in the case of node-level tasks) and for the node features [11, 12, 13]. Figure 2 depicts an example of an explanation over graph data. Depending on the explanation method, the importance scores can be either continuous (soft masks) or binary (hard masks). A few works have also been proposed to explain dense unsupervised node representations [34, 35]. In terms of methodologies, techniques based on input perturbations [11, 12, 13], input gradients [36, 37], causal reasoning [34, 38, 33], as well as simpler surrogate models [14] have been explored.
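As a minimal illustration of the two mask types, the sketch below turns hypothetical soft importance scores for edges and features into hard masks by thresholding. The scores and the threshold are made up for illustration and do not correspond to the output of any specific explainer cited above.

```python
import numpy as np

def harden(soft_mask: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Convert a soft (continuous) importance mask into a hard (binary) one."""
    return (soft_mask >= threshold).astype(int)

# Hypothetical soft masks produced by some post-hoc explainer for one node's prediction.
edge_importance = np.array([0.91, 0.08, 0.67, 0.02])   # one score per neighborhood edge
feature_importance = np.array([0.75, 0.10, 0.55])      # one score per node feature

print("hard edge mask:   ", harden(edge_importance))     # -> [1 0 1 0]
print("hard feature mask:", harden(feature_importance))  # -> [1 0 1]
```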
Another way to provide transparency is to develop interpretable-by-design models [15, 16, 39]. Such models usually contain a self-explanatory module trained jointly with the learner module. Explanations are thus, by design, faithful to the model.

A few other works focus on unifying the diverse evaluation strategies [40, 37] necessary for effectively assessing the quality and utility of explanations.
[Figure 2 here. Model decision: Bob should not get the loan. Example explanation: the features (income, age) and the neighboring nodes (Bob's colleague Tom and a close friend of Tom, marked green) with the highest importance scores for the prediction.]

Figure 2: An example explanation in terms of feature and node attribution over a social network in which nodes represent users and edges represent friendship relations. Node features correspond to demographic attributes of the user. Neighboring nodes with high importance scores are marked green.



Despite the progress in improving the transparency of GraphML techniques, its effect on data privacy has escaped attention. While transparency could increase the utility of the model, for sensitive applications any unaddressed privacy concerns can hinder the full adoption of these models and further dissuade participants from sharing their data.

2.3. Privacy in GraphML

Deep learning models, in general, are known to leak private information about the data they were trained on. Recent works have shown that models trained on graph data can leak sensitive information about the training data (see Figure 3), such as node membership [7, 8], certain dataset properties [41], and the connectivity structure of nodes [9]. Figure 3 illustrates the different privacy attacks that become possible given access to a trained GraphML model. Compared to general deep learning models, GraphML models are more vulnerable to privacy risks as they incorporate not only the node features/labels but also the graph structure [7].
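As a simple illustration of how membership can leak, the sketch below implements a generic loss-threshold membership test: samples on which the model's loss is below a threshold are guessed to be training members, exploiting the fact that models tend to fit training samples better. This is a textbook-style attack on an arbitrary classifier, shown only to convey the intuition; it is not the graph-specific attack of [7] or [8], and all numbers are fabricated.

```python
import numpy as np

def loss_threshold_membership_attack(losses: np.ndarray, threshold: float) -> np.ndarray:
    """Guess 'member' (1) for samples whose per-sample loss falls below the threshold."""
    return (losses < threshold).astype(int)

# Hypothetical per-node cross-entropy losses obtained by querying a trained model.
losses_train_nodes = np.array([0.05, 0.12, 0.03, 0.20])   # nodes that were in the training graph
losses_unseen_nodes = np.array([0.90, 1.40, 0.35, 2.10])  # nodes that were not

threshold = 0.3  # e.g., estimated from data the adversary already knows
print(loss_threshold_membership_attack(losses_train_nodes, threshold))   # -> [1 1 1 1]
print(loss_threshold_membership_attack(losses_unseen_nodes, threshold))  # -> [0 0 0 0]
```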
Privacy-preserving techniques for graph models are mainly based on differential privacy [42, 7, 19, 20] and adversarial training frameworks [43, 44, 45]. The key idea of differential privacy [46] is to conceal the presence of any single individual in the dataset: if we query a dataset containing N individuals, the query's result will be probabilistically indistinguishable from the result of querying a neighboring dataset with one less or one more individual. For machine learning models, such probabilistic indistinguishability is achieved by adding appropriate levels of noise at different stages of model development.
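For concreteness, the standard (ε, δ)-differential privacy guarantee underlying such mechanisms can be written as follows (this is the general definition, not a graph-specific variant):

\[
\Pr[\mathcal{M}(D) \in S] \;\le\; e^{\varepsilon}\,\Pr[\mathcal{M}(D') \in S] + \delta
\]

for all sets of outputs \(S\) and all neighboring datasets \(D, D'\) differing in a single individual, where \(\mathcal{M}\) is the randomized training (or query) mechanism and smaller \(\varepsilon, \delta\) mean stronger privacy. For graph data, the notion of "neighboring" itself must be chosen, e.g., datasets differing in one node or in one edge, which gives rise to node-level and edge-level privacy.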
For instance, [42] employs an objective perturbation mechanism to develop differentially private network embeddings. Olatunji et al. [7] combine a knowledge-distillation framework with two noise mechanisms, random subsampling and noisy labeling, to release graph neural networks under differential privacy guarantees. In particular, only a random sample of the private data is used to train teacher models corresponding to the nodes of an unlabelled public dataset; the final model, which is later released, is trained on the public data using the noisy labels generated by the teacher models. Other works [20, 19] do not build a separate public model but achieve differential privacy by adding noise directly to the aggregation module of the GNN. An adversarial defence against privacy attacks on GNNs is proposed in [43], which destroys the predictability of private labels while maintaining the utility of the perturbed graphs. An adversarial learning approach based on a mini-max game between the desired graph feature encoder and the worst-case attacker is proposed in [44] to counter attribute inference attacks on GNNs.
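As a toy sketch of the aggregation-perturbation idea, the snippet below clips each neighbor's contribution and adds Gaussian noise to the neighborhood sum before the GNN update. The clipping bound and noise scale are arbitrary here; the actual mechanisms and privacy accounting of [19, 20] are considerably more involved, and this snippet by itself carries no formal guarantee.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_sum_aggregate(neighbor_feats: np.ndarray, clip_norm: float, noise_std: float) -> np.ndarray:
    """Clip each neighbor's feature vector, sum them, and add Gaussian noise.

    Clipping bounds each neighbor's contribution (its sensitivity); the noise then
    hides the presence or absence of any single neighbor in the aggregate."""
    norms = np.linalg.norm(neighbor_feats, axis=1, keepdims=True)
    scale = np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    clipped = neighbor_feats * scale
    aggregate = clipped.sum(axis=0)
    return aggregate + rng.normal(0.0, noise_std, size=aggregate.shape)

# Hypothetical features of one node's neighbors (4 neighbors, 3-dimensional features).
neighbors = rng.normal(size=(4, 3))
print(noisy_sum_aggregate(neighbors, clip_norm=1.0, noise_std=0.5))
```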
Despite the growing number of works on improving privacy in GraphML, their effect on the transparency of these models has not been studied at all. The complex mechanisms employed to ensure privacy further hurt model transparency. Consequently, it is not clear whether existing explainers can be used to explain the decision-making process of privacy-preserving models.

3. A Unified Perspective

Graphs are powerful abstractions that facilitate leveraging data interconnections to represent, predict, and explain real-world phenomena. Exploiting such explicit or latent data interconnections makes GraphML more powerful on the one hand, but also brings additional challenges, further exacerbating the need for a joint investigation of privacy and transparency.
[Figure 3 here. An adversary with access to a model (or embeddings) trained on graph data tries to infer private information about Bob: node membership inference (is Bob part of the training data?), relation reconstruction (who are Bob's friends?), and attribute inference (does Bob smoke?).]

Figure 3: Given access to a model or embeddings trained on graph data, an adversary can launch several attacks to infer the membership, relations, or attributes of a node.



In the following, we discuss the key issues arising from the independent treatment of privacy and transparency in GraphML.

3.1. Diverse explanation types and methods

Model explanations for graph data usually take the form of feature and neighborhood (subgraph) attributions. In particular, importance scores for node features and for a node's neighboring nodes/edges are released as explanations. Neighborhood attributions, or structure explanations, are a more direct form of information leakage. They can, for example, be leveraged to identify nodes in the training set or to infer hidden attributes of sensitive nodes from the attributes of their neighbors.

Besides, the data points (nodes) in graph data are correlated, violating the usual i.i.d. assumption on the data distribution. Consequently, the decisions and explanations for correlated nodes might themselves be correlated. Such correlations among released explanations can be exploited to reconstruct sensitive information about the training data. For example, the similarity of the feature explanations for recommendations given to two connected users might reveal sensitive link information they want to hide. Along these lines, [22] shows that the link structure of the training graph can be reconstructed with a high success rate even if only the feature explanations are available.
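A minimal sketch of this intuition is given below: it predicts a link between two nodes whenever their feature-explanation vectors are sufficiently similar. The explanation vectors and the similarity threshold are fabricated for illustration; the actual attacks in [22] are more sophisticated.

```python
import numpy as np

def reconstruct_links(explanations: np.ndarray, threshold: float = 0.9) -> list[tuple[int, int]]:
    """Predict an edge (i, j) whenever the cosine similarity of the two nodes'
    feature-explanation vectors exceeds the threshold."""
    normed = explanations / np.linalg.norm(explanations, axis=1, keepdims=True)
    sims = normed @ normed.T
    n = len(explanations)
    return [(i, j) for i in range(n) for j in range(i + 1, n) if sims[i, j] > threshold]

# Hypothetical per-node feature-explanation vectors released by an explainer.
expl = np.array([
    [0.9, 0.1, 0.0],   # node 0
    [0.8, 0.2, 0.1],   # node 1 (similar explanation to node 0 -> predicted neighbor)
    [0.0, 0.1, 0.9],   # node 2
])
print(reconstruct_links(expl))  # -> [(0, 1)]
```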
3.2. Transparency of private models

Due to the correlated nature of graph data, privacy-preserving mechanisms for graph models need to address several aspects, such as node privacy, edge privacy, and attribute privacy [20]. This leads to more complex privacy-preserving mechanisms, which results in a further loss of transparency. To understand the issue, consider a simple differential-privacy mechanism in which randomized noise is added to the model's output. Such noise can alter the final decision but not the decision process that an explanation (according to its current definition) is usually expected to reveal. Model-agnostic approaches to explainability, which only assume black-box access to the trained model, might be misled by such an alteration of the final decision.
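The sketch below illustrates this point with made-up weights and inputs: perturbing the released output can flip the decision, while the model's internal parameters, which an explanation of the decision process is meant to expose, remain untouched by the noise.

```python
import numpy as np

rng = np.random.default_rng(3)

weights = np.array([[2.0, -1.0], [0.5, 1.5]])  # the model's (fixed) decision process
x = np.array([1.0, 0.2])                       # features of one input node

clean_logits = weights @ x                                    # -> [1.8, 0.8]: class 0 wins
noisy_logits = clean_logits + rng.laplace(scale=1.5, size=2)  # output perturbation

print("clean decision:", int(np.argmax(clean_logits)))  # 0
print("noisy decision:", int(np.argmax(noisy_logits)))  # may differ from 0,
# yet `weights` is unchanged, so the underlying decision process is not what changed.
```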
3.3. The curse of overfitting

In traditional machine learning, we can randomly split the data into two parts to obtain training and test sets. This is trickier for graphs, where the data points are connected and random sampling may result in non-i.i.d. train and test sets. Even for graph classification, where whole graphs rather than nodes constitute the data points, distributional changes between train and test splits are common [47] due to varying graph structure and size. Specifically, the train set may contain spurious correlations that are not representative of the entire dataset. This puts GraphML models at a higher risk of overfitting to sample-specific correlations rather than learning the desired general patterns [48]. Existing privacy attacks have leveraged overfitting to reveal sensitive information about the training sample [49]. Exploiting the associated explanations, which in principle should reveal the learned spurious correlations, can further aid privacy leakage.
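The following sketch, on a small hand-made graph, shows why a random node split does not yield independent train and test sets: many test nodes remain directly connected to train nodes, so information flows across the split through the graph structure.

```python
import numpy as np

rng = np.random.default_rng(1)

# A small hand-made graph given as an edge list over 8 nodes.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 6), (6, 7), (7, 0), (1, 5), (2, 6)]
nodes = np.arange(8)

# Random 50/50 node split, as one would do for i.i.d. tabular data.
train_nodes = set(rng.choice(nodes, size=4, replace=False).tolist())
test_nodes = set(nodes.tolist()) - train_nodes

# Count edges that cross the split: each such edge couples a test node to a train node.
crossing = [(u, v) for (u, v) in edges if (u in train_nodes) != (v in train_nodes)]
print(f"{len(crossing)} of {len(edges)} edges connect a train node to a test node:", crossing)
```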
4. Research Directions

Based on the issues and challenges described in the previous section, we recommend the following research directions towards a formal investigation of privacy-transparency tradeoffs.

1. New Threat Models. A first step is to quantify the privacy risks of releasing post-hoc explanations. Towards that, we need to design new threat models and structure-aware privacy attacks in the presence of post-hoc model explanations. Care should be taken to formulate realistic assumptions about the adversary's background knowledge. For example, in highly homophilic graphs, an adversary might already be able to approximate the link structure of the graph well if the node features/labels are available. What additional information could leak when explanations are also provided?

2. Risk-utility assessment of different explanation types and methods. Model explanations for GraphML can take the form of feature or node/edge importance scores. Moreover, existing explanation methods are based on different methodologies and might uncover different aspects of the model's decision process. Depending on the dataset and application, certain explanation methods and explanation types (feature or structural) might be preferred over others. A dataset- and application-specific risk-utility assessment might reveal explanations that are more favorable for minimizing privacy loss. For instance, [22] finds that gradient-based feature explanations have the least predictive power (faithfulness to the model) for the task of node classification but leak the most information about the private structure of the training graph. In such cases, one can decide not to release such an explanation, as it has little utility for the user.

3. Transparency of privacy-preserving models. Besides evaluating the privacy risks of releasing explanations, it is essential to analyze the transparency of privacy-preserving techniques. It is not clear whether existing explanation strategies can faithfully explain the decisions of privacy-preserving models. Questions like "what should be the properties of explanations of such models?" and "what constitutes a faithful explanation?" need to be investigated. Consequently, new techniques to explain privacy-preserving models need to be developed.

4. Reducing overfitting. Overfitting is usually considered a common enemy of both model effectiveness on unseen data and privacy. Recently, a few works have proposed interpretable-by-design models, for example using stochastic attention mechanisms [39] or graph sparsification strategies [16]. These methods are claimed to remove spurious correlations in the training phase, leading to a reduction in overfitting. A possible research direction is to further exploit such transparency strategies to minimize privacy leakage.

5. Conclusion

There has been an unprecedented rise in the popularity of graph machine learning in recent years. With its growing application in sensitive areas, several works have focused, independently, on its transparency and privacy aspects. We provide a unified perspective on the need for a joint investigation of privacy and transparency in GraphML. We hope to start a discussion and foster future research on quantifying and resolving the privacy-transparency tradeoffs in GraphML. Resolving these tradeoffs would make GraphML more accessible to stakeholders currently held back by regulatory concerns and a lack of trust in the solutions.

References

[1] T. Gaudelet, B. Day, A. R. Jamasb, J. Soman, C. Regep, G. Liu, J. B. R. Hayter, R. Vickers, C. Roberts, J. Tang, D. Roblin, T. L. Blundell, M. M. Bronstein, J. P. Taylor-King, Utilizing graph machine learning within drug discovery and development, Briefings in Bioinformatics 22 (2021). doi:10.1093/bib/bbab159.
[2] T. N. Dong, S. Mucke, M. Khosla, Mucomid: A multitask graph convolutional learning framework for mirna-disease association prediction, IEEE/ACM Transactions on Computational Biology and Bioinformatics (2022).
[3] R. Ying, R. He, K. Chen, P. Eksombatchai, W. L. Hamilton, J. Leskovec, Graph convolutional neural networks for web-scale recommender systems, in: Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '18, ACM, 2018, pp. 974–983.
[4] A. Sanchez-Gonzalez, J. Godwin, T. Pfaff, R. Ying, J. Leskovec, P. Battaglia, Learning to simulate complex physics with graph networks, in: International Conference on Machine Learning, PMLR, 2020, pp. 8459–8468.
[5] T. N. Dong, S. Johanna, S. Mucke, M. Khosla, A message passing framework with multiple data integration for mirna-disease association prediction, Scientific Reports (2022). doi:10.1038/s41598-022-20529-5.
[6] E. Dai, T. Zhao, H. Zhu, J. Xu, Z. Guo, H. Liu, J. Tang, S. Wang, A comprehensive survey on trustworthy graph neural networks: Privacy, robustness, fairness, and explainability, arXiv preprint arXiv:2204.08570 (2022).
[7] I. E. Olatunji, W. Nejdl, M. Khosla, Membership inference attack on graph neural networks, in: 2021 IEEE International Conference on Trust, Privacy and Security in Intelligent Systems and Applications (TPS-ISA), IEEE Computer Society, Los Alamitos, CA, USA, 2021, pp. 11–20.
[8] V. Duddu, A. Boutet, V. Shejwalkar, Quantifying privacy leakage in graph embedding, in: MobiQuitous 2020 - 17th EAI International Conference on Mobile and Ubiquitous Systems: Computing, Networking and Services, 2020, pp. 76–85.
[9] Z. Zhang, Q. Liu, Z. Huang, H. Wang, C. Lu, C. Liu, E. Chen, Graphmi: Extracting private graph data from graph neural networks, in: Z.-H. Zhou (Ed.), Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence, IJCAI-21, 2021, pp. 3749–3755.
[10] X. He, J. Jia, M. Backes, N. Z. Gong, Y. Zhang, Stealing links from graph neural networks, in: 30th USENIX Security Symposium (USENIX Security 21), 2021, pp. 2669–2686.
[11] R. Ying, D. Bourgeois, J. You, et al., GNN explainer: A tool for post-hoc explanation of graph neural networks, Advances in Neural Information Processing Systems 32 (2019) 9240–9251.
[12] T. Funke, M. Khosla, M. Rathee, A. Anand, Zorro: Valid, sparse, and stable explanations in graph neural networks, IEEE Transactions on Knowledge and Data Engineering (2022) 1–12. doi:10.1109/TKDE.2022.3201170.
[13] D. Luo, W. Cheng, D. Xu, W. Yu, B. Zong, H. Chen, X. Zhang, Parameterized explainer for graph neural network, Advances in Neural Information Processing Systems 33 (2020).
[14] M. N. Vu, M. T. Thai, Pgm-explainer: Probabilistic graphical model explanations for graph neural networks, in: NeurIPS, 2020.
[15] J. Yu, T. Xu, Y. Rong, Y. Bian, J. Huang, R. He, Graph information bottleneck for subgraph recognition, arXiv preprint arXiv:2010.05563 (2020).
[16] M. Rathee, Z. Zhang, T. Funke, M. Khosla, A. Anand, Learnt sparsification for interpretable graph neural networks, arXiv preprint arXiv:2106.12920 (2021).
[17] Z. Zhang, Q. Liu, H. Wang, C. Lu, C. Lee, Protgnn: Towards self-explaining graph neural networks, arXiv preprint arXiv:2112.00911 (2021).
[18] I. E. Olatunji, T. Funke, M. Khosla, Releasing graph neural networks with differential privacy guarantees, arXiv preprint arXiv:2109.08907 (2021).
[19] S. Sajadmanesh, D. Gatica-Perez, Locally private graph neural networks, in: Proceedings of the 2021 ACM SIGSAC Conference on Computer and Communications Security, 2021, pp. 2130–2145.
[20] S. Sajadmanesh, A. S. Shamsabadi, A. Bellet, D. Gatica-Perez, Gap: Differentially private graph neural networks with aggregation perturbation, arXiv preprint arXiv:2203.00949 (2022).
[21] R. Shokri, M. Strobel, Y. Zick, On the privacy risks of model explanations, in: Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society, AIES '21, Association for Computing Machinery, New York, NY, USA, 2021, pp. 231–241. URL: https://doi.org/10.1145/3461702.3462533. doi:10.1145/3461702.3462533.
[22] I. E. Olatunji, M. Rathee, T. Funke, M. Khosla, Private graph extraction via feature explanations, in: Accepted for publication in 23rd Privacy Enhancing Technologies Symposium (PETS 2023), 2023. URL: https://arxiv.org/abs/2206.14724.
[23] B. Perozzi, R. Al-Rfou, S. Skiena, Deepwalk: Online learning of social representations, in: KDD, 2014.
[24] M. Khosla, J. Leonhardt, W. Nejdl, A. Anand, Node representation learning for directed graphs, in: ECML, 2019.
[25] C. Zhou, Y. Liu, X. Liu, Z. Liu, J. Gao, Scalable graph embedding for asymmetric proximity, in: AAAI, 2017, pp. 2942–2948.
[26] T. N. Kipf, M. Welling, Semi-supervised classification with graph convolutional networks, in: ICLR, 2017.
[27] W. L. Hamilton, R. Ying, J. Leskovec, Inductive representation learning on large graphs, in: NeurIPS, 2017.
[28] J. Chen, L. Song, M. J. Wainwright, M. I. Jordan, Learning to explain: An information-theoretic perspective on model interpretation, arXiv:1802.07814 (2018).
[29] J. Yoon, J. Jordon, M. van der Schaar, Invase: Instance-wise variable selection using neural networks, ICLR (2018).
[30] A. Binder, G. Montavon, S. Lapuschkin, K.-R. Müller, W. Samek, Layer-wise relevance propagation for neural networks with local renormalization layers, in: ICANN, 2016.
[31] M. Sundararajan, A. Taly, Q. Yan, Axiomatic attribution for deep networks, in: PMLR, 2017.
[32] H. Yuan, J. Tang, X. Hu, S. Ji, Xgnn: Towards model-level explanations of graph neural networks, in: SIGKDD, 2020.
[33] Y. Gao, T. Sun, R. Bhatt, D. Yu, S. Hong, L. Zhao, Gnes: Learning to explain graph neural networks, in: ICDM, 2021.
[34] B. Kang, J. Lijffijt, T. De Bie, Explaine: An approach for explaining network embedding-based link predictions, arXiv:1904.12694 (2019).
[35] M. Idahl, M. Khosla, A. Anand, Finding interpretable concept spaces in node embeddings using knowledge bases, in: Workshops of ECML PKDD, 2019.
[36] P. E. Pope, S. Kolouri, M. Rostami, et al., Explainability methods for graph convolutional neural networks, in: CVPR, 2019.
[37] B. Sanchez-Lengeling, J. Wei, B. Lee, E. Reif, P. Wang, W. W. Qian, K. McCloskey, L. Colwell, A. Wiltschko, Evaluating attribution for graph neural networks, NeurIPS (2020).
[38] M. Bajaj, L. Chu, Z. Y. Xue, J. Pei, L. Wang, P. C.-H. Lam, Y. Zhang, Robust counterfactual explanations on graph neural networks, Advances in Neural Information Processing Systems 34 (2021) 5644–5655.
[39] S. Miao, M. Liu, P. Li, Interpretable and generalizable graph learning via stochastic attention mechanism, arXiv preprint arXiv:2201.12987 (2022).
[40] M. Rathee, T. Funke, A. Anand, M. Khosla, Bagel: A benchmark for assessing graph neural network explanations, 2022. URL: https://arxiv.org/abs/2206.13983. doi:10.48550/ARXIV.2206.13983.
[41] Z. Zhang, M. Chen, M. Backes, Y. Shen, Y. Zhang, Inference attacks against graph neural networks, in: Proc. USENIX Security, 2022.
[42] D. Xu, S. Yuan, X. Wu, H. Phan, Dpne: Differentially private network embedding, in: Pacific-Asia Conference on Knowledge Discovery and Data Mining, Springer, 2018, pp. 235–246.
[43] I.-C. Hsieh, C.-T. Li, Netfense: Adversarial defenses against privacy attacks on neural networks for graph data, IEEE Transactions on Knowledge and Data Engineering (2021) 1–1. doi:10.1109/TKDE.2021.3087515.
[44] P. Liao, H. Zhao, K. Xu, T. Jaakkola, G. J. Gordon, S. Jegelka, R. Salakhutdinov, Information obfuscation of graph neural networks, in: International Conference on Machine Learning, PMLR, 2021, pp. 6600–6610.
[45] K. Li, G. Luo, Y. Ye, W. Li, S. Ji, Z. Cai, Adversarial privacy-preserving graph embedding against inference attack, IEEE Internet of Things Journal 8 (2020) 6904–6915.
[46] C. Dwork, F. McSherry, K. Nissim, A. Smith, Calibrating noise to sensitivity in private data analysis, in: Theory of Cryptography Conference, Springer, 2006, pp. 265–284.
[47] H. Li, X. Wang, Z. Zhang, W. Zhu, Out-of-distribution generalization on graphs: A survey, arXiv preprint arXiv:2202.07987 (2022).
[48] Q. Zhu, N. Ponomareva, J. Han, B. Perozzi, Shift-robust gnns: Overcoming the limitations of localized graph training data, Advances in Neural Information Processing Systems 34 (2021) 27965–27977.
[49] S. Yeom, I. Giacomelli, A. Menaged, M. Fredrikson, S. Jha, Overfitting, robustness, and malicious algorithms: A study of potential causes of privacy risk in machine learning, Journal of Computer Security 28 (2020) 35–70.