Towards Robust Fidelity for Evaluating Explainability of Graph Neural Networks (2310.01820v2)

Published 3 Oct 2023 in cs.LG

Abstract: Graph Neural Networks (GNNs) are neural models that leverage the dependency structure in graphical data via message passing among the graph nodes. GNNs have emerged as pivotal architectures for analyzing graph-structured data, and their expanding application in sensitive domains requires a comprehensive understanding of their decision-making processes, necessitating a framework for GNN explainability. An explanation function for GNNs takes a pre-trained GNN and a graph as input and produces a `sufficient statistic' subgraph with respect to the graph label. A main challenge in studying GNN explainability is providing fidelity measures that evaluate the performance of these explanation functions. This paper studies this foundational challenge, spotlighting the inherent limitations of prevailing fidelity metrics, including $Fid_+$, $Fid_-$, and $Fid_\Delta$. Specifically, a formal, information-theoretic definition of explainability is introduced, and it is shown that existing metrics often fail to align with this definition across various statistical scenarios. The failure stems from potential distribution shifts when subgraphs are removed in computing these fidelity measures. Subsequently, a robust class of fidelity measures is introduced and shown analytically to be resilient to distribution-shift issues and applicable in a wide range of scenarios. Extensive empirical analysis on both synthetic and real datasets illustrates that the proposed metrics are more coherent with gold-standard metrics. The source code is available at https://trustai4s-lab.github.io/fidelity.
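The abstract attributes the weakness of $Fid_+$, $Fid_-$, and $Fid_\Delta$ to distribution shift induced by removing or isolating subgraphs. For context, the sketch below shows how these conventional (non-robust) fidelity metrics are commonly computed in the GNN explainability literature, assuming the standard probability-based definitions. The `model` interface and the `drop_edges`/`keep_edges` helpers are hypothetical placeholders, not the paper's API, and this is not the robust variant the paper proposes.

```python
# Minimal sketch of the standard probability-based fidelity metrics (Fid+, Fid-, FidΔ)
# that the paper critiques. `model(graph)` is assumed to return a vector of class
# probabilities; `drop_edges`/`keep_edges` are hypothetical helpers that mask edges.
import numpy as np

def fidelity_scores(model, graphs, labels, explanations):
    """Average Fid+, Fid-, and FidΔ over a dataset of (graph, label, edge-mask) triples."""
    fid_plus, fid_minus = [], []
    for graph, y, mask in zip(graphs, labels, explanations):
        p_full = model(graph)[y]                      # true-class prob. on the full graph
        p_without = model(graph.drop_edges(mask))[y]  # explanation subgraph removed
        p_only = model(graph.keep_edges(mask))[y]     # explanation subgraph alone
        fid_plus.append(p_full - p_without)           # necessity: drop when explanation removed
        fid_minus.append(p_full - p_only)             # sufficiency: drop when only explanation kept
    fid_plus = float(np.mean(fid_plus))
    fid_minus = float(np.mean(fid_minus))
    return fid_plus, fid_minus, fid_plus - fid_minus  # FidΔ = Fid+ - Fid-
```

The distribution-shift issue highlighted in the abstract arises because the masked graphs fed to `model` inside this loop can lie far outside the training distribution, so a change in predicted probability may reflect the model's unfamiliarity with truncated graphs rather than the true importance of the explanation subgraph.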

Authors (8)
  1. Xu Zheng (88 papers)
  2. Farhad Shirani (45 papers)
  3. Tianchun Wang (19 papers)
  4. Wei Cheng (175 papers)
  5. Zhuomin Chen (10 papers)
  6. Haifeng Chen (99 papers)
  7. Hua Wei (71 papers)
  8. Dongsheng Luo (46 papers)
Citations (10)
