
On the Robustness of Post-hoc GNN Explainers to Label Noise (2309.01706v2)

Published 4 Sep 2023 in cs.LG and cs.AI

Abstract: Proposed as a solution to the inherent black-box limitations of graph neural networks (GNNs), post-hoc GNN explainers aim to provide precise and insightful explanations of the behaviours exhibited by trained GNNs. Despite their recent notable advancements in academic and industrial contexts, the robustness of post-hoc GNN explainers remains unexplored when confronted with label noise. To bridge this gap, we conduct a systematic empirical investigation to evaluate the efficacy of diverse post-hoc GNN explainers under varying degrees of label noise. Our results reveal several key insights: Firstly, post-hoc GNN explainers are susceptible to label perturbations. Secondly, even minor levels of label noise, inconsequential to GNN performance, harm the quality of generated explanations substantially. Lastly, we engage in a discourse regarding the progressive recovery of explanation effectiveness with escalating noise levels.

Citations (1)

Summary

  • The paper demonstrates that post-hoc GNN explainers lose effectiveness when even minor label noise is present.
  • It evaluates explanation quality using fidelity+ and fidelity- metrics across synthetic and real-world datasets.
  • Findings include a surprising recovery in performance beyond 50% noise, indicating the need for more robust explainer methods.

On the Robustness of Post-hoc GNN Explainers to Label Noise

The paper "On the Robustness of Post-hoc GNN Explainers to Label Noise" explores an important aspect of the interpretability of Graph Neural Networks (GNNs): the robustness of post-hoc GNN explainers when faced with label noise. In recent years, GNNs have been increasingly utilized for graph-structured data, drawing attention for their effectiveness in various fields but also for their opacity. Post-hoc explainers, such as GNNExplainer and PGExplainer, have been developed to address this limitation and facilitate insight into the decision-making process of GNNs. However, the robustness of these explainers under conditions of label noise, which is prevalent in real-world datasets, remains underexplored.

The paper poses two central research questions: (i) Are post-hoc GNN explainers robust to malicious label noise? (ii) Does the robustness of the GNN model itself guarantee effective explanations from post-hoc explainers? To answer them, the authors conduct a systematic empirical study of how label noise affects the explanations produced by post-hoc GNN explainers. The evaluation spans four datasets: two synthetic (BA-2motifs and BA-Multi-Shapes) and two real-world (MUTAG and Graph-Twitter).
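
The experiments vary the fraction of corrupted training labels. The precise noise model is not restated here, so the sketch below assumes symmetric (uniform) label flipping, a common choice in label-noise studies; the function and its parameters are illustrative.

```python
import numpy as np

def flip_labels(labels, noise_rate, n_classes, seed=0):
    """Symmetric label noise: each label is replaced, with probability
    `noise_rate`, by a different class drawn uniformly at random.
    (Assumed noise model, not necessarily the paper's exact one.)"""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels).copy()
    flip = rng.random(labels.shape[0]) < noise_rate
    for i in np.where(flip)[0]:
        choices = [c for c in range(n_classes) if c != labels[i]]
        labels[i] = rng.choice(choices)
    return labels

# Example: corrupt 20% of the training labels of a binary dataset,
# then retrain the GNN and its explainer on the noisy labels.
clean = np.array([0, 1, 1, 0, 1, 0, 0, 1])
noisy = flip_labels(clean, noise_rate=0.2, n_classes=2)
```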

The results demonstrate that post-hoc GNN explainers are indeed susceptible to label noise, which significantly decreases the quality of explanations even when the noise level is relatively minor. This sensitivity is noteworthy considering the robustness observed in GNN models themselves, suggesting that the stability of GNN models does not directly translate to the robustness of their explanations. A particularly surprising finding is that beyond a noise threshold of 50%, explanation effectiveness begins to recover despite increasing noise, hinting at a complex interaction between noise levels and feature recognition by GNN explainers.

The paper evaluates explanation quality with two key metrics, fidelity+ and fidelity-. Notably, the authors question the reliability of fidelity- in this context, suggesting that low fidelity- values may not indicate poor explanation quality when label noise is high.
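
For reference, one common formulation of these metrics in the GNN explainability literature (the paper's exact variant may differ) is

$$
\mathrm{Fidelity}^{+} = \frac{1}{N}\sum_{i=1}^{N}\Big( f(G_i)_{y_i} - f(G_i \setminus m_i)_{y_i} \Big),
\qquad
\mathrm{Fidelity}^{-} = \frac{1}{N}\sum_{i=1}^{N}\Big( f(G_i)_{y_i} - f(m_i)_{y_i} \Big),
$$

where $f(\cdot)_{y_i}$ is the trained GNN's predicted probability for label $y_i$, $m_i$ is the explanation subgraph for graph $G_i$, and $G_i \setminus m_i$ is the graph with the explanation removed. A faithful explanation yields high fidelity+ (the prediction degrades once the explanation is removed) and low fidelity- (the explanation alone preserves the prediction).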

These findings have broad implications. Practically, they suggest that caution should be exercised when deploying GNN explainers in domains where label noise is common. Methodologically, they raise questions about the metrics used to evaluate explanation robustness. Theoretically, the results highlight the need for more noise-resilient explainer methodologies. Proposed future directions include designing more robust post-hoc explainers, refining evaluation metrics, and developing larger-scale benchmark datasets to better study these phenomena.

Overall, this investigation provides critical insights into a previously underexplored area of GNN interpretability, emphasizing the necessity for further research to develop robust solutions capable of maintaining fidelity under noise. As AI models continue to be deployed in high-stakes environments, understanding the nuances of their interpretability and robustness will remain crucial.
