Uncertainty Quantification over Graph with Conformalized Graph Neural Networks (2305.14535v2)

Published 23 May 2023 in cs.LG and stat.ML

Abstract: Graph Neural Networks (GNNs) are powerful machine learning prediction models on graph-structured data. However, GNNs lack rigorous uncertainty estimates, limiting their reliable deployment in settings where the cost of errors is significant. We propose conformalized GNN (CF-GNN), extending conformal prediction (CP) to graph-based models for guaranteed uncertainty estimates. Given an entity in the graph, CF-GNN produces a prediction set/interval that provably contains the true label with pre-defined coverage probability (e.g. 90%). We establish a permutation invariance condition that enables the validity of CP on graph data and provide an exact characterization of the test-time coverage. Moreover, besides valid coverage, it is crucial to reduce the prediction set size/interval length for practical use. We observe a key connection between non-conformity scores and network structures, which motivates us to develop a topology-aware output correction model that learns to update the prediction and produces more efficient prediction sets/intervals. Extensive experiments show that CF-GNN achieves any pre-defined target marginal coverage while significantly reducing the prediction set/interval size by up to 74% over the baselines. It also empirically achieves satisfactory conditional coverage over various raw and network features.


Summary

  • The paper presents CF-GNN, a framework integrating conformal prediction with graph neural networks to provide guaranteed uncertainty quantification.
  • It introduces a topology-aware output correction model that refines predictions using network structure, reducing prediction set sizes by up to 74%.
  • Extensive experiments across 15 datasets demonstrate that CF-GNN achieves target marginal coverage and strong empirical conditional coverage in node classification and regression.

Uncertainty Quantification over Graph with Conformalized Graph Neural Networks

The paper "Uncertainty Quantification over Graph with Conformalized Graph Neural Networks" (2305.14535) introduces CF-GNN, a novel framework that extends conformal prediction to GNNs, enabling rigorous uncertainty quantification for graph-structured data. This approach provides prediction sets or intervals with a guaranteed coverage probability, addressing a critical gap in GNN deployment where the cost of errors is significant.

Addressing Exchangeability in Graph Data

A key contribution of the paper lies in establishing the validity of conformal prediction for graphs in transductive settings. The paper demonstrates that standard conformal prediction remains valid if the non-conformity score is invariant to the ordering of calibration and test samples, a condition readily satisfied by many GNN models. This permutation invariance enables the application of conformal prediction to GNNs without compromising statistical guarantees.
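The validity condition above underpins standard split conformal prediction, which the paper builds on. As a minimal sketch of that baseline procedure for node classification — using randomly generated softmax scores in place of a real GNN's outputs, with all data and variable names hypothetical — the calibration/quantile/set-construction steps look like this:

```python
import numpy as np

rng = np.random.default_rng(0)
n_calib, n_test, n_classes = 500, 200, 5

# Hypothetical softmax outputs for calibration nodes; labels are taken as
# the argmax purely to keep this toy example self-contained.
probs_calib = rng.dirichlet(np.ones(n_classes) * 0.5, size=n_calib)
labels_calib = probs_calib.argmax(axis=1)

# Non-conformity score: 1 minus the softmax probability of the true class.
scores = 1.0 - probs_calib[np.arange(n_calib), labels_calib]

# Quantile level with the finite-sample correction of split conformal prediction.
alpha = 0.1  # target 90% coverage
q_level = np.ceil((n_calib + 1) * (1 - alpha)) / n_calib
qhat = np.quantile(scores, q_level, method="higher")

# Prediction set for each test node: all classes whose score falls below qhat.
probs_test = rng.dirichlet(np.ones(n_classes) * 0.5, size=n_test)
pred_sets = (1.0 - probs_test) <= qhat  # boolean (n_test, n_classes) mask
avg_set_size = pred_sets.sum(axis=1).mean()
```

Under the paper's permutation-invariance condition, the same recipe applied to GNN outputs on a transductive graph retains the marginal coverage guarantee.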

Topology-Aware Output Correction Model

To enhance the efficiency of conformal prediction, the authors propose a topology-aware correction model that learns to update predictions based on network structure. This model leverages the observed correlation between non-conformity scores and network topology to refine predictions and reduce the size of prediction sets or the length of prediction intervals. The correction model is trained by minimizing a differentiable inefficiency loss that simulates the CP set sizes or interval lengths, aligning with the theoretical framework of graph exchangeability to ensure valid coverage guarantees.
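One way to see how a set-size objective can be made differentiable — a simplified illustration in the spirit of the paper's inefficiency loss, not its exact formulation — is to replace the hard membership test (score below the conformal threshold) with a sigmoid relaxation, so gradients can flow back into the correction model:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def soft_inefficiency(probs, qhat, tau=0.1):
    """Differentiable proxy for the average conformal set size.

    Each class contributes sigmoid((qhat - score) / tau) instead of a hard
    0/1 membership indicator; tau controls the sharpness of the relaxation.
    """
    scores = 1.0 - probs  # non-conformity score for every class
    membership = sigmoid((qhat - scores) / tau)
    return membership.sum(axis=1).mean()

# Hypothetical corrected softmax outputs for a batch of nodes.
rng = np.random.default_rng(1)
probs = rng.dirichlet(np.ones(3), size=100)
loss = soft_inefficiency(probs, qhat=0.8, tau=0.05)
```

Minimizing such a loss pushes the corrected predictions toward smaller sets at a fixed threshold, while the conformal calibration step performed afterward preserves the coverage guarantee.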

Empirical Validation and Performance

The paper presents extensive experimental results across 15 datasets for both node classification and regression tasks. CF-GNN consistently achieves pre-defined target marginal coverage, outperforming existing UQ methods that often fail to meet coverage guarantees. Furthermore, CF-GNN significantly reduces the prediction set sizes or interval lengths by up to 74% compared to direct application of conformal prediction to GNNs. The method also demonstrates strong empirical conditional coverage over various network features.

Implications and Future Directions

The CF-GNN framework offers a practical approach to uncertainty quantification in GNNs, providing statistically sound and efficient prediction sets or intervals. By addressing the challenges of exchangeability in graph data and incorporating topology-aware corrections, this research advances the reliable deployment of GNNs in critical applications. Future research directions include generalizing the inefficiency loss to other desirable CP properties such as robustness and conditional coverage, extensions to inductive settings or to transductive settings with non-random splits, and extensions to other graph tasks such as link prediction and community detection.
