Learning on Graphs under Label Noise (2306.08194v1)
Abstract: Node classification on graphs is a significant task with a wide range of applications, including social analysis and anomaly detection. Although graph neural networks (GNNs) have produced promising results on this task, current techniques often presume that node labels are accurate, which may not hold in real-world applications. To tackle this issue, we investigate the problem of learning on graphs with label noise and develop a novel approach dubbed Consistent Graph Neural Network (CGNN). Specifically, we employ graph contrastive learning as a regularization term, which encourages two augmented views of a node to have consistent representations. Since this regularization term does not use label information, it enhances the robustness of node representations to label noise. Moreover, to detect noisy labels on the graph, we present a sample selection technique based on the homophily assumption, which identifies noisy nodes by measuring the consistency between their labels and those of their neighbors. Finally, we purify the detected noisy labels to enable effective semantic graph learning. Extensive experiments on three well-known benchmark datasets demonstrate the superiority of our CGNN over competing approaches.
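The homophily-based sample selection described above can be illustrated with a minimal sketch: under the homophily assumption, a labeled node whose label disagrees with most of its labeled neighbors is a candidate noisy node. The function name, the dict-based graph representation, and the agreement threshold below are illustrative assumptions, not the paper's exact procedure (which operates on learned representations and confidence scores).

```python
def flag_noisy_labels(adj, labels, threshold=0.5):
    """Flag labeled nodes whose label disagrees with most labeled neighbors.

    adj: dict mapping node -> list of neighbor nodes
    labels: dict mapping node -> observed (possibly noisy) label
    threshold: minimum fraction of agreeing labeled neighbors required
    Note: a simplified stand-in for CGNN's selection step, for illustration only.
    """
    noisy = set()
    for node, label in labels.items():
        # Only labeled neighbors can vote on the node's label.
        neighbor_labels = [labels[n] for n in adj.get(node, []) if n in labels]
        if not neighbor_labels:
            continue  # no labeled neighbors: cannot judge consistency
        agree = sum(1 for nl in neighbor_labels if nl == label)
        if agree / len(neighbor_labels) < threshold:
            noisy.add(node)
    return noisy

# Toy graph: node 3 is labeled 1, but all of its neighbors are labeled 0,
# so under homophily its label is flagged as likely noisy.
adj = {0: [1, 2, 3], 1: [0, 2], 2: [0, 1, 3], 3: [0, 2]}
labels = {0: 0, 1: 0, 2: 0, 3: 1}
print(flag_noisy_labels(adj, labels))  # → {3}
```

In the full method, flagged labels are not simply discarded but purified (e.g., replaced with a more plausible label) so the corrected supervision can still drive semantic graph learning.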