
Graph Ranking Contrastive Learning: An Extremely Simple yet Efficient Method (2310.14525v2)

Published 23 Oct 2023 in cs.LG and cs.AI

Abstract: Graph contrastive learning (GCL) has emerged as a representative graph self-supervised method and has achieved significant success. The prevalent optimization objective for GCL is InfoNCE. Typically, augmentation techniques produce two views of the graph: a node in one view acts as the anchor, the corresponding node in the other view serves as the positive sample, and all other nodes are treated as negative samples. The goal is to minimize the distance between the anchor and its positive samples and to maximize the distance to negative samples. However, because no label information is available during training, InfoNCE inevitably treats samples from the same class as negatives, giving rise to false negative samples. This can impair the learned node representations and in turn hinder performance on downstream tasks. While numerous methods have been proposed to mitigate the impact of false negatives, they still face various challenges. For instance, increasing the number of negative samples can dilute the impact of false negatives, but it also increases the computational burden. We therefore propose GraphRank, a simple yet efficient graph contrastive learning method that sidesteps the false negative problem by redefining what counts as a negative sample. The effectiveness of GraphRank is validated empirically on node-, edge-, and graph-level tasks.
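
For readers who want the objective in concrete form, below is a minimal PyTorch sketch of the two-view InfoNCE loss described in the abstract, in the style used by GRACE-like GCL methods. It is illustrative only: the function name, tensor shapes, and the temperature `tau` are our assumptions, and it implements the standard loss, not GraphRank's ranking-based objective.

```python
import torch
import torch.nn.functional as F

def info_nce(z1: torch.Tensor, z2: torch.Tensor, tau: float = 0.5) -> torch.Tensor:
    """Two-view InfoNCE over node embeddings (illustrative sketch).

    z1, z2: [N, d] embeddings of the same N nodes under two augmentations.
    Node i in z1 is the anchor, node i in z2 its positive; every other
    node, in either view, is treated as a negative -- including nodes of
    the anchor's own class, which is the false-negative problem.
    """
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    sim_cross = torch.exp(z1 @ z2.t() / tau)  # anchor vs. the other view
    sim_intra = torch.exp(z1 @ z1.t() / tau)  # anchor vs. its own view
    pos = sim_cross.diag()                    # corresponding-node pairs
    # Denominator: the positive plus every negative; the anchor's
    # similarity to itself (intra-view diagonal) is excluded.
    denom = sim_cross.sum(dim=1) + sim_intra.sum(dim=1) - sim_intra.diag()
    return -torch.log(pos / denom).mean()

# Toy usage: 8 nodes, 16-dim embeddings from two augmented views.
z1, z2 = torch.randn(8, 16), torch.randn(8, 16)
loss = info_nce(z1, z2)
```

Note how every off-diagonal similarity enters the denominator: nodes that share the anchor's class are pushed away exactly as if they were dissimilar, which is the false-negative effect GraphRank is designed to avoid.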

