
Accelerating Scalable Graph Neural Network Inference with Node-Adaptive Propagation (2310.10998v2)

Published 17 Oct 2023 in cs.LG and cs.AI

Abstract: Graph neural networks (GNNs) have exhibited exceptional efficacy in a diverse array of applications. However, the sheer size of large-scale graphs presents a significant challenge to real-time inference with GNNs. Although existing Scalable GNNs leverage linear propagation to preprocess features and accelerate training and inference, these methods still suffer from scalability issues when making inferences on unseen nodes, since the feature preprocessing requires the graph to be known and fixed. To further accelerate Scalable GNN inference in this inductive setting, we propose an online propagation framework and two novel node-adaptive propagation methods that customize the optimal propagation depth for each node based on its topological information, thereby avoiding redundant feature propagation. The trade-off between accuracy and latency can be managed flexibly through simple hyper-parameters to accommodate various latency constraints. Moreover, to compensate for the inference accuracy loss caused by the potential early termination of propagation, we further propose Inception Distillation, which exploits the multi-scale receptive field information within graphs. A rigorous and comprehensive experimental study on public datasets of varying scales and characteristics demonstrates that the proposed inference acceleration framework outperforms existing state-of-the-art graph inference acceleration methods in both accuracy and efficiency. The advantage is especially pronounced on larger datasets, yielding a 75x inference speedup on the largest dataset, Ogbn-products.
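The central idea, giving each node its own propagation depth during online inference, can be illustrated with a minimal sketch. This is not the paper's exact algorithm: `adj_norm` is assumed to be a precomputed normalized adjacency matrix, and the stopping rule used here (relative per-node feature change falling below `tol`) is a simple stand-in for the paper's topology-based depth criteria. `tol` and `max_depth` play the role of the hyper-parameters that trade accuracy for latency: a looser tolerance terminates propagation earlier and speeds up inference at some cost in accuracy.

```python
import torch

def node_adaptive_propagation(adj_norm: torch.Tensor, x: torch.Tensor,
                              max_depth: int = 8, tol: float = 1e-4) -> torch.Tensor:
    """Illustrative sketch: propagate features hop by hop with a normalized
    adjacency matrix, freezing each node once its representation stops
    changing, so every node effectively gets its own propagation depth."""
    out = x.clone()
    h = x
    active = torch.ones(x.size(0), dtype=torch.bool)  # nodes still propagating
    for _ in range(max_depth):
        # one hop of linear (parameter-free) feature propagation
        h_next = torch.sparse.mm(adj_norm, h) if adj_norm.is_sparse else adj_norm @ h
        # relative per-node change between consecutive hops
        delta = (h_next - h).norm(dim=1) / (h.norm(dim=1) + 1e-12)
        active &= delta >= tol          # converged nodes terminate early
        out[active] = h_next[active]    # frozen nodes keep their last features
        if not active.any():
            break
        h = h_next
    return out
```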

Authors (7)
  1. Xinyi Gao (25 papers)
  2. Wentao Zhang (262 papers)
  3. Junliang Yu (34 papers)
  4. Yingxia Shao (54 papers)
  5. Quoc Viet Hung Nguyen (57 papers)
  6. Bin Cui (165 papers)
  7. Hongzhi Yin (211 papers)
Citations (8)
