
Node Centrality Approximation For Large Networks Based On Inductive Graph Neural Networks (2403.04977v1)

Published 8 Mar 2024 in cs.SI and cs.AI

Abstract: Closeness Centrality (CC) and Betweenness Centrality (BC) are crucial metrics in network analysis, providing an essential reference for discerning the significance of nodes within complex networks. These measures find wide application in critical tasks such as community detection and network dismantling. However, their practical computation on large networks remains demanding due to their high time complexity. To mitigate these computational challenges, numerous approximation algorithms have been developed to expedite the computation of CC and BC. Nevertheless, even these approximations require substantial processing time when applied to large-scale networks, and their output is sensitive to even minor perturbations of the network structure. In this work, we recast CC and BC node ranking as a machine learning problem and propose the CNCA-IGE model, an encoder-decoder model based on inductive graph neural networks designed to rank nodes with respect to a specified CC or BC metric. We incorporate the MLP-Mixer model as the decoder in the BC ranking prediction task to enhance the model's robustness and capacity. Our approach is evaluated on diverse synthetic and real-world networks of varying scales, and the experimental results demonstrate that the CNCA-IGE model outperforms state-of-the-art baseline models, significantly reducing execution time while improving performance.
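The abstract's core idea is to train an inductive GNN encoder and a lightweight decoder to reproduce a centrality ranking rather than compute it exactly. The snippet below illustrates that idea as a minimal, assumption-laden sketch, not the paper's CNCA-IGE implementation: it uses a GraphSAGE-style mean-aggregation encoder, a plain MLP decoder (rather than an MLP-Mixer), degree-based input features, and a pairwise margin ranking loss against exact betweenness labels from networkx. The class names SageEncoder and RankDecoder and all hyperparameters are illustrative.

```python
# Toy sketch: learn to rank nodes by betweenness centrality with an
# inductive encoder-decoder model. Assumptions are noted inline; this is
# not the CNCA-IGE architecture from the paper.
import networkx as nx
import torch
import torch.nn as nn

def build_graph(n=200, m=3, seed=0):
    """Synthetic Barabasi-Albert graph with exact betweenness as labels."""
    g = nx.barabasi_albert_graph(n, m, seed=seed)
    bc = nx.betweenness_centrality(g)        # exact; expensive on large networks
    labels = torch.tensor([bc[i] for i in range(n)], dtype=torch.float32)
    adj = torch.zeros(n, n)
    for u, v in g.edges():
        adj[u, v] = adj[v, u] = 1.0
    deg = adj.sum(1, keepdim=True).clamp(min=1.0)
    feats = torch.cat([deg, torch.ones(n, 1)], dim=1)   # degree + constant feature
    return adj, deg, feats, labels

class SageEncoder(nn.Module):
    """Two rounds of mean-neighbour aggregation (GraphSAGE-style, inductive)."""
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.lin1 = nn.Linear(2 * in_dim, hid_dim)
        self.lin2 = nn.Linear(2 * hid_dim, hid_dim)

    def forward(self, x, adj, deg):
        h = torch.relu(self.lin1(torch.cat([x, adj @ x / deg], dim=1)))
        h = torch.relu(self.lin2(torch.cat([h, adj @ h / deg], dim=1)))
        return h

class RankDecoder(nn.Module):
    """MLP mapping node embeddings to scalar ranking scores."""
    def __init__(self, hid_dim):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(hid_dim, hid_dim), nn.ReLU(),
                                 nn.Linear(hid_dim, 1))

    def forward(self, h):
        return self.mlp(h).squeeze(-1)

adj, deg, feats, labels = build_graph()
encoder, decoder = SageEncoder(feats.shape[1], 64), RankDecoder(64)
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
loss_fn = nn.MarginRankingLoss(margin=0.1)

for epoch in range(200):
    scores = decoder(encoder(feats, adj, deg))
    i = torch.randint(0, len(labels), (256,))      # random node pairs
    j = torch.randint(0, len(labels), (256,))
    target = torch.sign(labels[i] - labels[j])     # +1 if node i should outrank node j
    loss = loss_fn(scores[i], scores[j], target)
    opt.zero_grad(); loss.backward(); opt.step()

# Rank nodes by predicted score and compare the top of the list with ground truth.
pred_top = torch.argsort(scores.detach(), descending=True)[:10]
true_top = torch.argsort(labels, descending=True)[:10]
print("predicted top-10:", pred_top.tolist())
print("true top-10:     ", true_top.tolist())
```

Once trained, such an encoder-decoder pair scores nodes in a single forward pass, which is where the speed-up over exact or sampling-based centrality computation comes from in the setting the abstract describes.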

