
Adversarial Graph Disentanglement (2103.07295v4)

Published 12 Mar 2021 in cs.LG and cs.AI

Abstract: A real-world graph has a complex topological structure, which is often formed by the interaction of different latent factors. However, most existing methods lack consideration of the intrinsic differences in relations between nodes caused by factor entanglement. In this paper, we propose an Adversarial Disentangled Graph Convolutional Network (ADGCN) for disentangled graph representation learning. To begin with, we point out two aspects of graph disentanglement that need to be considered: micro-disentanglement and macro-disentanglement. To achieve micro-disentanglement, we propose a component-specific aggregation approach that infers the latent components causing the links between nodes. Building on micro-disentanglement, we further propose a macro-disentanglement adversarial regularizer that improves the separability of the component distributions, thus restricting the interdependence among components. Additionally, to reveal the topological structure of the graph, we propose a diversity-preserving node sampling approach by which the graph structure can be progressively refined in a local-structure-aware manner. Experimental results on various real-world graph datasets verify that ADGCN achieves more favorable performance than currently available alternatives. The source code of ADGCN is available at https://github.com/SsGood/ADGCN.
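The abstract's two mechanisms can be made concrete with a short sketch. The PyTorch code below is an illustrative assumption, not the authors' implementation (see the repository above): it shows one plausible form of component-specific aggregation, where each node embedding is split into K component subspaces and every edge is softly assigned to the component that best explains it. All names, shapes, and the single routing pass are hypothetical.

```python
import torch
import torch.nn.functional as F


class ComponentAggregation(torch.nn.Module):
    """Hypothetical component-specific aggregation layer (one routing pass).

    Each node is projected into K component subspaces; each edge is then
    softly assigned to the component that best explains it, and neighbor
    messages are summed per component with those assignment weights.
    """

    def __init__(self, in_dim, out_dim, num_components):
        super().__init__()
        assert out_dim % num_components == 0
        self.K = num_components
        self.d = out_dim // num_components
        self.proj = torch.nn.Linear(in_dim, out_dim)

    def forward(self, x, edge_index):
        # x: [N, in_dim]; edge_index: [2, E] with rows (source, target).
        N = x.size(0)
        z = F.normalize(self.proj(x).view(N, self.K, self.d), dim=-1)
        src, dst = edge_index
        # Per-edge, per-component affinity: how well component k explains
        # the link src -> dst (cosine similarity of the k-th slices).
        affinity = (z[src] * z[dst]).sum(dim=-1)          # [E, K]
        weight = torch.softmax(affinity, dim=-1)          # edge -> component
        out = z.clone()                                   # start from self features
        # Accumulate neighbor messages separately in each component subspace.
        out.index_add_(0, dst, z[src] * weight.unsqueeze(-1))
        return F.normalize(out, dim=-1).reshape(N, -1)    # [N, out_dim]
```

For the macro side, a deliberately simplified proxy for separability is a classifier that predicts which component a d-dimensional slice came from: if the slices are easy to classify, the component distributions occupy separable regions. The paper formulates this regularizer adversarially; this cooperative stand-in does not reproduce that formulation and is only meant to show what "separability among component distributions" is measuring.

```python
class ComponentDiscriminator(torch.nn.Module):
    """Hypothetical separability probe: classify each d-dim slice by its
    component index. Low cross-entropy means well-separated components."""

    def __init__(self, d, num_components):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(d, 64), torch.nn.ReLU(),
            torch.nn.Linear(64, num_components),
        )

    def forward(self, z):
        # z: [N, K, d]; slice (n, k) is labeled with component k.
        N, K, d = z.shape
        logits = self.net(z.reshape(N * K, d))                  # [N*K, K]
        labels = torch.arange(K, device=z.device).repeat(N)     # 0..K-1 per node
        return F.cross_entropy(logits, labels)
```

A toy invocation, wiring the two sketches together on a 4-node graph with undirected edges listed in both directions:

```python
x = torch.randn(4, 16)
edge_index = torch.tensor([[0, 1, 1, 2, 2, 3],
                           [1, 0, 2, 1, 3, 2]])
layer = ComponentAggregation(16, 32, num_components=4)
disc = ComponentDiscriminator(d=8, num_components=4)
h = layer(x, edge_index)             # [4, 32] disentangled embeddings
sep_loss = disc(h.view(4, 4, 8))     # separability signal over components
```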
