Learning Hierarchical Relational Representations through Relational Convolutions (2310.03240v3)

Published 5 Oct 2023 in cs.LG

Abstract: An evolving area of research in deep learning is the study of architectures and inductive biases that support the learning of relational feature representations. In this paper, we address the challenge of learning representations of hierarchical relations--that is, higher-order relational patterns among groups of objects. We introduce "relational convolutional networks", a neural architecture equipped with computational mechanisms that capture progressively more complex relational features through the composition of simple modules. A key component of this framework is a novel operation that captures relational patterns in groups of objects by convolving graphlet filters--learnable templates of relational patterns--against subsets of the input. Composing relational convolutions gives rise to a deep architecture that learns representations of higher-order, hierarchical relations. We present the motivation and details of the architecture, together with a set of experiments to demonstrate how relational convolutional networks can provide an effective framework for modeling relational tasks that have hierarchical structure.
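To make the core operation described in the abstract concrete, here is a minimal sketch of a single relational convolution layer, not the paper's implementation. It assumes pairwise relations are inner products of learned projections and that groups are contiguous sliding windows of objects; the paper allows more general (and learnable) relations and groupings. All names and hyperparameters (RelationalConv, rel_dim, proj_dim, etc.) are hypothetical.

```python
import torch
import torch.nn as nn


class RelationalConv(nn.Module):
    """Sketch of one relational convolution: pairwise relations -> graphlet filters."""

    def __init__(self, dim, rel_dim, proj_dim, n_filters, group_size):
        super().__init__()
        self.rel_dim = rel_dim
        self.proj_dim = proj_dim
        self.group_size = group_size
        # Learned maps whose inner products define a rel_dim-dimensional
        # relation between each pair of objects (one simple choice of relation).
        self.left = nn.Linear(dim, rel_dim * proj_dim, bias=False)
        self.right = nn.Linear(dim, rel_dim * proj_dim, bias=False)
        # Graphlet filters: learnable templates of relational patterns
        # over groups of group_size objects.
        self.filters = nn.Parameter(
            torch.randn(n_filters, group_size, group_size, rel_dim))

    def forward(self, x):  # x: (batch, n_objects, dim)
        b, n, _ = x.shape
        lhs = self.left(x).view(b, n, self.rel_dim, self.proj_dim)
        rhs = self.right(x).view(b, n, self.rel_dim, self.proj_dim)
        # Pairwise relation tensor: rel[i, j] is a rel_dim-vector of inner products.
        rel = torch.einsum('bidp,bjdp->bijd', lhs, rhs)  # (b, n, n, rel_dim)
        # Groups as contiguous windows of size k (a simplification of the
        # paper's more general groupings).
        k = self.group_size
        win = (torch.arange(n - k + 1, device=x.device).unsqueeze(1)
               + torch.arange(k, device=x.device))           # (n_groups, k)
        sub = rel[:, win[:, :, None], win[:, None, :], :]    # (b, n_groups, k, k, rel_dim)
        # "Convolve" each graphlet filter against each group's relational pattern.
        return torch.einsum('bgxyd,fxyd->bgf', sub, self.filters)  # (b, n_groups, n_filters)


# Composition gives the hierarchy: group-level features from layer 1 become
# the "objects" whose relations layer 2 models (second-order relations).
layer1 = RelationalConv(dim=16, rel_dim=4, proj_dim=8, n_filters=32, group_size=3)
layer2 = RelationalConv(dim=32, rel_dim=4, proj_dim=8, n_filters=32, group_size=3)
x = torch.randn(2, 10, 16)   # 2 scenes of 10 objects
h = layer1(x)                # (2, 8, 32): first-order relational features
z = layer2(h)                # (2, 6, 32): second-order, hierarchical features
```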
