Learning Hierarchical Relational Representations through Relational Convolutions (2310.03240v3)
Abstract: An evolving area of research in deep learning is the study of architectures and inductive biases that support the learning of relational feature representations. In this paper, we address the challenge of learning representations of hierarchical relations, that is, higher-order relational patterns among groups of objects. We introduce "relational convolutional networks", a neural architecture equipped with computational mechanisms that capture progressively more complex relational features through the composition of simple modules. A key component of this framework is a novel operation that captures relational patterns in groups of objects by convolving graphlet filters (learnable templates of relational patterns) against subsets of the input. Composing relational convolutions gives rise to a deep architecture that learns representations of higher-order, hierarchical relations. We present the motivation and details of the architecture, together with a set of experiments to demonstrate how relational convolutional networks can provide an effective framework for modeling relational tasks that have hierarchical structure.
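The core operation described above can be illustrated with a minimal NumPy sketch. This is an assumption-laden toy, not the paper's implementation: the relation function is taken to be an inner product of learned projections (`W_q`, `W_k` are hypothetical parameters), groups are simply all size-`s` subsets of the objects, and each graphlet filter scores a group's relation sub-tensor by an inner product.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)

# Toy setup: n objects with d-dimensional embeddings.
n, d = 6, 4
objects = rng.normal(size=(n, d))

# Pairwise relation tensor: R[i, j] = <W_q x_i, W_k x_j>.
# (Hypothetical parameterization; the paper's relation function may differ.)
W_q = rng.normal(size=(d, d))
W_k = rng.normal(size=(d, d))
R = (objects @ W_q) @ (objects @ W_k).T          # shape (n, n)

# Graphlet filters: learnable s x s templates of relational patterns.
s, n_filters = 3, 2
filters = rng.normal(size=(n_filters, s, s))

# "Relational convolution" sketch: slide each filter over size-s groups
# of objects, scoring each group's within-group relation patch against
# the filter template.
groups = list(combinations(range(n), s))
scores = np.empty((len(groups), n_filters))
for g_idx, g in enumerate(groups):
    patch = R[np.ix_(g, g)]                      # relations within the group
    scores[g_idx] = np.tensordot(filters, patch, axes=([1, 2], [0, 1]))

print(scores.shape)                              # (n choose s, n_filters)
```

The resulting `scores` array is a group-level relational feature map; stacking such layers, so that group features become the objects of the next layer, is what yields the hierarchical representations the abstract refers to.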