Homogeneous vector bundles and $G$-equivariant convolutional neural networks (2105.05400v1)
Abstract: $G$-equivariant convolutional neural networks (GCNNs) are a geometric deep learning model for data defined on a homogeneous $G$-space $\mathcal{M}$. GCNNs are designed to respect the global symmetry of $\mathcal{M}$, thereby facilitating learning. In this paper, we analyze GCNNs on homogeneous spaces $\mathcal{M} = G/K$ in the case of unimodular Lie groups $G$ and compact subgroups $K \leq G$. We demonstrate that homogeneous vector bundles are the natural setting for GCNNs. We also use reproducing kernel Hilbert spaces to obtain a precise criterion for expressing $G$-equivariant layers as convolutional layers. This criterion is then rephrased as a bandwidth criterion, leading to even stronger results for some groups.
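To make the "equivariant layers as convolutional layers" idea concrete, the following is a minimal toy sketch, not taken from the paper: it uses the finite rotation group $C_4$ in place of the unimodular Lie groups studied here, and all function names are hypothetical. It implements a lifting correlation that correlates an image with every rotated copy of a kernel, then numerically checks the equivariance property: rotating the input rotates each response map and cyclically shifts the group axis.

```python
# Toy sketch (assumption, not from the paper): a C4-equivariant "lifting" correlation.
import numpy as np


def rotate(x, k):
    """Rotate a square 2D array by k * 90 degrees (the action of C4 on the grid)."""
    return np.rot90(x, k)


def correlate2d(image, kernel):
    """Plain 2D cross-correlation with 'valid' boundary handling."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out


def lifting_correlation(image, kernel):
    """Correlate the image with every C4-rotated copy of the kernel.

    The output is a stack of four response maps, i.e. a signal on the group
    C4 x Z^2 rather than on the plane alone.
    """
    return np.stack([correlate2d(image, rotate(kernel, k)) for k in range(4)])


# Equivariance check: rotating the input rotates each response map and
# cyclically shifts the group axis.
rng = np.random.default_rng(0)
image = rng.normal(size=(8, 8))
kernel = rng.normal(size=(3, 3))

out = lifting_correlation(image, kernel)
out_rot = lifting_correlation(rotate(image, 1), kernel)
expected = np.stack([rotate(out[(k - 1) % 4], 1) for k in range(4)])
assert np.allclose(out_rot, expected)
print("C4-equivariance verified on a toy example.")
```

In this toy setting the group axis of the output stack loosely plays the role of the extra fiber coordinate that, in the paper's setting, is encoded by a homogeneous vector bundle over $G/K$; the paper's criterion characterizes when a general $G$-equivariant layer between such feature spaces admits a convolutional form.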