Fast Haar Transforms for Graph Neural Networks

(1907.04786)
Published Jul 10, 2019 in cs.LG and stat.ML

Abstract

Graph Neural Networks (GNNs) have recently become a topic of intense research due to their powerful capability in high-dimensional classification and regression tasks for graph-structured data. However, as GNNs typically define the graph convolution through an orthonormal basis for the graph Laplacian, they suffer from high computational cost when the graph size is large. This paper introduces the Haar basis, a sparse and localized orthonormal system built on a coarse-grained chain of the graph. The graph convolution under the Haar basis, called Haar convolution, can be defined accordingly for GNNs. The sparsity and locality of the Haar basis allow Fast Haar Transforms (FHTs) on the graph, by which a fast evaluation of the Haar convolution between graph data and filters can be achieved. We conduct experiments on GNNs equipped with Haar convolution, which demonstrate state-of-the-art results on graph-based regression and node classification tasks.
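To make the abstract's point concrete, here is a minimal NumPy sketch of the generic spectral graph convolution it refers to, where a signal and a filter are transformed into an orthonormal basis, multiplied pointwise, and transformed back. The basis below is the dense Laplacian eigenbasis, which is exactly the costly ingredient the paper proposes to replace with the sparse, localized Haar basis and its fast transforms. Function names, the toy graph, and the use of the eigenbasis are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def graph_laplacian(adj):
    """Unnormalized graph Laplacian L = D - A."""
    deg = np.diag(adj.sum(axis=1))
    return deg - adj

def spectral_conv(x, g, basis):
    """Spectral convolution of signal x with filter g under an orthonormal
    basis U:  g * x = U ((U^T g) .* (U^T x))."""
    x_hat = basis.T @ x              # forward transform of the signal
    g_hat = basis.T @ g              # forward transform of the filter
    return basis @ (g_hat * x_hat)   # pointwise product, then inverse transform

# Toy example: a path graph on 4 nodes (adjacency matrix chosen for illustration).
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
L = graph_laplacian(adj)
_, U = np.linalg.eigh(L)             # dense orthonormal eigenbasis: O(n^3) to build,
                                     # O(n^2) per transform -- the bottleneck a sparse
                                     # Haar basis with FHTs is meant to avoid

x = np.random.rand(4)                # node signal
g = np.random.rand(4)                # filter coefficients in the vertex domain
y = spectral_conv(x, g, U)
print(y)
```

With the Haar basis of the paper in place of `U`, the forward and inverse transforms become sparse and can be evaluated by fast Haar transforms rather than dense matrix-vector products.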
