GNNerator: A Hardware/Software Framework for Accelerating Graph Neural Networks (2103.10836v1)

Published 19 Mar 2021 in cs.AR

Abstract: Graph Neural Networks (GNNs) use a fully-connected layer to extract features from the nodes of a graph and aggregate these features using message passing between nodes, combining two distinct computational patterns: dense, regular computations and sparse, irregular computations. To address the challenge of supporting both patterns efficiently, we propose GNNerator, an accelerator with heterogeneous compute engines optimized for these two patterns. Further, GNNerator implements feature-blocking, a novel GNN dataflow that beneficially trades off irregular memory accesses during aggregation for regular memory accesses during feature extraction. We show GNNerator achieves speedups of 5.7x-37x over an NVIDIA RTX 2080-Ti, and 2.3x-3.8x over HyGCN, a state-of-the-art GNN accelerator.
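The two computational patterns named in the abstract, and the feature-blocking idea of processing feature vectors block by block, can be illustrated with a small sketch. The NumPy code below is an assumption-laden illustration rather than the paper's implementation: the mean aggregation, ReLU, block width, and all shapes are placeholders chosen for clarity.

```python
# Minimal NumPy sketch (not the authors' code) of the two patterns a GNN
# layer combines, plus the feature-blocking idea described in the abstract.
import numpy as np

def gnn_layer_feature_blocked(x, w, edges, block=32):
    """One GNN layer: dense feature extraction followed by sparse aggregation.

    x     : (num_nodes, in_dim)  input node features
    w     : (in_dim, out_dim)    fully-connected layer weights
    edges : list of (src, dst)   directed edges used for message passing
    block : feature-block width processed at a time (illustrative choice)
    """
    num_nodes, out_dim = x.shape[0], w.shape[1]
    out = np.zeros((num_nodes, out_dim))

    # Process output features in blocks: feature extraction for each block is
    # dense and regular, and aggregation then reads only that contiguous
    # block per neighbor, trading irregular accesses for regular ones.
    for start in range(0, out_dim, block):
        stop = min(start + block, out_dim)

        # Dense, regular pattern: per-node fully-connected transform.
        h_block = x @ w[:, start:stop]              # (num_nodes, block)

        # Sparse, irregular pattern: message passing along graph edges.
        agg = np.zeros((num_nodes, stop - start))
        deg = np.zeros(num_nodes)
        for src, dst in edges:
            agg[dst] += h_block[src]
            deg[dst] += 1
        out[:, start:stop] = agg / np.maximum(deg, 1)[:, None]

    return np.maximum(out, 0)  # ReLU

# Toy usage: 4 nodes in a small ring graph.
x = np.random.rand(4, 8)
w = np.random.rand(8, 64)
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(gnn_layer_feature_blocked(x, w, edges).shape)  # (4, 64)
```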

Citations (17)
