
Architectural Implications of Embedding Dimension during GCN on CPU and GPU (2212.00827v1)

Published 1 Dec 2022 in cs.LG and cs.PF

Abstract: Graph Neural Networks (GNNs) are a class of neural networks designed to extract information from the graphical structure of data. Graph Convolutional Networks (GCNs) are a widely used type of GNN for transductive graph learning problems, applying convolution to learn information from graphs. GCN is a challenging algorithm from an architecture perspective due to inherent sparsity, low data reuse, and massive memory capacity requirements. Traditional neural algorithms exploit the high compute capacity of GPUs to achieve high performance for both inference and training. This work explores the architectural decision of whether to use a GPU for GCN inference. GCN was characterized on both CPU and GPU in order to better understand the implications of graph size, embedding dimension, and sampling on performance.
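
To make the abstract's performance concerns concrete, the following is a minimal sketch of a single GCN layer using the standard symmetric-normalized propagation rule (Kipf & Welling), not the paper's own code. The function name, toy graph, and dimensions are illustrative; a dense adjacency matrix is used for clarity, whereas real workloads use sparse formats, which is the source of the sparsity and low-data-reuse issues the paper studies. The embedding dimension corresponds to the output width of the weight matrix.

```python
import numpy as np

def gcn_layer(adj, feats, weight):
    """One GCN propagation step: H' = ReLU(D^{-1/2} (A + I) D^{-1/2} H W).

    adj:    (n, n) adjacency matrix (dense here for clarity only)
    feats:  (n, d_in) node feature/embedding matrix
    weight: (d_in, d_out) learned weights; d_out is the embedding dimension
    """
    a_hat = adj + np.eye(adj.shape[0])             # add self-loops
    deg = a_hat.sum(axis=1)                        # node degrees
    d_inv_sqrt = np.diag(deg ** -0.5)              # D^{-1/2}
    a_norm = d_inv_sqrt @ a_hat @ d_inv_sqrt       # symmetric normalization
    return np.maximum(a_norm @ feats @ weight, 0)  # aggregate, transform, ReLU

# Toy 4-node graph: the embedding dimension (8 here) sets the width of the
# dense matmul, while neighbor aggregation is dominated by sparse access.
rng = np.random.default_rng(0)
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 1],
                [0, 1, 0, 0],
                [0, 1, 0, 0]], dtype=float)
h = rng.standard_normal((4, 3))
w = rng.standard_normal((3, 8))
print(gcn_layer(adj, h, w).shape)  # (4, 8)
```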

Authors (3)
  1. Matthew Adiletta (2 papers)
  2. David Brooks (204 papers)
  3. Gu-Yeon Wei (54 papers)
Citations (1)
