Decoupling feature propagation from the design of graph auto-encoders (1910.08589v1)

Published 18 Oct 2019 in cs.LG and stat.ML

Abstract: We present two instances, L-GAE and L-VGAE, of the variational graph auto-encoder (VGAE) family, based on separating the feature propagation operations typically embedded in the graph convolution layers of graph learning methods into a single linear matrix computation performed before input to a standard auto-encoder architecture. This decoupling enables an independent, fixed design of the auto-encoder that does not require an additional GCN layer for every desired increase in the size of a node's local receptive field. Fixing the auto-encoder also enables a fairer assessment of how the size of a node's receptive field affects the representations it builds. Furthermore, a by-product of fixing the auto-encoder design is that the resulting networks are often substantially smaller than their VGAE counterparts, especially as the number of feature propagations grows. A comparative downstream evaluation on link prediction tasks shows performance comparable to similar state-of-the-art VGAE arrangements despite the considerable simplification. We also demonstrate a straightforward application of our methodology to more challenging representation-learning scenarios such as spatio-temporal graph representation learning.
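To make the decoupling concrete, here is a minimal NumPy sketch of the pipeline the abstract describes: feature propagation is precomputed as a single linear matrix operation, and the auto-encoder then operates on the smoothed features. The symmetric normalization, the linear encoder, and the inner-product decoder are standard GAE/VGAE choices assumed here for illustration; they are not taken from the paper's implementation.

```python
import numpy as np

def propagate(adj, X, k):
    """Precompute k steps of linear feature propagation, H = A_hat^k X.

    A_hat is the symmetrically normalized adjacency with self-loops,
    D^{-1/2} (A + I) D^{-1/2} -- a common GCN/SGC-style operator and an
    assumption here; the paper may normalize differently."""
    A = adj + np.eye(adj.shape[0])
    d = A.sum(axis=1)
    A_hat = A / np.sqrt(np.outer(d, d))  # elementwise D^{-1/2} A D^{-1/2}
    H = X
    for _ in range(k):  # k controls the size of each node's receptive field
        H = A_hat @ H
    return H

def encode(H, W):
    # Linear encoder over the pre-propagated features (the "L" in L-GAE).
    return H @ W

def decode(Z):
    # Inner-product decoder with a sigmoid, standard in the GAE/VGAE family.
    return 1.0 / (1.0 + np.exp(-(Z @ Z.T)))

# Toy usage: a 4-node cycle graph with random 8-dimensional features.
rng = np.random.default_rng(0)
adj = np.array([[0., 1., 0., 1.],
                [1., 0., 1., 0.],
                [0., 1., 0., 1.],
                [1., 0., 1., 0.]])
X = rng.normal(size=(4, 8))

H = propagate(adj, X, k=2)    # done once, before any training
W = rng.normal(size=(8, 2))   # in practice learned by the auto-encoder
A_rec = decode(encode(H, W))  # reconstructed edge probabilities
print(A_rec.round(2))
```

Because H = A_hat^k X is computed once up front, increasing k enlarges the receptive field without adding layers or parameters to the auto-encoder, which is what keeps L-GAE/L-VGAE substantially smaller than their stacked-GCN counterparts.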

Citations (1)
