
Abstract

Self-supervised auto-encoders have emerged as a successful framework for representation learning in computer vision and natural language processing in recent years. However, their application to graph data has yielded limited performance, owing to the non-Euclidean and complex structure of graphs compared to images or text, as well as the limitations of conventional auto-encoder architectures. In this paper, we investigate the factors affecting the performance of auto-encoders on graph data and propose a novel auto-encoder model for graph representation learning. Our model incorporates a hierarchical adaptive masking mechanism that incrementally increases training difficulty, mimicking the process of human cognitive learning, and a trainable corruption scheme that enhances the robustness of the learned representations. Through extensive experiments on ten benchmark datasets, we demonstrate the superiority of our proposed method over state-of-the-art graph representation learning models.
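The abstract's core idea, a masking curriculum that grows harder as training progresses, can be sketched as follows. This is a minimal illustration, not the paper's actual method: the linear schedule, the ratio range, and the function names (`curriculum_mask_ratio`, `mask_node_features`) are all assumptions introduced here for clarity.

```python
import numpy as np

def curriculum_mask_ratio(epoch, total_epochs, start=0.1, end=0.7):
    """Linearly raise the node-masking ratio over training so later
    epochs reconstruct from less information (hypothetical schedule;
    the paper's exact adaptive scheme is not specified in the abstract)."""
    t = min(epoch / max(total_epochs - 1, 1), 1.0)
    return start + t * (end - start)

def mask_node_features(features, ratio, rng):
    """Zero out the feature vectors of a random subset of nodes.

    Returns the masked feature matrix and the indices of masked nodes,
    which an auto-encoder would be trained to reconstruct."""
    n_nodes = features.shape[0]
    n_masked = int(round(ratio * n_nodes))
    idx = rng.choice(n_nodes, size=n_masked, replace=False)
    masked = features.copy()
    masked[idx] = 0.0
    return masked, idx
```

In a training loop, `curriculum_mask_ratio(epoch, total_epochs)` would be computed each epoch and passed to `mask_node_features`, so early epochs hide few nodes and later epochs hide most of them.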
