GGNNs: Generalizing GNNs using Residual Connections and Weighted Message Passing (2311.15448v1)
Abstract: Many real-world phenomena can be modeled as graphs, and their ubiquity makes graph-structured data extremely valuable. GNNs excel at capturing the relationships and patterns within these graphs, enabling effective learning and prediction tasks. GNNs are constructed from Multi-Layer Perceptrons (MLPs) augmented with message-passing layers that allow features to flow between nodes. The generalizing power of GNNs is commonly attributed to this message-passing mechanism, in which nodes exchange information with their neighbors and thereby capture and propagate information across the graph. Our technique builds on these results by modifying the message-passing mechanism in two ways: weighting messages before they are accumulated at each node, and adding residual connections. Both mechanisms yield significant improvements in learning and faster convergence.
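To make the two modifications concrete, below is a minimal PyTorch-style sketch of a single message-passing layer that weights each message with a learned per-edge scalar before aggregation and adds a residual connection from the input node features. The class and parameter names (`WeightedResidualGNNLayer`, `edge_score`, `message_fn`) and the sigmoid-based weighting scheme are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class WeightedResidualGNNLayer(nn.Module):
    """One message-passing layer with weighted messages and a residual connection.
    Illustrative sketch only; the weighting scheme is an assumption."""

    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.message_fn = nn.Linear(in_dim, out_dim)       # transforms neighbor features into messages
        self.edge_score = nn.Linear(2 * in_dim, 1)         # produces a scalar weight per edge
        self.residual = (nn.Identity() if in_dim == out_dim
                         else nn.Linear(in_dim, out_dim))  # projection so the skip connection matches shapes

    def forward(self, x, edge_index):
        # x: [num_nodes, in_dim]; edge_index: [2, num_edges] with rows (source, target)
        src, dst = edge_index
        # Weight each message by a learned scalar before accumulation at the target node.
        scores = torch.sigmoid(self.edge_score(torch.cat([x[src], x[dst]], dim=-1)))
        messages = scores * self.message_fn(x[src])
        # Sum the weighted messages at each target node.
        agg = torch.zeros(x.size(0), messages.size(-1), device=x.device)
        agg.index_add_(0, dst, messages)
        # Residual connection: add the (projected) input features back before the nonlinearity.
        return torch.relu(agg + self.residual(x))

# Toy usage: 4 nodes connected in a directed cycle.
x = torch.randn(4, 8)
edge_index = torch.tensor([[0, 1, 2, 3], [1, 2, 3, 0]])
layer = WeightedResidualGNNLayer(8, 8)
print(layer(x, edge_index).shape)  # torch.Size([4, 8])
```

In this sketch the residual path mirrors the skip connections of He et al. (2015), which helps gradients flow through stacked layers and typically speeds up convergence; the per-edge weights let the layer down- or up-weight individual neighbor messages instead of aggregating them uniformly.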
- Abhinav Raghuvanshi
- Kushal Sokke Malleshappa