
On the Scalability of GNNs for Molecular Graphs

(2404.11568)
Published Apr 17, 2024 in cs.LG

Abstract

Scaling deep learning models has been at the heart of recent revolutions in language modelling and image generation. Practitioners have observed a strong relationship between model size, dataset size, and performance. However, structure-based architectures such as Graph Neural Networks (GNNs) are yet to show the benefits of scale, mainly due to the lower efficiency of sparse operations, large data requirements, and lack of clarity about the effectiveness of various architectures. We address this drawback of GNNs by studying their scaling behavior. Specifically, we analyze message-passing networks, graph Transformers, and hybrid architectures on the largest public collection of 2D molecular graphs. For the first time, we observe that GNNs benefit tremendously from increasing scale in depth, width, number of molecules, number of labels, and diversity of the pretraining datasets. We further demonstrate strong finetuning scaling behavior on 38 highly competitive downstream tasks, outclassing previous large models. This gives rise to MolGPS, a new graph foundation model for navigating the chemical space that outperforms the previous state of the art on 26 of the 38 downstream tasks. We hope that our work paves the way for an era in which foundational GNNs drive pharmaceutical drug discovery.

Figure: Comparison of MPNN++, Transformer, and GPS++ model performance across training-set sizes in the molecule-scaling experiments.

Overview

  • The paper investigates the scalability of Graph Neural Networks (GNNs) in interpreting and predicting properties of molecular graphs for pharmaceutical applications.

  • It evaluates various GNN architectures, including message-passing networks, graph Transformers, and hybrids, against a dataset of five million molecules.

  • Key findings show that GNN performance improves substantially with scale, across model width and depth as well as dataset size.

  • The study suggests potential future directions for improving GNN efficiency in drug discovery and other high-dimensional molecular analyses.

Examining the Scalability of Graph Neural Networks for Molecular Graphs

Introduction

This paper focuses on the scalability of Graph Neural Networks (GNNs) for interpreting and predicting properties of molecular graphs. Despite the expansive use and success of GNNs across domains, their ability to scale effectively, particularly on molecular data for pharmaceutical applications, has remained largely unexplored. The paper addresses this gap by analyzing multiple GNN architectures on the largest public collection of 2D molecular graphs.

Methodology

The study evaluates a range of GNN architectures, including message-passing networks, graph Transformers, and hybrid models, on a dataset of five million molecules carrying an extensive set of labels across diverse tasks:

  • Architectures: Three principal models were analyzed: MPNN++ (an enhanced message-passing neural network), a graph Transformer, and GPS++, a hybrid integrating features of both (a minimal message-passing sketch follows this list).
  • Dataset Preparation: Utilizing the LargeMix dataset, the research splits the molecular data into various tasks and labels, ensuring the diversity and comprehensiveness of the dataset suited for robust large-scale training.
  • Scaling Parameters: The paper explores scaling across several dimensions—model size (width), complexity (depth), amount of training data (number of molecules), diversity in the training data, and the number of labels.
  • Training and Evaluation: All models were assessed in a supervised pretraining setting followed by finetuning on downstream tasks, including 38 distinct benchmarks for molecular property prediction.
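
To make the message-passing family concrete, the following is a minimal message-passing layer in PyTorch. This is a generic sketch of the mechanism only, not the paper's MPNN++ (which is more elaborate, e.g. incorporating edge features); all names here are our own illustrative choices.

```python
import torch
import torch.nn as nn


class SimpleMessagePassingLayer(nn.Module):
    """One round of message passing: compute edge messages, aggregate, update."""

    def __init__(self, dim: int):
        super().__init__()
        self.message_mlp = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU())
        self.update_mlp = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU())

    def forward(self, x: torch.Tensor, edge_index: torch.Tensor) -> torch.Tensor:
        # x: (num_nodes, dim) node features.
        # edge_index: (2, num_edges), rows are [source, destination] indices.
        src, dst = edge_index
        # One message per edge, computed from the two endpoint features.
        messages = self.message_mlp(torch.cat([x[src], x[dst]], dim=-1))
        # Sum-aggregate incoming messages at each destination node.
        aggregated = torch.zeros_like(x).index_add_(0, dst, messages)
        # Update each node from its previous state and its aggregated messages.
        return self.update_mlp(torch.cat([x, aggregated], dim=-1))


# Toy usage: a 3-node cycle with edges 0->1, 1->2, 2->0.
x = torch.randn(3, 16)
edge_index = torch.tensor([[0, 1, 2], [1, 2, 0]])
out = SimpleMessagePassingLayer(16)(x, edge_index)  # shape (3, 16)
```

Stacking more such layers corresponds to the depth axis of the study, while increasing `dim` corresponds to the width axis.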

Key Results

The findings highlight significant enhancements in model performance with increased scale:

  • Performance Gains: The paper reports an improvement of up to 30.25% when scaling models to 1 billion parameters, and of 28.98% when the dataset size is expanded eightfold (a sketch of how such scaling trends can be quantified follows this list).
  • Depth and Width Effects: Both the depth and width of the models showed profound impacts on the model performance, reinforcing the benefit of larger and more complex models.
  • Data Scaling: Increasing the number of molecules consistently improved model performance across all architectures, with the graph Transformer and hybrid models benefiting the most in lower data regimes.
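
Scaling trends like these are commonly summarized by fitting a power law, loss ≈ a · N^(−b), to performance as a function of model size N. The sketch below shows one standard way to fit such a curve in log space; the data points are synthetic placeholders for illustration, not numbers from the paper.

```python
import numpy as np

# Hypothetical (parameter count, validation loss) pairs, purely illustrative.
n_params = np.array([1e6, 1e7, 1e8, 1e9])
val_loss = np.array([0.80, 0.55, 0.40, 0.30])

# Fit log(loss) = log(a) - b * log(N) by linear least squares.
slope, intercept = np.polyfit(np.log(n_params), np.log(val_loss), 1)
a, b = np.exp(intercept), -slope
print(f"fitted: loss ~= {a:.3f} * N^(-{b:.3f})")
```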

Implications and Future Work

The research pushes the boundary of GNN applications in drug discovery by showcasing the potential of scale in molecular graph analysis. The practical implications for the pharmaceutical industry are substantial: more accurate predictive models could accelerate drug discovery pipelines and reduce their cost.

Looking forward, the paper suggests exploring further scalability factors, such as the choice and optimization of aggregation functions in GNNs, which could yield deeper insights into the efficiency and accuracy of these models in high-dimensional spaces; a short illustration of interchangeable aggregation functions follows.
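
For concreteness, the sketch below implements the three aggregation functions most often interchanged in GNN layers (sum, mean, max). This is our illustrative code for the general concept, not an implementation from the paper.

```python
import torch


def aggregate(messages: torch.Tensor, dst: torch.Tensor, num_nodes: int,
              mode: str = "sum") -> torch.Tensor:
    """Pool per-edge messages at their destination nodes.

    messages: (num_edges, dim); dst: (num_edges,) destination node indices.
    """
    out = torch.zeros(num_nodes, messages.size(-1))
    if mode == "sum":
        return out.index_add_(0, dst, messages)
    if mode == "mean":
        total = out.index_add_(0, dst, messages)
        counts = torch.zeros(num_nodes).index_add_(0, dst, torch.ones(dst.numel()))
        return total / counts.clamp(min=1).unsqueeze(-1)
    if mode == "max":
        idx = dst.unsqueeze(-1).expand_as(messages)
        return out.scatter_reduce(0, idx, messages, reduce="amax", include_self=False)
    raise ValueError(f"unknown aggregation mode: {mode}")


# Usage: four messages pooled onto three nodes.
msgs = torch.randn(4, 8)
dst = torch.tensor([0, 0, 1, 2])
pooled = aggregate(msgs, dst, num_nodes=3, mode="mean")  # shape (3, 8)
```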

Conclusion

In summary, this paper presents a thorough analysis of GNN scalability for molecular graphs, demonstrating not just performance improvements with increased scale but also laying the groundwork for future research in the field. The exploration of varied architectures and an extensive dataset provides a substantial foundation for advancing GNN applications in pharmaceuticals and other fields requiring molecular-level precision.
