A quantitative analysis of knowledge-learning preferences in large language models in molecular science (2402.04119v2)
Abstract: Deep learning has significantly advanced molecular modeling and design, enabling efficient understanding and discovery of novel molecules. In particular, large language models (LLMs) introduce a fresh research paradigm for tackling scientific problems from an NLP perspective. LLMs significantly enhance our understanding and generation of molecules, often surpassing existing methods through their ability to decode and synthesize complex molecular patterns. However, two key issues remain: how to quantify the match between model and data modalities, and how to identify the knowledge-learning preferences of models. To address these challenges, we propose a multi-modal benchmark, named ChEBI-20-MM, and perform 1,263 experiments to assess models' compatibility with data modalities and their knowledge acquisition. Through a modal transition probability matrix, we provide insights into the most suitable modalities for each task. Furthermore, we introduce a statistically interpretable approach to discovering context-specific knowledge mappings via localized feature filtering. Our analysis explores the learning mechanisms of these models and paves the way for advancing LLMs in molecular science.
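To make the modal transition probability matrix concrete, below is a minimal sketch under stated assumptions: the modality names, the `transition_matrix` helper, and the toy scores are hypothetical illustrations, not the paper's actual benchmark data or implementation. The idea sketched here is to tally, per task, which input-to-output modality pairing performs best, then row-normalize the counts into transition probabilities.

```python
# Hypothetical sketch (not the paper's code): build a modal transition
# probability matrix from per-task scores of (input -> output) modality pairs.
import numpy as np

# Example modality set; the actual benchmark modalities may differ.
MODALITIES = ["SMILES", "SELFIES", "InChI", "IUPAC", "caption"]

def transition_matrix(scores):
    """scores: dict mapping task -> {(src, tgt): metric}, higher is better.

    Returns a row-normalized matrix P where P[i, j] is the fraction of
    tasks on which modality i -> modality j was the best-scoring pairing.
    """
    idx = {m: i for i, m in enumerate(MODALITIES)}
    counts = np.zeros((len(MODALITIES), len(MODALITIES)))
    for task, pair_scores in scores.items():
        src, tgt = max(pair_scores, key=pair_scores.get)  # best pairing for this task
        counts[idx[src], idx[tgt]] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    # Avoid division by zero for modalities that never win as a source.
    return np.divide(counts, row_sums, out=np.zeros_like(counts), where=row_sums > 0)

# Toy usage with invented scores for two illustrative tasks:
scores = {
    "molecule captioning": {("SMILES", "caption"): 0.41, ("SELFIES", "caption"): 0.38},
    "molecule generation": {("caption", "SMILES"): 0.52, ("caption", "SELFIES"): 0.55},
}
print(transition_matrix(scores))
```

A matrix of this shape makes modality preferences directly readable: a large entry in row `src`, column `tgt` suggests that translating from `src` into `tgt` is a well-matched pairing for the models under test.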