Sparsely constrained neural networks for model discovery of PDEs

(arXiv:2011.04336)
Published Nov 9, 2020 in cs.LG and physics.comp-ph

Abstract

Sparse regression on a library of candidate features has developed into the prime method for discovering the partial differential equation underlying a spatio-temporal dataset. These features consist of higher-order derivatives, limiting model discovery to densely sampled datasets with low noise. Neural-network-based approaches circumvent this limit by constructing a surrogate model of the data, but have to date ignored advances in sparse regression algorithms. In this paper we present a modular framework that dynamically determines the sparsity pattern of a deep-learning-based surrogate using any sparse regression technique. With this approach, we introduce a new constraint on the neural network and show how a different network architecture and sparsity estimator improve model discovery accuracy and convergence on several benchmark examples. Our framework is available at https://github.com/PhIMaL/DeePyMoD.
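In outline, the approach described in the abstract couples three pieces: a neural-network surrogate fitted to the noisy measurements, a library of candidate terms built from derivatives of the surrogate via automatic differentiation, and a sparse regression step whose selected terms feed back into the surrogate's training as a constraint. The sketch below illustrates this kind of pipeline under stated assumptions; it is not the DeePyMoD API, and the names Surrogate, library, and sparse_coeffs, the fixed five-term library, and the hard-thresholded least-squares estimator are illustrative placeholders for whichever sparsity estimator is plugged in.

```python
# Minimal sketch (assumption: PyTorch; not the actual DeePyMoD interface).
# A network u_theta(x, t) is fit to data while a sparse coefficient vector xi
# is estimated so that u_t ~= Theta(u) @ xi, and the PDE residual constrains training.
import torch
import torch.nn as nn


class Surrogate(nn.Module):
    """Small fully connected network mapping (x, t) -> u."""

    def __init__(self, hidden=30):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 1),
        )

    def forward(self, xt):
        return self.net(xt)


def library(u, xt):
    """Candidate features [1, u, u_x, u_xx, u*u_x] and the target u_t via autograd."""
    grads = torch.autograd.grad(u, xt, torch.ones_like(u), create_graph=True)[0]
    u_x, u_t = grads[:, :1], grads[:, 1:]
    u_xx = torch.autograd.grad(u_x, xt, torch.ones_like(u_x), create_graph=True)[0][:, :1]
    theta = torch.cat([torch.ones_like(u), u, u_x, u_xx, u * u_x], dim=1)
    return theta, u_t


def sparse_coeffs(theta, u_t, threshold=0.1):
    """Hard-thresholded least squares: a stand-in for any sparse regression technique."""
    xi = torch.linalg.lstsq(theta, u_t).solution
    xi[xi.abs() < threshold] = 0.0
    return xi


def train(xt_data, u_data, epochs=5000):
    """xt_data: (N, 2) samples of (x, t); u_data: (N, 1) noisy observations."""
    model = Surrogate()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    xi = torch.zeros(5, 1)
    for _ in range(epochs):
        xt = xt_data.clone().requires_grad_(True)
        u_pred = model(xt)
        theta, u_t = library(u_pred, xt)
        # Re-estimate the sparse coefficients from the current surrogate.
        xi = sparse_coeffs(theta.detach(), u_t.detach())
        # Data fit plus PDE residual: the residual term acts as the constraint.
        loss = ((u_pred - u_data) ** 2).mean() + ((u_t - theta @ xi) ** 2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model, xi
```

Because the sparse coefficients are re-estimated during training and fed back into the loss, the surrogate is pulled towards functions that satisfy the currently selected equation, which is the sense in which the network is "sparsely constrained"; in the paper's modular framework the thresholding step above would be swapped for the chosen sparsity estimator.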
