
A Universal Approximation Theorem of Deep Neural Networks for Expressing Probability Distributions

(arXiv:2004.08867)
Published Apr 19, 2020 in cs.LG, cs.NA, math.NA, math.ST, stat.ML, and stat.TH

Abstract

This paper studies the universal approximation property of deep neural networks for representing probability distributions. Given a target distribution $\pi$ and a source distribution $p_z$, both defined on $\mathbb{R}^d$, we prove under some assumptions that there exists a deep neural network $g:\mathbb{R}^d\rightarrow \mathbb{R}$ with ReLU activation such that the push-forward measure $(\nabla g)_\# p_z$ of $p_z$ under the map $\nabla g$ is arbitrarily close to the target measure $\pi$. The closeness is measured by three classes of integral probability metrics between probability distributions: the $1$-Wasserstein distance, maximum mean discrepancy (MMD), and kernelized Stein discrepancy (KSD). We prove upper bounds for the size (width and depth) of the deep neural network in terms of the dimension $d$ and the approximation error $\varepsilon$ with respect to the three discrepancies. In particular, the size of the neural network can grow exponentially in $d$ when the $1$-Wasserstein distance is used as the discrepancy, whereas for both MMD and KSD the size of the neural network depends on $d$ at most polynomially. Our proof relies on convergence estimates of empirical measures under the aforementioned discrepancies and on semi-discrete optimal transport.
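
To make the objects in the abstract concrete, here is a minimal sketch (not code from the paper, which proves existence and size bounds rather than giving an algorithm): it pushes samples from a Gaussian source $p_z$ through $\nabla g$ for a small ReLU network $g:\mathbb{R}^d\rightarrow\mathbb{R}$ and compares the result to target samples with a plug-in MMD estimate. The architecture, Gaussian-kernel bandwidth, target distribution, and sample sizes are all illustrative assumptions.

```python
# Hypothetical illustration of the push-forward construction (grad g)_# p_z
# and an MMD comparison; all hyperparameters below are arbitrary choices.
import torch

d = 2  # ambient dimension

# g: R^d -> R, a ReLU multilayer perceptron. The paper bounds width/depth
# in terms of d and the approximation error; the sizes here are arbitrary.
g = torch.nn.Sequential(
    torch.nn.Linear(d, 64), torch.nn.ReLU(),
    torch.nn.Linear(64, 64), torch.nn.ReLU(),
    torch.nn.Linear(64, 1),
)

def push_forward(z):
    """Map samples z ~ p_z to samples from (grad g)_# p_z via autograd."""
    z = z.requires_grad_(True)
    out = g(z).sum()  # summing gives per-sample gradients in one backward pass
    (grad,) = torch.autograd.grad(out, z)
    return grad.detach()

def gaussian_kernel(x, y, bandwidth=1.0):
    """Gaussian (RBF) kernel matrix between two sample sets."""
    sq_dists = torch.cdist(x, y).pow(2)
    return torch.exp(-sq_dists / (2 * bandwidth ** 2))

def mmd_squared(x, y):
    """Biased plug-in estimator of MMD^2 between two empirical measures."""
    return (gaussian_kernel(x, x).mean()
            - 2 * gaussian_kernel(x, y).mean()
            + gaussian_kernel(y, y).mean())

# Source p_z: standard Gaussian. Target pi: a shifted Gaussian, standing in
# for an arbitrary target distribution we can only access through samples.
z = torch.randn(1000, d)
target_samples = torch.randn(1000, d) + 2.0

pushed = push_forward(z)
print("MMD^2 between (grad g)_# p_z and target:",
      mmd_squared(pushed, target_samples).item())
```

In a practical pipeline one could train $g$ by minimizing this MMD estimate with respect to the network parameters, though the paper's contribution is the existence and size analysis of such a $g$, not a training procedure.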
