Expressivity and Approximation Properties of Deep Neural Networks with ReLU$^k$ Activation

(2312.16483)
Published Dec 27, 2023 in cs.LG, cs.NA, cs.NE, and math.NA

Abstract

In this paper, we investigate the expressivity and approximation properties of deep neural networks employing the ReLU$^k$ activation function for $k \geq 2$. Although deep ReLU networks can approximate polynomials effectively, deep ReLU$^k$ networks can represent higher-degree polynomials exactly. Our first contribution is a comprehensive, constructive proof of polynomial representation by deep ReLU$^k$ networks, which yields an upper bound on both the size of the network and the number of its parameters. As a consequence, we demonstrate a suboptimal approximation rate for functions from Sobolev spaces as well as for analytic functions. Additionally, by exploring the power of deep ReLU$^k$ networks to represent shallow networks, we show that they can approximate functions from a range of variation spaces extending beyond those generated solely by the ReLU$^k$ activation function, demonstrating the adaptability of deep ReLU$^k$ networks across these spaces.
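As a concrete illustration of exact polynomial representation (a well-known identity, not the paper's general construction), a single hidden layer with two ReLU$^2$ units reproduces $x^2$ exactly, since $x^2 = \mathrm{ReLU}(x)^2 + \mathrm{ReLU}(-x)^2$. The sketch below is a minimal NumPy check of this identity; the helper names are ours, chosen for illustration.

```python
import numpy as np

def relu_k(x, k=2):
    """ReLU^k activation: max(x, 0) raised to the k-th power."""
    return np.maximum(x, 0.0) ** k

def square_via_relu2(x):
    """One hidden layer with two ReLU^2 units computes x^2 exactly:
    x^2 = ReLU(x)^2 + ReLU(-x)^2."""
    return relu_k(x, 2) + relu_k(-x, 2)

x = np.linspace(-3.0, 3.0, 13)
assert np.allclose(square_via_relu2(x), x ** 2)
```

Composing such exact building blocks is what allows deeper ReLU$^k$ networks to represent higher-degree polynomials precisely, rather than merely approximating them as plain ReLU networks do.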
