Approximation of functions with one-bit neural networks

(2112.09181)
Published Dec 16, 2021 in cs.LG, cs.IT, cs.NA, math.IT, and math.NA

Abstract

The celebrated universal approximation theorems for neural networks roughly state that any reasonable function can be arbitrarily well-approximated by a network whose parameters are appropriately chosen real numbers. This paper examines the approximation capabilities of one-bit neural networks -- those whose nonzero parameters are $\pm a$ for some fixed $a\neq 0$. One of our main theorems shows that for any $f\in C^s([0,1]^d)$ with $\|f\|_\infty<1$ and error $\varepsilon$, there is an $f_{NN}$ such that $|f(\boldsymbol{x})-f_{NN}(\boldsymbol{x})|\leq \varepsilon$ for all $\boldsymbol{x}$ away from the boundary of $[0,1]^d$, and $f_{NN}$ is either implementable by a $\{\pm 1\}$ quadratic network with $O(\varepsilon^{-2d/s})$ parameters or a $\{\pm \frac{1}{2}\}$ ReLU network with $O(\varepsilon^{-2d/s}\log(1/\varepsilon))$ parameters, as $\varepsilon\to 0$. We establish new approximation results for iterated multivariate Bernstein operators, error estimates for noise-shaping quantization on the Bernstein basis, and a novel implementation of the Bernstein polynomials by one-bit quadratic and ReLU neural networks.
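To give a concrete (and heavily simplified) sense of the ingredients mentioned in the abstract, the sketch below combines a one-dimensional Bernstein operator with first-order Sigma-Delta (noise-shaping) quantization of its coefficients to the one-bit alphabet $\{\pm 1\}$. This is not the paper's construction -- the paper uses iterated multivariate Bernstein operators and implements the result as a one-bit network -- and all function and variable names here (`bernstein_basis`, `sigma_delta_one_bit`, the test function `f`, the degree `n = 200`) are illustrative choices, not taken from the paper.

```python
# Minimal sketch (assumptions labeled above): approximate a smooth f on [0,1]
# with the degree-n Bernstein operator, then replace its real coefficients f(k/n)
# by +/-1 values produced by first-order Sigma-Delta (noise-shaping) quantization.
import numpy as np
from math import comb

def bernstein_basis(n, x):
    """Return the (len(x), n+1) matrix of degree-n Bernstein basis polynomials at x."""
    k = np.arange(n + 1)
    binom = np.array([comb(n, j) for j in k], dtype=float)
    return binom * x[:, None] ** k * (1.0 - x[:, None]) ** (n - k)

def sigma_delta_one_bit(samples):
    """First-order Sigma-Delta: quantize samples in (-1, 1) to +/-1 while shaping the error."""
    q = np.empty_like(samples)
    u = 0.0  # internal state accumulating the quantization error
    for i, y in enumerate(samples):
        q[i] = 1.0 if u + y >= 0 else -1.0
        u = u + y - q[i]
    return q

if __name__ == "__main__":
    f = lambda x: 0.8 * np.sin(2 * np.pi * x) * np.exp(-x)  # smooth, |f| < 1
    n = 200
    nodes = np.arange(n + 1) / n
    x = np.linspace(0.05, 0.95, 500)      # evaluate away from the boundary of [0,1]
    B = bernstein_basis(n, x)

    real_coeffs = f(nodes)                # unquantized Bernstein coefficients
    onebit_coeffs = sigma_delta_one_bit(real_coeffs)

    err_real = np.max(np.abs(B @ real_coeffs - f(x)))
    err_1bit = np.max(np.abs(B @ onebit_coeffs - f(x)))
    print(f"Bernstein with real coefficients,   sup-error: {err_real:.4f}")
    print(f"Bernstein with one-bit coefficients, sup-error: {err_1bit:.4f}")
```

The point of the sketch is the mechanism, not the rates: noise shaping pushes the coefficient quantization error into a form that the smooth Bernstein basis averages out, so the one-bit approximation stays close to the unquantized one on the interior of the interval. The paper's theorems quantify this precisely in the multivariate setting and show how to realize the resulting polynomials with one-bit quadratic and ReLU networks.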
