
An Exponential Improvement on the Memorization Capacity of Deep Threshold Networks (2106.07724v1)

Published 14 Jun 2021 in cs.LG, cs.IT, math.IT, and stat.ML

Abstract: It is well known that modern deep neural networks are powerful enough to memorize datasets even when the labels have been randomized. Recently, Vershynin (2020) settled a long-standing question by Baum (1988), proving that \emph{deep threshold} networks can memorize $n$ points in $d$ dimensions using $\widetilde{\mathcal{O}}(e^{1/\delta^2}+\sqrt{n})$ neurons and $\widetilde{\mathcal{O}}(e^{1/\delta^2}(d+\sqrt{n})+n)$ weights, where $\delta$ is the minimum distance between the points. In this work, we improve the dependence on $\delta$ from exponential to almost linear, proving that $\widetilde{\mathcal{O}}(\frac{1}{\delta}+\sqrt{n})$ neurons and $\widetilde{\mathcal{O}}(\frac{d}{\delta}+n)$ weights are sufficient. Our construction uses Gaussian random weights only in the first layer, while all the subsequent layers use binary or integer weights. We also prove new lower bounds by connecting memorization in neural networks to the purely geometric problem of separating $n$ points on a sphere using hyperplanes.
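The flavor of the construction can be illustrated with a toy experiment: a first layer of Gaussian random weights with threshold (sign) activations maps $n$ points to features that, with high probability, suffice to fit arbitrary labels. The sketch below is only an illustration of this random-threshold-features idea with a least-squares readout, not the paper's construction (which uses binary/integer weights in later layers and achieves the stated neuron and weight counts); all sizes (`n`, `d`, `m`) are arbitrary choices for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, m = 20, 5, 200  # n points in d dimensions, m random threshold neurons

# Random data points and arbitrary +/-1 labels to memorize
X = rng.standard_normal((n, d))
y = rng.choice([-1.0, 1.0], size=n)

# First layer: Gaussian random weights and biases, threshold activation
W = rng.standard_normal((d, m))
b = rng.standard_normal(m)
Phi = np.sign(X @ W + b)  # n x m matrix of threshold features

# With m >> n, Phi has full row rank w.h.p., so an exact linear
# readout exists; least squares recovers it (illustration only --
# the paper's upper layers use binary/integer weights instead)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
acc = float(np.mean(np.sign(Phi @ w) == y))
print(acc)  # 1.0: all n random labels are memorized
```

Shrinking `m` toward `n` eventually breaks the fit, which is the regime the paper's bounds speak to: how few neurons and weights still guarantee memorization.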

Authors (4)
  1. Shashank Rajput (17 papers)
  2. Kartik Sreenivasan (8 papers)
  3. Dimitris Papailiopoulos (59 papers)
  4. Amin Karbasi (116 papers)
Citations (23)
