
Geometric structure of Deep Learning networks and construction of global ${\mathcal L}^2$ minimizers

(2309.10639)
Published Sep 19, 2023 in cs.LG, cs.AI, math-ph, math.MP, math.OC, and stat.ML

Abstract

In this paper, we explicitly determine local and global minimizers of the $\mathcal{L}^2$ cost function in underparametrized Deep Learning (DL) networks; our main goal is to shed light on their geometric structure and properties. We accomplish this by a direct construction, without invoking the gradient descent flow at any point of this work. We specifically consider $L$ hidden layers, a ReLU ramp activation function, an $\mathcal{L}^2$ Schatten class (or Hilbert-Schmidt) cost function, input and output spaces $\mathbb{R}^Q$ with equal dimension $Q\geq1$, and hidden layers also defined on $\mathbb{R}^{Q}$; the training inputs are assumed to be sufficiently clustered. The training input size $N$ can be arbitrarily large - thus, we are considering the underparametrized regime. More general settings are left to future work. We construct an explicit family of minimizers for the global minimum of the cost function in the case $L\geq Q$, which we show to be degenerate. Moreover, we determine a set of $2^Q-1$ distinct degenerate local minima of the cost function. In the context presented here, the concatenation of hidden layers of the DL network is reinterpreted as a recursive application of a {\em truncation map} which "curates" the training inputs by minimizing their noise-to-signal ratio.
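To make the setting of the abstract concrete, here is a minimal sketch (not the paper's construction) of evaluating the $\mathcal{L}^2$ cost of a network with $L$ hidden layers, ReLU ramp activations, and input, output, and hidden layers all of equal dimension $Q$, over $N$ training pairs. All names, shapes, and the random data are illustrative assumptions; the paper analyzes the minimizers of this cost analytically rather than by training.

```python
import numpy as np

def relu(z):
    # ReLU ramp activation, applied componentwise
    return np.maximum(z, 0.0)

def l2_cost(weights, biases, X, Y):
    """
    L^2 (sum-of-squares / Hilbert-Schmidt-type) cost of a network whose
    input, output, and L hidden layers all live in R^Q.

    weights : list of (Q, Q) arrays, one per layer (L hidden + 1 output)
    biases  : list of (Q,) arrays, same length as weights
    X, Y    : (N, Q) arrays of training inputs and outputs
    """
    A = X
    # hidden layers: affine map followed by the ReLU ramp
    for W, b in zip(weights[:-1], biases[:-1]):
        A = relu(A @ W.T + b)
    # output layer: affine map, no activation
    out = A @ weights[-1].T + biases[-1]
    # squared error summed over all N training samples
    return 0.5 * np.sum((out - Y) ** 2)

# Illustrative underparametrized regime: N far exceeds the parameter count
Q, L, N = 3, 4, 10_000
rng = np.random.default_rng(0)
weights = [rng.normal(size=(Q, Q)) for _ in range(L + 1)]
biases = [rng.normal(size=Q) for _ in range(L + 1)]
X = rng.normal(size=(N, Q))
Y = rng.normal(size=(N, Q))
print(l2_cost(weights, biases, X, Y))
```

With $L + 1$ maps of size $Q \times Q$ plus biases, the parameter count is $(L+1)(Q^2 + Q)$, so taking $N$ much larger than this places the example in the underparametrized regime the abstract refers to.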
