Large-time asymptotics in deep learning (2008.02491v2)

Published 6 Aug 2020 in math.OC and cs.LG

Abstract: We consider the neural ODE perspective of supervised learning and study the impact of the final time $T$ (which may indicate the depth of a corresponding ResNet) in training. For the classical $L^2$-regularized empirical risk minimization problem, whenever the neural ODE dynamics are homogeneous with respect to the parameters, we show that the training error is at most of the order $\mathcal{O}\left(\frac{1}{T}\right)$. Furthermore, if the loss inducing the empirical risk attains its minimum, the optimal parameters converge to minimal $L^2$-norm parameters which interpolate the dataset. By a natural scaling between $T$ and the regularization hyperparameter $\lambda$ we obtain the same results when $\lambda\searrow 0$ and $T$ is fixed. This allows us to stipulate generalization properties in the overparametrized regime, now seen from the large-depth, neural ODE perspective. To enhance the polynomial decay, inspired by turnpike theory in optimal control, we propose a learning problem with an additional integral regularization term of the neural ODE trajectory over $[0,T]$. In the setting of $\ell^p$-distance losses, we prove that both the training error and the optimal parameters are at most of the order $\mathcal{O}\left(e^{-\mu t}\right)$ for any $t\in[0,T]$. The aforementioned stability estimates are also shown for continuous space-time neural networks, taking the form of nonlinear integro-differential equations. By using a time-dependent moving grid for discretizing the spatial variable, we demonstrate that these equations provide a framework for addressing ResNets with variable widths.
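As an illustrative sketch (our paraphrase, not the paper's verbatim formulation; the readout map $P$, activation $\sigma$, and loss are assumed placeholders), the $L^2$-regularized problem described above can be written as

$$\min_{\theta=(W,b)} \; \frac{1}{N}\sum_{i=1}^{N} \mathrm{loss}\big(P\,x_i(T),\, y_i\big) \;+\; \lambda \int_0^T \|\theta(t)\|^2 \,\mathrm{d}t, \quad \text{subject to} \quad \dot{x}_i(t) = \sigma\big(W(t)\,x_i(t) + b(t)\big), \;\; x_i(0) = x_i^{\mathrm{in}},$$

where homogeneity of the dynamics with respect to $\theta$ holds, for instance, when $\sigma$ is positively homogeneous (e.g. ReLU). The turnpike-inspired variant augments this objective with an integral tracking term along the whole trajectory, schematically

$$\int_0^T \frac{1}{N}\sum_{i=1}^{N} \mathrm{loss}\big(P\,x_i(t),\, y_i\big)\,\mathrm{d}t,$$

under which the abstract reports the exponential estimate $\mathcal{O}(e^{-\mu t})$ for both the training error and the optimal parameters at every $t\in[0,T]$, rather than the polynomial rate $\mathcal{O}(1/T)$.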

Authors (4)
  1. Carlos Esteve (7 papers)
  2. Borjan Geshkovski (12 papers)
  3. Dario Pighin (9 papers)
  4. Enrique Zuazua (102 papers)
Citations (35)
