
Accelerated Methods for Non-Convex Optimization

(1611.00756)
Published Nov 2, 2016 in math.OC and cs.DS

Abstract

We present an accelerated gradient method for non-convex optimization problems with Lipschitz continuous first and second derivatives. The method requires time $O(\epsilon^{-7/4} \log(1/\epsilon))$ to find an $\epsilon$-stationary point, meaning a point $x$ such that $\|\nabla f(x)\| \le \epsilon$. The method improves upon the $O(\epsilon^{-2})$ complexity of gradient descent and provides the additional second-order guarantee that $\nabla^2 f(x) \succeq -O(\epsilon^{1/2}) I$ for the computed $x$. Furthermore, our method is Hessian free, i.e., it only requires gradient computations, and is therefore suitable for large scale applications.
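To make the stopping criterion concrete, here is a minimal sketch of the $\epsilon$-stationarity test applied to plain gradient descent, the $O(\epsilon^{-2})$ baseline the paper improves upon. This is not the paper's accelerated, Hessian-free method; the function names, step size, and the toy non-convex objective below are illustrative assumptions only.

```python
import numpy as np

def gradient_descent_to_stationarity(grad_f, x0, epsilon=1e-4, step_size=0.1, max_iters=100_000):
    """Plain gradient descent run until an epsilon-stationary point is found,
    i.e. a point x with ||grad f(x)|| <= epsilon (the criterion from the abstract).
    This is the O(eps^{-2}) baseline, not the paper's accelerated method."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iters):
        g = grad_f(x)
        if np.linalg.norm(g) <= epsilon:  # epsilon-stationarity test
            return x
        x = x - step_size * g  # step_size should be at most 1/L for an L-Lipschitz gradient
    return x

# Hypothetical example: gradient of the non-convex f(x, y) = x^2 + y^2 + 3*sin(x*y)
grad = lambda v: np.array([2 * v[0] + 3 * v[1] * np.cos(v[0] * v[1]),
                           2 * v[1] + 3 * v[0] * np.cos(v[0] * v[1])])
x_star = gradient_descent_to_stationarity(grad, x0=[1.0, -1.5], epsilon=1e-5)
print(x_star, np.linalg.norm(grad(x_star)))
```

The paper's contribution is to reach the same first-order guarantee faster, in $O(\epsilon^{-7/4} \log(1/\epsilon))$ gradient evaluations, while also certifying approximate second-order stationarity.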
