Stochastic Recursive Variance Reduction for Efficient Smooth Non-Convex Compositional Optimization (1912.13515v2)

Published 31 Dec 2019 in stat.ML, cs.LG, and math.OC

Abstract: Stochastic compositional optimization arises in many important machine learning tasks such as value function evaluation in reinforcement learning and portfolio management. The objective function is the composition of two expectations of stochastic functions, and is more challenging to optimize than vanilla stochastic optimization problems. In this paper, we investigate the stochastic compositional optimization in the general smooth non-convex setting. We employ a recently developed idea of \textit{Stochastic Recursive Gradient Descent} to design a novel algorithm named SARAH-Compositional, and prove a sharp Incremental First-order Oracle (IFO) complexity upper bound for stochastic compositional optimization: $\mathcal{O}((n+m)^{1/2} \varepsilon^{-2})$ in the finite-sum case and $\mathcal{O}(\varepsilon^{-3})$ in the online case. Such a complexity is known to be the best one among IFO complexity results for non-convex stochastic compositional optimization, and is believed to be optimal. Our experiments validate the theoretical performance of our algorithm.
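
For orientation, a minimal sketch of the problem structure the abstract refers to and of the generic SARAH recursion it builds on; this is not taken from the paper itself, the notation ($\Phi$, $g_i$, $f_j$, $v_t$, $i_t$) is illustrative, and the actual SARAH-Compositional update may differ in details such as batch sizes and epoch lengths.

In the finite-sum case the objective is the nested average
\[
\min_x \; \Phi(x) = f\big(g(x)\big), \qquad g(x) = \frac{1}{n}\sum_{i=1}^{n} g_i(x), \qquad f(y) = \frac{1}{m}\sum_{j=1}^{m} f_j(y),
\]
whose gradient $\nabla \Phi(x) = \big(\nabla g(x)\big)^{\top} \nabla f\big(g(x)\big)$ couples an estimate of the inner value $g(x)$, the inner Jacobian $\nabla g(x)$, and the outer gradient $\nabla f$. The Stochastic Recursive Gradient Descent idea keeps a running estimator that is reset with a full (or large-batch) evaluation at the start of each epoch and then corrected recursively at every inner step,
\[
v_t = \nabla \Phi_{i_t}(x_t) - \nabla \Phi_{i_t}(x_{t-1}) + v_{t-1};
\]
in the compositional setting recursions of this form are typically maintained for the inner value and Jacobian as well, which underlies the $\mathcal{O}\big((n+m)^{1/2}\varepsilon^{-2}\big)$ finite-sum and $\mathcal{O}(\varepsilon^{-3})$ online IFO bounds stated above.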

Authors (3)
  1. Huizhuo Yuan (16 papers)
  2. Xiangru Lian (18 papers)
  3. Ji Liu (285 papers)
Citations (12)
