Emergent Mind

DIPPA: An improved Method for Bilinear Saddle Point Problems

(2103.08270)
Published Mar 15, 2021 in cs.LG and math.OC

Abstract

This paper studies bilinear saddle point problems $\min_{\mathbf{x}} \max_{\mathbf{y}} \, g(\mathbf{x}) + \mathbf{x}^{\top} \mathbf{A} \mathbf{y} - h(\mathbf{y})$, where the functions $g, h$ are smooth and strongly convex. When gradient and proximal oracles for $g$ and $h$ are accessible, optimal algorithms have already been developed in the literature \cite{chambolle2011first, palaniappan2016stochastic}. However, the proximal operator is not always easy to compute, especially in constrained zero-sum matrix games \cite{zhang2020sparsified}. This work proposes a new algorithm that only requires access to the gradients of $g$ and $h$. Our algorithm achieves a complexity upper bound $\tilde{\mathcal{O}}\left( \frac{\|\mathbf{A}\|_2}{\sqrt{\mu_x \mu_y}} + \sqrt[4]{\kappa_x \kappa_y (\kappa_x + \kappa_y)} \right)$, which has optimal dependency on the coupling condition number $\frac{\|\mathbf{A}\|_2}{\sqrt{\mu_x \mu_y}}$ up to logarithmic factors.
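To make the problem setup concrete, here is a minimal sketch of the bilinear saddle point objective solved by plain gradient descent-ascent. This is not the paper's DIPPA algorithm; it only illustrates the gradient-oracle access model. The quadratic choices $g(\mathbf{x}) = \frac{\mu_x}{2}\|\mathbf{x}\|^2$ and $h(\mathbf{y}) = \frac{\mu_y}{2}\|\mathbf{y}\|^2$, the step size, and all variable names are illustrative assumptions; the unique saddle point for this instance is $(\mathbf{0}, \mathbf{0})$.

```python
import numpy as np

# Sketch (NOT the paper's DIPPA method): gradient descent-ascent on
#   min_x max_y  g(x) + x^T A y - h(y)
# with the illustrative strongly convex choices
#   g(x) = (mu_x / 2) * ||x||^2,   h(y) = (mu_y / 2) * ||y||^2,
# whose unique saddle point is (0, 0).

rng = np.random.default_rng(0)
n, m = 5, 4
A = rng.standard_normal((n, m))   # coupling matrix
mu_x, mu_y = 1.0, 1.0             # strong convexity parameters

x = rng.standard_normal(n)
y = rng.standard_normal(m)
eta = 0.05  # step size, chosen small relative to mu and ||A||_2

for _ in range(2000):
    gx = mu_x * x + A @ y         # gradient of the objective in x
    gy = A.T @ x - mu_y * y       # gradient of the objective in y
    x = x - eta * gx              # descent step in x
    y = y + eta * gy              # ascent step in y

# Both iterates shrink toward the saddle point (0, 0).
print(np.linalg.norm(x), np.linalg.norm(y))
```

Only gradient evaluations of $g$ and $h$ appear in the loop; no proximal operator is needed, which is the access model the paper targets.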
