Stochastic Difference-of-Convex Algorithms for Solving Nonconvex Optimization Problems

(arXiv:1911.04334)
Published Nov 11, 2019 in math.NA, cs.NA, and math.OC

Abstract

The paper deals with stochastic difference-of-convex (DC) programs, that is, optimization problems whose cost function is the sum of a lower semicontinuous DC function and the expectation of a stochastic DC function with respect to a probability distribution. This class of nonsmooth, nonconvex stochastic optimization problems plays a central role in many practical applications. Although there are many contributions in the context of convex and/or smooth stochastic optimization, algorithms for nonconvex and nonsmooth programs remain rare. In the deterministic optimization literature, the DC Algorithm (DCA) is recognized as one of the few methods that solve nonconvex and nonsmooth optimization problems effectively. The main purpose of this paper is to present new stochastic DCAs for solving stochastic DC programs. The convergence of the proposed algorithms is carefully analyzed, and numerical experiments are conducted to illustrate their behavior.
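As a sketch of the problem class in standard DC-programming notation (the symbols below are illustrative, not taken verbatim from the paper):

```latex
% Stochastic DC program: a deterministic DC term plus an expected stochastic DC term.
\min_{x \in \mathbb{R}^n} \; F(x) = r(x) + \mathbb{E}_{\xi}\big[ f(x,\xi) \big],
\qquad r = r_1 - r_2, \quad f(\cdot,\xi) = f_1(\cdot,\xi) - f_2(\cdot,\xi),
% which admits the DC decomposition F = G - H with convex components
% G = r_1 + \mathbb{E}_{\xi}[f_1(\cdot,\xi)] and H = r_2 + \mathbb{E}_{\xi}[f_2(\cdot,\xi)].
% The classical DCA step linearizes the concave part -H at the current iterate x^k:
y^k \in \partial H(x^k), \qquad
x^{k+1} \in \operatorname*{arg\,min}_{x} \; \big\{ G(x) - \langle y^k, x \rangle \big\}.
```

A stochastic DCA replaces the intractable subgradient of the expectation term by a sample estimate. Below is a minimal Python sketch of one plausible such iteration, for the special case where the concave part is purely stochastic; the helper names `grad_f2_sample`, `solve_convex_subproblem`, and `sampler` are hypothetical, and this is a sketch under standard DCA assumptions, not the paper's exact scheme:

```python
import numpy as np

def stochastic_dca(x0, grad_f2_sample, solve_convex_subproblem, sampler,
                   n_iters=100, batch_size=32):
    """Illustrative stochastic DCA for min_x G(x) - E[f2(x, xi)], G convex.

    grad_f2_sample(x, xi): a subgradient of the convex sample f2(., xi) at x.
    solve_convex_subproblem(y): returns argmin_x G(x) - <y, x>, assumed
        available in closed form or via a convex solver.
    sampler(batch_size): draws i.i.d. realizations of xi.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iters):
        batch = sampler(batch_size)
        # Monte Carlo estimate of a subgradient of the concave part at x^k
        y = np.mean([grad_f2_sample(x, xi) for xi in batch], axis=0)
        # Convexified subproblem: the concave part is linearized at x^k
        x = solve_convex_subproblem(y)
    return x

# Toy usage on a hypothetical DC program:
#   min_x 0.5 ||x||^2 - E[ |<a, x>| ],  a ~ N(0, I),
# where argmin_x 0.5||x||^2 - <y, x> has the closed form x = y.
rng = np.random.default_rng(0)
x_star = stochastic_dca(
    x0=np.ones(5),
    grad_f2_sample=lambda x, a: np.sign(a @ x) * a,  # subgradient of |<a, x>|
    solve_convex_subproblem=lambda y: y,
    sampler=lambda b: rng.standard_normal((b, 5)),
)
```

The paper's algorithms and convergence analysis refine this basic template; the sketch above is only meant to convey the structure of a DCA iteration with a sampled subgradient.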
