Methods of Nonconvex Optimization (2406.10406v1)

Published 14 Jun 2024 in math.OC

Abstract: This book is devoted to finite-dimensional problems of nonconvex nonsmooth optimization and to numerical methods for their solution. Nonconvexity is studied through two main models of nonconvex dependencies: the so-called generalized differentiable functions and locally Lipschitz functions. Nonsmooth functions arise naturally in various applications, and they also appear within the theory of extremal problems itself through the operations of taking maxima and minima, decomposition techniques, exact nonsmooth penalties, and duality. The models of nonconvexity considered are quite general, cover the majority of practically important optimization problems, and clearly exhibit all the difficulties of nonconvex optimization. Generalized differentiable functions are studied by introducing a generalization of the concept of a gradient, constructing a calculus for it, and analyzing properties of nonconvex problems in terms of generalized gradients. On the numerical side, the theory and algorithms of subgradient descent from convex optimization extend to problems with generalized differentiable functions. Methods for solving Lipschitz problems are characterized by approximating the original functions with smoothed ones and applying iterative minimization procedures to the approximations. With this approach, the gradients of the smoothed functions can be approximated by stochastic finite differences, yielding methods that require no gradient evaluations. A similar approach can be justified for generalized differentiable and Lipschitz stochastic programming; in these cases, various generalizations of classical stochastic approximation and of the stochastic quasi-gradient method are obtained for solving constrained nonconvex nonsmooth stochastic programming problems.
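
The extension of subgradient descent to nonsmooth objectives mentioned in the abstract can be illustrated with a minimal sketch. The test function, step-size rule, and the particular subgradient selection below are illustrative assumptions, not constructions from the book:

```python
import numpy as np

def f(x):
    # Nonsmooth test function: f(x) = |x0 - 1| + max(x1, 0.5 * x1**2),
    # minimized at x = (1, 0) with f = 0.
    return abs(x[0] - 1.0) + max(x[1], 0.5 * x[1] ** 2)

def subgradient(x):
    # One element of the generalized-gradient set at x; at kink points,
    # where the set is not a singleton, any element may be returned.
    g0 = np.sign(x[0] - 1.0)
    g1 = 1.0 if x[1] >= 0.5 * x[1] ** 2 else x[1]
    return np.array([g0, g1])

def subgradient_descent(x0, n_iter=5000):
    x = np.asarray(x0, dtype=float)
    best_x, best_f = x.copy(), f(x)
    for k in range(1, n_iter + 1):
        # Diminishing, non-summable step sizes (1/sqrt(k))
        x = x - (1.0 / np.sqrt(k)) * subgradient(x)
        fx = f(x)
        # f need not decrease monotonically along subgradient steps,
        # so the best iterate seen so far is tracked separately.
        if fx < best_f:
            best_x, best_f = x.copy(), fx
    return best_x, best_f

x_star, f_star = subgradient_descent([3.0, 2.0])
print(x_star, f_star)  # expected to approach (1, 0) and 0
```

Tracking the best iterate is the usual remedy for the non-monotone behavior of subgradient steps; the diminishing step-size schedule is one standard choice among several.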

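The smoothing approach described in the abstract, where gradients of smoothed functions are approximated by stochastic finite differences so that no gradient evaluations are needed, admits a similarly minimal sketch. The Gaussian smoothing estimator, test function, and the schedules for the smoothing radius and step size are assumptions for illustration, not the book's exact construction:

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    # Nonsmooth, nonconvex test function: f(x) = |x0| + |x1^2 - 1|,
    # with global minima at (0, 1) and (0, -1).
    return abs(x[0]) + abs(x[1] - 1.0) * abs(x[1] + 1.0)

def smoothed_grad_estimate(f, x, h, n_samples=8):
    # Monte-Carlo estimate of the gradient of the Gaussian-smoothed
    # function f_h(x) = E_u[f(x + h*u)], u ~ N(0, I), using the
    # finite-difference identity  grad f_h(x) = E_u[(f(x + h*u) - f(x)) / h * u].
    d = len(x)
    g = np.zeros(d)
    fx = f(x)
    for _ in range(n_samples):
        u = rng.standard_normal(d)
        g += (f(x + h * u) - fx) / h * u
    return g / n_samples

def gradient_free_descent(f, x0, n_iter=2000):
    x = np.asarray(x0, dtype=float)
    for k in range(1, n_iter + 1):
        h = 0.5 / np.sqrt(k)     # shrink the smoothing radius over time
        step = 0.2 / np.sqrt(k)  # diminishing step size
        x = x - step * smoothed_grad_estimate(f, x, h)
    return x

x = gradient_free_descent(f, [2.0, 2.0])
print(x, f(x))  # expected near (0, 1), with f near 0
```

Only function values of f are ever evaluated, which is the point of the approach: the smoothed surrogate f_h is differentiable even when f is not, and its gradient is estimated from random finite differences.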