
ECCO: Equivalent Circuit Controlled Optimization (2211.08478v2)

Published 15 Nov 2022 in eess.SY and cs.SY

Abstract: We propose an adaptive optimization algorithm for solving unconstrained scaled gradient flow problems that achieves fast convergence by controlling the optimization trajectory shape and the discretization step sizes. Under a broad class of scaling functions, we establish convergence of the proposed approach to critical points of smooth objective functions, while demonstrating its flexibility and robustness with respect to hyperparameter tuning. First, we prove convergence of component-wise scaled gradient flow to a critical point under regularity conditions. We show that this controlled gradient flow dynamics is equivalent to the transient response of an electrical circuit, allowing circuit theory concepts to be used to solve the problem. Based on this equivalence, we develop two optimization trajectory control schemes based on minimizing the charge stored in the circuit: a second-order method that uses the true Hessian, and an alternate first-order method that approximates the optimization trajectory using only gradient information. While the control schemes are derived from circuit concepts, no circuit knowledge is needed to implement the algorithms. To find the value of the critical point, we propose a time step search routine for Forward Euler discretization that controls the local truncation error, a method adapted from circuit simulation ideas. In simulation, we find that the trajectory control outperforms uncontrolled gradient flow, and the error-aware discretization outperforms line search with the Armijo condition. Our algorithms are evaluated on convex and non-convex test functions, including neural networks, with convergence speeds comparable to or exceeding Adam.
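To make the discretization idea concrete, here is a minimal sketch of gradient flow integrated with Forward Euler and a local-truncation-error (LTE) controlled step size. This is an illustrative stand-in, not the paper's ECCO controller: the LTE is estimated by the standard step-doubling trick (compare one full Euler step against two half steps), and the function names and tolerances are assumptions for the example.

```python
import numpy as np

def grad_flow_lte(grad, x0, dt=0.1, tol=1e-3, max_iter=500):
    """Forward Euler on the gradient flow dx/dt = -grad(x), with a
    simple LTE-based step-size control: compare one full step of size
    dt against two half steps; shrink dt when the discrepancy is too
    large, grow it when the discrepancy is comfortably small.
    Illustrative sketch only -- not the paper's ECCO algorithm."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < 1e-8:
            break                               # reached a critical point
        full = x - dt * g                       # one Euler step of size dt
        half = x - 0.5 * dt * g                 # two Euler steps of size dt/2
        half = half - 0.5 * dt * grad(half)
        lte = np.linalg.norm(full - half)       # proxy for local truncation error
        if lte > tol:
            dt *= 0.5                           # error too large: reject step, shrink dt
            continue
        x = half                                # accept the more accurate estimate
        if lte < 0.1 * tol:
            dt *= 2.0                           # error well under budget: grow dt
    return x

# Usage on a convex quadratic f(x) = 0.5 * x^T A x, whose gradient is A x
# and whose unique critical point is the origin.
A = np.array([[3.0, 0.0], [0.0, 1.0]])
xstar = grad_flow_lte(lambda x: A @ x, np.array([2.0, -1.5]))
```

Note that the rejection branch (`lte > tol`) also guards stability: an Euler step large enough to diverge produces a large step-doubling discrepancy and is rejected before it is taken, which is the same rationale behind LTE control in circuit transient simulation.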
