Composite Anderson acceleration method with dynamic window-sizes and optimized damping (2203.14627v1)

Published 28 Mar 2022 in math.NA and cs.NA

Abstract: In this paper, we propose and analyze a set of fully non-stationary Anderson acceleration algorithms with dynamic window sizes and optimized damping. Although Anderson acceleration (AA) has been used for decades to speed up nonlinear solvers in many applications, most authors simply use and analyze the stationary version of Anderson acceleration (sAA) with a fixed window size and a constant damping factor. The behavior and potential of the non-stationary versions of Anderson acceleration remain an open question. Since most efficient linear solvers are built from composable algorithmic components, similar ideas can be used for AA to solve nonlinear systems. Thus, in the present work, to develop non-stationary Anderson acceleration algorithms, we first propose two systematic ways to dynamically alternate the window size $m$ by composition. One simple way to combine sAA(m) with sAA(n) in each iteration is to apply sAA(m) and sAA(n) separately and then average their results; this is an additive composite combination. The other, more important way is the multiplicative composite combination, in which we apply sAA(m) in the outer loop and sAA(n) in the inner loop. By doing this, significant gains can be achieved. Secondly, to make AA a fully non-stationary algorithm, we combine these strategies with our recent work on the non-stationary Anderson acceleration algorithm with optimized damping (AAoptD), which is another important direction for producing non-stationary AA and for which nice performance gains have been observed. Moreover, we also investigate the rate of convergence of these non-stationary AA methods under suitable assumptions. Finally, our numerical results show that some of the proposed non-stationary Anderson acceleration algorithms converge faster than the stationary sAA method and may significantly reduce the storage and time needed to find the solution in many cases.
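
To make the abstract's building blocks concrete, the following is a minimal Python/NumPy sketch of a stationary sAA(m) step and loop, together with an additive composite step that averages sAA(m) and sAA(n) updates. It assumes the usual least-squares formulation of Anderson acceleration with a damping factor beta; all function names are hypothetical, and the composite wrapper illustrates the idea rather than reproducing the paper's exact algorithm.

    import numpy as np

    def anderson_step(x_hist, f_hist, beta=1.0):
        # One Anderson-acceleration update from stored iterates x_j and
        # residuals f_j = g(x_j) - x_j (most recent last); beta is the
        # damping factor. Standard least-squares AA step, not necessarily
        # the paper's exact formulation.
        mk = len(x_hist) - 1
        if mk == 0:
            return x_hist[-1] + beta * f_hist[-1]  # no history yet: damped Picard step
        dF = np.column_stack([f_hist[j + 1] - f_hist[j] for j in range(mk)])
        dX = np.column_stack([x_hist[j + 1] - x_hist[j] for j in range(mk)])
        # Coefficients gamma minimizing ||f_k - dF @ gamma||_2.
        gamma, *_ = np.linalg.lstsq(dF, f_hist[-1], rcond=None)
        return x_hist[-1] + beta * f_hist[-1] - (dX + beta * dF) @ gamma

    def sAA(g, x0, m=3, beta=1.0, tol=1e-10, maxit=100):
        # Stationary Anderson acceleration sAA(m) for the fixed-point problem
        # x = g(x), with fixed window size m and constant damping beta.
        x, x_hist, f_hist = x0, [], []
        for _ in range(maxit):
            f = g(x) - x
            x_hist.append(x); f_hist.append(f)
            x_hist, f_hist = x_hist[-(m + 1):], f_hist[-(m + 1):]  # window of size m
            if np.linalg.norm(f) < tol:
                break
            x = anderson_step(x_hist, f_hist, beta)
        return x

    def additive_composite_step(x_hist, f_hist, m, n, beta=1.0):
        # Illustrative additive composite of sAA(m) and sAA(n): compute both
        # updates (here from truncations of one shared history) and average
        # them. The paper's exact bookkeeping may differ.
        xm = anderson_step(x_hist[-(m + 1):], f_hist[-(m + 1):], beta)
        xn = anderson_step(x_hist[-(n + 1):], f_hist[-(n + 1):], beta)
        return 0.5 * (xm + xn)

For instance, sAA(lambda x: np.cos(x), np.array([1.0]), m=2) accelerates the scalar fixed-point iteration x_{k+1} = cos(x_k). The multiplicative composite described in the abstract would instead nest the two solvers, running sAA(m) in an outer loop around an inner sAA(n) loop; that wiring, and the optimized damping of AAoptD, are omitted here since they depend on details given in the paper itself.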

Citations (1)
