Abstract

This paper deals with a general class of algorithms for the solution of fixed-point problems that we refer to as \emph{Anderson--Pulay acceleration}. This family includes the DIIS technique and its variant sometimes called commutator-DIIS, both introduced by Pulay in the 1980s to accelerate the convergence of self-consistent field procedures in quantum chemistry, as well as the related Anderson acceleration, which dates back to the 1960s, and the wealth of techniques they have inspired. Such methods aim to accelerate the convergence of any fixed-point iteration by combining several previous iterates to generate the next one at each step. This extrapolation process is characterised by its \emph{depth}, i.e. the number of previous iterates stored, which is a crucial parameter for the efficiency of the method and is generally set to an empirically chosen value. In the present work, we consider two parameter-driven mechanisms that let the depth vary along the iterations. In the first one, the depth grows until a certain nondegeneracy condition is no longer satisfied; the stored iterates (save for the last one) are then discarded and the method "restarts". In the second one, the depth is adapted continuously by eliminating at each step some of the oldest, least relevant iterates. In an abstract and general setting, we prove under natural assumptions the local convergence and acceleration of these two adaptive Anderson--Pulay methods, and we show that one can theoretically achieve a superlinear convergence rate with each of them. We then investigate their behaviour in quantum chemistry calculations. These numerical experiments show that both adaptive variants converge faster than a standard fixed-depth scheme and require, on average, less computational effort per iteration. This study is complemented by a review of known facts about the DIIS, in particular its link with the Anderson acceleration and some multisecant-type quasi-Newton methods.
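
As a rough illustration of the extrapolation step described in the abstract, the following sketch implements a basic fixed-depth DIIS/Anderson-type acceleration in Python. The function name anderson_pulay, the fixed-point map g, and the simple constrained least-squares formulation are illustrative assumptions and not the specific adaptive algorithms analysed in the paper; the restart and continuous depth-adaptation mechanisms studied there would replace the fixed truncation of the history shown here.

import numpy as np

def anderson_pulay(g, x0, depth=5, tol=1e-10, max_iter=200):
    # Illustrative fixed-depth DIIS/Anderson-type extrapolation (not the
    # paper's adaptive algorithms). At each step, keep the last `depth`
    # iterates x_i and residuals f_i = g(x_i) - x_i, choose coefficients
    # c_i with sum(c_i) = 1 minimising ||sum_i c_i f_i||_2, and set
    # x_{k+1} = sum_i c_i g(x_i).
    xs = [np.asarray(x0, dtype=float)]
    gs = [np.asarray(g(xs[0]), dtype=float)]
    fs = [gs[0] - xs[0]]
    for _ in range(max_iter):
        if np.linalg.norm(fs[-1]) < tol:
            break
        # Fixed-depth truncation of the history; the paper's adaptive
        # variants would instead restart or drop old iterates here.
        xs, gs, fs = xs[-depth:], gs[-depth:], fs[-depth:]
        F = np.column_stack(fs)
        n = F.shape[1]
        # Constrained least squares via the usual Lagrange (KKT) system.
        # Near-degenerate residuals make this system ill conditioned,
        # which is what the adaptive depth criteria guard against.
        A = np.zeros((n + 1, n + 1))
        A[:n, :n] = F.T @ F
        A[:n, n] = 1.0
        A[n, :n] = 1.0
        b = np.zeros(n + 1)
        b[n] = 1.0
        c = np.linalg.solve(A, b)[:n]
        x_new = sum(ci * gi for ci, gi in zip(c, gs))
        xs.append(x_new)
        gs.append(np.asarray(g(x_new), dtype=float))
        fs.append(gs[-1] - xs[-1])
    return xs[-1]

# Hypothetical usage: componentwise cosine map, fixed point near 0.739.
x_star = anderson_pulay(np.cos, np.array([0.0, 1.0]), depth=4)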
