A proof that Anderson acceleration increases the convergence rate in linearly converging fixed point methods (but not in quadratically converging ones)

Dr. Leo Rebholz

Abstract: Anderson acceleration (AA) is an extrapolation technique, originally proposed in 1965, that recombines the most recent iterates and update steps of a fixed point iteration to improve the convergence properties of the sequence.  Although AA has been used successfully for many years to improve nonlinear solver behavior on a wide variety of problems, a theory explaining the often-observed accelerated convergence was lacking.  In this talk we give an introduction to AA, then present a convergence proof showing that AA improves the linear convergence rate based on a gain factor of an underlying optimization problem, but also introduces higher-order terms into the residual error bound.  We then discuss improvements to AA based on our convergence theory, and show numerical results for the algorithms applied to several application problems, including Navier-Stokes, Boussinesq, Gross-Pitaevskii, and nonlinear Helmholtz systems.
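To make the idea concrete for attendees unfamiliar with AA: a minimal sketch of depth-m Anderson acceleration in Python/NumPy is given below. This is an illustrative implementation of the standard AA(m) algorithm, not the speaker's code; the function name `anderson_accelerate` and its parameters are chosen here for illustration. At each step the last m+1 residuals f_k = g(x_k) - x_k are combined with coefficients summing to one that minimize the norm of the combined residual (the least-squares "gain" problem referenced in the abstract), and the next iterate recombines the corresponding g-evaluations.

```python
import numpy as np

def anderson_accelerate(g, x0, m=3, tol=1e-10, max_iter=100):
    """Anderson acceleration AA(m) for the fixed-point iteration x = g(x).

    Keeps the last m+1 iterates and residuals f_k = g(x_k) - x_k, chooses
    coefficients alpha (summing to 1) minimizing || sum_j alpha_j f_j ||,
    and sets the next iterate to sum_j alpha_j g(x_j).
    """
    x = np.atleast_1d(np.asarray(x0, dtype=float))
    gx = g(x)
    X_hist = [gx]          # history of g(x_k)
    F_hist = [gx - x]      # history of residuals f_k = g(x_k) - x_k
    x = gx                 # first step is a plain fixed-point step
    for _ in range(max_iter):
        gx = g(x)
        f = gx - x
        if np.linalg.norm(f) < tol:
            return gx
        X_hist.append(gx)
        F_hist.append(f)
        if len(F_hist) > m + 1:        # keep at most m+1 history entries
            X_hist.pop(0)
            F_hist.pop(0)
        # Solve min_alpha || sum_j alpha_j f_j || s.t. sum_j alpha_j = 1
        # via the unconstrained difference form: columns are f_j - f_latest.
        F = np.column_stack([Fj - f for Fj in F_hist[:-1]])
        gamma, *_ = np.linalg.lstsq(F, -f, rcond=None)
        alpha = np.append(gamma, 1.0 - gamma.sum())
        x = sum(a * Xj for a, Xj in zip(alpha, X_hist))
    return x
```

For example, applying this to g(x) = cos(x), whose plain fixed-point iteration converges only linearly, reaches the fixed point x ≈ 0.7390851 in a handful of iterations, illustrating the rate improvement the talk analyzes.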

Bio: Leo Rebholz is a Professor of Mathematical Sciences at Clemson University, with research interests in numerical analysis, PDEs, CFD, turbulence, numerical linear algebra, model reduction, data assimilation, and nonlinear solvers.  He received his PhD in mathematics from the University of Pittsburgh in 2006, studying numerical analysis under the direction of Prof. William Layton.  After Pitt, he spent two years as a Senior Mathematician at Bechtel Bettis Atomic Power Laboratory, working on dynamical system model reduction.  In 2008, he began a tenure-track position at Clemson and has been there ever since.  He has published 3 books and over 100 journal articles, and has advised 9 PhD students.  He spends as much of his free time as possible outdoors, and enjoys boating, golfing, running, and drinking beer, wine, cognac, and high-end scotch.

Last Updated: August 11, 2020 - 9:44 am