Fixed point iterations appear in many areas of science, finance, engineering and applied mathematics. Their general form is $$x_{n+1} = f(x_n),$$ which is applied repeatedly until a convergence criterion is met. One problem with such techniques is that convergence can be slow, which limits their usefulness.
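In Python, the whole pattern is only a few lines. Here is a minimal sketch (the function name, tolerance and iteration cap are illustrative choices, not anything from the NAG Library):

```python
import numpy as np

def fixed_point(f, x0, tol=1e-10, max_iter=10000):
    """Repeat x <- f(x) until successive iterates agree to within tol."""
    x = x0
    for k in range(1, max_iter + 1):
        x_new = f(x)
        if abs(x_new - x) < tol:   # convergence criterion
            return x_new, k        # solution and iteration count
        x = x_new
    raise RuntimeError("no convergence in max_iter iterations")

print(fixed_point(np.cos, 1.0))    # the x = cos(x) example discussed below
```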
In a 2015 paper, NAG collaborator Nick Higham, together with Nataša Strabić, applied a technique called Anderson Acceleration to Higham's alternating projections algorithm for computing Nearest Correlation Matrices (NCMs), resulting in much faster convergence.
The Anderson-accelerated NCM routine was included in Mark 27 of the NAG Library under the name nag_correg_corrmat_fixed. While implementing it, the NAG team decided to make general-purpose Anderson Acceleration available to all users of the NAG Library through two new routines, sys_func_aa and sys_func_aa_rcomm, which can be applied to any fixed point problem.
In a recently published Jupyter Notebook, I demonstrate the use of these routines from Python. I first show how the convergence of the simple fixed point iteration $x_{n+1} = \cos(x_n)$ can be improved from requiring 88 iterations to only 10 using Anderson Acceleration, a speed-up of almost a factor of 9.
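For readers curious about what the algorithm actually does, here is a minimal pure-NumPy sketch of Anderson Acceleration (an illustrative textbook version, not the NAG implementation, which adds safeguards; the function name, window size `m` and tolerance are my own choices):

```python
import numpy as np

def anderson_fixed_point(g, x0, m=3, tol=1e-10, max_iter=1000):
    """Anderson-accelerated fixed point iteration for x = g(x).

    Keeps a window of the last m iterate/residual differences and
    combines them via a small least-squares problem at each step.
    """
    x = np.atleast_1d(np.asarray(x0, dtype=float))
    f = g(x) - x                       # residual of the current iterate
    dX, dF = [], []                    # histories of differences
    for k in range(max_iter):
        if np.linalg.norm(f) < tol:
            return x, k                # converged: solution, iteration count
        if dF:
            Fk = np.column_stack(dF)
            Xk = np.column_stack(dX)
            # coefficients minimising || f - Fk @ gamma ||_2
            gamma = np.linalg.lstsq(Fk, f, rcond=None)[0]
            x_new = x + f - (Xk + Fk) @ gamma
        else:
            x_new = x + f              # first step: plain fixed point update
        f_new = g(x_new) - x_new
        dX.append(x_new - x)
        dF.append(f_new - f)
        if len(dF) > m:                # drop the oldest difference
            dX.pop(0)
            dF.pop(0)
        x, f = x_new, f_new
    return x, max_iter

# x = cos(x): converges in roughly ten steps instead of dozens
root, iters = anderson_fixed_point(np.cos, 1.0)
print(root, iters)
```

The NAG routines sys_func_aa and sys_func_aa_rcomm provide a robust, production-quality version of this idea; the sketch is only meant to show the mechanism.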
I go on to show how Anderson Acceleration can be applied to three well-known methods for solving Poisson's equation in two dimensions. For example, in the Jacobi method, a fixed point iteration of the form $$u^{n+1}_{j,i} = \frac{1}{4} \left(u^{n}_{j+1,i} +u^{n}_{j-1,i} +u^{n}_{j,i+1} +u^{n}_{j,i-1} \right)+\frac{h^2}{4}f_{j,i}$$ is applied to every element of a $100 \times 100$ solution grid at each iteration. Without Anderson Acceleration, the Jacobi method took 28,734 iterations to converge on an example problem. With Anderson Acceleration, it converged in only 313 iterations, a speed-up of over 90x.
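As a concrete illustration of that update, here is a minimal NumPy sketch of one Jacobi sweep (not the notebook's code; it assumes Dirichlet boundary conditions, with the boundary values stored in the edges of `u` and left untouched):

```python
import numpy as np

def jacobi_sweep(u, f, h):
    """One Jacobi sweep of the update formula above on a uniform grid.

    u : 2D array of the current solution (edges hold boundary values)
    f : 2D array of the right-hand side, h : grid spacing
    """
    u_new = u.copy()
    u_new[1:-1, 1:-1] = 0.25 * (u[2:, 1:-1] + u[:-2, 1:-1]
                                + u[1:-1, 2:] + u[1:-1, :-2]
                                + h**2 * f[1:-1, 1:-1])
    return u_new
```

Since a sweep maps the current grid to a new one, it is itself a fixed point iteration $u \leftarrow g(u)$; flattening the grid to a vector is all that is needed to hand it to a general Anderson Acceleration routine.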
Head over to GitHub to access the notebook and let us know if you apply Anderson Acceleration to one of your own problems.