Finding the zeros of a polynomial is a long-standing problem in mathematics, with applications in finance, physics, engineering, control theory, signal processing…the list goes on. It is tempting to think that such an old and classical problem must have been completely solved by now; however, this is far from the case.

In fact, we don’t even need to look beyond simple quadratics to find subtleties. Below is a snippet of Python code which implements the standard quadratic formula $x = \frac{-b\pm\sqrt{b^2-4ac}}{2a}.$

from cmath import sqrt

def quad_solve(a, b, c):
    d = sqrt(b**2 - 4*a*c)
    return (-b + d)/(2*a), (-b - d)/(2*a)


Let's use it to solve the equation $$x^2+10^9x +1 = 0$$:

>>> quad_solve(1,1e9,1)
(0j, (-1000000000+0j))


According to the code snippet, one of the roots is identically zero, which, on brief inspection of the quadratic, is clearly nonsense. The reason for this is that when the coefficient of $x$ is very large in comparison to the others, subtractive cancellation occurs in the formula, leading to large inaccuracies. That is why the NAG Library for Python routines quadratic_real and quadratic_complex (which find the roots of real and complex quadratic equations respectively) are very careful to avoid such pitfalls. In the code snippet below, we call quadratic_real to solve the same quadratic.

>>> from naginterfaces.library import zeros
>>> zeros.quadratic_real(1.0, 1e9, 1.0)


This time, we can see that the root close to zero has been more sensibly evaluated.
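For readers curious how the cancellation can be avoided in general, here is the standard textbook remedy, sketched in plain Python (an illustration only, not necessarily what the NAG routines do internally): compute the larger-magnitude root first, choosing the sign of the square root so that no subtraction of nearly equal quantities occurs, then recover the smaller root from Vieta's relation $x_1 x_2 = c/a$.

```python
from cmath import sqrt

def quad_solve_stable(a, b, c):
    d = sqrt(b**2 - 4*a*c)
    # Choose the sign so that b and the chosen root of the
    # discriminant do not cancel when added.
    if (b.conjugate() * d).real >= 0:
        q = -(b + d) / 2
    else:
        q = -(b - d) / 2
    # q/a is the larger-magnitude root; c/q recovers the other
    # via the product of the roots, avoiding cancellation.
    return q / a, c / q

print(quad_solve_stable(1, 1e9, 1))  # roots close to -1e9 and -1e-9
```

Applied to the quadratic above, this sketch returns a small root close to $-10^{-9}$ rather than an identically zero one.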

Of course, for higher order polynomials things are trickier still, and at order five or higher no closed form expressions exist for the roots, so we must turn to iterative methods. One approach is to find a root (using a standard iterative method such as Newton’s method), then divide out this root (this is known as deflation) and repeat the process. However, Wilkinson’s polynomial (named after James Hardy Wilkinson, who was himself an early supporter of NAG) demonstrates why this approach fails. Small perturbations in a single coefficient of the polynomial can dramatically change the values and nature of the roots – the problem is ill-conditioned. A more sophisticated approach is required.
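Wilkinson's famous experiment is easy to reproduce (here sketched with NumPy rather than the deflation approach described above): perturbing the coefficient of $x^{19}$ in the degree-20 polynomial $(x-1)(x-2)\cdots(x-20)$ by just $2^{-23}$ sends several of the roots far into the complex plane.

```python
import numpy as np

# Wilkinson's polynomial W(x) = (x - 1)(x - 2)...(x - 20),
# expanded into coefficients (highest degree first).
coeffs = np.poly(np.arange(1, 21))

# Wilkinson's classic experiment: subtract 2**-23 from the
# x^19 coefficient -- a relative change of order 1e-9.
perturbed = coeffs.copy()
perturbed[1] -= 2**-23

roots = np.roots(perturbed)
# Several roots acquire substantial imaginary parts, even though
# the unperturbed roots are simply 1, 2, ..., 20.
print(np.max(np.abs(roots.imag)))
```

The perturbed polynomial has complex-conjugate pairs of roots with imaginary parts of magnitude well over one, illustrating just how ill-conditioned the coefficient representation can be.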

One of the best algorithms for the problem is Laguerre’s method, which converges cubically for simple roots and is, in general, very stable. In the NAG Library for Python, it is implemented as poly_complex (for complex coefficients) and poly_real (for real coefficients). However, even Laguerre’s method struggles for sufficiently large or ill-conditioned problems.
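To give a flavour of the method, here is a bare-bones sketch of a single-root Laguerre iteration in Python (purely illustrative; the NAG implementations involve far more careful scaling, stopping criteria and deflation). Each step evaluates $p$, $p'$ and $p''$ by Horner's scheme and applies the Laguerre update.

```python
import cmath

def laguerre(coeffs, x0, n_iter=50, tol=1e-12):
    """Find one root of a polynomial by Laguerre's method.

    coeffs: coefficients, highest degree first. Illustrative sketch only.
    """
    n = len(coeffs) - 1
    x = complex(x0)
    for _ in range(n_iter):
        # Evaluate p(x), p'(x) and p''(x) simultaneously via Horner's scheme.
        p, dp, d2p = complex(coeffs[0]), 0j, 0j
        for c in coeffs[1:]:
            d2p = d2p * x + 2 * dp
            dp = dp * x + p
            p = p * x + c
        if abs(p) < tol:
            return x
        # Laguerre update: x <- x - n / (G +/- sqrt((n-1)(nH - G^2))),
        # with G = p'/p and H = G^2 - p''/p.
        g = dp / p
        h = g * g - d2p / p
        rad = cmath.sqrt((n - 1) * (n * h - g * g))
        denom = g + rad if abs(g + rad) > abs(g - rad) else g - rad
        x -= n / denom
    return x
```

For example, `laguerre([1, 0, -1], 0.5)` converges to the root $x = 1$ of $x^2 - 1$; the sign in the denominator is chosen for the larger magnitude, which keeps the step small and stable.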

Recently, Thomas R. Cameron, assistant professor of mathematics at Penn State Behrend, has developed a modification of Laguerre’s method which uses an implicit deflation strategy to improve the performance and accuracy of the method. After a successful collaboration with Thomas, this algorithm is now available in Mark 27.1 of the NAG Library as poly_complex_fpml (for polynomials with complex coefficients) and poly_real_fpml (for polynomials with real coefficients). The graphs in Figure 2 show how the new routines outperform the old ones for general polynomials (in fact, for larger degrees, sometimes the old routines failed to converge at all). Generally, the new routines are twice as fast as the old ones, and the relative errors remain small, whereas in the old routines they increased linearly with the polynomial degree.

NumpyGuy
Hi

This looks really nice. How is the performance of the new solver compared to the standard Numpy routine? https://numpy.org/doc/stable/reference/generated/numpy.roots.html#numpy.roots

Thanks
Edvin Hopkins
Thanks for your comment - it's an interesting question, particularly because the NumPy routine uses a completely different algorithm (I believe it finds the eigenvalues of the companion matrix).
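For the curious, that companion-matrix idea can be sketched as follows (purely as an illustration of the technique, not NumPy's actual code): normalise the polynomial to be monic, build the companion matrix, and take its eigenvalues.

```python
import numpy as np

def roots_via_companion(coeffs):
    """Polynomial roots as eigenvalues of the companion matrix.

    coeffs: coefficients, highest degree first. Illustrative sketch of the
    approach numpy.roots is based on, not its actual implementation.
    """
    c = np.asarray(coeffs, dtype=float)
    c = c / c[0]                 # normalise to a monic polynomial
    n = len(c) - 1
    C = np.zeros((n, n))
    C[1:, :-1] = np.eye(n - 1)   # ones on the subdiagonal
    C[:, -1] = -c[::-1][:-1]     # last column holds the negated coefficients
    return np.linalg.eigvals(C)
```

For instance, for $x^2 - 3x + 2$ this returns the roots 1 and 2. The characteristic polynomial of the companion matrix is the polynomial itself, so a general-purpose eigenvalue solver does all the work.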

To get a rough idea of the relative performance of the solvers, I used the code in the blog post to compute the roots of the same polynomials using the NumPy roots solver. At first glance it looked like NumPy was computing the roots more quickly (although, to be fair, the NAG routine additionally computes condition numbers and backward errors). However, the backward errors from the NumPy routine were often one or two orders of magnitude larger than those from the NAG solver, and the plot equivalent to Figure 1 contained only about 10% of the number of points, suggesting that the forward errors were also large in some cases.

It would be interesting for us to do a more complete comparison in the future, but it seems that the NAG solvers certainly have an accuracy advantage over NumPy. If you don't have access to the solvers, do feel free to get in touch with support@nag.com for a 30-day trial.
David Allwright
Very interesting to read this. Is NAG planning to include algorithms for the multivariate case: n polynomial equations in n variables?
Edvin Hopkins
Hi, thanks for your comment. We have solvers for nonlinear systems of equations (including a recent addition, c05mb, which uses Anderson acceleration), though these are not specific to multivariate polynomials. There are no immediate plans for such a routine; however, we are always interested in potential new algorithms and ready to respond to customer needs.