Finding the zeros of a polynomial is a long-standing problem in mathematics, with applications in finance, physics, engineering, control theory, signal processing… the list goes on. It is tempting to think that such an old and classical problem must have been completely solved by now, but this is far from the case.

In fact, we don’t even need to look beyond simple quadratics to find subtleties. Below is a snippet of Python code which implements the standard quadratic formula $x = \frac{-b\pm\sqrt{b^2-4ac}}{2a}.$

from cmath import sqrt

def quad_solve(a, b, c):
    d = sqrt(b**2 - 4*a*c)
    return (-b + d)/(2*a), (-b - d)/(2*a)


Let's use it to solve the equation $$x^2+10^9x +1 = 0$$:

>>> quad_solve(1,1e9,1)
(0j, (-1000000000+0j))


According to the code snippet, one of the roots is identically zero, which, on brief inspection of the quadratic, is clearly nonsense. The reason is that when the $x$ coefficient is very large in comparison with the others, subtractive cancellation occurs in the formula, leading to large inaccuracies. That's why the NAG Library for Python routines quadratic_real and quadratic_complex (which find the roots of real and complex quadratic equations respectively) are written with great care to avoid such pitfalls. In the code snippet below, we call quadratic_real to solve the same quadratic.
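The standard textbook remedy for this cancellation is easy to sketch: compute the larger-magnitude root using the sign combination in which $-b$ and the discriminant term reinforce rather than cancel, then recover the smaller root from the product of the roots, $x_1 x_2 = c/a$. The following is a minimal illustration of that idea (it is not the NAG implementation, which contains further safeguards):

```python
from cmath import sqrt

def quad_solve_stable(a, b, c):
    """Roots of a*x**2 + b*x + c, avoiding subtractive cancellation.

    The larger-magnitude root is formed from -b and the discriminant
    term with matching signs (no cancellation); the smaller root then
    follows from the product of the roots, x1 * x2 = c / a.
    Textbook sketch only, not NAG's implementation.
    """
    d = sqrt(b**2 - 4*a*c)
    # Pick the sign that makes |b +/- d| as large as possible.
    if (b.conjugate() * d).real >= 0.0:
        q = -0.5 * (b + d)
    else:
        q = -0.5 * (b - d)
    return q / a, c / q
```

With the same coefficients as above, quad_solve_stable(1.0, 1e9, 1.0) returns the small root as roughly $-10^{-9}$ rather than an exact zero.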

>>> from naginterfaces.library import zeros
>>> zeros.quadratic_real(1., 1.e9, 1.)


This time, we can see that the root close to zero has been evaluated sensibly (its true value is approximately $-10^{-9}$).

Figure 1: Roots of all polynomials with coefficients ±1 and degree ≤ 18

Of course, for higher-order polynomials things are trickier still, and at degree five or higher no closed-form expressions exist for the roots, so we must turn to iterative methods. One approach is to find a root (using a standard iterative method such as Newton's method), divide out this root (a process known as deflation) and repeat. However, Wilkinson's polynomial (named after James Hardy Wilkinson, who was himself an early supporter of NAG) demonstrates why this approach can fail: small perturbations in a single coefficient of the polynomial can dramatically change the value and nature of the roots – the problem is ill-conditioned. A more sophisticated approach is required.
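Wilkinson's classic experiment is easy to reproduce: perturbing the $x^{19}$ coefficient of $(x-1)(x-2)\cdots(x-20)$ by just $2^{-23}$ sends several of the real roots far into the complex plane. Here is a short sketch using NumPy purely to illustrate the conditioning:

```python
import numpy as np

# Wilkinson's polynomial: p(x) = (x - 1)(x - 2) ... (x - 20).
coeffs = np.poly(np.arange(1, 21))

# Perturb the coefficient of x**19 by 2**-23, as in Wilkinson's
# classic experiment.
perturbed = coeffs.copy()
perturbed[1] -= 2.0 ** -23

r = np.roots(perturbed)
# A tiny change to one coefficient has produced roots with large
# imaginary parts -- the root-finding problem is ill-conditioned.
print(max(abs(r.imag)))
```

In Wilkinson's original analysis this perturbation moves some roots to complex pairs with imaginary parts close to $\pm 2.8$, even though the coefficient change is tiny relative to the coefficient itself.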

One of the best algorithms for the problem is Laguerre's method, which converges cubically for simple roots and is, in general, very stable. In the NAG Library for Python it is implemented as poly_complex (for complex coefficients) and poly_real (for real coefficients). However, even Laguerre's method struggles on sufficiently large or ill-conditioned problems.
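To make the method concrete, here is a bare-bones Laguerre iteration for a single root – a textbook sketch, not NAG's code, which adds scaling, safeguards and deflation:

```python
import cmath

def laguerre_root(coeffs, x0, tol=1e-14, maxit=100):
    """Find one root of p(z) = coeffs[0]*z**n + ... + coeffs[n]
    by Laguerre's method. Textbook sketch, not NAG's implementation."""
    n = len(coeffs) - 1
    x = complex(x0)
    for _ in range(maxit):
        # Evaluate p, p' and p'' at x with Horner's scheme.
        p, dp, d2p = complex(coeffs[0]), 0j, 0j
        for c in coeffs[1:]:
            d2p = d2p * x + dp
            dp = dp * x + p
            p = p * x + c
        d2p *= 2.0
        if abs(p) < tol:
            return x
        G = dp / p
        H = G * G - d2p / p
        s = cmath.sqrt((n - 1) * (n * H - G * G))
        # Choose the sign giving the larger denominator, for stability.
        denom = G + s if abs(G + s) >= abs(G - s) else G - s
        x = x - n / denom
    return x
```

For example, laguerre_root([1, 0, -2], 1.0) converges to $\sqrt{2}$ in a handful of iterations; cubic convergence means each step roughly triples the number of correct digits for a simple root.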

Recently, Thomas R. Cameron, assistant professor of mathematics at Penn State Behrend, has developed a modification of Laguerre's method which uses an implicit deflation strategy to improve the performance and accuracy of the method. After a successful collaboration with Thomas, this algorithm is now available in Mark 27.1 of the NAG Library as poly_complex_fpml (for polynomials with complex coefficients) and poly_real_fpml (for polynomials with real coefficients). The graphs in Figure 2 show how the new routines outperform the old ones for general polynomials (in fact, for larger degrees the old routines sometimes failed to converge at all). Generally, the new routines are twice as fast as the old ones, and their relative errors remain small, whereas those of the old routines increased linearly with the polynomial degree.

Figure 2: Comparison of NAG routines for polynomials with randomly generated coefficients in [-1000, 1000]

When we first looked into the new algorithm, we found that on some particularly ill-conditioned problems (such as Wilkinson polynomials) the new routines actually gave less accurate results than the old. However, we now have a trump card here: the new routines have an extra argument, polish, which allows the user to select from a choice of polishing techniques to improve the solution, at the cost of some performance. Polishing reduces the errors by orders of magnitude, and the new routines once again give smaller errors than the old ones. Thus, for ill-conditioned problems, we recommend using polishing. (Note that the routines also return condition numbers.)
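The cheapest flavour of polishing is easy to illustrate: take each approximate root and apply a Newton step to the original (undeflated) polynomial, which for a simple root roughly doubles the number of correct digits per step. The sketch below shows the idea only – it is not NAG's implementation:

```python
import numpy as np

def newton_polish(coeffs, approx_roots, steps=1):
    """Refine approximate polynomial roots with Newton steps on the
    original polynomial. Illustrates the idea behind polishing;
    not NAG's implementation."""
    p = np.poly1d(coeffs)
    dp = p.deriv()
    z = np.asarray(approx_roots, dtype=complex)
    for _ in range(steps):
        z = z - p(z) / dp(z)
    return z

# Roots of (x - 1)(x - 2)(x - 3), deliberately perturbed.
rough = [1.001, 1.999, 3.0005]
print(newton_polish([1.0, -6.0, 11.0, -6.0], rough))
```

A single step shrinks each error from around $10^{-3}$ to around $10^{-6}$ in this example, at the cost of one extra polynomial (and derivative) evaluation per root.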

Figure 1 was generated using poly_real_fpml with polish=1 (which employs a single additional Newton iteration to improve the solution) and shows the roots lying in the upper half of the complex plane for all polynomials with coefficients +/-1 and degree eighteen or less. The routine had no problem computing these roots, whereas the equivalent original routine, poly_real, encountered numerous difficulties. If you want to try generating such a plot yourself, the following piece of Python code should do the trick.

import numpy as np
import matplotlib.pyplot as plt
from naginterfaces.library import zeros

nmax = 18
roots = np.empty(0)
for n in range(1, nmax + 1):
    # a[0:n+1] holds the coefficients of a degree-n polynomial;
    # a[n+1] is a sentinel (+1) so the scan below cannot run off the end.
    a = np.ones(n + 2)
    # Fix one coefficient at -1: p and -p have the same roots, so this
    # halves the number of sign patterns to visit.
    a[n] = -1
    j = 0
    while j <= n:
        # Compute the roots with a single Newton polishing step
        # (the roots are returned in the z field of the result tuple).
        z = zeros.poly_real_fpml(a[0:n + 1], polish=1).z
        # Keep the roots in the closed upper half plane.
        roots = np.append(roots, np.extract(np.imag(z) >= 0.0, z))
        # Advance to the next sign pattern (binary counting over +/-1).
        j = 0
        while a[j] == -1:
            j = j + 1
        if j <= n:
            a[j] = -1
            a[0:j] = 1
            j = 0
plt.scatter(roots.real, roots.imag, s=0.05)
plt.show()


The new polynomial root-finding routines are available in Mark 27.1 of the NAG Library, as poly_complex_fpml (for polynomials with complex coefficients) and poly_real_fpml (for polynomials with real coefficients). The old routines (poly_complex and poly_real) have now been deprecated. Note that throughout this blog we have referred to the Python interfaces for the new routines, but they can also be called using the Fortran interfaces (c02aaf and c02abf) and the C interfaces (c02aac and c02abc). Full product trials are available for the NAG Library.

With thanks to Lawrence Mulholland and Joe Davison for their valuable input into this blog post.

Comments

NumpyGuy
Hi,

This looks really nice. How is the performance of the new solver compared to the standard Numpy routine? https://numpy.org/doc/stable/reference/generated/numpy.roots.html#numpy.roots

Thanks
Edvin Hopkins
Thanks for your comment - it's an interesting question, particularly because the NumPy routine uses a completely different algorithm (I believe it finds the eigenvalues of the companion matrix).
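For the curious, the companion-matrix idea can be written down in a few lines. This is a simplified sketch of the approach, not NumPy's exact code (the real routine also handles things like leading and trailing zero coefficients):

```python
import numpy as np

def roots_via_companion(coeffs):
    """Polynomial roots as eigenvalues of the companion matrix --
    a simplified sketch of the approach numpy.roots takes."""
    c = np.asarray(coeffs, dtype=float)
    c = c / c[0]                  # make the polynomial monic
    n = len(c) - 1
    A = np.zeros((n, n))
    A[1:, :-1] = np.eye(n - 1)    # ones on the subdiagonal
    A[0, :] = -c[1:]              # first row built from the coefficients
    # The characteristic polynomial of A is the monic polynomial itself,
    # so its eigenvalues are exactly the roots.
    return np.linalg.eigvals(A)

print(roots_via_companion([1.0, -3.0, 2.0]))  # roots of x**2 - 3x + 2
```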

To get a rough idea of the relative performance of the solvers, I used the code in the blogpost to compute roots of the same polynomials using the NumPy roots solver. At first glance it looked like NumPy was computing the roots more quickly (although to be fair the NAG routine additionally computes condition numbers and backward errors). However, the backward errors from the NumPy routine were often one or two orders of magnitude larger than the NAG solver, and the equivalent plot to Figure 1 contained only about 10% of the number of points, suggesting that the forward errors were also large in some cases.

It would be interesting for us to do a more complete comparison in the future, but it seems that the NAG solvers certainly have an accuracy advantage over NumPy. If you don't have access to the solvers, do feel free to get in touch with support@nag.com for a 30 day trial.
David Allwright
Very interesting to read this. Is NAG planning to include algorithms for the multivariate case --- n polynomial equations in n variables?
Edvin Hopkins
Hi, thanks for your comment. We have solvers for nonlinear systems of equations (including a recent addition, c05mb, which uses Anderson acceleration), though these are not specific to multivariate polynomials. There are no immediate plans for such a routine, but we are always interested in potential new algorithms and ready to respond to customer needs.