Finding the zeros of a polynomial is a long-standing problem in mathematics, with applications in finance, physics, engineering, control theory, signal processing…the list goes on. It is tempting to think that such an old and classical problem must have been completely solved by now; however, this is far from the case.

In fact, we don’t even need to look beyond simple quadratics to find subtleties. Below is a snippet of Python code which implements the standard quadratic formula \[x = \frac{-b\pm\sqrt{b^2-4ac}}{2a}.\]

```
from cmath import sqrt

def quad_solve(a, b, c):
    # Textbook quadratic formula, applied directly.
    d = sqrt(b**2 - 4*a*c)
    return (-b + d)/(2*a), (-b - d)/(2*a)
```

Let's use it to solve the equation \(x^2+10^9x +1 = 0\):

```
>>> quad_solve(1,1e9,1)
(0j, (-1000000000+0j))
```

According to the code snippet, one of the roots is identically zero, which, on brief inspection of the quadratic, is clearly nonsense. The reason is that when the *x* coefficient is very large in comparison to the others, subtractive cancellation occurs in the formula, leading to large inaccuracies. That's why in the NAG Library for Python routines `quadratic_real` and `quadratic_complex` (which find roots of real and complex quadratic equations respectively) we are very careful to avoid such pitfalls. In the code snippet below, we call `quadratic_real` to solve the same quadratic.

```
>>> from naginterfaces.library import zeros
>>> zeros.quadratic_real(1,1e9,1)
QuadraticRealReturnData(z_small=(-1e-09+0j), z_large=(-1000000000+0j))
```

This time, we can see that the root close to zero has been more sensibly evaluated.
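A standard remedy for this cancellation (sketched below in the textbook form; not necessarily what the NAG routine does internally) is to compute the larger-magnitude root with the sign choice that avoids cancellation, and then recover the smaller root from the product of the roots, \(x_1 x_2 = c/a\). The `quad_solve_stable` name is ours, and the sketch assumes real coefficients, real roots and \(b \neq 0\):

```
import math

def quad_solve_stable(a, b, c):
    # Roots of a*x**2 + b*x + c; real coefficients, real roots and b != 0
    # assumed. A sketch of the standard cancellation-free rearrangement.
    d = math.sqrt(b*b - 4*a*c)
    # copysign ensures |d| is added to |b| rather than subtracted,
    # so no subtractive cancellation occurs.
    q = -0.5*(b + math.copysign(d, b))
    return q/a, c/q

print(quad_solve_stable(1, 1e9, 1))  # → (-1000000000.0, -1e-09)
```

For \(a = 1\), \(b = 10^9\), \(c = 1\) this reproduces both well-separated roots, with no spurious zero.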

Of course, for higher-order polynomials things are trickier still, and at degree five or higher no general closed-form expressions exist for the roots, so we must turn to iterative methods. One approach is to find a root (using a standard iterative method such as Newton's method), divide out this root (a process known as deflation) and repeat. However, Wilkinson's polynomial (named after James Hardy Wilkinson, who was himself an early supporter of NAG) demonstrates why this approach fails: small perturbations in a single coefficient of the polynomial can dramatically change the value and nature of the roots – the problem is ill-conditioned. A more sophisticated approach is required.
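Wilkinson's classic experiment is easy to reproduce. The sketch below (using NumPy's general-purpose `roots` routine purely for illustration) builds \(W(x) = (x-1)(x-2)\cdots(x-20)\) from its roots, perturbs the \(x^{19}\) coefficient by Wilkinson's famous \(2^{-23}\), and shows that several of the integer roots turn into complex-conjugate pairs with imaginary parts of order one:

```
import numpy as np

# Wilkinson's polynomial W(x) = (x-1)(x-2)...(x-20), coefficients in
# decreasing powers of x.
w = np.poly(np.arange(1, 21))

# Perturb the x**19 coefficient by Wilkinson's classic 2**-23.
w_pert = w.copy()
w_pert[1] -= 2.0**-23

# The exact roots are the integers 1..20, yet the perturbed polynomial
# has complex-conjugate root pairs well away from the real axis.
print(np.max(np.abs(np.roots(w_pert).imag)))
```

A perturbation of about \(10^{-7}\) in one coefficient has moved some roots a distance of order one in the complex plane – exactly the ill-conditioning described above.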

One of the best algorithms for the problem is Laguerre's method, which converges cubically for simple roots and is, in general, very stable. In the NAG Library for Python, this was implemented as `poly_complex` (for complex coefficients) and `poly_real` (for real coefficients). However, even Laguerre's method struggles for sufficiently large or ill-conditioned problems.

Recently, Thomas R. Cameron, assistant professor of mathematics at Penn State Behrend, has developed a modification of Laguerre's method which uses an implicit deflation strategy to improve the performance and accuracy of the method. After a successful collaboration with Thomas, this algorithm is now available in Mark 27.1 of the NAG Library as `poly_complex_fpml` (for polynomials with complex coefficients) and `poly_real_fpml` (for polynomials with real coefficients). The graphs in Figure 2 show how the new routines outperform the old ones for general polynomials (in fact, for larger degrees, sometimes the old routines failed to converge at all). **Generally, the new routines are twice as fast as the old ones**, and the relative errors remain small, whereas in the old routines they increased linearly with the polynomial degree.

When we were first looking into the new algorithm, we found that on some particularly ill-conditioned problems (such as Wilkinson polynomials) the new routines actually gave less accurate results than the old. However, we now have a trump card here: the new routines have an extra argument, `polish`, which allows the user to select from a choice of polishing techniques to improve the solution, at the cost of a loss in performance. This reduces errors by orders of magnitude and the new routines once again give smaller errors than the old ones. Thus, for ill-conditioned problems, we recommend that polishing is used. (Note that the routines also return condition numbers.)

Figure 1 was generated using `poly_real_fpml` with `polish=1` (which employs a single additional Newton iteration to improve the solution) and shows those roots in the upper complex half plane of all polynomials with coefficients ±1 and degree eighteen or less. The routine had no problem computing these roots, whereas the equivalent original routine, `poly_real`, encountered numerous difficulties. If you want to try generating such a plot yourself, the following piece of Python code should do the trick.

```
import numpy as np
import matplotlib.pyplot as plt
from naginterfaces.library import zeros

nmax = 18
roots = np.empty(0)
for n in range(1, nmax+1):
    # a[0:n+1] holds the n+1 coefficients of a degree-n polynomial, each
    # +1 or -1. Since p and -p have the same roots, one coefficient can
    # be fixed at -1; a[n+1] = 1 acts as a sentinel for the enumeration.
    a = np.ones(n+2)
    a[n] = -1
    j = 0
    while j <= n:
        z = zeros.poly_real_fpml(a[0:n+1], polish=1)
        # Keep the computed roots in the closed upper half plane.
        roots = np.append(roots, np.extract(z[0].imag >= 0.0, z[0]))
        # Step to the next sign pattern (a binary-counter increment).
        while a[j] == -1:
            j = j + 1
        if j <= n:
            a[j] = -1
            a[0:j] = 1
            j = 0
plt.scatter(roots.real, roots.imag, s=0.05)
plt.show()
```

The new polynomial root-finding routines are available in Mark 27.1 of the NAG Library as `poly_complex_fpml` (for polynomials with complex coefficients) and `poly_real_fpml` (for polynomials with real coefficients). The old routines (`poly_complex` and `poly_real`) have now been deprecated. Note that throughout this blog we have referred to the Python interfaces for the new routines, but they can also be called using the Fortran interfaces (`c02aaf` and `c02abf`) and the C interfaces (`c02aac` and `c02abc`). Full product trials are available for the NAG Library.

With thanks to Lawrence Mulholland and Joe Davison for their valuable input into this blog post.

This looks really nice. How is the performance of the new solver compared to the standard Numpy routine? https://numpy.org/doc/stable/reference/generated/numpy.roots.html#numpy.roots

Thanks

To get a rough idea of the relative performance of the solvers, I used the code in the blogpost to compute roots of the same polynomials using the NumPy roots solver. At first glance it looked like NumPy was computing the roots more quickly (although to be fair the NAG routine additionally computes condition numbers and backward errors). However, the backward errors from the NumPy routine were often one or two orders of magnitude larger than the NAG solver, and the equivalent plot to Figure 1 contained only about 10% of the number of points, suggesting that the forward errors were also large in some cases.

It would be interesting for us to do a more complete comparison in the future, but it seems that the NAG solvers certainly have an accuracy advantage over NumPy. If you don't have access to the solvers, do feel free to get in touch with support@nag.com for a 30 day trial.
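For readers who want to repeat the NumPy side of this comparison, one common proxy for the relative backward error of a computed root \(z\) is \(|p(z)| / \sum_i |a_i||z|^i\). A quick sketch (the function name and scaling choice are ours, not necessarily the measure NAG reports):

```
import numpy as np

def rel_backward_residual(coeffs, roots):
    # |p(z)| scaled by sum_i |a_i|*|z|**i, the latter evaluated by Horner
    # on the absolute values; a standard cheap backward-error proxy.
    num = np.abs(np.polyval(coeffs, roots))
    den = np.polyval(np.abs(coeffs), np.abs(roots))
    return num / den

coeffs = np.poly(np.arange(1, 11))   # (x-1)(x-2)...(x-10)
print(rel_backward_residual(coeffs, np.roots(coeffs)).max())
```

Running this over the ±1 polynomials from the blog post is a straightforward way to tabulate the backward errors mentioned above.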