# `library.roots` Submodule

## Module Summary

Interfaces for the NAG Mark 28.6 `roots` Chapter.

`roots`: Roots of One or More Transcendental Equations

This module is concerned with the calculation of zeros of continuous functions of one or more variables. The majority of problems considered are for real-valued functions of real variables, in which case complex equations must be expressed in terms of the equivalent larger system of real equations.

## Background

The module divides naturally into two parts.

### A Single Equation

The first deals with the real zeros of a real function f(x) of a single variable x.

There are three functions with simple calling sequences.
The first assumes that you can determine an initial interval [a, b] within which the desired zero lies (that is, where f(a)f(b) < 0), and outside which all other zeros lie.
The function then systematically subdivides the interval to produce a final interval containing the zero.
This final interval has a length bounded by your specified error requirements; the end of the interval where the function has smallest magnitude is returned as the zero.
This function is guaranteed to converge to a **simple** zero of the function. (Here we define a simple zero as a zero corresponding to a sign-change of the function; none of the available functions are capable of making any finer distinction.) However, as with the other functions described below, a non-simple zero might be determined and it is left to you to check for this.
The algorithm used is due to Brent (1973).
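The interval-subdivision strategy can be sketched with a plain bisection loop (a simplification: Brent's method also mixes in interpolation steps for speed, and the function name here is illustrative, not a library routine):

```python
import math

def bisect(f, a, b, tol=1e-10):
    # Assumes f is continuous on [a, b] with f(a)*f(b) < 0.
    fa, fb = f(a), f(b)
    if fa * fb > 0.:
        raise ValueError("f(a) and f(b) must have opposite signs")
    while b - a > tol:
        m = 0.5 * (a + b)
        fm = f(m)
        if fa * fm <= 0.:       # the zero lies in [a, m]
            b, fb = m, fm
        else:                   # the zero lies in [m, b]
            a, fa = m, fm
    # Return the end of the final interval where |f| is smaller,
    # mirroring the behaviour described above.
    return a if abs(fa) <= abs(fb) else b

root = bisect(lambda x: x - math.exp(-x), 0., 1.)
print('{:1.8f}'.format(root))
```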

The two other functions are both designed for the case where you are unable to specify an interval containing the simple zero. One starts from an initial point and performs a search for an interval containing a simple zero. If such an interval is computed then the method described above is used next to determine the zero accurately. The other method uses a 'continuation' method based on a secant iteration. A sequence of subproblems is solved; the first of these is trivial and the last is the actual problem of finding a zero of f(x). The intermediate problems employ the solutions of earlier problems to provide initial guesses for the secant iterations used to calculate their solutions.
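The secant update that drives the continuation approach can be sketched on its own (the continuation machinery, which blends in a sequence of easier subproblems, is omitted; names are illustrative):

```python
import math

def secant(f, x0, x1, tol=1e-12, maxit=50):
    # Iterate x_{k+1} = x_k - f(x_k)*(x_k - x_{k-1})/(f(x_k) - f(x_{k-1})).
    f0, f1 = f(x0), f(x1)
    for _ in range(maxit):
        if f1 == f0:            # flat secant: cannot proceed
            break
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        x0, f0, x1, f1 = x1, f1, x2, f(x2)
        if abs(x1 - x0) < tol:
            break
    return x1

root = secant(lambda x: x - math.exp(-x), 0., 1.)
print('{:1.8f}'.format(root))
```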

Three other functions are also supplied. They employ reverse communication and use the same core algorithms as the functions described above.

Finally, two functions are provided to return values of Lambert's W function (sometimes known as the 'product log' or 'Omega' function), which is the inverse function of

f(w) = we^w;

that is, if w = W(x) is Lambert's W function for x, then w is a zero of the function we^w - x. One function uses the iterative method described in Barry et al. (1995) to return values from the real branches of W(x) (restricting x to be real). The second function enforces no such restriction, and uses the approach described in Corless et al. (1996).
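For real x > 0 the defining relation can be inverted directly with Newton's method on g(w) = we^w - x (a sketch of the principal branch only; this is not the Barry et al. or Corless et al. scheme used by the library functions):

```python
import math

def lambert_w0(x, tol=1e-12):
    # Newton iteration: w <- w - (w*e^w - x)/((w + 1)*e^w), for x > 0.
    w = math.log1p(x)           # a rough starting guess
    for _ in range(100):
        ew = math.exp(w)
        step = (w * ew - x) / ((w + 1.) * ew)
        w -= step
        if abs(step) < tol:
            break
    return w

w = lambert_w0(1.)
print(abs(w * math.exp(w) - 1.) < 1e-10)   # w satisfies w*e^w = x
```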

### Systems of Equations

The functions in the second part of this module are designed to solve a set of n nonlinear equations in n unknowns:

f(x) = 0, x = (x1, x2, ..., xn)^T,

where T stands for transpose.

It is assumed that the functions are continuous and differentiable so that the matrix of first partial derivatives of the functions, the **Jacobian** matrix Jij(x) = ∂fi/∂xj evaluated at the point x, exists, though it may not be possible to calculate it directly.

The functions must be independent, otherwise there will be an infinity of solutions and the methods will fail. However, even when the functions are independent the solutions may not be unique. Since the methods are iterative, an initial guess at the solution has to be supplied, and the solution located will usually be the one closest to this initial guess.

## Available Routines

### Zeros of Functions of One Variable

The functions can be divided into two classes.
There are three functions (`contfn_interval_rcomm()`, `contfn_cntin_rcomm()` and `contfn_brent_rcomm()`) all written in reverse communication form and three (`contfn_brent_interval()`, `contfn_cntin()` and `contfn_brent()`) written in direct communication form (see Direct and Reverse Communication Routines for a description of the difference between these two conventions).
The direct communication functions are designed for inexperienced users and, in particular, for solving problems where the function f(x), whose zero is to be calculated, can be coded as a callback function.
These functions find the zero by using the same core algorithms as the reverse communication functions.
Experienced users are recommended to use the reverse communication functions directly as they permit you more control of the calculation.
Indeed, if the zero-finding process is embedded in a much larger program then the reverse communication functions should always be used.
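The difference between the two conventions can be illustrated with a toy reverse-communication engine: the solver returns to the caller whenever it needs a function value, rather than invoking a callback. Everything below (the class, its two-value protocol, the argument names) is invented for this sketch and is not the NAG interface:

```python
from math import exp

class BisectRC:
    """Toy reverse-communication bisection engine (illustrative only)."""

    def __init__(self, a, b, tol=1e-10):
        self.a, self.b, self.tol = a, b, tol
        self.fa = self.fb = None
        self._stage = 0

    def step(self, fx=None):
        """Advance the solver; return (point needing f, done flag)."""
        if self._stage == 0:
            self._stage = 1
            return self.a, False            # ask the caller for f(a)
        if self._stage == 1:
            self.fa, self._stage = fx, 2
            return self.b, False            # ask the caller for f(b)
        if self._stage == 2:
            self.fb, self._stage = fx, 3
        else:                               # fold in f(midpoint)
            m = 0.5 * (self.a + self.b)
            if self.fa * fx <= 0.:
                self.b, self.fb = m, fx
            else:
                self.a, self.fa = m, fx
        if self.b - self.a <= self.tol:
            return 0.5 * (self.a + self.b), True
        return 0.5 * (self.a + self.b), False

# The caller owns the loop and evaluates f itself:
eng = BisectRC(0., 1.)
x, done = eng.step()
while not done:
    x, done = eng.step(fx=x - exp(-x))      # f(x) = x - exp(-x)
print('{:1.8f}'.format(x))
```

Because the caller drives the loop, the function evaluation can involve arbitrary surrounding program state, which is exactly why reverse communication suits solvers embedded in larger programs.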

The recommendation as to which function should be used depends mainly on whether you can supply an interval [a, b] containing the zero; that is, where f(a)f(b) < 0.
If such an interval can be supplied, then `contfn_brent()` (or, in reverse communication, `contfn_brent_rcomm()`) should be used, in general.
This recommendation should be qualified in the case when the only interval which can be supplied is very long relative to your error requirements **and** you can also supply a good approximation to the zero.
In this case `contfn_cntin()` (or, in reverse communication, `contfn_cntin_rcomm()`) **may** prove more efficient (though these latter functions will not provide the error bound available from `contfn_brent_rcomm()`).

If an interval containing the zero cannot be supplied then you must choose between `contfn_brent_interval()` (or, in reverse communication, `contfn_interval_rcomm()` followed by `contfn_brent_rcomm()`) and `contfn_cntin()` (or, in reverse communication, `contfn_cntin_rcomm()`). `contfn_brent_interval()` first determines an interval containing the zero, and then proceeds as in `contfn_brent()`; it is particularly recommended when you do not have a good initial approximation to the zero.
If a good initial approximation to the zero is available then `contfn_cntin()` is to be preferred.
Since neither of these latter functions has guaranteed convergence to the zero, you are recommended to experiment with both in case of difficulty.

### Solution of Sets of Nonlinear Equations

The solution of a set of nonlinear equations

fi(x1, x2, ..., xn) = 0, i = 1, 2, ..., n,

can be regarded as a special case of the problem of finding a minimum of a sum of squares

s(x) = Σi [fi(x1, x2, ..., xn)]^2.

So the functions in submodule `opt` are relevant as well as the special nonlinear equations functions.

The functions for solving a set of nonlinear equations can also be divided into two classes.
There are six functions (`sys_func_aa()`, `sys_func_easy()`, `sys_func_expert()`, `sparsys_func_easy()`, `sys_deriv_easy()` and `sys_deriv_expert()`) all written in direct communication form and three (`sys_func_aa_rcomm()`, `sys_func_rcomm()` and `sys_deriv_rcomm()`) written in reverse communication form.
The direct communication functions are designed for inexperienced users and, in particular, these functions require the fi (and possibly their derivatives) to be calculated in user-supplied callback functions.
These should be set up carefully so the NAG Library functions can work as efficiently as possible.
Experienced users are recommended to use the reverse communication functions as they permit you more control of the calculation.
Indeed, if the zero-finding process is embedded in a much larger program then the reverse communication functions should always be used.

The main decision you have to make is whether to supply the derivatives ∂fi/∂xj. It is advisable to do so if possible, since the results obtained by algorithms which use derivatives are generally more reliable than those obtained by algorithms which do not use derivatives.

`sys_deriv_easy()`, `sys_deriv_expert()` and `sys_deriv_rcomm()` require you to provide the derivatives, whilst `sys_func_aa()`, `sys_func_aa_rcomm()`, `sys_func_easy()`, `sys_func_expert()`, `sys_func_rcomm()` and `sparsys_func_easy()` do not. `sys_func_easy()`, `sys_deriv_easy()` and `sparsys_func_easy()` are easy-to-use functions; greater flexibility may be obtained using `sys_func_expert()` and `sys_deriv_expert()` (or, in reverse communication, `sys_func_rcomm()` and `sys_deriv_rcomm()`), but these have longer argument lists. `sys_func_easy()`, `sys_func_expert()`, `sys_func_rcomm()` and `sparsys_func_easy()` approximate the derivatives internally using finite differences. `sys_func_aa()` and `sys_func_aa_rcomm()` do not use derivatives at all, and may be useful when the cost of evaluating f(x) is high.
If f(x) can be evaluated cheaply, then functions which use the Jacobian or its approximation may converge more quickly.

`sparsys_func_easy()` is an easy-to-use function specially adapted for sparse problems, that is, problems where each function fi depends on a small subset of the n variables so that the Jacobian matrix has many zeros.
It employs sparse linear algebra methods and consequently is expected to take significantly less time to complete than the other functions, especially if n is large.
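As a concrete illustration of the potential saving (assuming, for this sketch, a tridiagonal Jacobian like the one in the Tutorial system below), only 3n - 2 of the n^2 entries are nonzero:

```python
import numpy as np

n = 9
# Nonzeros of a tridiagonal Jacobian lie on the main, sub- and superdiagonals.
rows, cols = np.indices((n, n))
pattern = np.abs(rows - cols) <= 1
nnz = int(pattern.sum())
print(nnz, 'nonzeros out of', n * n, 'entries')
assert nnz == 3 * n - 2
```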

`sys_deriv_check()` is provided for use in conjunction with `sys_deriv_easy()`, `sys_deriv_expert()` and `sys_deriv_rcomm()` to check the user-supplied derivatives for consistency with the functions themselves.
You are strongly advised to make use of this function whenever `sys_deriv_easy()`, `sys_deriv_expert()` or `sys_deriv_rcomm()` is used.

Two points are worth bearing in mind when coding the functions. Firstly, the calculation of the functions and their derivatives should be ordered so that **cancellation errors** are avoided.
This is particularly important in a function that uses these quantities to build up estimates of higher derivatives.
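A standalone illustration of cancellation (unrelated to any particular NAG routine): evaluating sqrt(x + 1) - sqrt(x) directly subtracts two nearly equal quantities and loses figures, while the algebraically equivalent form 1/(sqrt(x + 1) + sqrt(x)) avoids the cancellation:

```python
import math

x = 1e12
naive = math.sqrt(x + 1.) - math.sqrt(x)          # cancellation of ~1e6-sized terms
stable = 1. / (math.sqrt(x + 1.) + math.sqrt(x))  # same quantity, rearranged
# The true value is approximately 1/(2*sqrt(x)) = 5e-7.
print(abs(naive - 5e-7) > abs(stable - 5e-7))
```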

Secondly, **scaling** of the variables has a considerable effect on the efficiency of a function.
The problem should be designed so that the elements of x are of similar magnitude.
The same comment applies to the functions, i.e., all the fi(x) should be of comparable size.
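A sketch of what such scaling looks like in practice (the system and scale factors here are contrived for illustration): if x[0] is of order 1e6 and x[1] of order 1e-6 at the solution, the solver can be given y = x / s instead, where s holds the typical magnitudes:

```python
import numpy as np

s = np.array([1e6, 1e-6])    # typical magnitudes of the original variables

def f(x):
    # Contrived system with a solution near x = (1e6*sqrt(2), 1e-6*sqrt(2)).
    return np.array([x[0] * x[1] - 2., x[0] - 1e12 * x[1]])

def f_scaled(y):
    # Same system in scaled variables y = x / s; both y components are O(1).
    return f(s * y)

y = np.array([np.sqrt(2.), np.sqrt(2.)])   # solution in scaled variables
print(bool(np.allclose(f_scaled(y), 0.)))
```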

The accuracy is usually determined by the accuracy arguments of the functions, but the following points may be useful.

Greater accuracy in the solution may be requested by choosing smaller input values for the accuracy arguments. However, if unreasonable accuracy is demanded, rounding errors may become important and cause a failure.

Some idea of the accuracies of the xi may be obtained by monitoring the progress of the function to see how many figures remain unchanged during the last few iterations.

An approximation to the error in the solution x̄ is given by ε, where ε is the solution to the set of linear equations

J(x̄)ε = -f(x̄),

where f(x̄) = (f1(x̄), f2(x̄), ..., fn(x̄))^T.

Note that the QR decomposition of J is available from `sys_func_expert()` and `sys_deriv_expert()` (or, in reverse communication, `sys_func_rcomm()` and `sys_deriv_rcomm()`), so that ε = -R^(-1)Q^T f; Q^T f is also provided by these functions.

If the functions are changed by small amounts εi, for i = 1, 2, ..., n, then the corresponding change in the solution x̄ is given approximately by σ, where σ is the solution of the set of linear equations

J(x̄)σ = -ε.

Thus one can estimate the sensitivity of x̄ to any uncertainties in the specification of fi, for i = 1, 2, ..., n. As noted above, the sophisticated functions `sys_func_expert()` and `sys_deriv_expert()` (or, in reverse communication, `sys_func_rcomm()` and `sys_deriv_rcomm()`) provide the QR decomposition of J.
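Both estimates amount to a single linear solve. A sketch with NumPy (the system, Jacobian and approximate solution below are invented for illustration):

```python
import numpy as np

def f(x):
    # A small test system with exact solution (1, 1).
    return np.array([x[0]**2 + x[1]**2 - 2., x[0] - x[1]])

def jac(x):
    return np.array([[2.*x[0], 2.*x[1]], [1., -1.]])

xbar = np.array([1.001, 0.999])              # approximate solution
eps = np.linalg.solve(jac(xbar), -f(xbar))   # error estimate: J(xbar) eps = -f(xbar)
print(np.round(xbar + eps, 6))               # corrected estimate, near (1, 1)
```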

### Values of Lambert's W function

If you require purely-real values of W, these will be evaluated marginally more efficiently by `lambertw_real()` than `lambertw_complex()` owing to the differing iterative procedures used by each function.

## Tutorial

### Zeros of Functions of One Variable

Consider the problem of finding x such that x = exp(-x); i.e., the value of x that gives y = 0 when y = x - exp(-x).

```
>>> from math import exp
>>> f = lambda x: x - exp(-x)
```

All of the NAG zero-finding routines require you to have some basic knowledge of the structure of the function you are addressing, so that the solver knows where to begin. This knowledge must be supplied either as a rough initial guess for the zero, or as an interval on which f changes sign (and, therefore, passes through zero).

In the worst case a sign-change interval is not known and the initial guess for the zero may be fairly poor:

```
>>> x = 1.
```

If so, use `contfn_brent_interval()`:

```
>>> from naginterfaces.library import roots
>>> soln = roots.contfn_brent_interval(x, f)
```

Upon successful exit `soln` contains the computed zero and the bounding interval that was used internally:

```
>>> print('Final value of x is {:1.8f}.'.format(soln.x))
Final value of x is 0.56714329.
>>> print(
...     'Search interval was [' +
...     ', '.join(['{:1.8f}']*2).format(soln.a, soln.b) +
...     '].'
... )
Search interval was [0.50000000, 0.90000000].
```

For an alternative approach that uses continuation, more suitable when a better initial guess is known, use `contfn_cntin()`:

```
>>> x = 0.6
>>> soln = roots.contfn_cntin(x, f)
```

The continuation algorithm returns only the final `x`:

```
>>> print('Final value of x is {:1.8f}.'.format(soln))
Final value of x is 0.56714329.
```
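Either result can be sanity-checked by substituting back into the function (a standalone snippet):

```python
from math import exp

f = lambda x: x - exp(-x)
# The computed zero from either routine should make f vanish to high accuracy.
print(abs(f(0.56714329)) < 1e-7)
```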

### Solution of Sets of Nonlinear Equations

Consider the following tridiagonal system of nine equations in nine variables x1, x2, ..., x9:

(3 - 2xi)xi - x(i-1) - 2x(i+1) + 1 = 0, 1 <= i <= 9,

where x0 = x10 = 0, expressed as a vector function
```
>>> def f(x):
...     fvec = (3. - 2.*x)*x + 1.
...     fvec[1:] = fvec[1:] - x[:n-1]
...     fvec[:n-1] = fvec[:n-1] - 2.*x[1:]
...     return fvec
```

for which we seek x such that f(x) = 0.

As with scalar root-finding above, the NAG solver needs you to supply a rough initial guess for x.

```
>>> import numpy as np
>>> n = 9; x = -1.*np.ones(n)
```

To solve a system of nonlinear equations, again the NAG routines can benefit from your knowledge of the structure of the problem as outlined above.

If you are happy for the Jacobian of the system to be approximated internally, use, say, `sys_func_easy()`:

```
>>> from naginterfaces.library import roots
>>> soln = roots.sys_func_easy(f, x)
>>> def show_soln():
...     print(
...         'Final approximate solution is\n'
...         '(\n' +
...         '\n'.join([' {:.4f},']*n).format(*soln.x) +
...         '\n)'
...     )
>>> show_soln()
Final approximate solution is
(
-0.5707,
-0.6816,
-0.7017,
-0.7042,
-0.7014,
-0.6919,
-0.6658,
-0.5960,
-0.4164,
)
```

The exact Jacobian of the system in question can be represented using the following Python function:

```
>>> import numpy as np
>>> def fjac(x):
...     fj = np.zeros((len(x), len(x)))
...     fj[0, 0] = 3. - 4.*x[0]
...     fj[0, 1] = -2.
...     for i in range(1, n-1):
...         fj[i, i-1] = -1.
...         fj[i, i] = 3. - 4.*x[i]
...         fj[i, i+1] = -2.
...     fj[n-1, n-2] = -1.
...     fj[n-1, n-1] = 3. - 4.*x[n-1]
...     return fj
```
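The consistency of an analytic Jacobian with its function can be spot-checked by forward differences, in the spirit of `sys_deriv_check()` (a self-contained, hand-rolled sketch, not the library routine):

```python
import numpy as np

n = 9

def f(x):
    fvec = (3. - 2.*x)*x + 1.
    fvec[1:] = fvec[1:] - x[:n-1]
    fvec[:n-1] = fvec[:n-1] - 2.*x[1:]
    return fvec

def fjac(x):
    # Analytic tridiagonal Jacobian of f.
    fj = np.zeros((n, n))
    for i in range(n):
        if i > 0:
            fj[i, i-1] = -1.
        fj[i, i] = 3. - 4.*x[i]
        if i < n - 1:
            fj[i, i+1] = -2.
    return fj

x = -1.*np.ones(n)
h = 1e-7
# Column j of the forward-difference Jacobian approximates df/dx_j.
fd = np.column_stack([(f(x + h*e) - f(x))/h for e in np.eye(n)])
print(bool(np.allclose(fd, fjac(x), atol=1e-5)))
```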

Function `sys_deriv_easy()` can be used to take advantage of these exact derivatives:

```
>>> from naginterfaces.library import roots
>>> soln = roots.sys_deriv_easy(f, fjac, x)
>>> show_soln()
Final approximate solution is
(
-0.5707,
-0.6816,
-0.7017,
-0.7042,
-0.7014,
-0.6919,
-0.6658,
-0.5960,
-0.4164,
)
```

When the Jacobian is known to contain a large number of zero entries, but isn't available in an exact form, `sparsys_func_easy()` may be appropriate.
With this solver we are asked to compute values of the system for a supplied base-1 index mask. (In the contrived example below we use precomputed 'full' values for the system, using our definition of f above, to populate the requested entries of the return vector.
For a real-world sparse problem where memory usage is a concern this would clearly be counterproductive.) The function vector is updated in place in the callback, which further optimizes memory usage.

```
>>> def sparsf(indf, x, fvec):
...     fv = f(x)
...     for ind in indf:
...         fvec[ind-1] = fv[ind-1]
```

For our system the number of nonzeros is 3n - 2. Communicating this in the NAG call can reduce memory requirements.

```
>>> from naginterfaces.library import roots
>>> soln = roots.sparsys_func_easy(sparsf, x, nnz=3*n-2)
>>> show_soln()
Final approximate solution is
(
-0.5707,
-0.6816,
-0.7017,
-0.7042,
-0.7014,
-0.6919,
-0.6658,
-0.5960,
-0.4164,
)
```

Computational efficiency may also be enhanced by using the fixed-point iteration routine `sys_func_aa()` and its Anderson acceleration facility.
The potential downsides are that convergence is less reliable than with the Powell hybrid method used in the solvers shown previously, and that the system may need to be rearranged into the fixed-point form x = g(x) for best performance:

```
>>> import numpy as np
>>> def fixedf(x):
...     fvec = np.zeros(n)
...     fvec[0] = (2.*x[1] - 1.)/(3. - 2.*x[0])
...     for i in range(1, n-1):
...         fvec[i] = (2.*x[i+1] - 1. + x[i-1])/(3. - 2.*x[i])
...     fvec[n-1] = (x[n-2] - 1.)/(3. - 2.*x[n-1])
...     return fvec - x
```
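The rearrangement preserves the zeros: algebraically, each component of `fixedf` equals -fi(x)/(3 - 2*x[i]), so the two formulations vanish at exactly the same points. A self-contained numerical check of this identity:

```python
import numpy as np

n = 9

def f(x):
    fvec = (3. - 2.*x)*x + 1.
    fvec[1:] = fvec[1:] - x[:n-1]
    fvec[:n-1] = fvec[:n-1] - 2.*x[1:]
    return fvec

def fixedf(x):
    fvec = np.zeros(n)
    fvec[0] = (2.*x[1] - 1.)/(3. - 2.*x[0])
    for i in range(1, n-1):
        fvec[i] = (2.*x[i+1] - 1. + x[i-1])/(3. - 2.*x[i])
    fvec[n-1] = (x[n-2] - 1.)/(3. - 2.*x[n-1])
    return fvec - x

x = np.linspace(-1., 0., n)   # an arbitrary test point
print(bool(np.allclose(fixedf(x), -f(x)/(3. - 2.*x))))
```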

The size of the iteration subspace cache to use is supplied via argument `m`:

```
>>> from naginterfaces.library import roots
>>> soln = roots.sys_func_aa(fixedf, x, m=4)
>>> show_soln()
Final approximate solution is
(
-0.5707,
-0.6816,
-0.7017,
-0.7042,
-0.7014,
-0.6919,
-0.6658,
-0.5960,
-0.4164,
)
```

## See Also

`naginterfaces.library.examples.roots`
: This subpackage contains examples for the `roots` module. See also the Examples subsection.

## Functionality Index

**Lambert's W function**

- complex values: `lambertw_complex()`
- real values: `lambertw_real()`

**Zeros of functions of one variable**

- direct communication
  - binary search followed by Brent algorithm: `contfn_brent_interval()`
  - Brent algorithm: `contfn_brent()`
  - continuation method: `contfn_cntin()`
- reverse communication
  - binary search: `contfn_interval_rcomm()`
  - Brent algorithm: `contfn_brent_rcomm()`
  - continuation method: `contfn_cntin_rcomm()`

**Zeros of functions of several variables**

- checking function
  - checks user-supplied Jacobian: `sys_deriv_check()`
- direct communication
  - Anderson acceleration: `sys_func_aa()`
  - easy-to-use
    - derivatives required: `sys_deriv_easy()`
    - no derivatives required: `sys_func_easy()`
    - no derivatives required, sparse: `sparsys_func_easy()`
  - sophisticated
    - derivatives required: `sys_deriv_expert()`
    - no derivatives required: `sys_func_expert()`
- reverse communication
  - Anderson acceleration: `sys_func_aa_rcomm()`
  - sophisticated
    - derivatives required: `sys_deriv_rcomm()`
    - no derivatives required: `sys_func_rcomm()`

For full information please refer to the NAG Library document

https://www.nag.com/numeric/nl/nagdoc_28.6/flhtml/c05/c05intro.html

## Examples

- `naginterfaces.library.examples.roots.contfn_brent_interval_ex.main()`

  Example for `naginterfaces.library.roots.contfn_brent_interval()`. Finds a zero of a continuous function using Brent's algorithm.

  ```
  >>> main()
  naginterfaces.library.roots.contfn_brent_interval Python Example Results.
  Implied volatility for an option with a given target price.
  The implied volatility for the target price of 12.35008695 is 0.09000000.
  ```

- `naginterfaces.library.examples.roots.sys_func_rcomm_ex.main()`

  Example for `naginterfaces.library.roots.sys_func_rcomm()`. Find a solution of a system of nonlinear equations. Demonstrates promoting a `NagAlgorithmicWarning` to an exception.

  ```
  >>> main()
  naginterfaces.library.roots.sys_func_rcomm Python Example Results.
  Solution of a system of nonlinear equations.
  Final 2-norm of the residuals after 11 iterations is 1.19264e-08.
  Final approximate solution is
  (
   -0.5707,
   -0.6816,
   -0.7017,
   -0.7042,
   -0.7014,
   -0.6919,
   -0.6658,
   -0.5960,
   -0.4164,
  )
  ```