NAG Toolbox: nag_ode_sl2_breaks_funs (d02ke)
Purpose
nag_ode_sl2_breaks_funs (d02ke) finds a specified eigenvalue of a regular or singular second-order Sturm–Liouville system on a finite or infinite interval, using a Pruefer transformation and a shooting method. It also reports values of the eigenfunction and its derivatives. Provision is made for discontinuities in the coefficient functions or their derivatives.
Syntax
[
match,
elam,
delam,
hmax,
maxit,
ifail] = d02ke(
xpoint,
match,
coeffn,
bdyval,
k,
tol,
elam,
delam,
hmax,
monit,
report, 'm',
m, 'maxit',
maxit, 'maxfun',
maxfun)
[
match,
elam,
delam,
hmax,
maxit,
ifail] = nag_ode_sl2_breaks_funs(
xpoint,
match,
coeffn,
bdyval,
k,
tol,
elam,
delam,
hmax,
monit,
report, 'm',
m, 'maxit',
maxit, 'maxfun',
maxfun)
Description
nag_ode_sl2_breaks_funs (d02ke) has essentially the same purpose as
nag_ode_sl2_breaks_vals (d02kd) with minor modifications to enable values of the eigenfunction to be obtained after convergence to the eigenvalue has been achieved.
It first finds a specified eigenvalue
$\stackrel{~}{\lambda}$ of a Sturm–Liouville system defined by a self-adjoint differential equation of the second-order
$${\left(p\left(x\right){y}^{\prime}\right)}^{\prime}+q\left(x;\lambda \right)y=0\text{, }a<x<b\text{,}$$
together with appropriate boundary conditions at the two, finite or infinite, end points
$a$ and
$b$. The functions
$p$ and
$q$, which are real-valued, are defined by
coeffn. The boundary conditions must be defined by
bdyval, and, in the case of a singularity at
$a$ or
$b$, take the form of an asymptotic formula for the solution near the relevant end point.
When the final estimate
$\lambda =\stackrel{~}{\lambda}$ of the eigenvalue has been found, the function integrates the differential equation once more with that value of
$\lambda $, and with initial conditions chosen so that the integral
$$S={\int}_{a}^{b}{y\left(x\right)}^{2}\frac{\partial q}{\partial \lambda}\left(x;\lambda \right)dx$$
is approximately one. When
$q\left(x;\lambda \right)$ is of the form
$\lambda w\left(x\right)+q\left(x\right)$, which is the most common case,
$S$ represents the square of the norm of
$y$ induced by the inner product
$$\left(u,v\right)={\int}_{a}^{b}u\left(x\right)v\left(x\right)w\left(x\right)dx\text{,}$$
with respect to which the eigenfunctions are mutually orthogonal. This normalization of
$y$ is only approximate, but experience shows that
$S$ generally differs from unity by only one or two per cent.
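To make the normalization concrete, here is a minimal Python sketch (independent of the NAG Toolbox; the model problem and function names are illustrative) evaluating $S={\int}_{a}^{b}{y}^{2}\frac{\partial q}{\partial \lambda}dx$ by the trapezoidal rule for ${y}^{\prime \prime}+\lambda y=0$ on $\left(0,\pi \right)$, where $\frac{\partial q}{\partial \lambda}=1$ and the normalized eigenfunction $\sqrt{2/\pi}\mathrm{sin}x$ gives $S=1$:

```python
import math

def normalization_integral(y, dqdl, a, b, n=10000):
    """Approximate S = integral_a^b y(x)**2 * dq/dlambda(x) dx by the trapezoidal rule."""
    h = (b - a) / n
    s = 0.0
    for i in range(n + 1):
        x = a + i * h
        w = 0.5 if i in (0, n) else 1.0  # trapezoidal end-point weights
        s += w * y(x) ** 2 * dqdl(x)
    return s * h

# For y'' + lambda*y = 0 on (0, pi): q = lambda*w(x) with w = 1, so dq/dlambda = 1.
# The normalized eigenfunction y(x) = sqrt(2/pi)*sin(x) then has S = 1.
S = normalization_integral(lambda x: math.sqrt(2 / math.pi) * math.sin(x),
                           lambda x: 1.0, 0.0, math.pi)
```

The same routine applied to any of the higher eigenfunctions $\sqrt{2/\pi}\mathrm{sin}\left(\left(k+1\right)x\right)$ also returns a value very close to one.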
During this final integration,
report is called at each integration mesh point
$x$. Sufficient information is returned to permit you to compute
$y\left(x\right)$ and
${y}^{\prime}\left(x\right)$ for printing or plotting. For reasons described in
General Description of the Algorithm,
nag_ode_sl2_breaks_funs (d02ke) passes across to
report, not
$y$ and
${y}^{\prime}$, but the Pruefer variables
$\beta $,
$\varphi $ and
$\rho $ on which the numerical method is based. Their relationship to
$y$ and
${y}^{\prime}$ is given by the equations
$$\sqrt{\beta}y=\sqrt{\rho}\mathrm{sin}\left(\tfrac{1}{2}\varphi \right)\text{, }\frac{p{y}^{\prime}}{\sqrt{\beta}}=\sqrt{\rho}\mathrm{cos}\left(\tfrac{1}{2}\varphi \right)\text{.}$$
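Assuming the relations $\sqrt{\beta}y=\sqrt{\rho}\mathrm{sin}\left(\tfrac{1}{2}\varphi \right)$ and $p{y}^{\prime}/\sqrt{\beta}=\sqrt{\rho}\mathrm{cos}\left(\tfrac{1}{2}\varphi \right)$ between the Pruefer variables and the solution, a small Python helper of the kind you might write inside report (a sketch, not NAG code) is:

```python
import math

def prufer_to_solution(beta, phi, rho):
    """Recover y(x) and p(x)*y'(x) from the scaled Prufer variables, assuming
    sqrt(beta)*y = sqrt(rho)*sin(phi/2) and p*y'/sqrt(beta) = sqrt(rho)*cos(phi/2)."""
    y = math.sqrt(rho / beta) * math.sin(0.5 * phi)
    py = math.sqrt(beta * rho) * math.cos(0.5 * phi)
    return y, py
```

Note that $\beta {y}^{2}+{\left(p{y}^{\prime}\right)}^{2}/\beta =\rho $ whatever $\varphi $ is, which is why $\rho $ acts as a scaled squared amplitude.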
A specimen
report is given in
Example below.
For the theoretical basis of the numerical method to be valid, the following conditions should hold on the coefficient functions:
(a) 
$p\left(x\right)$ must be nonzero and must not change sign throughout the interval $\left(a,b\right)$; and, 
(b) 
$\frac{\partial q}{\partial \lambda}$ must not change sign throughout the interval $\left(a,b\right)$ for all relevant values of $\lambda $, and must not be identically zero as $x$ varies, for any $\lambda $. 
Points of discontinuity in the functions
$p$ and
$q$ or their derivatives are allowed, and should be included as ‘breakpoints’ in the array
xpoint.
A good account of the theory of Sturm–Liouville systems, with some description of Pruefer transformations, is given in Chapter X of
Birkhoff and Rota (1962). An introduction to the use of Pruefer transformations for the numerical solution of eigenvalue problems arising from physics and chemistry is given in
Bailey (1966).
The scaled Pruefer method is described in a short note by
Pryce and Hargrave (1977) and in some detail in the technical report by
Pryce (1981).
References
Abramowitz M and Stegun I A (1972) Handbook of Mathematical Functions (3rd Edition) Dover Publications
Bailey P B (1966) Sturm–Liouville eigenvalues via a phase function SIAM J. Appl. Math. 14 242–249
Banks D O and Kurowski I (1968) Computation of eigenvalues of singular Sturm–Liouville Systems Math. Comput. 22 304–310
Birkhoff G and Rota G C (1962) Ordinary Differential Equations Ginn & Co., Boston and New York
Pryce J D (1981) Two codes for Sturm–Liouville problems Technical Report CS-81-01 Department of Computer Science, Bristol University
Pryce J D and Hargrave B A (1977) The scaled Prüfer method for one-parameter and multi-parameter eigenvalue problems in ODEs IMA Numerical Analysis Newsletter 1(3)
Parameters
Compulsory Input Parameters
 1:
$\mathrm{xpoint}\left({\mathbf{m}}\right)$ – double array

The points where the boundary conditions computed by
bdyval are to be imposed, and also any breakpoints, i.e.,
${\mathbf{xpoint}}\left(1\right)$ to
${\mathbf{xpoint}}\left(m\right)$ must contain values
${x}_{1},\dots ,{x}_{m}$ such that
$${x}_{1}\le {x}_{2}<{x}_{3}<\cdots <{x}_{m-1}\le {x}_{m}\text{,}$$
with the following meanings:
(a) 
${x}_{1}$ and ${x}_{m}$ are the left- and right-hand end points, $a$ and $b$, of the domain of definition of the Sturm–Liouville system if these are finite. If either $a$ or $b$ is infinite, the corresponding value ${x}_{1}$ or ${x}_{m}$ may be a more-or-less arbitrarily ‘large’ number of appropriate sign. 
(b) 
${x}_{2}$ and ${x}_{m-1}$ are the Boundary Matching Points (BMPs), that is the points at which the left and right boundary conditions computed in bdyval are imposed.
If the left-hand end point is a regular point then you should set ${x}_{2}={x}_{1}$ $\left(=a\right)$, while if it is a singular point you must set ${x}_{2}>{x}_{1}$. Similarly ${x}_{m-1}={x}_{m}$ ($\text{}=b$) if the right-hand end point is regular, and ${x}_{m-1}<{x}_{m}$ if it is singular. 
(c) 
The remaining $m-4$ points ${x}_{3},\dots ,{x}_{m-2}$, if any, define ‘breakpoints’ which divide the interval $\left[{x}_{2},{x}_{m-1}\right]$ into $m-3$ subintervals
$${i}_{\mathit{j}}=\left[{x}_{\mathit{j}+1},{x}_{\mathit{j}+2}\right]\text{, }\mathit{j}=1,2,\dots ,m-3\text{.}$$
Numerical integration of the differential equation is stopped and restarted at each breakpoint. In simple cases no breakpoints are needed. However, if $p\left(x\right)$ or $q\left(x;\lambda \right)$ are given by different formulae in different parts of the interval, then integration is more efficient if the range is broken up by breakpoints in the appropriate way. Similarly points where any jumps occur in $p\left(x\right)$ or $q\left(x;\lambda \right)$, or in their derivatives up to the fifth order, should appear as breakpoints.
Examples are given in Further Comments and Example. xpoint determines the position of the Shooting Matching Point (SMP), as explained in The Position of the Shooting Matching Point. 
Constraint:
${\mathbf{xpoint}}\left(1\right)\le {\mathbf{xpoint}}\left(2\right)<\cdots <{\mathbf{xpoint}}\left({\mathbf{m}}-1\right)\le {\mathbf{xpoint}}\left({\mathbf{m}}\right)$.
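The ordering constraint can be checked mechanically before calling the function. A plain Python sketch (the array values are purely illustrative; d02ke itself performs this check and reports a failure through ${\mathbf{hmax}}\left(2,1\right)$):

```python
def check_xpoint(xpoint):
    """Check the d02ke ordering constraint
    xpoint(1) <= xpoint(2) < ... < xpoint(m-1) <= xpoint(m), with m >= 4."""
    m = len(xpoint)
    if m < 4:
        return False
    # the end points may coincide with the BMPs (regular ends)
    if not (xpoint[0] <= xpoint[1] and xpoint[-2] <= xpoint[-1]):
        return False
    # the BMPs and any breakpoints must be strictly increasing
    return all(xpoint[i] < xpoint[i + 1] for i in range(1, m - 2))

# A singular problem truncated to (0, 30): BMPs just inside the ends,
# one breakpoint at x = 5 (values here are purely illustrative).
ok = check_xpoint([0.0, 0.1, 5.0, 30.0, 30.0])
```

With no breakpoints and two regular ends, the minimal valid layout is four entries with the first pair and last pair coincident.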
 2:
$\mathrm{match}$ – int64int32nag_int scalar

Must be set to the index of the ‘breakpoint’ to be used as the matching point (see
The Position of the Shooting Matching Point). If
match is set to a value outside the range
$\left[2,m-1\right]$ then a default value is taken, corresponding to the breakpoint nearest the centre of the interval
$\left[{\mathbf{xpoint}}\left(2\right),{\mathbf{xpoint}}\left(m-1\right)\right]$.
 3:
$\mathrm{coeffn}$ – function handle or string containing name of mfile

coeffn must compute the values of the coefficient functions
$p\left(x\right)$ and
$q\left(x;\lambda \right)$ for given values of
$x$ and
$\lambda $.
Description states the conditions which
$p$ and
$q$ must satisfy. See
Examples of Coding the coeffn and
Example for examples.
[p, q, dqdl] = coeffn(x, elam, jint)
Input Parameters
 1:
$\mathrm{x}$ – double scalar

The current value of $x$.
 2:
$\mathrm{elam}$ – double scalar

The current trial value of the eigenvalue argument $\lambda $.
 3:
$\mathrm{jint}$ – int64int32nag_int scalar

The index
$j$ of the subinterval
${i}_{j}$ (see specification of
xpoint) in which
$x$ lies.
Output Parameters
 1:
$\mathrm{p}$ – double scalar

The value of $p\left(x\right)$ for the current value of $x$.
 2:
$\mathrm{q}$ – double scalar

The value of $q\left(x;\lambda \right)$ for the current value of $x$ and the current trial value of $\lambda $.
 3:
$\mathrm{dqdl}$ – double scalar

The value of
$\frac{\partial q}{\partial \lambda}\left(x;\lambda \right)$ for the current value of
$x$ and the current trial value of
$\lambda $. However,
dqdl is only used in error estimation and, in the rare cases where it may be difficult to evaluate, an approximation (say to within
$20\%$) will suffice.
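When $\frac{\partial q}{\partial \lambda}$ is awkward to write down analytically, a central difference in $\lambda $ is usually accurate enough for this purpose. A Python sketch with a hypothetical coefficient function (not a NAG interface):

```python
import math

def dqdl_approx(q, x, elam, h=1e-4):
    """Central-difference approximation to dq/dlambda at (x, elam); d02ke uses
    dqdl only for error estimation, so ~20% accuracy is acceptable."""
    return (q(x, elam + h) - q(x, elam - h)) / (2.0 * h)

# Hypothetical q(x; lambda) = lambda*exp(-x) + x**2, so dq/dlambda = exp(-x).
approx = dqdl_approx(lambda x, lam: lam * math.exp(-x) + x * x, 1.0, 14.0)
```

For a $q$ that is linear in $\lambda $, as in the common case $q=\lambda w\left(x\right)+q\left(x\right)$, the central difference is exact up to rounding.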
 4:
$\mathrm{bdyval}$ – function handle or string containing name of mfile

bdyval must define the boundary conditions. For each end point,
bdyval must return (in
yl or
yr) values of
$y\left(x\right)$ and
$p\left(x\right){y}^{\prime}\left(x\right)$ which are consistent with the boundary conditions at the end points; only the ratio of the values matters. Here
$x$ is a given point (
xl or
xr) equal to, or close to, the end point.
For a
regular end point (
$a$, say),
$x=a$, a boundary condition of the form
$${c}_{1}y\left(a\right)+{c}_{2}{y}^{\prime}\left(a\right)=0$$
can be handled by returning constant values in
yl, e.g.,
${\mathbf{yl}}\left(1\right)={c}_{2}$ and
${\mathbf{yl}}\left(2\right)=-{c}_{1}p\left(a\right)$.
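With the sign convention ${\mathbf{yl}}\left(1\right)={c}_{2}$, ${\mathbf{yl}}\left(2\right)=-{c}_{1}p\left(a\right)$, the combination ${c}_{1}y\left(a\right)+{c}_{2}{y}^{\prime}\left(a\right)$ vanishes identically, as this small Python check (illustrative numbers only) confirms:

```python
def regular_bc_values(c1, c2, p_a):
    """Return (y(a), p(a)*y'(a)) consistent with c1*y(a) + c2*y'(a) = 0:
    take y(a) = c2 and p(a)*y'(a) = -c1*p(a); only the ratio of the two matters."""
    return c2, -c1 * p_a

# Illustrative numbers: c1 = 3, c2 = 2, p(a) = 5.
yl1, yl2 = regular_bc_values(3.0, 2.0, 5.0)
residual = 3.0 * yl1 + 2.0 * (yl2 / 5.0)  # c1*y(a) + c2*y'(a), should be zero
```

Because only the ratio matters, multiplying both returned values by any nonzero constant leaves the boundary condition equally well represented.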
For a
singular end point however,
${\mathbf{yl}}\left(1\right)$ and
${\mathbf{yl}}\left(2\right)$ will in general be functions of
xl and
elam, and
${\mathbf{yr}}\left(1\right)$ and
${\mathbf{yr}}\left(2\right)$ functions of
xr and
elam, usually derived analytically from a power-series or asymptotic expansion. Examples are given in
Examples of Boundary Conditions at Singular Points and
Example.
[yl, yr] = bdyval(xl, xr, elam)
Input Parameters
 1:
$\mathrm{xl}$ – double scalar

If
$a$ is a regular end point of the system (so that
$a={x}_{1}={x}_{2}$), then
xl contains
$a$. If
$a$ is a singular point (so that
$a\le {x}_{1}<{x}_{2}$), then
xl contains a point
$x$ such that
${x}_{1}<x\le {x}_{2}$.
 2:
$\mathrm{xr}$ – double scalar

If
$b$ is a regular end point of the system (so that
${x}_{m-1}={x}_{m}=b$), then
xr contains
$b$. If
$b$ is a singular point (so that
${x}_{m-1}<{x}_{m}\le b$), then
xr contains a point
$x$ such that
${x}_{m-1}\le x<{x}_{m}$.
 3:
$\mathrm{elam}$ – double scalar

The current trial value of $\lambda $.
Output Parameters
 1:
$\mathrm{yl}\left(3\right)$ – double array

${\mathbf{yl}}\left(1\right)$ and ${\mathbf{yl}}\left(2\right)$ should contain values of $y\left(x\right)$ and $p\left(x\right){y}^{\prime}\left(x\right)$ respectively (not both zero) which are consistent with the boundary condition at the lefthand end point, given by $x={\mathbf{xl}}$. ${\mathbf{yl}}\left(3\right)$ should not be set.
 2:
$\mathrm{yr}\left(3\right)$ – double array

${\mathbf{yr}}\left(1\right)$ and ${\mathbf{yr}}\left(2\right)$ should contain values of $y\left(x\right)$ and $p\left(x\right){y}^{\prime}\left(x\right)$ respectively (not both zero) which are consistent with the boundary condition at the righthand end point, given by $x={\mathbf{xr}}$. ${\mathbf{yr}}\left(3\right)$ should not be set.
 5:
$\mathrm{k}$ – int64int32nag_int scalar

$k$, the index of the required eigenvalue when the eigenvalues are ordered
$${\lambda}_{0}<{\lambda}_{1}<\cdots <{\lambda}_{k}<\cdots \text{.}$$
Constraint:
${\mathbf{k}}\ge 0$.
 6:
$\mathrm{tol}$ – double scalar

The tolerance argument which determines the accuracy of the computed eigenvalue. The error estimate held in
delam on exit satisfies the mixed absolute/relative error test
$${\mathbf{delam}}\le {\mathbf{tol}}\times \mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(1.0,\left|{\mathbf{elam}}\right|\right)\text{,}\quad \left(1\right)$$
where
elam is the final estimate of the eigenvalue.
delam is usually somewhat smaller than the right-hand side of
(1) but not several orders of magnitude smaller.
Constraint:
${\mathbf{tol}}>0.0$.
 7:
$\mathrm{elam}$ – double scalar

An initial estimate of the eigenvalue $\stackrel{~}{\lambda}$.
 8:
$\mathrm{delam}$ – double scalar

An indication of the scale of the problem in the
$\lambda $-direction.
delam holds the initial ‘search step’ (positive or negative). Its value is not critical, but the first two trial evaluations are made at
elam and
${\mathbf{elam}}+{\mathbf{delam}}$, so the function will work most efficiently if the eigenvalue lies between these values. A reasonable choice (if a closer bound is not known) is half the distance between adjacent eigenvalues in the neighbourhood of the one sought. In practice, there will often be a problem, similar to the one in hand but with known eigenvalues, which will help one to choose initial values for
elam and
delam.
If ${\mathbf{delam}}=0.0$ on entry, it is given the default value of $0.25\times \mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(1.0,\left|{\mathbf{elam}}\right|\right)$.
 9:
$\mathrm{hmax}\left(2,{\mathbf{m}}\right)$ – double array

${\mathbf{hmax}}\left(1,\mathit{j}\right)$ should contain a maximum step size to be used by the differential equation code in the
$\mathit{j}$th subinterval
${\mathit{i}}_{\mathit{j}}$ (as described in the specification of argument
xpoint), for
$\mathit{j}=1,2,\dots ,m-3$. If it is zero the function generates a maximum step size internally.
It is recommended that
${\mathbf{hmax}}\left(1,j\right)$ be set to zero unless the coefficient functions
$p$ and
$q$ have features (such as a narrow peak) within the
$j$th subinterval that could be ‘missed’ if a long step were taken. In such a case
${\mathbf{hmax}}\left(1,j\right)$ should be set to about half the distance over which the feature should be observed. Too small a value will increase the computing time for the function. See
Further Comments for further suggestions.
The rest of the array is used as workspace.
 10:
$\mathrm{monit}$ – function handle or string containing name of mfile

monit is called by
nag_ode_sl2_breaks_funs (d02ke) at the end of each root-finding iteration and allows you to monitor the course of the computation by printing out the arguments (see
Example for an example).
If no monitoring is required, the dummy (sub)program nag_ode_sl2_reg_finite_dummy_monit (d02kay) may be used. (nag_ode_sl2_reg_finite_dummy_monit (d02kay) is included in the NAG Toolbox.)
monit(nit, iflag, elam, finfo)
Input Parameters
 1:
$\mathrm{nit}$ – int64int32nag_int scalar

The current value of the argument
maxit of
nag_ode_sl2_breaks_funs (d02ke); it is decreased by one at each iteration.
 2:
$\mathrm{iflag}$ – int64int32nag_int scalar

Describes what phase the computation is in.
 ${\mathbf{iflag}}<0$
 An error occurred in the computation at this iteration; an error exit from nag_ode_sl2_breaks_funs (d02ke) with ${\mathbf{ifail}}={\mathbf{iflag}}$ will follow.
 ${\mathbf{iflag}}=1$
 The function is trying to bracket the eigenvalue $\stackrel{~}{\lambda}$.
 ${\mathbf{iflag}}=2$
 The function is converging to the eigenvalue $\stackrel{~}{\lambda}$ (having already bracketed it).
 3:
$\mathrm{elam}$ – double scalar

The current trial value of $\lambda $.
 4:
$\mathrm{finfo}\left(15\right)$ – double array

Information about the behaviour of the shooting method, and diagnostic information in the case of errors. It should not normally be printed in full if no error has occurred (that is, if
${\mathbf{iflag}}>0$), though the first few components may be of interest to you. In case of an error (
${\mathbf{iflag}}<0$) all the components of
finfo should be printed.
The contents of
finfo are as follows:
 ${\mathbf{finfo}}\left(1\right)$
 The current value of the ‘miss-distance’ or ‘residual’ function $f\left(\lambda \right)$ on which the shooting method is based. (See General Description of the Algorithm for further information.) ${\mathbf{finfo}}\left(1\right)$ is set to zero if ${\mathbf{iflag}}<0$.
 ${\mathbf{finfo}}\left(2\right)$
 An estimate of the quantity $\partial \lambda $ defined as follows. Consider the perturbation in the miss-distance $f\left(\lambda \right)$ that would result if the local error in the solution of the differential equation were always positive and equal to its maximum permitted value. Then $\partial \lambda $ is the perturbation in $\lambda $ that would have the same effect on $f\left(\lambda \right)$. Thus, at the zero of $f\left(\lambda \right)$, $\left|\partial \lambda \right|$ is an approximate bound on the perturbation of the zero (that is the eigenvalue) caused by errors in numerical solution. If $\partial \lambda $ is very large then it is possible that there has been a programming error in coeffn such that $q$ is independent of $\lambda $. If this is the case, an error exit with ${\mathbf{ifail}}={\mathbf{5}}$ should follow. ${\mathbf{finfo}}\left(2\right)$ is set to zero if ${\mathbf{iflag}}<0$.
 ${\mathbf{finfo}}\left(3\right)$
 The number of internal iterations, using the same value of $\lambda $ and tighter accuracy tolerances, needed to bring the accuracy (that is, the value of $\partial \lambda $) to an acceptable value. Its value should normally be $1.0$, and should almost never exceed $2.0$.
 ${\mathbf{finfo}}\left(4\right)$
 The number of calls to coeffn at this iteration.
 ${\mathbf{finfo}}\left(5\right)$
 The number of successful steps taken by the internal differential equation solver at this iteration. A step is successful if it is used to advance the integration.
 ${\mathbf{finfo}}\left(6\right)$
 The number of unsuccessful steps used by the internal integrator at this iteration.
 ${\mathbf{finfo}}\left(7\right)$
 The number of successful steps at the maximum step size taken by the internal integrator at this iteration.
 ${\mathbf{finfo}}\left(8\right)$
 Not used.
 ${\mathbf{finfo}}\left(9\right)$ to ${\mathbf{finfo}}\left(15\right)$
 Set to zero, unless ${\mathbf{iflag}}<0$ in which case they hold the following values describing the point of failure:
 ${\mathbf{finfo}}\left(9\right)$
 The index of the subinterval where failure occurred, in the range $1$ to $m-3$. In case of an error in bdyval, it is set to $0$ or $m-2$ depending on whether the left or right boundary condition caused the error.
 ${\mathbf{finfo}}\left(10\right)$
 The value of the independent variable, $x$, the point at which the error occurred. In case of an error in bdyval, it is set to the value of xl or xr as appropriate (see the specification of bdyval).
 ${\mathbf{finfo}}\left(11\right)$, ${\mathbf{finfo}}\left(12\right)$, ${\mathbf{finfo}}\left(13\right)$
 The current values of the Pruefer dependent variables $\beta $, $\varphi $ and $\rho $ respectively. These are set to zero in case of an error in bdyval.
 ${\mathbf{finfo}}\left(14\right)$
 The local error tolerance being used by the internal integrator at the point of failure. This is set to zero in the case of an error in bdyval.
 ${\mathbf{finfo}}\left(15\right)$
 The last integration mesh point. This is set to zero in the case of an error in bdyval.
 11:
$\mathrm{report}$ – function handle or string containing name of mfile

report provides the means by which you may compute the eigenfunction
$y\left(x\right)$ and its derivative at each integration mesh point
$x$. (See
Further Comments for an example.)
report(x, v, jint)
Input Parameters
 1:
$\mathrm{x}$ – double scalar

The current value of the independent variable
$x$. See
The Position of the Shooting Matching Point for the order in which values of
$x$ are supplied.
 2:
$\mathrm{v}\left(3\right)$ – double array

${\mathbf{v}}\left(1\right)$, ${\mathbf{v}}\left(2\right)$, ${\mathbf{v}}\left(3\right)$ hold the current values of the Pruefer variables $\beta $, $\varphi $, $\rho $ respectively.
 3:
$\mathrm{jint}$ – int64int32nag_int scalar

Indicates the subinterval between breakpoints in which
x lies exactly as for
coeffn,
except that at the extreme left-hand end point (when
$x={\mathbf{xpoint}}\left(2\right)$)
jint is set to
$0$ and at the extreme right-hand end point (when
$x={x}_{r}={\mathbf{xpoint}}\left(m-1\right)$)
jint is set to
$m-2$.
Optional Input Parameters
 1:
$\mathrm{m}$ – int64int32nag_int scalar

Default:
the dimension of the array
xpoint and the second dimension of the array
hmax. (An error is raised if these dimensions are not equal.)
The number of points in the array
xpoint.
Constraint:
${\mathbf{m}}\ge 4$.
 2:
$\mathrm{maxit}$ – int64int32nag_int scalar
Default:
$0$
A bound on
${n}_{r}$, the number of root-finding iterations allowed, that is the number of trial values of
$\lambda $ that are used. If
${\mathbf{maxit}}\le 0$, no such bound is assumed. (See also
maxfun.)
 3:
$\mathrm{maxfun}$ – int64int32nag_int scalar
Suggested value:
${\mathbf{maxfun}}=0$.
maxfun and
maxit may be used to limit the computational cost of a call to
nag_ode_sl2_breaks_funs (d02ke), which is roughly proportional to
${n}_{r}\times {n}_{f}$.
Default:
$0$
A bound on
${n}_{f}$, the number of calls to
coeffn made in any one root-finding iteration. If
${\mathbf{maxfun}}\le 0$, no such bound is assumed.
Output Parameters
 1:
$\mathrm{match}$ – int64int32nag_int scalar

The index of the breakpoint actually used as the matching point.
 2:
$\mathrm{elam}$ – double scalar

The final computed estimate of the eigenvalue, whether or not an error occurred.
 3:
$\mathrm{delam}$ – double scalar

If
${\mathbf{ifail}}={\mathbf{0}}$,
delam holds an estimate of the absolute error in the computed eigenvalue, that is
$\left|\stackrel{~}{\lambda}-{\mathbf{elam}}\right|\simeq {\mathbf{delam}}$. (In
General Description of the Algorithm we discuss the assumptions under which this is true.) The true error is rarely more than twice, or less than a tenth, of the estimated error.
If
${\mathbf{ifail}}\ne {\mathbf{0}}$,
delam may hold an estimate of the error, or its initial value, depending on the value of
ifail. See
Error Indicators and Warnings for further details.
 4:
$\mathrm{hmax}\left(2,{\mathbf{m}}\right)$ – double array

${\mathbf{hmax}}\left(1,m-1\right)$ and
${\mathbf{hmax}}\left(1,m\right)$ contain the sensitivity coefficients
${\sigma}_{l},{\sigma}_{r}$, described in
The Sensitivity Parameters ${\sigma}_{l}$ and ${\sigma}_{r}$. Other entries contain diagnostic output in the case of an error exit (see
Error Indicators and Warnings).
 5:
$\mathrm{maxit}$ – int64int32nag_int scalar
Default:
$0$
Will have been decreased by the number of iterations actually performed, whether or not it was positive on entry.
 6:
$\mathrm{ifail}$ – int64int32nag_int scalar
${\mathbf{ifail}}={\mathbf{0}}$ unless the function detects an error (see
Error Indicators and Warnings).
Error Indicators and Warnings
Errors or warnings detected by the function:
Cases prefixed with W are classified as warnings and
do not generate an error of type NAG:error_n. See nag_issue_warnings.
 ${\mathbf{ifail}}=1$

An argument error. All arguments (except
ifail) are left unchanged. The reason for the error is shown by the value of
${\mathbf{hmax}}\left(2,1\right)$ as follows:
${\mathbf{hmax}}\left(2,1\right)=1$: 
${\mathbf{m}}<4$; 
${\mathbf{hmax}}\left(2,1\right)=2$: 
${\mathbf{k}}<0$; 
${\mathbf{hmax}}\left(2,1\right)=3$: 
${\mathbf{tol}}\le 0.0$; 
${\mathbf{hmax}}\left(2,1\right)=4$: 
${\mathbf{xpoint}}\left(1\right)$ to ${\mathbf{xpoint}}\left(m\right)$ are not in ascending order. ${\mathbf{hmax}}\left(2,2\right)$ gives the position $i$ in xpoint where this was detected. 
 ${\mathbf{ifail}}=2$

At some call to
bdyval, invalid values were returned, that is, either
${\mathbf{yl}}\left(1\right)={\mathbf{yl}}\left(2\right)=0.0$, or
${\mathbf{yr}}\left(1\right)={\mathbf{yr}}\left(2\right)=0.0$ (a programming error in
bdyval). See the last call of
monit for details.
This error exit will also occur if
$p\left(x\right)$ is zero at the point where the boundary condition is imposed. Probably
bdyval was called with
xl equal to a singular end point
$a$ or with
xr equal to a singular end point
$b$.
 ${\mathbf{ifail}}=3$

At some point between
xl and
xr the value of
$p\left(x\right)$ computed by
coeffn became zero or changed sign. See the last call of
monit for details.
 ${\mathbf{ifail}}=4$

${\mathbf{maxit}}>0$ on entry, and after
maxit iterations the eigenvalue had not been found to the required accuracy.
 ${\mathbf{ifail}}=5$

The ‘bracketing’ phase (with argument
iflag of the
monit equal to
$1$) failed to bracket the eigenvalue within ten iterations. This is caused by an error in formulating the problem (for example,
$q$ is independent of
$\lambda $), or by very poor initial estimates of
elam and
delam.
On exit,
elam and
${\mathbf{elam}}+{\mathbf{delam}}$ give the end points of the interval within which no eigenvalue was located by the function.
 ${\mathbf{ifail}}=6$

${\mathbf{maxfun}}>0$ on entry, and the last iteration was terminated because more than
maxfun calls to
coeffn were used. See the last call of
monit for details.
 ${\mathbf{ifail}}=7$

To obtain the desired accuracy the local error tolerance was set so small at the start of some subinterval that the differential equation solver could not choose an initial step size large enough to make significant progress. See the last call of
monit for diagnostics.
 ${\mathbf{ifail}}=8$

At some point inside a subinterval the step size in the differential equation solver was reduced to a value too small to make significant progress (for the same reasons as with
${\mathbf{ifail}}={\mathbf{7}}$). This could be due to pathological behaviour of
$p\left(x\right)$ and
$q\left(x;\lambda \right)$ or to an unreasonable accuracy requirement or to the current value of
$\lambda $ making the equations ‘stiff’. See the last call of
monit for details.
 W ${\mathbf{ifail}}=9$

tol is too small for the problem being solved and the
machine precision is being used. The final value of
elam should be a very good approximation to the eigenvalue.
 ${\mathbf{ifail}}=10$

nag_roots_contfn_brent_rcomm (c05az), called by
nag_ode_sl2_breaks_funs (d02ke), has terminated with the error exit corresponding to a pole of the residual function
$f\left(\lambda \right)$. This error exit should not occur, but if it does, try solving the problem again with a smaller value for
tol.
 ${\mathbf{ifail}}=11$
 ${\mathbf{ifail}}=12$

A serious error has occurred in an internal call. Check all (sub)program calls and array dimensions. Seek expert help.
 ${\mathbf{ifail}}=99$
An unexpected error has been triggered by this routine. Please
contact
NAG.
 ${\mathbf{ifail}}=399$
Your licence key may have expired or may not have been installed correctly.
 ${\mathbf{ifail}}=999$
Dynamic memory allocation failed.
Note: error exits with
${\mathbf{ifail}}={\mathbf{2}}$,
${\mathbf{3}}$,
${\mathbf{6}}$,
${\mathbf{7}}$,
${\mathbf{8}}$ or
${\mathbf{11}}$ are caused by being unable to set up or solve the differential equation at some iteration and will be immediately preceded by a call of
monit giving diagnostic information. For other errors, diagnostic information is contained in
${\mathbf{hmax}}\left(2,\mathit{j}\right)$, for
$\mathit{j}=1,2,\dots ,m$, where appropriate.
Accuracy
Further Comments
Timing
The time taken by
nag_ode_sl2_breaks_funs (d02ke) depends on the complexity of the coefficient functions, whether they or their derivatives are rapidly changing, the tolerance demanded, and how many iterations are needed to obtain convergence. The amount of work per iteration is roughly doubled when
tol is divided by
$16$. To make the most economical use of the function, one should try to obtain good initial values for
elam and
delam, and, where appropriate, good asymptotic formulae. Also the boundary matching points should not be set unnecessarily close to singular points. The extra time needed to compute the eigenfunction is principally the cost of one additional integration once the eigenvalue has been found.
General Description of the Algorithm
A shooting method, for differential equation problems containing unknown parameters, relies on the construction of a ‘miss-distance function’, which for given trial values of the parameters measures how far the conditions of the problem are from being met. The problem is then reduced to one of finding the values of the parameters for which the miss-distance function is zero, that is to a root-finding process. Shooting methods differ mainly in how the miss-distance is defined.
nag_ode_sl2_breaks_funs (d02ke) defines a miss-distance
$f\left(\lambda \right)$ based on the rotation about the origin of the point
${\mathbf{p}}\left(x\right)=\left(p\left(x\right){y}^{\prime}\left(x\right),y\left(x\right)\right)$ in the Phase Plane as the solution proceeds from
$a$ to
$b$. The
boundary conditions define the ray (i.e., two-sided line through the origin) on which
$p\left(x\right)$ should start, and the ray on which it should finish. The
index $k$ defines the total number of half-turns it should make. Numerical solution is actually done by ‘shooting forward’ from
$x=a$ and ‘shooting backward’ from
$x=b$ to a matching point
$x=c$. Then
$f\left(\lambda \right)$ is taken as the angle between the rays to the two resulting points
${P}_{a}\left(c\right)$ and
${P}_{b}\left(c\right)$. A relative scaling of the
$p{y}^{\prime}$ and
$y$ axes, based on the behaviour of the coefficient functions
$p$ and
$q$, is used to improve the numerical behaviour.
Figure 1
The resulting function $f\left(\lambda \right)$ is monotonic over $-\infty <\lambda <\infty $, increasing if $\frac{\partial q}{\partial \lambda}>0$ and decreasing if $\frac{\partial q}{\partial \lambda}<0$, with a unique zero at the desired eigenvalue $\stackrel{~}{\lambda}$. The function measures $f\left(\lambda \right)$ in units of a half-turn. This means that as $\lambda $ increases, $f\left(\lambda \right)$ varies by about $1$ as each eigenvalue is passed. (This feature implies that the values of $f\left(\lambda \right)$ at successive iterations – especially in the early stages of the iterative process – can be used with suitable extrapolation or interpolation to help the choice of initial estimates for eigenvalues near to the one currently being found.)
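This behaviour is easy to reproduce on a model problem. The sketch below is plain Python, not the NAG algorithm (which uses a scaled transformation and adaptive integration): for ${y}^{\prime \prime}+\lambda y=0$, $y\left(0\right)=y\left(\pi \right)=0$, the classical unscaled Pruefer phase $\theta $ satisfies ${\theta}^{\prime}={\mathrm{cos}}^{2}\theta +\lambda {\mathrm{sin}}^{2}\theta $, and a miss-distance in half-turns, $f\left(\lambda \right)=\theta \left(\pi \right)/\pi -\left(k+1\right)$, vanishes at the eigenvalues ${\lambda}_{k}={\left(k+1\right)}^{2}$.

```python
import math

def prufer_phase(lam, a=0.0, b=math.pi, n=4000):
    """Integrate the (unscaled) Prufer phase equation
    theta' = cos(theta)**2 + lam*sin(theta)**2, theta(a) = 0,
    by classical RK4; theta(b)/pi counts half-turns of (y', y) about the origin."""
    rhs = lambda t: math.cos(t) ** 2 + lam * math.sin(t) ** 2
    h = (b - a) / n
    theta = 0.0
    for _ in range(n):
        k1 = rhs(theta)
        k2 = rhs(theta + 0.5 * h * k1)
        k3 = rhs(theta + 0.5 * h * k2)
        k4 = rhs(theta + h * k3)
        theta += h * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0
    return theta

def miss_distance(lam, k):
    """Miss-distance, in units of a half-turn, for the k-th Dirichlet eigenvalue
    of y'' + lam*y = 0 on (0, pi); zero when lam = (k + 1)**2."""
    return prufer_phase(lam) / math.pi - (k + 1)
```

Here $f$ is increasing in $\lambda $ and crosses zero once per eigenvalue, exactly the qualitative picture described above for the scaled method.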
The function actually computes a value for
$f\left(\lambda \right)$ with errors, arising from the local errors of the differential equation code and from the asymptotic formulae provided by you if singular points are involved. However, the error estimate output in
delam is usually fairly realistic, in that the actual error
$\left|\stackrel{~}{\lambda}-{\mathbf{elam}}\right|$ is within an order of magnitude of
delam.
We pass the values of
$\beta $,
$\varphi $,
$\rho $ across through
report rather than converting them to values of
$y$,
${y}^{\prime}$ inside
nag_ode_sl2_breaks_funs (d02ke), for the following reasons. First, there may be cases where auxiliary quantities can be more accurately computed from the Pruefer variables than from
$y$ and
${y}^{\prime}$. Second, in singular problems on an infinite interval
$y$ and
${y}^{\prime}$ may underflow towards the end of the range, whereas the Pruefer variables remain well-behaved. Third, with high-order eigenvalues (and therefore highly oscillatory eigenfunctions) the eigenfunction may have a complete oscillation (or more than one oscillation) between two mesh points, so that values of
$y$ and
${y}^{\prime}$ at mesh points give a very poor representation of the curve. The probable behaviour of the Pruefer variables in this case is that
$\beta $ and
$\rho $ vary slowly whilst
$\varphi $ increases quickly: for all three Pruefer variables linear interpolation between the values at adjacent mesh points is probably sufficiently accurate to yield acceptable intermediate values of
$\beta $,
$\varphi $,
$\rho $ (and hence of
$y,{y}^{\prime}$) for graphical purposes.
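A sketch of that interpolation in plain Python (the Pruefer-to-solution relation $\sqrt{\beta}y=\sqrt{\rho}\mathrm{sin}\left(\tfrac{1}{2}\varphi \right)$ is assumed, and the mesh values are illustrative):

```python
import math

def interp_eigenfunction(x, mesh, beta, phi, rho):
    """Linearly interpolate the Pruefer variables between adjacent mesh points,
    then convert to y(x) via sqrt(beta)*y = sqrt(rho)*sin(phi/2)."""
    # locate the mesh interval containing x (mesh assumed increasing)
    j = max(i for i in range(len(mesh) - 1) if mesh[i] <= x)
    t = (x - mesh[j]) / (mesh[j + 1] - mesh[j])
    b = beta[j] + t * (beta[j + 1] - beta[j])
    p = phi[j] + t * (phi[j + 1] - phi[j])
    r = rho[j] + t * (rho[j + 1] - rho[j])
    return math.sqrt(r / b) * math.sin(0.5 * p)

# Illustrative exact data for y = sin(x) on (0, pi): beta = rho = 1, phi = 2x.
mesh = [0.0, math.pi / 2, math.pi]
y_mid = interp_eigenfunction(math.pi / 4, mesh,
                             [1.0, 1.0, 1.0],
                             [0.0, math.pi, 2 * math.pi],
                             [1.0, 1.0, 1.0])
```

Because $\varphi $ is exactly linear for this model data, the interpolated value at $x=\pi /4$ reproduces $\mathrm{sin}\left(\pi /4\right)$; in general the interpolation is only as good as the local linearity of $\beta $, $\varphi $ and $\rho $.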
Similar considerations apply to the exponentially decaying ‘tails’ of the eigenfunctions that often occur in singular problems. Here $\varphi $ has approximately constant value whilst $\rho $ increases rapidly in the direction of integration, though the step length is generally fairly small over such a range.
If the solution is output through
report at
$x$ values which are too widely spaced, the step length can be controlled by choosing
hmax suitably, or, preferably, by reducing
tol. Both these choices will lead to more accurate eigenvalues and eigenfunctions but at some computational cost.
The Position of the Shooting Matching Point c
This point is always one of the values
${x}_{i}$ in array
xpoint. It may be specified using the argument
match. The default value is chosen to be the value of that
${x}_{i}$,
$2\le i\le m-1$, that lies closest to the middle of the interval
$\left[{x}_{2},{x}_{m-1}\right]$. If there is a tie, the rightmost candidate is chosen. In particular if there are no breakpoints, then
$c={x}_{m1}$ (
$\text{}={x}_{3}$); that is, the shooting is from left to right in this case. A breakpoint may be inserted purely to move
$c$ to an interior point of the interval, even though the form of the equations does not require it. This often speeds up convergence especially with singular problems.
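The default choice just described can be sketched as follows (illustrative Python with 0-based indexing; `default_match_point` is a hypothetical helper, not a NAG routine):

```python
def default_match_point(xpoint):
    # Default matching point: among x_2, ..., x_{m-1} (1-based numbering,
    # i.e. all entries except the two outermost), pick the one closest to
    # the middle of [x_2, x_{m-1}]; ties go to the rightmost candidate.
    m = len(xpoint)
    candidates = xpoint[1:m - 1]                 # x_2 .. x_{m-1}
    mid = 0.5 * (xpoint[1] + xpoint[m - 2])
    # Sort key: smallest distance to the middle first; for equal
    # distances prefer the larger (rightmost) abscissa.
    return min(candidates, key=lambda x: (abs(x - mid), -x))
```

With no breakpoints ($m=4$) the two candidates are the BMPs themselves, the distances tie, and the rule gives $c={x}_{3}$, as stated above.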
Note that the shooting method used by the code integrates first from the left-hand end
${x}_{l}$, then from the right-hand end
${x}_{r}$, to meet at the matching point
$c$ in the middle. This will of course be reflected in printed or graphical output. The diagram shows a possible sequence of nine mesh points
${\tau}_{1}$ through
${\tau}_{9}$ in the order in which they appear, assuming there are just two subintervals (so
$m=5$).
Figure 2
Since the shooting method usually fails to match up the two ‘legs’ of the curve exactly, there is bound to be a jump in
$y$, or in
$p\left(x\right){y}^{\prime}$ or both, at the matching point
$c$. The code in fact ‘shares’ the discrepancy out so that both
$y$ and
$p\left(x\right){y}^{\prime}$ have a jump. A large jump does
not imply an inaccurate eigenvalue, but implies either
(a) 
a badly chosen matching point: if $q\left(x;\lambda \right)$ has a ‘humped’ shape, $c$ should be chosen near the maximum value of $q$, especially if $q$ is negative at the ends of the interval; 
(b) 
an inherently ill-conditioned problem, typically one where another eigenvalue is pathologically close to the one being sought. In this case it is extremely difficult to obtain an accurate eigenfunction. 
In
Example, we find the 11th eigenvalue and corresponding eigenfunction of the equation
${y}^{\prime \prime}+\left(\lambda -x-2/{x}^{2}\right)y=0\text{,}$
the boundary conditions being that
$y$ should remain bounded as
$x$ tends to
$0$ and
$x$ tends to
$\infty $. The coding of this problem is discussed in detail in
Examples of Boundary Conditions at Singular Points.
The choice of matching point
$c$ is open. If we choose
$c=30.0$ as in
the nag_ode_sl2_breaks_vals (d02kd) example program, we find that the exponentially increasing component of the solution dominates and we get extremely inaccurate values for the eigenfunction (though the eigenvalue is determined accurately). The values of the eigenfunction calculated with
$c=30.0$ are given schematically in
Figure 3.
Figure 3
If we choose
$c$ as the maximum of the hump in
$q\left(x;\lambda \right)$ (see item
(a) above) we instead obtain the accurate results given in
Figure 4
Figure 4
Examples of Coding coeffn
Coding
coeffn is straightforward except when breakpoints are needed. The examples below show:
(a) 
a simple case, 
(b) 
a case in which discontinuities in the coefficient functions or their derivatives necessitate breakpoints, and 
(c) 
a case where breakpoints together with the hmax argument are an efficient way to deal with a coefficient function that is wellbehaved except over one short interval. 
(Some of these cases are among the examples in
Example.)
Example A
The modified Bessel equation
${x}^{2}{y}^{\prime \prime}+x{y}^{\prime}+\left(\lambda {x}^{2}-{\nu}^{2}\right)y=0\text{.}$
Assuming the interval of solution does not contain the origin and dividing through by
$x$, we have
$p\left(x\right)=x$ and
$q\left(x;\lambda \right)=\lambda x-{\nu}^{2}/x$. The code could be
function [p, q, dqdl] = coeffn(x, elam, jint)
global nu;
...
p = x;
q = elam*x - nu*nu/x;
dqdl = x;
where
nu (standing for
$\nu $) is a double global variable declared in the calling program.
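As a quick consistency check (an illustrative Python sketch, not NAG code): for $\nu =0$ and $\lambda =1$ the equation has the solution $y={J}_{0}\left(x\right)$, so the residual of $\left(p{y}^{\prime}\right)^{\prime}+qy$ with $p\left(x\right)=x$ and $q\left(x;\lambda \right)=\lambda x-{\nu}^{2}/x$ should be close to zero:

```python
import math

def j0(x, terms=30):
    # Power series for the Bessel function J_0(x); adequate for small |x|.
    return sum((-1) ** k * (x / 2.0) ** (2 * k) / math.factorial(k) ** 2
               for k in range(terms))

def sl_residual(x, lam=1.0, nu=0.0, h=1e-4):
    # Residual of (p y')' + q y with p(x) = x, q(x; lam) = lam*x - nu^2/x,
    # evaluated for y(x) = J_0(sqrt(lam)*x) via central differences.
    y = lambda t: j0(math.sqrt(lam) * t)
    py = lambda t: t * (y(t + h) - y(t - h)) / (2.0 * h)   # p(x) y'(x)
    dpy = (py(x + h) - py(x - h)) / (2.0 * h)              # (p y')'
    return dpy + (lam * x - nu ** 2 / x) * y(x)
```

The residual is zero only up to finite-difference error, but it confirms that dividing the Bessel equation through by $x$ yields this self-adjoint form.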
Example B
The Schroedinger equation
${y}^{\prime \prime}+\left(\lambda +q\left(x\right)\right)y=0\text{,}$
where
$q\left(x\right)={x}^{2}-10$ for $\left|x\right|\le 4$ and $q\left(x\right)=6/\left|x\right|$ for $\left|x\right|>4$,
over some interval ‘approximating to
$\left(-\infty ,\infty \right)$’, say
$\left[-20,20\right]$. Here we need breakpoints at
$\pm 4$, forming three subintervals
${i}_{1}=\left[-20,-4\right]$,
${i}_{2}=\left[-4,4\right]$,
${i}_{3}=\left[4,20\right]$. The code could be
function [p, q, dqdl] = coeffn(x, elam, jint)
...
p = 1;
dqdl = 1;
if (jint == 2)
  q = elam + x*x - 10;
else
  q = elam + 6/abs(x);
end
The array
xpoint would contain the values
${x}_{1}$,
$-20.0$,
$-4.0$,
$+4.0$,
$+20.0$,
${x}_{6}$ and
$m$ would be
$6$. The choice of appropriate values for
${x}_{1}$ and
${x}_{6}$ depends on the form of the asymptotic formula computed by
bdyval and the technique is discussed in
Examples of Boundary Conditions at Singular Points.
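The branch logic of this coeffn can be checked in isolation (an illustrative Python transcription, assuming $q\left(x\right)={x}^{2}-10$ on the middle subinterval and $6/\left|x\right|$ outside it, consistent with the code above):

```python
def q_coeff(x, elam, jint):
    # Transcription of the Example B branch logic: jint == 2 is the
    # middle subinterval [-4, 4]; any other jint is an outer one.
    if jint == 2:
        return elam + x * x - 10.0
    return elam + 6.0 / abs(x)

# The coefficient is discontinuous at the breakpoints x = +/-4, which is
# exactly why those points must be listed in xpoint.
jump = q_coeff(4.0, 0.0, 2) - q_coeff(4.0, 0.0, 3)
```

The jump at $x=4$ is $\left(16-10\right)-6/4=4.5$, independent of $\lambda $; integrating across it without a breakpoint would degrade the step-size control.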
Example C
Here
$q\left(x;\lambda \right)$ is nearly constant over the range except for a sharp inverted spike over approximately
$-0.1\le x\le 0.1$. There is a danger that the function will build up to a large step size and ‘step over’ the spike without noticing it. By using breakpoints – say at
$\pm 0.5$ – one can restrict the step size near the spike without impairing the efficiency elsewhere.
The code for
coeffn could be
function [p, q, dqdl] = coeffn(x, elam, jint)
...
p = 1;
dqdl = 1 - 2*exp(-100*x*x);
q = elam * dqdl;
xpoint might contain
$-10.0$,
$-10.0$,
$-0.5$,
$0.5$,
$10.0$,
$10.0$ (assuming
$\pm 10$ are regular points) and
$m$ would be
$6$.
${\mathbf{hmax}}\left(1,\mathit{j}\right)$, for
$\mathit{j}=1,2,3$, might contain
$0.0$,
$0.1$ and
$0.0$.
Examples of Boundary Conditions at Singular Points
Quoting from page 243 of
Bailey (1966): ‘Usually ... the differential equation has two essentially different types of solution near a singular point, and the boundary condition there merely serves to distinguish one kind from the other. This is the case in all the standard examples of mathematical physics.’
In most cases the behaviour of the ratio
$p\left(x\right){y}^{\prime}/y$ near the point is quite different for the two types of solution. Essentially what you provide through
bdyval is an approximation to this ratio, valid as
$x$ tends to the singular point (SP).
You must decide (a) how accurate to make this approximation or asymptotic formula, for example how many terms of a series to use, and (b) where to place the boundary matching point (BMP) at which the numerical solution of the differential equation takes over from the asymptotic formula. Taking the BMP closer to the SP will generally improve the accuracy of the asymptotic formula, but will make the computation more expensive as the Pruefer differential equations generally become progressively more ill-behaved as the SP is approached. You are strongly recommended to experiment with placing the BMPs. In many singular problems quite crude asymptotic formulae will do. To help you avoid needlessly accurate formulae,
nag_ode_sl2_breaks_funs (d02ke) outputs two ‘sensitivity coefficients’
${\sigma}_{l},{\sigma}_{r}$ which estimate how much the errors at the BMPs affect the computed eigenvalue. They are described in detail in
The Sensitivity Parameters ${\sigma}_{l}$ and ${\sigma}_{r}$.
Example of coding bdyval: The example below illustrates typical situations for the equation ${y}^{\prime \prime}+\left(\lambda -x-2/{x}^{2}\right)y=0\text{,}$
the boundary conditions being that
$y$ should remain bounded as
$x$ tends to
$0$ and
$x$ tends to
$\infty $.
At the end
$x=0$ there is one solution that behaves like
${x}^{2}$ and another that behaves like
${x}^{-1}$. For the first of these solutions
$p\left(x\right){y}^{\prime}/y$ is asymptotically
$2/x$ while for the second it is asymptotically
$-1/x$. Thus the desired ratio is specified by setting ${\mathbf{yl}}\left(1\right)=x$ and ${\mathbf{yl}}\left(2\right)=2$.
At the end
$x=\infty $ the equation behaves like Airy's equation shifted through
$\lambda $, i.e., like
${y}^{\prime \prime}-ty=0$ where
$t=x-\lambda $, so again there are two types of solution. The solution we require behaves as
$y\sim \mathrm{exp}\left(-\frac{2}{3}{t}^{3/2}\right)$
and the other as
$y\sim \mathrm{exp}\left(+\frac{2}{3}{t}^{3/2}\right)\text{.}$
Hence, the desired solution has
$p\left(x\right){y}^{\prime}/y\sim -\sqrt{t}$ so that we could set
${\mathbf{yr}}\left(1\right)=1.0$ and
${\mathbf{yr}}\left(2\right)=-\sqrt{x-\lambda}$. The complete function might thus be
function [yl, yr] = bdyval(xl, xr, elam)
yl(1) = xl;
yl(2) = 2;
yr(1) = 1;
yr(2) = -sqrt(xr-elam);
Clearly for this problem it is essential that any value given by
nag_ode_sl2_breaks_funs (d02ke) to
xr is well to the right of the value of
elam, so that you must vary the right-hand BMP with the eigenvalue index
$k$. One would expect
${\lambda}_{k}$ to be near the
$k$th zero of the Airy function
$\mathrm{Ai}\left(x\right)$, so there is no problem estimating
elam.
More accurate asymptotic formulae are easily found: near
$x=0$ by the standard Frobenius method, and near
$x=\infty $ by using standard asymptotics for
$\mathrm{Ai}\left(x\right)$,
${\mathrm{Ai}}^{\prime}\left(x\right)$, (see page 448 of
Abramowitz and Stegun (1972)).
For example, by the Frobenius method the solution near
$x=0$ has the expansion
with
This yields
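The elided expansion can be reconstructed as a sketch (an illustrative derivation; the exact form used in the original is not reproduced here). Substituting $y={x}^{2}\sum_{n\ge 0}{a}_{n}{x}^{n}$ into ${y}^{\prime \prime}+\left(\lambda -x-2/{x}^{2}\right)y=0$ gives the recurrence

```latex
% Frobenius recurrence near x = 0 (illustrative sketch)
n(n+3)\,a_n = a_{n-3} - \lambda\,a_{n-2},
\qquad a_0 = 1,\quad a_{-1} = a_{-2} = 0,
```

so that ${a}_{1}=0$, ${a}_{2}=-\lambda /10$, ${a}_{3}=1/18$, and hence $p\left(x\right){y}^{\prime}/y=2/x-\left(\lambda /5\right)x+\left(1/6\right){x}^{2}+O\left({x}^{3}\right)$, refining the crude choice ${\mathbf{yl}}\left(1\right)=x$, ${\mathbf{yl}}\left(2\right)=2$.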
The Sensitivity Parameters σl and σr
The sensitivity parameters
${\sigma}_{l}$,
${\sigma}_{r}$ (held in
${\mathbf{hmax}}\left(1,m-1\right)$ and
${\mathbf{hmax}}\left(1,m\right)$ on output) estimate the effect of errors in the boundary conditions. For sufficiently small errors
$\Delta y$,
$\Delta p{y}^{\prime}$ in
$y$ and
$p{y}^{\prime}$ respectively, the relations
are satisfied, where the subscripts
$l$,
$r$ denote errors committed at the left and righthand BMPs respectively, and
$\Delta \lambda $ denotes the consequent error in the computed eigenvalue.
‘Missed Zeros’
This is a pitfall to beware of at a singular point. If the BMP is chosen so far from the SP that a zero of the desired eigenfunction lies in between them, then the function will fail to ‘notice’ this zero. Since the index
$k$ of an eigenvalue is the number of zeros of its eigenfunction, the result will be that
(a) 
the wrong eigenvalue will be computed for the given index $k$ – in fact some ${\lambda}_{k+{k}^{\prime}}$ will be found where ${k}^{\prime}\ge 1$; 
(b) 
the same index $k$ can cause convergence to any of several eigenvalues depending on the initial values of elam and delam. 
It is up to you to take suitable precautions – for instance by varying the position of the BMPs in the light of knowledge of the asymptotic behaviour of the eigenfunction at different eigenvalues.
Example
This example finds the 11th eigenvalue and eigenfunction of the example of
Examples of Boundary Conditions at Singular Points, using the simple asymptotic formulae for the boundary conditions.
Comparison of the results from this example program with the corresponding results from
the nag_ode_sl2_breaks_vals (d02kd) example program shows that similar output is produced from
monit, followed by the eigenfunction values from
report, and then a further line of information from
monit (corresponding to the integration to find the eigenfunction). Final information is printed within the example program exactly as with
nag_ode_sl2_breaks_vals (d02kd).
Note the discrepancy at the matching point $c$ ($\text{}=\sqrt[3]{4}$, the maximum of $q\left(x;\lambda \right)$, in this case) between the solutions obtained by integrations from left and righthand end points.
function d02ke_example
fprintf('d02ke example results\n\n');
global ykeep ncall xkeep pkeep;
xpoint = [0; 0.1; 4d0^(1d0/3d0); 30; 30];
match = 0;
k = 11;
tol = 1d-4;
elam = 14;
delam = 1;
m = 5;
hmax = zeros(2,m);
ncall = 0;
ykeep = zeros(1,1);
xkeep = zeros(1,1);
pkeep = zeros(1,1);
[match, elam, delam, hmax, maxit, ifail] = ...
d02ke( ...
xpoint, int64(match), @coeffn, @bdyval, ...
int64(k), tol, elam, delam, hmax, 'd02kay', @report);
fprintf('\n\n');
fprintf('Eigenvalue number, k = %d\n',k);
fprintf(' estimated value = %6.3f\n',elam);
fprintf(' estimated error = %9.2e\n',delam);
fprintf(' sensitivity to left boundary = %6.3f\n',hmax(1,m-1));
fprintf(' sensitivity to right boundary = %6.3f\n\n', hmax(1,m));
fig1 = figure;
display_plot(xkeep,pkeep,ykeep)
function [yl, yr] = bdyval(xl, xr, elam)
yl(1) = xl;
yl(2) = 2;
yl(3) = 0;
yr(1) = 1;
yr(2) = -sqrt(xr-elam);
yr(3) = 0;
function [p, q, dqdl] = coeffn(x, elam, jint)
p = 1;
dqdl = 1;
q = elam - x - 2/x/x;
function report(x, v, jint)
global ykeep ncall xkeep pkeep;
if (jint == 0)
fprintf('\n A singular problem\n');
end
sqrtb = sqrt(v(1));
if (0.5*v(3) >= log(x02am))
r = exp(0.5*v(3));
else
r = 0d0;
end
pyp = r*sqrtb*cos(0.5*v(2));
y = r/sqrtb*sin(0.5*v(2));
ncall = ncall+1;
ykeep(ncall,1) = y;
xkeep(ncall,1) = x;
pkeep(ncall,1) = pyp;
function display_plot(xkeep, pkeep, ykeep)
n = length(xkeep);
for i = 2:n
if (xkeep(i-1) > xkeep(i))
x1 = [xkeep(i-1:n);rot90(xkeep(1:i-2)')];
p1 = [pkeep(i-1:n);rot90(pkeep(1:i-2)')];
y1 = [ykeep(i-1:n);rot90(ykeep(1:i-2)')];
break
end
end
plot(x1,y1,'-',x1,p1,'--');
title({'Regular Singular Secondorder SturmLiouville System',...
'11th Eigenfunction and Derivative'});
xlabel('x'); ylabel('Eigenfunction and Derivative');
legend('y','p(x)y''','Location','Best');
d02ke example results
A singular problem
Eigenvalue number, k = 11
estimated value = 14.946
estimated error = 9.60e-04
sensitivity to left boundary = 0.015
sensitivity to right boundary = 0.000
© The Numerical Algorithms Group Ltd, Oxford, UK. 2009–2015