Integer type: int32, int64, or nag_int

nag_ode_bvp_shoot_genpar_algeq (d02sa) solves a two-point boundary value problem for a system of first-order ordinary differential equations with boundary conditions, combined with additional algebraic equations. It uses initial value techniques and a modified Newton iteration in a shooting and matching method.

Mark 22: npoint has been made optional

nag_ode_bvp_shoot_genpar_algeq (d02sa) solves a two-point boundary value problem for a system of $n$ first-order ordinary differential equations with separated boundary conditions by determining certain unknown parameters ${p}_{1},{p}_{2},\dots,{p}_{m}$. (There may also be additional algebraic equations to be solved in the determination of the parameters and, if so, these equations are defined by eqn.) The parameters may be, but need not be, boundary values; they may include eigenvalues, parameters in the coefficients of the differential equations, coefficients in series or asymptotic expansions for boundary values, the length of the range of definition of the system of differential equations, etc.

It is assumed that we have a system of $n$ differential equations of the form

$${y}^{\prime}=f(x,y,p)\text{,}$$ | (1) |

where $p={({p}_{1},{p}_{2},\dots,{p}_{m})}^{\mathrm{T}}$ is the vector of parameters, and that the derivative $f$ is evaluated by fcn. Also, ${n}_{1}$ of the equations are assumed to depend on $p$. For ${n}_{1}<n$ the remaining $n-{n}_{1}$ equations of the system are not involved in the matching process. These are the driving equations; they should be independent of $p$ and of the solution of the other ${n}_{1}$ equations. In numbering the equations in fcn and bc the driving equations must be put **first** (as they naturally occur in most applications). The range of definition $[a,b]$ of the differential equations is defined by range and may depend on the parameters ${p}_{1},{p}_{2},\dots,{p}_{m}$ (that is, on $p$). range must define the points ${x}_{1},{x}_{2},\dots,{x}_{{\mathbf{npoint}}}$, ${\mathbf{npoint}}\ge 2$, which must satisfy

$$a={x}_{1}<{x}_{2}<\cdots<{x}_{{\mathbf{npoint}}}=b$$ | (2) |

(or a similar relationship with all the inequalities reversed).

If ${\mathbf{npoint}}>2$ the points ${x}_{1},{x}_{2},\dots,{x}_{{\mathbf{npoint}}}$ can be used to break up the range of definition. Integration is restarted at each of these points, which means that the differential equations (1) can be defined differently on each sub-interval $[{x}_{i},{x}_{i+1}]$, for $i=1,2,\dots,{\mathbf{npoint}}-1$. Also, since initial and maximum integration step sizes can be supplied on each sub-interval (via the array swp), you can indicate parts of the range $[a,b]$ where the solution $y(x)$ may be difficult to obtain accurately and take appropriate action.

The boundary conditions may also depend on the parameters and are applied at $a={x}_{1}$ and $b={x}_{{\mathbf{npoint}}}$. They are defined (in bc) in the form

$$y(a)={g}_{1}(p),\quad y(b)={g}_{2}(p)\text{.}$$ | (3) |

The boundary value problem is solved by determining the unknown parameters $p$ by a shooting and matching technique. The differential equations are always integrated from $a$ to $b$ with initial values $y(a)={g}_{1}(p)$. The solution vector thus obtained at $x=b$ is subtracted from the vector ${g}_{2}(p)$ to give the ${n}_{1}$ residuals ${r}_{1}(p)$, ignoring the first $n-{n}_{1}$ driving equations. Because the direction of integration is always from $a$ to $b$, it is unnecessary, in bc, to supply values for the first $n-{n}_{1}$ boundary values at $b$, that is the first $n-{n}_{1}$ components of ${g}_{2}$ in (3). If ${n}_{1}<m$, the residuals ${r}_{1}(p)$, together with the $m-{n}_{1}$ equations defined by eqn,

$${r}_{2}(p)=0\text{,}$$ | (4) |

give a vector of residuals $r$ which, at the solution $p$, must satisfy

$$\left(\begin{array}{l}{r}_{1}(p)\\ {r}_{2}(p)\end{array}\right)=0\text{.}$$ | (5) |

These equations are solved by a pseudo-Newton iteration which uses a modified singular value decomposition of $J=\frac{\partial r}{\partial p}$ when solving the linear equations that arise. The Jacobian $J$ used in Newton's method is obtained by numerical differentiation. The parameters at each Newton iteration are accepted only if the norm ${\Vert {D}^{-1}{\stackrel{~}{J}}^{+}r\Vert}_{2}$ is much reduced from its previous value. Here ${\stackrel{~}{J}}^{+}$ is the pseudo-inverse, calculated from the singular value decomposition, of a modified version of the Jacobian $J$ (${J}^{+}$ is actually the inverse of the Jacobian in well-conditioned cases). $D$ is a diagonal matrix with

$${d}_{ii}=\mathrm{max}(\left|{p}_{i}\right|,{\mathbf{pf}}(i))\text{,}$$ | (6) |

where pf is an array of floor values.

See Deuflhard (1974) for further details of the variants of Newton's method used, Gay (1976) for the modification of the singular value decomposition and Gladwell (1979) for an overview of the method used.

Two facilities are provided to prevent the pseudo-Newton iteration running into difficulty. First, you are permitted to specify constraints on the values of the parameters $p$ via constr. These constraints are used only to prevent the Newton iteration using values of $p$ which would violate them; they are not used to determine the values of $p$. Secondly, you are permitted to specify a maximum value ${y}_{\mathrm{max}}$ for ${\Vert y(x)\Vert}_{\infty}$ at all points in the range $[a,b]$. This facility is intended to prevent machine overflow in the integrations of equation (1) due to poor choices of the parameters $p$ which might arise during the Newton iteration. When using this facility, it is presumed that you have an estimate of the likely size of ${\Vert y(x)\Vert}_{\infty}$ at all points $x\in[a,b]$; ${y}_{\mathrm{max}}$ should then be chosen rather larger (say by a factor of $10$) than this estimate.
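The ${y}_{\mathrm{max}}$ facility amounts to an overflow guard inside each trial integration. A minimal sketch (assumed names; crude fixed-step Euler stepping for brevity, not the library's integrator):

```python
# Illustrative sketch (not NAG code): guard an integration with a bound
# ymax on ||y(x)||_inf, rejecting parameter sets whose solution blows up.
def guarded_integrate(f, y0, a, b, p, ymax, steps=100):
    """Return (y at b, accepted); accepted is False if the bound was hit."""
    h = (b - a) / steps
    x, y = a, list(y0)
    for _ in range(steps):
        dy = f(x, y, p)
        y = [y[i] + h * dy[i] for i in range(len(y))]
        x += h
        # ymax == 0 means "no bound", matching the ymax argument convention
        if ymax > 0 and max(abs(v) for v in y) > ymax:
            return y, False   # bound exceeded: reject these parameters
    return y, True
```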

You are strongly advised to supply a monit (or to call the ‘default’ function nag_ode_bvp_shoot_genpar_algeq_sample_monit (d02hbx), see monit) to monitor the progress of the pseudo-Newton iteration. You can output the solution of the problem y(x)$y\left(x\right)$ by supplying a suitable prsol (an example is given in Section [Example] of a function designed to output the solution at equally spaced points).

nag_ode_bvp_shoot_genpar_algeq (d02sa) is designed to try all possible options before admitting failure and returning to you. Provided the function can start the Newton iteration from the initial point p$p$ it will exhaust all the options available to it (though you can override this by specifying a maximum number of iterations to be taken). The fact that all its options have been exhausted is the only error exit from the iteration. Other error exits are possible, however, whilst setting up the Newton iteration and when computing the final solution.

If you require more background information about the solution of boundary value problems by shooting methods you are recommended to read the appropriate chapters of Hall and Watt (1976); for a detailed description of nag_ode_bvp_shoot_genpar_algeq (d02sa), Gladwell (1979) is recommended.

Deuflhard P (1974) A modified Newton method for the solution of ill-conditioned systems of nonlinear equations with application to multiple shooting *Numer. Math.* **22** 289–315

Gay D (1976) On modifying singular values to solve possibly singular systems of nonlinear equations *Working Paper 125* Computer Research Centre, National Bureau for Economics and Management Science, Cambridge, MA

Gladwell I (1979) The development of the boundary value codes in the ordinary differential equations chapter of the NAG Library *Codes for Boundary Value Problems in Ordinary Differential Equations. Lecture Notes in Computer Science* (eds B Childs, M Scott, J W Daniel, E Denman and P Nelson) **76** Springer–Verlag

Hall G and Watt J M (ed.) (1976) *Modern Numerical Methods for Ordinary Differential Equations* Clarendon Press, Oxford

- 1: p(m) – double array
- ${\mathbf{p}}(i)$ must be set to an estimate of the $i$th parameter, ${p}_{i}$, for $i=1,2,\dots,m$.
- 2: n1 – int64int32nag_int scalar
- ${n}_{1}$, the number of differential equations involved in the matching process; the first $n-{n}_{1}$ equations are treated as driving equations (see Section [Description]).
- 3: pe(m) – double array
- ${\mathbf{pe}}(i)$, for $i=1,2,\dots,m$, must be set to a positive value for use in the convergence test on the $i$th parameter ${p}_{i}$. See the description of pf for further details. *Constraint*: ${\mathbf{pe}}(i)>0.0$, for $i=1,2,\dots,m$.
- 4: pf(m) – double array
- ${\mathbf{pf}}(i)$, for $i=1,2,\dots,m$, should be set to a 'floor' value in the convergence test on the $i$th parameter ${p}_{i}$. If ${\mathbf{pf}}(i)\le 0.0$ on entry then it is set to the small positive value $\sqrt{\epsilon}$ (where $\epsilon$ may in most cases be considered to be machine precision); otherwise it is used unchanged. The Newton iteration is presumed to have converged if a full Newton step is taken (${\mathbf{istate}}=1$ in the specification of monit), the singular values of the Jacobian are not being significantly perturbed (also see monit) and the Newton correction ${C}_{i}$ satisfies
$$\left|{C}_{i}\right|\le {\mathbf{pe}}(i)\times \mathrm{max}(\left|{p}_{i}\right|,{\mathbf{pf}}(i))\text{,}\quad i=1,2,\dots,m\text{,}$$
where ${p}_{i}$ is the current value of the $i$th parameter. The values ${\mathbf{pf}}(i)$ are also used in determining the Newton iterates as discussed in Section [Description]; see equation (6).
- 5: e(n) – double array
- Values for use in controlling the local error in the integration of the differential equations. If ${\mathit{err}}_{i}$ is an estimate of the local error in ${y}_{i}$, for $i=1,2,\dots,n$, then
$$\left|{\mathit{err}}_{i}\right|\le {\mathbf{e}}(i)\times \mathrm{max}\{\sqrt{\epsilon},\left|{y}_{i}\right|\}\text{,}$$
where $\epsilon$ may in most cases be considered to be machine precision.
- 6: dp(m) – double array
- A value to be used in perturbing the parameter ${p}_{i}$ in the numerical differentiation to estimate the Jacobian used in Newton's method. If ${\mathbf{dp}}(i)=0.0$ on entry, an estimate is made internally by setting
$${\mathbf{dp}}(i)=\sqrt{\epsilon}\times \mathrm{max}({\mathbf{pf}}(i),\left|{p}_{i}\right|)\text{,}$$ | (7) |
where ${p}_{i}$ is the initial value of the parameter supplied by you and $\epsilon$ may in most cases be considered to be machine precision. The estimate of the Jacobian, $J$, is made using forward differences: for each $i$, $i=1,2,\dots,m$, ${p}_{i}$ is perturbed to ${p}_{i}+{\mathbf{dp}}(i)$ and the $i$th column of $J$ is estimated as
$$(r({p}_{i}+{\mathbf{dp}}(i))-r({p}_{i}))/{\mathbf{dp}}(i)\text{,}$$
where the other components of $p$ are unchanged (see (3) for the notation used). If this fails to produce a Jacobian with significant columns, backward differences are tried by perturbing ${p}_{i}$ to ${p}_{i}-{\mathbf{dp}}(i)$, and if this also fails then central differences are used with ${p}_{i}$ perturbed to ${p}_{i}+10.0\times{\mathbf{dp}}(i)$. If this also fails then the calculation of the Jacobian is abandoned. If the Jacobian has not previously been calculated then an error exit is taken. If an earlier estimate of the Jacobian is available then the current parameter set, ${p}_{i}$, for $i=1,2,\dots,m$, is abandoned in favour of the last parameter set from which useful progress was made, and the singular values of the Jacobian used at that point are modified before proceeding with the Newton iteration.
You are recommended to use the default value ${\mathbf{dp}}(i)=0.0$ unless you have prior knowledge of a better choice. If any of the perturbations described are likely to lead to an unfortunate set of parameter values then you should use constr to prevent such perturbations (all changes of parameters are checked by a call to constr).
- 7: swp(ldswp,6) – double array
- ldswp, the first dimension of the array, must satisfy the constraint $\mathit{ldswp}\ge {\mathbf{npoint}}$.
${\mathbf{swp}}(i,1)$ must contain an estimate for an initial step size for integration across the $i$th sub-interval $[{\mathbf{x}}(i),{\mathbf{x}}(i+1)]$, for $i=1,2,\dots,{\mathbf{npoint}}-1$ (see range). ${\mathbf{swp}}(i,1)$ should have the same sign as ${\mathbf{x}}(i+1)-{\mathbf{x}}(i)$ if it is nonzero. If ${\mathbf{swp}}(i,1)=0.0$ on entry, a default value for the initial step size is calculated internally. This is the recommended mode of entry.
${\mathbf{swp}}(i,3)$ must contain a lower bound for the modulus of the step size on the $i$th sub-interval $[{\mathbf{x}}(i),{\mathbf{x}}(i+1)]$, for $i=1,2,\dots,{\mathbf{npoint}}-1$. If ${\mathbf{swp}}(i,3)=0.0$ on entry, a very small default value is used. By setting ${\mathbf{swp}}(i,3)>0.0$ but smaller than the expected step sizes (assuming you have some insight into the likely step sizes), expensive integrations with parameters $p$ far from the solution can be avoided.
${\mathbf{swp}}(i,2)$ must contain an upper bound on the modulus of the step size to be used in the integration on $[{\mathbf{x}}(i),{\mathbf{x}}(i+1)]$, for $i=1,2,\dots,{\mathbf{npoint}}-1$. If ${\mathbf{swp}}(i,2)=0.0$ on entry no bound is assumed; this is the recommended mode of entry unless the solution is expected to have important features which might be 'missed' in the integration if the step size were permitted to be chosen freely.
- 8: icount – int64int32nag_int scalar
- An upper bound on the number of Newton iterations. If icount = 0${\mathbf{icount}}=0$ on entry, no check on the number of iterations is made (this is the recommended mode of entry).
- 9: range – function handle or string containing name of m-file
- range must specify the break points ${x}_{i}$, for $i=1,2,\dots,{\mathbf{npoint}}$, which may depend on the parameters ${p}_{j}$, for $j=1,2,\dots,m$.
[x] = range(npoint, p, m)
**Input Parameters**- 1: npoint – int64int32nag_int scalar
- 2$2$ plus the number of break points in (a,b)$(a,b)$.
- 2: p(m) – double array
- The current estimate of the i$\mathit{i}$th parameter, for i = 1,2, … ,m$\mathit{i}=1,2,\dots ,m$.
- 3: m – int64int32nag_int scalar
- m$m$, the number of parameters.

**Output Parameters**- 1: x(npoint) – double array
- The $i$th break point, for $i=1,2,\dots,{\mathbf{npoint}}$. The sequence $({\mathbf{x}}(i))$ must be strictly monotonic, that is either
$$a={\mathbf{x}}(1)<{\mathbf{x}}(2)<\cdots<{\mathbf{x}}({\mathbf{npoint}})=b$$
or
$$a={\mathbf{x}}(1)>{\mathbf{x}}(2)>\cdots>{\mathbf{x}}({\mathbf{npoint}})=b\text{.}$$

- 10: bc – function handle or string containing name of m-file
- [g1, g2] = bc(p, m, n)
**Input Parameters**- 1: p(m) – double array
- An estimate of the $i$th parameter, ${p}_{i}$, for $i=1,2,\dots,m$.
- 2: m – int64int32nag_int scalar
- $m$, the number of parameters.
- 3: n – int64int32nag_int scalar
- $n$, the number of differential equations.

**Output Parameters**- 1: g1(n) – double array
- The value of ${y}_{i}(a)$, for $i=1,2,\dots,n$ (where this may be a known value or a function of the parameters ${p}_{j}$, for $j=1,2,\dots,m$).
- 2: g2(n) – double array
- The value of ${y}_{i}(b)$, for $i=1,2,\dots,n$ (where these may be known values or functions of the parameters ${p}_{j}$, for $j=1,2,\dots,m$). If $n>{n}_{1}$, so that there are some driving equations, then the first $n-{n}_{1}$ values of g2 need not be set since they are never used.

- 11: fcn – function handle or string containing name of m-file
- fcn must evaluate the functions ${f}_{i}$ (i.e., the derivatives ${y}_{i}^{\prime}$), for $i=1,2,\dots,n$.
[f] = fcn(x, y, n, p, m, ii)
**Input Parameters**- 1: x – double scalar
- $x$, the value of the independent variable.
- 2: y(n) – double array
- ${y}_{i}$, for $i=1,2,\dots,n$, the values of the dependent variables.
- 3: n – int64int32nag_int scalar
- $n$, the number of equations.
- 4: p(m) – double array
- The current estimate of the $i$th parameter ${p}_{i}$, for $i=1,2,\dots,m$.
- 5: m – int64int32nag_int scalar
- $m$, the number of parameters.
- 6: ii – int64int32nag_int scalar
- Specifies the sub-interval $[{x}_{i},{x}_{i+1}]$ on which the derivatives are to be evaluated.

**Output Parameters**- 1: f(n) – double array
- The derivative of ${y}_{i}$, for $i=1,2,\dots,n$, evaluated at $x$. ${\mathbf{f}}(i)$ may depend upon the parameters ${p}_{j}$, for $j=1,2,\dots,m$. If there are any driving equations (see Section [Description]) then these must be numbered first in the ordering of the components of f.

- 12: eqn – function handle or string containing name of m-file
- eqn is used to describe the additional algebraic equations to be solved in the determination of the parameters, ${p}_{i}$, for $i=1,2,\dots,m$. If there are no additional algebraic equations (i.e., $m={n}_{1}$) then eqn is never called and the string 'd02hbz' should be used as the actual argument.
[e] = eqn(q, p, m)
**Input Parameters**- 1: q – int64int32nag_int scalar
- The number of algebraic equations, $m-{n}_{1}$.
- 2: p(m) – double array
- The current estimate of the $i$th parameter ${p}_{i}$, for $i=1,2,\dots,m$.
- 3: m – int64int32nag_int scalar
- $m$, the number of parameters.

**Output Parameters**- 1: e(q) – double array
- The vector of residuals, ${r}_{2}(p)$, that is the amount by which the current estimates of the parameters fail to satisfy the algebraic equations.

- 13: constr – function handle or string containing name of m-file
- constr is used to prevent the pseudo-Newton iteration running into difficulty. constr should return the value **true** if the constraints are satisfied by the parameters ${p}_{1},{p}_{2},\dots,{p}_{m}$; otherwise constr should return the value **false**. Usually the dummy function nag_ode_bvp_shoot_genpar_algeq_dummy_constr (d02hby), which returns the value **true** at all times, will suffice, and in the first instance this is recommended as the actual argument.
[result] = constr(p, m)
**Input Parameters**- 1: p(m) – double array
- An estimate of the $i$th parameter, ${p}_{i}$, for $i=1,2,\dots,m$.
- 2: m – int64int32nag_int scalar
- $m$, the number of parameters.

**Output Parameters**- 1: result – logical scalar
- **true** if the constraints are satisfied by the current parameters; **false** otherwise.
- 14: ymax – double scalar
- A non-negative value which is used as a bound on all values ${\Vert y(x)\Vert}_{\infty}$, where $y(x)$ is the solution at any point $x$ between ${\mathbf{x}}(1)$ and ${\mathbf{x}}({\mathbf{npoint}})$ for the current parameters ${p}_{1},{p}_{2},\dots,{p}_{m}$. If this bound is exceeded the integration is terminated and the current parameters are rejected. Such a rejection will result in an error exit if it prevents the initial residual or Jacobian, or the final solution, being calculated. If ${\mathbf{ymax}}=0$ on entry, no bound on the solution $y$ is used; that is, the integrations proceed without any checking on the size of ${\Vert y\Vert}_{\infty}$.
- 15: monit – function handle or string containing name of m-file
- monit enables you to monitor the values of various quantities during the calculation. It is called by nag_ode_bvp_shoot_genpar_algeq (d02sa) after every calculation of the norm ${\Vert {{\mathbf{d}}}^{-1}{\stackrel{~}{J}}^{+}r\Vert}_{2}$ which determines the strategy of the Newton method, every time there is an internal error exit leading to a change of strategy, and before an error exit when calculating the initial Jacobian. The default function, specified by the string 'd02hbx', may be used when no other monitoring is required.
monit(istate, iflag, ifail1, p, m, f, pnorm, pnorm1, eps, d)
**Input Parameters**- 1: istate – int64int32nag_int scalar
- The state of the Newton iteration.
- ${\mathbf{istate}}=0$
- The calculation of the residual, Jacobian and ${\Vert {{\mathbf{d}}}^{-1}{\stackrel{~}{J}}^{+}r\Vert}_{2}$ are taking place.
- ${\mathbf{istate}}=1$ to $5$
- During the Newton iteration a factor of ${2}^{(-{\mathbf{istate}}+1)}$ of the Newton step is being used to try to reduce the norm.
- ${\mathbf{istate}}=6$
- The current Newton step has been rejected and the Jacobian is being re-calculated.
- ${\mathbf{istate}}=-6$ to $-1$
- An internal error exit has caused the rejection of the current set of parameter values, $p$. $-{\mathbf{istate}}$ is the value which istate would have taken if the error had not occurred.
- ${\mathbf{istate}}=-7$
- An internal error exit has occurred when calculating the initial Jacobian.

- 2: iflag – int64int32nag_int scalar
- Whether or not the Jacobian being used has been calculated at the beginning of the current iteration. If the Jacobian has been updated then iflag = 1${\mathbf{iflag}}=1$; otherwise iflag = 2${\mathbf{iflag}}=2$. The Jacobian is only calculated when convergence to the current parameter values has been slow.
- 3: ifail1 – int64int32nag_int scalar
- 4: p(m) – double array
- The current estimate of the $i$th parameter ${p}_{i}$, for $i=1,2,\dots,m$.
- 5: m – int64int32nag_int scalar
- m$m$, the number of parameters.
- 6: f(m) – double array
- 7: pnorm – double scalar
- A quantity against which all reductions in norm are currently measured.
- 8: pnorm1 – double scalar
- 9: eps – double scalar
- Gives some indication of the convergence rate. It is the current singular value modification factor (see Gay (1976)). It is zero initially and whenever convergence is proceeding steadily. eps is ${\epsilon}^{3/8}$ or greater (where $\epsilon$ may in most cases be considered machine precision) when the singular values of $J$ are approximately zero or when convergence is not being achieved. The larger the value of eps, the worse the convergence rate. When eps becomes too large the Newton iteration is terminated.
- 10: d(m) – double array
- The singular values of the current modified Jacobian matrix $J$. If ${\mathbf{d}}(m)$ is small relative to ${\mathbf{d}}(1)$ for a number of Jacobians corresponding to different parameter values then the computed results should be viewed with suspicion. It could be that the matching equations do not depend significantly on some parameter (which could be due to a programming error in fcn, bc, range or eqn). Alternatively, the system of differential equations may be very ill-conditioned when viewed as an initial value problem, in which case nag_ode_bvp_shoot_genpar_algeq (d02sa) is unsuitable. This may also be indicated by some singular values being very large. These values of ${\mathbf{d}}(i)$, for $i=1,2,\dots,m$, should not be changed.

- 16: prsol – function handle or string containing name of m-file
- prsol can be used to obtain values of the solution $y$ at a selected point $z$ by integration across the final range $[{\mathbf{x}}(1),{\mathbf{x}}({\mathbf{npoint}})]$. If no output is required nag_ode_bvp_shoot_genpar_algeq_dummy_prsol (d02hbw) can be used as the actual argument.
[z] = prsol(z, y, n)
**Input Parameters**- 1: z – double scalar
- Contains ${x}_{1}$ on the first call. On subsequent calls z contains its previous output value.
- 2: y(n) – double array
- The solution value ${y}_{i}$, for $i=1,2,\dots,n$, at $z$.
- 3: n – int64int32nag_int scalar
- $n$, the total number of differential equations.

**Output Parameters**- 1: z – double scalar
- The next point at which output is required. The new point must be nearer ${\mathbf{x}}({\mathbf{npoint}})$ than the old. If z is set to a point outside $[{\mathbf{x}}(1),{\mathbf{x}}({\mathbf{npoint}})]$ the process stops and control returns from nag_ode_bvp_shoot_genpar_algeq (d02sa) to the (sub)program from which nag_ode_bvp_shoot_genpar_algeq (d02sa) is called. Otherwise the next call to prsol is made by nag_ode_bvp_shoot_genpar_algeq (d02sa) at the point z, with solution values ${y}_{1},{y}_{2},\dots,{y}_{n}$ at z contained in y. If z is set to ${\mathbf{x}}({\mathbf{npoint}})$ exactly, the final call to prsol is made with ${y}_{1},{y}_{2},\dots,{y}_{n}$ as values of the solution at ${\mathbf{x}}({\mathbf{npoint}})$ produced by the integration. In general the solution values obtained at ${\mathbf{x}}({\mathbf{npoint}})$ from prsol will differ from the values obtained at this point by a call to bc. The difference between the two solutions is the residual $r$. You are reminded that the points ${\mathbf{x}}(1),{\mathbf{x}}(2),\dots,{\mathbf{x}}({\mathbf{npoint}})$ are available in the locations ${\mathbf{swp}}(1,4),{\mathbf{swp}}(2,4),\dots,{\mathbf{swp}}({\mathbf{npoint}},4)$ at all times.
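The forward-difference Jacobian estimate of equation (7), with the backward- and central-difference fallbacks described under dp, can be sketched as follows. This is illustrative only: the "significant column" test here is a crude stand-in for the routine's, and the names are assumptions.

```python
import math

EPS_SQRT = math.sqrt(2.0 ** -52)   # sqrt(machine precision) for doubles

def jacobian_columns(r, p, pf, dp):
    """Estimate J column by column: forward differences first, then
    backward, then central with the larger 10*h perturbation (sketch)."""
    m = len(p)
    r0 = r(p)
    n = len(r0)
    J = [[0.0] * m for _ in range(n)]
    for i in range(m):
        # default perturbation as in equation (7) when dp(i) = 0
        h = dp[i] if dp[i] != 0.0 else EPS_SQRT * max(pf[i], abs(p[i]))
        for scheme in ("forward", "backward", "central"):
            q = list(p)
            if scheme == "forward":
                q[i] += h
                rq = r(q)
                col = [(rq[k] - r0[k]) / h for k in range(n)]
            elif scheme == "backward":
                q[i] -= h
                rq = r(q)
                col = [(r0[k] - rq[k]) / h for k in range(n)]
            else:
                hp = 10.0 * h
                qm = list(p)
                qm[i] -= hp
                q[i] += hp
                rq, rm = r(q), r(qm)
                col = [(rq[k] - rm[k]) / (2.0 * hp) for k in range(n)]
            if any(c != 0.0 for c in col):   # crude "significant column" test
                for k in range(n):
                    J[k][i] = col[k]
                break
    return J
```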

- 1: m – int64int32nag_int scalar
- $m$, the number of parameters. *Default*: the dimension of the arrays p, pe, pf and dp (an error is raised if these dimensions are not equal).
- 2: n – int64int32nag_int scalar
- $n$, the total number of differential equations. *Default*: the dimension of the array e.
- 3: npoint – int64int32nag_int scalar
- $2$ plus the number of break points in the range of definition of the system of differential equations (1). *Default*: the first dimension of the array swp.


- 1: p(m) – double array
- The corrected value for the i$i$th parameter, unless an error has occurred, when it contains the last calculated value of the parameter.
- 2: pf(m) – double array
- The values actually used.
- 3: dp(m) – double array
- The values actually used.
- 4: swp(ldswp,6$6$) – double array
- ldswp ≥ npoint$\mathit{ldswp}\ge {\mathbf{npoint}}$.swp(i,1)${\mathbf{swp}}\left(\mathit{i},1\right)$ contains the initial step size used on the last integration on [x(i),x( i + 1 )] $[{\mathbf{x}}\left(\mathit{i}\right),{\mathbf{x}}\left(\mathit{i}+1\right)]$, for i = 1,2, … ,npoint − 1$\mathit{i}=1,2,\dots ,{\mathbf{npoint}}-1$, (excluding integrations during the calculation of the Jacobian).swp(i,2)${\mathbf{swp}}\left(\mathit{i},2\right)$, for i = 1,2, … ,npoint − 1$\mathit{i}=1,2,\dots ,{\mathbf{npoint}}-1$, is usually unchanged. If the maximum step size swp(i,2)${\mathbf{swp}}\left(i,2\right)$ is so small or the length of the range [x(i),x( i + 1 )] $[{\mathbf{x}}\left(i\right),{\mathbf{x}}\left(i+1\right)]$ is so short that on the last integration the step size was not controlled in the main by the size of the error tolerances e(i)${\mathbf{e}}\left(i\right)$ but by these other factors, then swp(npoint,2)${\mathbf{swp}}\left({\mathbf{npoint}},2\right)$ is set to the floating point value of i$i$ if the problem last occurred in [x(i),x( i + 1 )] $[{\mathbf{x}}\left(i\right),{\mathbf{x}}\left(i+1\right)]$. Any results obtained when this value is returned as nonzero should be viewed with caution.swp(i,3)${\mathbf{swp}}\left(\mathit{i},3\right)$, for i = 1,2, … ,npoint − 1$\mathit{i}=1,2,\dots ,{\mathbf{npoint}}-1$, are unchanged.If an error exit with ifail = 4${\mathbf{ifail}}={\mathbf{4}}$, 5${\mathbf{5}}$ or 6${\mathbf{6}}$ (see Section [Error Indicators and Warnings]) occurs on the integration made from x(i)${\mathbf{x}}\left(i\right)$ to x(i + 1)${\mathbf{x}}\left(i+1\right)$ the floating point value of i$i$ is returned in swp(npoint,1)${\mathbf{swp}}\left({\mathbf{npoint}},1\right)$. The actual point x ∈ [x(i),x( i + 1 )] $x\in [{\mathbf{x}}\left(i\right),{\mathbf{x}}\left(i+1\right)]$ where the error occurred is returned in swp(1,5)${\mathbf{swp}}\left(1,5\right)$ (see also the specification of w). 
The floating point value of npoint is returned in ${\mathbf{swp}}({\mathbf{npoint}},1)$ if the error exit is caused by a call to bc. If an error exit occurs when estimating the Jacobian matrix (${\mathbf{ifail}}={\mathbf{7}}$, ${\mathbf{8}}$, ${\mathbf{9}}$, ${\mathbf{10}}$, ${\mathbf{11}}$ or ${\mathbf{12}}$, see Section [Error Indicators and Warnings]) and if parameter ${p}_{i}$ was the cause of the failure, then on exit ${\mathbf{swp}}({\mathbf{npoint}},1)$ contains the floating point value of $i$. ${\mathbf{swp}}(i,4)$ contains the point ${\mathbf{x}}(i)$, for $i=1,2,\dots,{\mathbf{npoint}}$, used at the solution $p$, or at the final values of $p$ if an error occurred. swp is also partly used as workspace.
- 5: w(ldw,sdw) – double array
- ldw ≥ max (n,m)$\mathit{ldw}\ge \mathrm{max}\phantom{\rule{0.125em}{0ex}}({\mathbf{n}},{\mathbf{m}})$.sdw ≥ 3 × m + 12 + max (11,m)$\mathit{sdw}\ge 3\times {\mathbf{m}}+12+\mathrm{max}\phantom{\rule{0.125em}{0ex}}(11,{\mathbf{m}})$.In the case of an error exit of the type where the point of failure is returned in swp(1,5)${\mathbf{swp}}\left(1,5\right)$, the solution at this point of failure is returned in w(i,1)${\mathbf{w}}\left(\mathit{i},1\right)$, for i = 1,2, … ,n$\mathit{i}=1,2,\dots ,n$.Otherwise w is used for workspace.
- 6: ifail – int64/int32/nag_int scalar
- ifail = 0 unless the function detects an error (see [Error Indicators and Warnings]).

Errors or warnings detected by the function:

- The constraints have been violated by the initial parameters.

- The condition x(1) < x(2) < ⋯ < x(npoint) (or x(1) > x(2) > ⋯ > x(npoint)) has been violated on a call to range with the initial parameters.

- In the integration from x(1) to x(npoint) with the initial or the final parameters, the step size was reduced too far for the integration to proceed. Consider reversing the order of the points x(1), x(2), …, x(npoint). If this error exit still results, it is likely that nag_ode_bvp_shoot_genpar_algeq (d02sa) is not a suitable method for solving the problem, that the initial choice of parameters is very poor, or that the accuracy requirement specified by e(i), for i = 1,2,…,n, is too stringent.

- In the integration from x(1) to x(npoint) with the initial or final parameters, an initial step could not be found to start the integration on one of the intervals x(i) to x(i+1). Consider reversing the order of the points. If this error exit still results, it is likely that nag_ode_bvp_shoot_genpar_algeq (d02sa) is not a suitable function for solving the problem, that the initial choice of parameters is very poor, or that the accuracy requirement specified by e(i), for i = 1,2,…,n, is much too stringent.

- In the integration from x(1) to x(npoint) with the initial or final parameters, the solution exceeded ymax in magnitude (when ymax > 0.0). It is likely that the initial choice of parameters was very poor or that ymax was incorrectly set.

- On calculating the initial approximation to the Jacobian, the constraints were violated.

- On perturbing the parameters when calculating the initial approximation to the Jacobian, the condition x(1) < x(2) < ⋯ < x(npoint) (or x(1) > x(2) > ⋯ > x(npoint)) is violated.

- On calculating the initial approximation to the Jacobian, a column of the Jacobian is found to be insignificant. This could be due to an element dp(i) being too small (but nonzero), or to the solution having no dependence on one of the parameters (a programming error).

- After calculating the initial approximation to the Jacobian, the calculation of its singular value decomposition failed. This error is unlikely to occur, as it is usually associated with the Jacobian having multiple singular values; to remedy it, it should only be necessary to change the initial parameters. If the error persists, it is likely that the problem has not been correctly formulated.

- The Newton iteration has failed to converge after exercising all its options. You are strongly recommended to monitor the progress of the iteration via monit. There are many possible reasons for the iteration not converging. Amongst the most likely are:
  (a) there is no solution;
  (b) the initial parameters are too far from the correct parameters;
  (c) the problem is too ill-conditioned as an initial value problem for Newton's method to choose suitable corrections;
  (d) the accuracy requirements for convergence are too restrictive, that is, some of the components of pe (and maybe pf) are too small – in this case the final value of the norm output via monit will usually be very small; or
  (e) the initial parameters are so close to the solution parameters p that the Newton iteration cannot find improved parameters; the norm output by monit should be very small.

- These indicate that there has been a serious error in an internal call. Check all function calls and array dimensions. Seek expert help.

If the iteration converges, the accuracy to which the unknown parameters are determined is usually close to that specified by you. The accuracy of the solution (output via prsol) depends on the error tolerances e(i), for i = 1,2,…,n. You are strongly recommended to vary all tolerances to check the accuracy of the parameters p and the solution y.

The time taken by nag_ode_bvp_shoot_genpar_algeq (d02sa) depends on the complexity of the system of differential equations and on the number of iterations required. In practice, the integration of the differential system (1) is usually by far the most costly process involved, and its computing time can sometimes depend critically on the quality of the initial estimates for the parameters p. If it seems that too much computing time is required and, in particular, if the values of the residuals (output in monit) are much larger than expected given your knowledge of the expected solution, then the coding of fcn, eqn, range and bc should be checked for errors. If no errors can be found, an independent attempt should be made to improve the initial estimates for p.

In the case of an error exit in the integration of the differential system indicated by ifail = 4, 5, 9 or 10, you are strongly recommended to perform trial integrations with nag_ode_ivp_rkts_onestep (d02pf) to determine the effects of changes to the local error tolerances and to the initial choice of the parameters p_i, for i = 1,2,…,m (that is, the initial choice of p).

It is possible that, by following the advice given in Section [Error Indicators and Warnings], an error exit with ifail = 7, 8, 9, 10 or 11 might be followed by one with ifail = 12 (or vice versa) where the advice given is the opposite. If you are unable to refine the choice of dp(i), for i = 1,2,…,m, so that both types of exit are avoided, then the problem should be rescaled if possible, or the method must be abandoned.

The choice of the ‘floor’ values pf(i), for i = 1,2,…,m, may be critical to the convergence of the Newton iteration. For each i, the initial choice of p_i and the choice of pf(i) should not both be very small unless it is expected that the final parameter p_i will be very small and that it should be determined accurately in a **relative** sense.

For many problems it is critical that a good initial estimate be found for the parameters p, or the iteration will not converge or may even break down with an error exit. There are many mathematical techniques which obtain good initial estimates for p in simple cases but which may fail to produce useful estimates in harder cases. If no such technique is available, it is recommended that you try a continuation (homotopy) technique, preferably based on a physical parameter (e.g., the Reynolds or Prandtl number is often a suitable continuation parameter). In a continuation method a sequence of problems is solved, one for each choice of the continuation parameter, ending with the problem of interest. At each stage the parameters p calculated at earlier stages are used to compute a good initial estimate for the parameters at the current stage (see Hall and Watt (1976) for more details).
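The continuation strategy just described can be sketched in outline as below. This fragment is illustrative only: the sequence lamseq, the starting estimate p0, and the device of passing the continuation parameter lambda to fcn via a global variable are assumptions made for the sketch, and the remaining arguments are taken to be set up as in the example program that follows.

```matlab
% Continuation (homotopy) sketch: solve a sequence of problems, each
% started from the parameters found at the previous stage.
global lambda;                   % continuation parameter, read inside fcn
lamseq = [0.1, 0.25, 0.5, 1.0];  % 1.0 corresponds to the problem of interest
p = p0;                          % initial estimate for the easiest problem
for k = 1:numel(lamseq)
  lambda = lamseq(k);
  [p, pfOut, dpOut, swpOut, w, ifail] = ...
      nag_ode_bvp_shoot_genpar_algeq(p, n1, pe, pf, e, dp, swp, ...
          int64(0), @range, @bc, @fcn, @eqn, @constr, ymax, @monit, @prsol);
  if ifail ~= 0
    error('continuation failed at lambda = %g (ifail = %d)', lambda, ifail);
  end
  % On success, p holds the parameters for this lambda and is reused
  % as the initial estimate at the next continuation step.
end
```

If the step from one lambda to the next proves too large for the Newton iteration to converge, inserting intermediate values of lambda is usually the simplest remedy.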

This example illustrates the use of the break point and equation-solving facilities of nag_ode_bvp_shoot_genpar_algeq (d02sa). Most of the facilities common to nag_ode_bvp_shoot_genpar_algeq (d02sa) and nag_ode_bvp_shoot_genpar (d02hb) are illustrated in the example in the specification of nag_ode_bvp_shoot_genpar (d02hb) (which should also be consulted).

The program solves a projectile problem in two media, determining the position of the change of media, p_3, and the gravity and viscosity in the second medium (p_2 represents gravity and p_4 represents viscosity).

Open in the MATLAB editor: nag_ode_bvp_shoot_genpar_algeq_example

function nag_ode_bvp_shoot_genpar_algeq_example
% For communication with prsol.
global ykeep ncall xkeep;
% Initialize variables and arrays.
p = [1.2; 0.032; 2.5; 0.2];
n1 = int32(3);
pe = [0.001; 0.001; 0.001; 0.001];
pf = [1e-06; 1e-06; 1e-06; 1e-06];
e = [1e-05; 1e-05; 1e-05];
dp = [0; 0; 0; 0];
swp = zeros(3, 6);
icount = 0;
ymax = 0;
ncall = 0;
ykeep = zeros(1,3);
xkeep = zeros(1,1);
fprintf('nag_ode_bvp_shoot_genpar_algeq example program results\n\n');
fprintf('    x       y(1)      y(2)      y(3)\n')
[pOut, pfOut, dpOut, swpOut, w, ifail] = ...
    nag_ode_bvp_shoot_genpar_algeq(p, int64(n1), pe, pf, ...
        e, dp, swp, int64(icount), @range, @bc, @fcn, ...
        @eqn, @constr, ymax, @monit, @prsol);
if ifail ~= 0
  % Unsuccessful call.  Print message and exit.
  error('Warning: nag_ode_bvp_shoot_genpar_algeq returned with ifail = %1d ', ifail);
end
% Plot results.
fig = figure('NumberTitle', 'off');
display_plot(xkeep, ykeep)

function z = prsol(z, y, n)
% Obtains values for solution, outputs them and stores them for plotting.
% For communication with main routine.
global ykeep ncall xkeep;
ncall = ncall + 1;
fprintf(' %6.4f', z);
xkeep(ncall,1) = z;
for i = 1:int32(n) % can't use int64 in loop range.
  fprintf('%10.4f ', y(i));
end
fprintf('\n');
z = z + 0.5;
if (abs(z-5) < 0.25)
  z = 5;
end
ykeep(ncall,:) = y;

function x = range(npoint, p, m)
% Specify the break points.
x = zeros(npoint,1);
x(1) = 0;
x(2) = p(3);
x(3) = 5;

function [g1, g2] = bc(p, m, n)
% Specify boundary conditions.
g1 = [0, 0.5, p(1)];
g2 = [0, 0.45, -1.2];

function f = fcn(x, y, n, p, m, interval)
% Evaluate derivative vector.
f = zeros(n,1);
f(1) = tan(y(3));
if (interval == 1)
  f(2) = -0.032d0*tan(y(3))/y(2) - 0.02d0*y(2)/cos(y(3));
  f(3) = -0.032d0/y(2)^2;
else
  f(2) = -p(2)*tan(y(3))/y(2) - p(4)*y(2)/cos(y(3));
  f(3) = -p(2)/y(2)^2;
end

function eq = eqn(q, p, m)
% Specify additional algebraic equations.
eq = zeros(q,1);
eq(1) = 0.02 - p(4) - 1.0d-5*p(3);

function result = constr(p, n)
% Check constraints.
result = true;
for i = 1:int32(n) % can't use int64 in loop range.
  if p(i) < 0
    result = false;
  end
end
if p(3) > 5
  result = false;
end

function monit(istate, iflag, ifail1, p, m, f, pnorm, pnorm1, eps, d)
% Allows monitoring of parameter values during solution (not used here).

function display_plot(xkeep, ykeep)
% Formatting for title and axis labels.
titleFmt = {'FontName', 'Helvetica', 'FontWeight', 'Bold', 'FontSize', 14};
labFmt = {'FontName', 'Helvetica', 'FontWeight', 'Bold', 'FontSize', 13};
set(gca, 'FontSize', 13); % for legend, axis tick labels, etc.
% Plot the results.
plot(xkeep, ykeep(:,1), '-+', ...
     xkeep, ykeep(:,2), '--x', ...
     xkeep, ykeep(:,3), ':*');
% Add title.
title('Projectile in Two Media Problem', titleFmt{:});
% Label the axes.
xlabel('x', labFmt{:});
ylabel('Solution', labFmt{:});
% Add a legend.
legend('height', 'velocity', 'angle', 'Location', 'SouthWest');
% Now get the range of the x and y values, and use them to make nice axes.
xmin = min(xkeep);
xmax = max(xkeep);
ymin = round(min(min(ykeep))-1);
ymax = round(max(max(ykeep))+1);
axis([xmin xmax ymin ymax]);
stem(xkeep, ykeep, ymin, ymax, 'determined interface position');

function stem(x, y, ymin, ymax, label)
% Find the x bin that the crossover point between y2 and y3 lies in.
for i = 1:length(x)
  if y(i,2) > y(i,3)
    break
  end
end
% Use interpolation to find the crossover point more accurately.
a1 = abs(y(i,2) - y(i-1,2));
a2 = abs(y(i,3) - y(i-1,3));
a3 = abs(x(i) - x(i-1));
b1 = min(y(i,2), y(i-1,2));
b2 = min(y(i,3), y(i-1,3));
c1 = a1/1e5;
c2 = a2/1e5;
for j = 1:1e5
  if round((b1+c1*j)/1e-6)*1e-6 == round((b2+c2*j)/1e-6)*1e-6
    break
  end
end
d1 = (b1+c1*j) - b1;
d2 = (b2+c2*j) - b2;
e1 = d1/a1;
e2 = d2/a2;
f = x(i) - (e1*a3);
hold on
% Draw line, and add label.
plot([f,f], [ymin,ymax], 'k-s');
text('Position', [f-0.1,0.5], 'String', label, ...
     'Rotation', 90.0, 'FontSize', 11)

nag_ode_bvp_shoot_genpar_algeq example program results

    x       y(1)      y(2)      y(3)
 0.0000    0.0000    0.5000    1.1753
 0.5000    1.0881    0.4127    1.0977
 1.0000    1.9501    0.3310    0.9802
 1.5000    2.5768    0.2582    0.7918
 2.0000    2.9606    0.2019    0.4796
 2.5000    3.0958    0.1773    0.0245
 3.0000    2.9861    0.1935   -0.4353
 3.5000    2.6289    0.2409   -0.7679
 4.0000    2.0181    0.3047   -0.9767
 4.5000    1.1454    0.3759   -1.1099
 5.0000   -0.0000    0.4500   -1.2000


© The Numerical Algorithms Group Ltd, Oxford, UK. 2009–2013