NAGnews 122 | 15 May 2014

In this issue

  • NAG C Library Announcement - Exciting new functionality now available at Mark 24
  • New NAG Student Winners Announced at SIAM Student Chapter Conference
  • Mark 24 new functionality spotlight: Quadratic Eigenvalue Problems
  • NAG Numerical Services - providing faster optimization
  • Training Courses - Our expertise. Your productive team.
  • The Best of the Blog
  • NAGnews - Past Issues

NAG C Library Announcement - Exciting new functionality now available at Mark 24


NAG is delighted to announce that the NAG C Library has been updated with many new mathematical and statistical routines. This major release brings the C Library to Mark 24 and takes the total number of routines in the Library to 1,516, of which 148 are new at this Mark (a selection of the new functionality is listed below).

  • Hypergeometric functions (1F1 and 2F1)
  • Nearest correlation matrix
  • Elementwise weighted nearest correlation matrix
  • Wavelet Transforms & FFTs
    • Three-dimensional discrete single-level and multi-level wavelet transforms
    • Fast Fourier Transforms (FFTs) for two-dimensional and three-dimensional real data
  • Matrix Functions
    • Matrix square roots and general powers
    • Matrix exponentials (Schur-Parlett)
    • Fréchet Derivative
    • Calculation of condition numbers
  • Interpolation
    • Interpolation for 5D and higher dimensions
  • Optimization
    • Local optimization: Non-negative least squares
    • Global optimization: Multi-start versions of general nonlinear programming and least squares routines
  • RNGs
    • Brownian bridge and random fields
  • Statistics
    • Gaussian mixture model
    • Best subsets of given size (branch and bound)
    • Vectorized probabilities and probability density functions of distributions.
    • Inhomogeneous time series analysis, moving averages
  • Data fitting
    • Fit of 2D scattered data by two-stage approximation (suitable for large datasets)
  • Quadrature
    • 1D adaptive for badly-behaved integrals
  • Sparse eigenproblem
    • Driver for real general matrix, driver for banded complex eigenproblem
    • Real and complex quadratic eigenvalue problems
  • Sparse linear systems
    • Block diagonal preconditioners and solvers
  • ODE solvers
    • Threadsafe initial value ODE solvers
  • Volatility
    • Heston model with term structure
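To give a flavour of the kind of computation behind one of the items above, the nearest correlation matrix problem can be sketched with Higham's alternating projections method. This is a hypothetical illustration in Python/NumPy, not NAG code, and the function name is our own:

```python
import numpy as np

def nearest_correlation(A, tol=1e-8, max_iter=200):
    """Nearest correlation matrix by alternating projections
    (Higham, 2002): alternately project onto the positive
    semidefinite cone and onto matrices with unit diagonal,
    with Dykstra's correction to ensure convergence."""
    Y = A.copy()
    dS = np.zeros_like(A)
    for _ in range(max_iter):
        R = Y - dS                          # apply Dykstra's correction
        # Project onto the positive semidefinite cone
        w, V = np.linalg.eigh((R + R.T) / 2)
        X = (V * np.maximum(w, 0)) @ V.T
        dS = X - R                          # update the correction
        Y_old = Y
        Y = X.copy()
        np.fill_diagonal(Y, 1.0)            # project onto unit diagonal
        if np.linalg.norm(Y - Y_old, 'fro') < tol * np.linalg.norm(Y, 'fro'):
            break
    return Y

# An invalid "correlation" matrix: unit diagonal but indefinite
G = np.array([[ 1.0,  0.9, -0.95],
              [ 0.9,  1.0,  0.3 ],
              [-0.95, 0.3,  1.0 ]])
X = nearest_correlation(G)
```

The NAG Library routines solve this problem (and the weighted variant in the list above) with a more sophisticated Newton-based algorithm; the sketch above only conveys what "nearest correlation matrix" means.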

In today's NAGnews we take a look at the new Quadratic Eigenvalue Problems functionality.

We encourage all users to upgrade their NAG software to the latest Mark or release - this might mean contacting someone responsible for software at your place of work or study. For help with any NAG software, whether upgrading or a technical matter, please contact us.


New NAG Student Winners Announced at SIAM Student Chapter Conference


We are a proud long-term sponsor of the SIAM Student Chapter Conference at the University of Manchester. Following tradition, NAG Senior Technical Consultant Dr Craig Lucas was delighted to present two students with a £50 Amazon voucher and a certificate each, for the Conference's Best Talk and Best Poster.

It was a pleasure to award the 'Best Poster' to Mario Berljafa for "Parallel Rational Krylov Methods" and the 'Best Talk' to Denny Vitasari for "Surfactant Transport onto a Foam Lamella in the presence of Surface Viscous Stress".

The mission of SIAM (the Society for Industrial and Applied Mathematics) is to build cooperation between mathematics and the worlds of science and technology. There are SIAM Student Chapters all over the world; they encourage those studying mathematics and computational science to share ideas, explore career opportunities, make contacts and develop leadership skills.

Congratulations to this year's winners, Mario and Denny.


Mark 24 new functionality spotlight: Quadratic Eigenvalue Problems


We are delighted to highlight brand new NAG functionality from Mark 24 of the NAG C Library in today's NAGnews. Two new routines included at Mark 24 of the C Library solve the quadratic eigenvalue problem for real matrices (f02jcc) and complex matrices (f02jqc). A new mini-article authored by Dr Sven Hammarling describes the new functionality. To read the article visit our website.
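The quadratic eigenvalue problem these routines address is (λ²M + λC + K)x = 0 for n-by-n matrices M, C and K. As a hypothetical illustration of the underlying idea (not the NAG interface), it can be reduced to a 2n-by-2n generalized linear eigenproblem via a companion linearization:

```python
import numpy as np
import scipy.linalg as la

def quadeig(M, C, K):
    """Solve the quadratic eigenvalue problem
    (lambda^2 M + lambda C + K) x = 0 via the first companion
    linearization A z = lambda B z, with z = [x; lambda x]."""
    n = M.shape[0]
    I = np.eye(n)
    Z = np.zeros((n, n))
    A = np.block([[Z,  I ],
                  [-K, -C]])
    B = np.block([[I, Z],
                  [Z, M]])
    eigvals, Zvecs = la.eig(A, B)
    return eigvals, Zvecs[:n, :]   # eigenvectors x are the top half of z

# Small check: M = I, C = 0, K = -diag(1, 4) gives lambda = +/-1, +/-2
M = np.eye(2)
C = np.zeros((2, 2))
K = -np.diag([1.0, 4.0])
vals, vecs = quadeig(M, C, K)
```

The NAG routines use a backward-stable algorithm with careful scaling rather than this naive linearization, which is one reason dedicated QEP solvers are worth having.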


NAG Numerical Services - providing faster optimization


A leading asset and financial regulation consultancy turned to NAG to help verify details of an economic model for a central bank and to provide compiled code for improved performance. The client needed to speed up specific optimization functions called from a generic modelling tool. NAG first verified that the optimization approach selected was the most appropriate for the problem, then went on to investigate ways of improving its performance.

It was clear that some customization in the use of the routines would yield useful benefits. Alternative options for this approach were presented to the client together with a NAG recommendation. The NAG expert working with the client then confirmed the validity of the selected approach against the client's data. NAG built and verified a custom implementation, delivered as compiled code targeted at the processor, operating system and memory configuration of the production environment on which the economic model runs.

In addition to this NAG was able to offer further advice about potential issues with the planned choice of optimization approach to be applied to other areas of the model. The client was able to independently confirm this advice and as a consequence ensure that they, in turn, were giving the best advice to their own clients.

For more information about NAG Numerical and HPC Services click here.


Training Courses - Our expertise. Your productive team.


Your users, developers and managers can all benefit from NAG's highly regarded training courses. All of the training courses shown have been delivered successfully either from NAG offices or at client premises. Training courses can be tailored to suit your particular requirements and be targeted to novice, intermediate or experienced levels. Specialized mentoring and development programs are also available for HPC managers.
 

HPC & Software Training
  • Accelerating Applications with CUDA and OpenCL
  • Algorithmic Differentiation
  • An Introduction to CUDA Programming
  • An Introduction to OpenCL Programming
  • An Introduction to Unified Parallel C (UPC)
  • Coarray Fortran
  • Core Algorithms for High Performance Scientific Computing
  • Debugging, Profiling and Optimizing
  • Developing Parallel Applications for the Intel Xeon Phi
  • Fortran 95
  • Multicore
  • Object-Oriented Programming in Fortran 2003
  • OpenMP
  • Parallel I/O
  • Parallel Programming with MPI
  • Scientific Visualisation

NAG Product Training
  • Using the NAG Library in Fortran
  • Using the NAG Library in C and C++
  • Using the NAG Library in Excel and VBA
  • Using the NAG Library for Java
  • Using the NAG Toolbox for MATLAB
  • Using the NAG Library for Python
  • Multicore Programming and the NAG Library for SMP & Multicore
  • An Introduction to CUDA Programming and the NAG Numerical Routines for GPUs
Examples of tailored training courses
  • Best Practice in HPC Software Development
  • OpenCL introduction for CUDA programmers

For more information about our courses including tailoring a course for your exact needs please email us.

NAG will be at the following exhibitions and conferences over the next few months.


The Best of the Blog


Testing Matrix Function Algorithms Using Identities. Edvin Deadman and Nick Higham (University of Manchester) write:

In a previous blog post we explained how testing new algorithms is difficult. We discussed the forward error (how far from the actual solution are we?) and the backward error (what problem have we actually solved?) and how we'd like the backward error to be close to the unit roundoff, u.

For matrix functions, we also mentioned the idea of using identities such as sin^2 A + cos^2 A = I to test algorithms. In practice, rather than I, we might find that we obtain a matrix R close to I, perhaps with ||R - I|| ≈ 10^-13. What does this tell us about how the algorithms for sin A and cos A are performing? In particular, does it tell us anything about the backward errors? We've just written a paper which aims to answer these questions. This work is an output of NAG's Knowledge Transfer Partnership with the University of Manchester, so we thought we'd blog about it here.
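The identity test itself is easy to reproduce. Here is a small illustrative sketch using SciPy's matrix sine and cosine (not the NAG or paper code):

```python
import numpy as np
from scipy.linalg import sinm, cosm

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5))

# Evaluate the identity sin^2 A + cos^2 A = I in floating point
S = sinm(A)
Cm = cosm(A)
R = S @ S + Cm @ Cm             # should be close to the identity
err = np.linalg.norm(R - np.eye(5), 'fro')
# err is typically a small multiple of the unit roundoff u ~ 1.1e-16,
# scaled by norms of the intermediate quantities
```

The question the blog post goes on to answer is precisely what a small value of err does (and does not) imply about the backward errors of sinm and cosm.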

Let's consider the identity exp(log A) - A = 0. Suppose that when we evaluate the left-hand side in floating point arithmetic we get a nonzero residual R rather than 0. We'll assume that this residual is caused by some backward errors E1 and E2 so that exp(log(A + E1) + E2) - A = R. We'd like to investigate how big R can be when E1 and E2 are small, so we expand the left-hand side in a Taylor series to linear order. After a bit of algebra, the result is a linear operator relating R to E1 and E2: R = L(E1, E2). The operator is different for each identity considered, but it always involves the Fréchet derivatives of the matrix functions in the identity (the full gory details, including formulae for the linear operators associated with various identities, are in our paper).
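A concrete computation of this residual, again as a SciPy-based sketch rather than the code from the paper, looks like this:

```python
import numpy as np
from scipy.linalg import expm, logm

rng = np.random.default_rng(0)
# A random matrix shifted to keep its eigenvalues well away from the
# negative real axis, where the matrix logarithm is ill-conditioned
A = rng.standard_normal((6, 6)) + 6 * np.eye(6)

# Residual of the identity exp(log A) - A = 0 in floating point
R = expm(logm(A)) - A
rel_resid = np.linalg.norm(R, 'fro') / np.linalg.norm(A, 'fro')
# rel_resid is small but nonzero; the linear operator L described
# above is what converts this observed R into bounds on the
# backward errors E1 and E2
```

In practice one would estimate the norm of L to decide whether the observed residual is consistent with backward errors of order the unit roundoff.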

Read the full post here.


NAGnews - Past Issues


We provide an online archive of past issues of NAGnews. For editions prior to 2010, please contact us.
