NAGNews 99, 11 August 2011

Featuring


Top Story: Using NAG .NET Methods in LabVIEW


For some time now, we've been receiving requests from users for information about how to call the NAG Library from within the popular LabVIEW programming environment. We already have some material on our website about how to do this for the NAG Fortran Library, but have more recently revisited this question with a view to using the NAG Library for .NET.

Last month, our colleague Sorin Serban wrote a blog post which describes in some detail how to build a simple LabVIEW application which allows the user to enter a set of data values and then determine some of their statistical properties. The application invokes a NAG .NET method to calculate the statistics, and the blog post gives a full account of how to invoke the method and wire it into the application using LabVIEW's visual programming interface.

Although it's a simple application, we hope it gives a flavour of the way NAG .NET methods can be invoked from within LabVIEW. Please let us know if you have any questions about this work, or if you're a user (or prospective user) of NAG and/or LabVIEW, as we're keen to build new examples which will be helpful to workers in this area.


NAG in Finance: Watch 'Portfolio Maximum Entropy and Sampling Error Control'


At the latest NAG Quant Day, Ely Klepfish of HSBC Bank gave an excellent presentation, which you can watch at http://www.7citymedia.co.uk/NAG/Recordings/html/CQF_Extra_HB_UK_Portfolio_maximum_entropy_and_sampling_error_control.html and view the pdf here.

Presentation Synopsis

The historical sample is only one realisation of the returns distribution, which can lead to biased estimates of expected return and risk and, consequently, to disappointing out-of-sample performance.

I explore a method of interpolation between an optimised solution derived from the historical sample and a solution with maximum indifference to estimation errors. Optimality is achieved by minimising the distance (defined in terms of information entropy) to a shrinkage target, conditioned upon the investor's confidence in the historical sample.

Sampling error efficiency is defined by the balance between the optimal estimated risk-return profile and the indifference to the errors of estimation.

Application of the method enhances diversification with little loss of expected utility and with reduced out-of-sample downside risk. The information ratio extracted from the historical sample is largely preserved in the shrinkage process.
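For readers less familiar with entropy-based shrinkage, the general idea can be sketched as follows; this is our own illustration of the approach, not necessarily the exact formulation used in the talk. Given portfolio weights $w^{*}$ optimised on the historical sample, a maximally diversified shrinkage target $w^{0}$ (for example, equal weights) and an estimated utility $U$, one seeks

\[
\min_{w}\; D(w \,\|\, w^{0}) \;=\; \sum_{i} w_{i}\,\ln\frac{w_{i}}{w^{0}_{i}}
\qquad \text{subject to} \qquad
U(w) \;\ge\; \kappa\, U(w^{*}), \quad \sum_{i} w_{i} = 1, \quad w_{i} \ge 0,
\]

where $\kappa \in [0,1]$ encodes the investor's confidence in the historical sample: $\kappa = 1$ recovers the sample-optimal portfolio, while $\kappa = 0$ returns the maximally diversified target.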

If you're interested in using NAG in finance, visit us here for more information or email us.


Who or What is the Connection Between NAG and Beatrix Potter?


The answer to this question was revealed in an entertaining and informative talk by Margaret Wright at a workshop "Advances in Numerical Computation" held at the University of Manchester on July 5th, 2011.

The answer of course is "Sven Hammarling". To find out why please look at Margaret Wright's talk which is available for download from http://www.mims.manchester.ac.uk/events/workshops/ANC11/programme.php.

Sven is highly respected in the numerical analysis community and is probably best known for his work on BLAS and LAPACK. In honour of his forthcoming 70th birthday, Françoise Tisseur and Nick Higham organised this workshop.

Speakers included Jack Dongarra, Philip Gill and Anne Trefethen, all of whom have close connections with NAG, and so, as Sven has been an employee of NAG since 1982, NAG had no hesitation in sponsoring the workshop dinner.

Jack spoke about designing libraries to utilize millions of cores and detailed the huge growth in computing power offered by large computing systems. He spoke about the effect that this has on the numerical algorithms needed to exploit these architectures.

Philip's talk was titled "What's New in Active-Set Methods for Nonlinear Optimization?" and took us through the history of sequential quadratic programming methods for nonlinear optimisation, a topic very dear to NAG's heart as our optimisation chapter contains many such algorithms. He explained why the methods had fallen out of fashion and were now coming back into favour. Philip is of course the main contributor to the NAG optimisation chapter.

Anne Trefethen is an ex-NAG employee and former Chairman of the NAG Board. Anne talked about a topic I had never considered before - "High Performance Linear Algebra: Developing Scalable, Versatile, Energy-Efficient Numerical Libraries".

As I listened I couldn't help but reflect that energy efficiency might require another variant of the NAG Libraries. Imagine my joy at this realisation.

Pictures taken at the workshop are available at http://www.maths.manchester.ac.uk/~higham/photos/ANC11/index.htm.


New in the NAG Library Routine Spotlight


In the last NAGNews we announced the availability of the latest NAG Library, Mark 23. This mark features lots of exciting new routines, expansion of existing chapters, and updates to many routines. Over the next few months we'll shine the spotlight on some of the key new functions. If you haven't yet upgraded to Mark 23, and want to, please email us and we'll check your entitlement and set you on the upgrade path as quickly as we can. Today's new routine/functionality spotlight is on:

Bound Optimization BY Quadratic Approximation.

Mark 23 of the NAG Fortran Library expands the functionality provided by the Chapter for Minimizing or Maximizing a Function (E04). The BOBYQA (Bound Optimization BY Quadratic Approximation) algorithm of Prof. Mike Powell, University of Cambridge, is now available in the Library. This robust, easy-to-use algorithm employs quadratic approximations and trust regions to minimize an objective function subject to bound constraints. No derivatives of the objective function are required, and the solver's efficiency is preserved for large problem sizes.

As a simple example, the problem of distributing 50 points on a sphere to have maximal pairwise separation, starting from equispaced points on the equator (see also Powell (2009)), is solved using 4633 function evaluations. This compares with 16757 taken by the NAG Nelder-Mead simplex solver on the same problem (run on a machine using GCC 4.5.2, Fedora 10, four 2.00GHz dual-core Intel® Xeon® E5405 processors, 8GB RAM).
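The NAG example program itself requires the Library, but to give a flavour of the kind of problem involved, here is a rough Python sketch of the points-on-a-sphere problem. It uses SciPy's derivative-free Powell method rather than BOBYQA or any NAG routine, and the spherical-angle parameterisation and inverse-squared-distance objective are illustrative choices of ours which may differ from the NAG example; the point is simply that the solver is given only bounds and function values, never derivatives.

```python
# Illustrative sketch only: NOT the NAG BOBYQA routine.  Uses SciPy's
# derivative-free Powell method (bounds supported in SciPy >= 1.5); the
# parameterisation and objective below are our own illustrative choices.
import numpy as np
from scipy.optimize import minimize

npts = 50  # number of points to spread over the unit sphere

def to_cartesian(angles):
    """Map 2*npts spherical angles (theta, phi) to points on the unit sphere."""
    theta, phi = angles[:npts], angles[npts:]
    return np.column_stack((np.sin(theta) * np.cos(phi),
                            np.sin(theta) * np.sin(phi),
                            np.cos(theta)))

def objective(angles):
    """Sum of inverse squared pairwise distances: small when points are well separated."""
    pts = to_cartesian(angles)
    diff = pts[:, None, :] - pts[None, :, :]
    d2 = (diff ** 2).sum(axis=-1)
    iu = np.triu_indices(npts, k=1)   # each pair counted once
    return (1.0 / d2[iu]).sum()

# Start from equispaced points on the equator, as in the example above.
x0 = np.concatenate((np.full(npts, np.pi / 2),
                     2.0 * np.pi * np.arange(npts) / npts))

# Simple bounds on the angles; no derivatives of the objective are supplied.
bounds = [(0.0, np.pi)] * npts + [(0.0, 2.0 * np.pi)] * npts

result = minimize(objective, x0, method="Powell", bounds=bounds,
                  options={"maxfev": 20000})
print("objective:", result.fun, "function evaluations:", result.nfev)
```

The NAG figures quoted above make the same kind of comparison on the real example: BOBYQA's quadratic models typically need far fewer function evaluations than a simplex or direction-set method for this sort of bound-constrained, derivative-free problem.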

Continue reading this article here.


Case Study - Functions from the NAG Library underpin leading application for marine design from QinetiQ GRC


The longstanding relationship between NAG and QinetiQ GRC has resulted in the use of several NAG Library functions in Paramarine™, QinetiQ GRC's integrated naval architecture application for ships and submarines.

Background

"Without the NAG functions we could still be working on some of the most important parts of Paramarine. Our developers know they can rely on NAG functions for reliability, the NAG documentation and examples for all the detail that is required and, if needed, an unrivalled level of support in the form of NAG's expertise in numerical mathematics and computer science. We don't have to work on the lower level at all; instead, we spend our time developing Paramarine to be one of the most useful tools in the marine industry" Vittorio Vagliani, Managing Director of QinetiQ GRC Continue reading about QinetiQ and NAG here.


Recent Blog Posts


Keep up to date with NAG's recent blog posts here:

Types in The NAG Toolbox for MATLAB
Why migrate from legacy systems?
Using NAG .NET Methods in LabVIEW
ISC Review


Out & About with NAG


HECToR Training Courses by NAG's HPC Team

Here are the courses being held over the next few months. Details of how to attend can be found on the HECToR website.

19-21 September 2011
Fortran 95

26-30 September 2011
Algorithms for High Performance Scientific Computing

11-13 October 2011
Object-Oriented Programming in Fortran 2003

For more information on any of the above events, visit NAG's ‘Out & About’ webpage.


New NAG product implementations


The NAG C Library, Mark 9, is now also available for the following platform:

  • Apple 64 bit Intel Mac, Mac OS X, using the gcc compiler

In addition, the NAG Toolbox for MATLAB, Mark 22 has been repackaged for Apple 64 bit Intel Mac, Mac OS X, with a new installer that now works with MATLAB R2011a.

For full details of these and all other available implementations, visit the NAG website at www.nag.com. Comprehensive technical details of each implementation are given in the relevant Installation and User Notes here.


NAGNews - Past Issues


We provide an online archive of past issues of NAGNews. For editions prior to those in the archive, please contact us.
