Proven Accuracy and Reliability

NAG provides accurate, well-documented numerical software to help ensure that your results are correct.

The validity of each NAG component is tested for each platform. Only when an implementation satisfies our stringent accuracy standards is it released. As a result, you can rely on the proven accuracy and reliability of NAG to give you the right answers.

NAG is an ISO 9001 certified organization.

What do we do to ensure that our code meets user expectations: code that is fast, accurate, and never breaks?
1/ Algorithm Selection

In many areas of mathematics and statistics there are several methods for solving the same problem, each with its own advantages and disadvantages. For example, method "A" may reach a solution faster, while method "B" is more robust in handling extreme data or poorly formed problems. Some methods can also estimate their own errors while others cannot; this is especially important when the code must be ported to a new platform. We take the time to evaluate and summarize each candidate method, and then have a peer of the first reviewer independently examine the report before a choice is made. Finally, because speed and robustness are often in competition (the faster method may be less accurate or may break more easily), our general bias is to err on the side of robustness, since machines keep making code faster. The shorthand we use for this choice is the rhetorical question: "How fast do you want the wrong answer?"
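As a toy illustration of this tradeoff (not NAG code), consider two standard ways of computing the roots of a quadratic. The textbook formula is marginally cheaper, but one of its roots suffers catastrophic cancellation when b² dominates 4ac; a rearranged form trades a couple of extra operations for robustness:

```python
import math

def roots_naive(a, b, c):
    """Textbook quadratic formula: fast, but the smaller root loses
    accuracy when b*b >> 4*a*c (catastrophic cancellation)."""
    d = math.sqrt(b * b - 4 * a * c)
    return ((-b + d) / (2 * a), (-b - d) / (2 * a))

def roots_robust(a, b, c):
    """Robust variant: compute the well-conditioned root first, then
    recover the other from the product of the roots (c/a)."""
    d = math.sqrt(b * b - 4 * a * c)
    q = -0.5 * (b + math.copysign(d, b))
    return (q / a, c / q)

# With b much larger than a and c, the true small root is about -1e-8,
# but the naive formula gets only the leading digit or two right.
a, b, c = 1.0, 1e8, 1.0
print(roots_naive(a, b, c))   # small root badly wrong
print(roots_robust(a, b, c))  # small root accurate
```

This is exactly the kind of evaluation the selection step performs: both formulas are "correct" mathematically, but only one survives contact with floating-point arithmetic at the extremes.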

2/ Code Engineering

At this stage our process divides into three parallel tasks: core algorithmic coding, interface design, and documentation. The core algorithmic code, where the guts of the computation take place, is documented in XML (Extensible Markup Language). Users never see this, but it permits the documentation to be adapted automatically to different languages and interfaces without a manual translation (and the errors that come with one). Likewise, abstracting an algorithm's interface into XML lets software tools perform the translation to a new environment, eliminating most errors that would result from a manual process. We separate the interface design because the world is a "moving target" of languages and styles.
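To make the idea concrete (the XML schema and routine below are invented for illustration, not NAG's actual format), a single abstract interface description can be mechanically turned into a binding for any target language:

```python
import xml.etree.ElementTree as ET

# A toy interface description for one hypothetical routine.
SPEC = """
<routine name="quad_solve">
  <purpose>Solve a real quadratic equation a*x^2 + b*x + c = 0.</purpose>
  <param name="a" type="double" intent="in"/>
  <param name="b" type="double" intent="in"/>
  <param name="c" type="double" intent="in"/>
  <param name="roots" type="double" intent="out" length="2"/>
</routine>
"""

def c_prototype(xml_text):
    """Generate a C prototype from the abstract description; a similar
    generator could emit Fortran, Python, or documentation stubs."""
    root = ET.fromstring(xml_text)
    args = []
    for p in root.findall("param"):
        # Output and array parameters are passed by pointer in C.
        star = "*" if p.get("intent") == "out" or p.get("length") else ""
        args.append(f"{p.get('type')} {star}{p.get('name')}")
    return f"void {root.get('name')}({', '.join(args)});"

print(c_prototype(SPEC))
# void quad_solve(double a, double b, double c, double *roots);
```

Because every binding is generated from the same source of truth, a change to the interface propagates everywhere without hand-editing.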


The core algorithmic code itself is written under standards that emphasize portability. While it might be tempting to use trendy language extensions, we code first for portability and adhere to internally developed standards covering areas such as variable naming and interface design. The value of this becomes clear the first time a routine must undergo a major rewrite to move to a new environment. At this stage we also subject the code to a host of software tools that validate argument lists, check for uninitialized variables, and find memory leaks. The result is a careful blend of strict coding standards, design for portability, and automated tools that reduce human error.

3/ Code Engineering Quality Assurance

This is an independent peer review of the core code and interface, and a proofreading of the documentation, to ensure that the developer has adhered to coding standards, run the required tools, and properly documented the code. You might question this seeming fussiness if only a few routines were involved, but we have over 1,500 at the user-callable level alone. Even for a dozen or so complex routines, standards for code, interfaces, and documentation, together with automated tools, reduce errors and extend the longevity of the code.

4/ Overnight "Build"

Using the base core code, interfaces, documentation, stringent tests, and example programs, we build finished executables each night during development on six or more systems (combinations of chip hardware, operating system, and compiler) simultaneously, using an automated process and logging all the results. This tends to find both systemic code errors and those unique to a particular compiler. This "short loop" means that errors are caught earlier and portability across multiple platforms is assured.

5/ Testing

Simply put, the temptation to short-circuit this step is great. Our code base contains both 30-year-old routines and six-month-old routines; we plan for the latter to be around as long as the former, and we accomplish this by investing more time in developing test programs (called "stringent tests") than in writing the core code itself. These stringent tests exercise all error exits and code paths for a broad range of problems, including "edge cases" where the algorithm begins to fail. Stringent tests are often two to three times longer than the core code they test, and any errors revealed are returned to Code Engineering for further development. We also use related test programs to verify that the interfaces work properly, and example programs to conduct simple tests of the integrated code. These example programs also exercise all error messages to confirm that the messages are meaningful.
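What distinguishes a stringent test from a casual one is that it deliberately walks every error exit and edge case, not just the happy path. A miniature illustration (the routine and its checks are invented for this sketch, not taken from any library):

```python
import math

def safe_mean(xs):
    """Toy routine with explicit error exits, in the spirit of a
    library routine that validates all of its inputs."""
    if not isinstance(xs, (list, tuple)):
        raise TypeError("xs must be a list or tuple")
    if len(xs) == 0:
        raise ValueError("xs must be non-empty")
    if any(not math.isfinite(x) for x in xs):
        raise ValueError("xs must contain only finite values")
    return sum(xs) / len(xs)

def stringent_test():
    """Exercise every error exit and several edge cases."""
    # Normal path.
    assert safe_mean([1.0, 2.0, 3.0]) == 2.0
    # Every error exit, checked for the expected failure type.
    for bad_input, expected in [("abc", TypeError), ([], ValueError),
                                ([1.0, float("nan")], ValueError)]:
        try:
            safe_mean(bad_input)
            raise AssertionError(f"no error raised for {bad_input!r}")
        except expected:
            pass
    # Edge cases: a single element, and large cancelling magnitudes.
    assert safe_mean([7.5]) == 7.5
    assert safe_mean([1e308, -1e308]) == 0.0

stringent_test()
```

Note that the test harness is comparable in size to the routine itself; for real numerical code, with many more code paths, the two-to-three-times ratio mentioned above follows naturally.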

6/ Implementation

After testing, we build the production version of the code from all of the base materials. Part of this process is determining the compiler "flags" needed to strike an acceptable compromise between performance and accuracy. We also test the code on "variants" (slightly different versions of operating systems and compilers) so we can advise users about other workable configurations. Finally, we check the installer and user advisory notes to make certain that they conform to the test system and results.
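One reason flag selection is delicate is that aggressive optimizations may reorder floating-point operations, and in floating point the order of operations changes the answer. The Python sketch below stands in for that compiler-level effect by contrasting plain left-to-right summation with compensated (Kahan) summation:

```python
def naive_sum(xs):
    """Left-to-right summation: what straightforwardly compiled
    code does."""
    s = 0.0
    for x in xs:
        s += x
    return s

def kahan_sum(xs):
    """Kahan compensated summation: a few extra operations per term
    recover almost all of the rounding error."""
    s = c = 0.0
    for x in xs:
        y = x - c        # apply the running compensation
        t = s + y
        c = (t - s) - y  # the low-order bits lost in s + y
        s = t
    return s

# A million tiny terms after one large one: each 1e-16 is individually
# absorbed by 1.0 in naive summation, so their total (1e-10) vanishes.
xs = [1.0] + [1e-16] * 10**6
print(naive_sum(xs))  # 1.0 -- the 1e-10 contribution is lost
print(kahan_sum(xs))  # ~1.0000000001
```

A flag that lets the compiler rewrite carefully ordered arithmetic like the compensated loop above can silently undo it, which is why each flag combination is validated against the accuracy tests rather than chosen for speed alone.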

7/ Quality Assurance

This is an independent check of an installation on the target system. It also includes execution of example programs and a check of stringent test results and installer/user notes. From this, a master CD and set of download installation files are created. The master and download files are then used to do a final test installation.

Why should we devote this much effort to complex code that goes into your applications?

The answer depends on many factors, including the expected longevity of the application and the financial and other consequences of getting it wrong. We take this much care because our users, especially those running the same application on multiple platforms, need equal confidence in correctness on any of 40-50 different implementations. We are also thinking about the next operating-system version, chip architecture, and compiler improvement. Having done this for over 40 years, we aren't about to become short-sighted now.

Quality matters, so if you are considering algorithmic development, take the time to think through your algorithmic needs, staffing resources, and time horizons. The best method for your situation might come from an open-source project, someone on your staff, a published source, or a supported commercial library. Keep in mind where we started: complex applications are costly to develop and usually outlive the hardware, and often the developers themselves. Their life-cycle costs are dominated by the staff hours spent building, debugging, maintaining, and porting to the next platform.