The routine may be called by the names h03bbf or nagf_mip_tsp_simann.
h03bbf provides a probabilistic strategy for the calculation of a near optimal path through a symmetric and fully connected distance matrix; that is, a matrix for which element (i,j) is the pairwise distance (also called the cost, or weight) between nodes (cities) i and j. This problem is better known as the Travelling Salesman Problem (TSP), and symmetric means that the distance to travel between two cities is independent of which is the destination city.
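For illustration, a symmetric fully connected distance matrix of this kind can be built from planar city coordinates (a sketch in Python; the Euclidean metric is just one common choice of cost, and the helper name is invented for this sketch):

```python
import math

def distance_matrix(coords):
    """Symmetric, fully connected distance matrix: element (i, j) is the
    Euclidean distance between cities i and j. Any symmetric matrix of
    non-negative weights would serve equally well as input."""
    n = len(coords)
    return [[math.dist(coords[i], coords[j]) for j in range(n)]
            for i in range(n)]
```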
In the classical TSP, which this routine addresses, a salesman wishes to visit a given set of cities once only, starting and finishing in a home city, while travelling the minimum total distance possible. It is one of the most intensively studied problems in computational mathematics and, as a result, some fairly sophisticated techniques have been developed for obtaining near-optimal solutions for large numbers of cities. h03bbf adopts a very simple approach in an attempt to find a reasonable solution for moderately large problems. The routine uses simulated annealing: a stochastic method modelled on the physical process of annealing, in which a material is heated and then cooled in a controlled fashion so that its molecular structure settles into an optimally refined state.
The material in the TSP is the distance matrix, and a given state is represented by the order in which each city is visited: the path. This system can move from one state to a neighbouring state by selecting two cities on the current path at random and switching their places; the order of the cities in the path between the switched cities is then reversed. The cost of a state is the total cost of traversing its path; the resulting difference in cost between the current state and this new proposed state is called the delta. A negative delta indicates that the proposal creates a lower-cost path, and a positive delta a higher-cost path. The random selection of cities to switch uses random number generators (RNGs) from Chapter G05; it is thus necessary to initialize a state array for the RNG of choice (by a call to g05kff or g05kgf) prior to calling h03bbf.
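The cost of a state and the neighbouring-state move described above can be sketched as follows (illustrative Python; not the routine's internal implementation, and the helper names are invented for this sketch):

```python
import random

def tour_cost(path, dm):
    """Total cost of traversing the closed path, returning to the start."""
    n = len(path)
    return sum(dm[path[k]][path[(k + 1) % n]] for k in range(n))

def propose_move(path, rng):
    """Propose a neighbouring state: choose two positions at random,
    switch the cities there, and reverse the order of the cities
    between them. Equivalently, reverse the whole segment from one
    chosen position to the other."""
    i, j = sorted(rng.sample(range(len(path)), 2))
    new_path = path[:]
    new_path[i:j + 1] = new_path[i:j + 1][::-1]
    return new_path
```

The delta of a proposal is then `tour_cost(new_path, dm) - tour_cost(path, dm)`.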
The simulation itself is executed in two stages. In the first stage, a series of sample searches through the distance matrix is conducted, where each proposed new state is accepted regardless of the change in cost (delta) incurred by applying the switches, and statistics on the set of deltas are recorded. These metrics are updated after each such sample search; the number of these searches and the number of switches applied in each search are dependent on the number of cities. The final collated set of metrics for the deltas obtained by the first stage is used as control parameters for the second stage. If no single improvement in cost is found during the first stage, the algorithm is terminated.
In the second stage, as before, neighbouring states are proposed. If the resulting delta is negative or causes no change the proposal is accepted and the path updated; otherwise moves are accepted based on a probabilistic criterion, a modified version of the Metropolis–Hastings algorithm.
The acceptance of some positive deltas (increased cost) reduces the probability of a solution getting trapped at a non-optimal solution where any single switch causes an increase in cost. Initially the acceptance criteria allow for relatively large positive deltas, but as the number of proposed changes increases, the criteria become more stringent, allowing fewer positive deltas of smaller size to be accepted; this process is, within the realm of the simulated annealing algorithm, referred to as ‘cooling’. Further exploration of the system is initially encouraged by accepting non-optimal routes, but is increasingly discouraged as the process continues.
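The acceptance criterion can be sketched as a standard Metropolis rule (illustrative Python; h03bbf uses a modified Metropolis–Hastings criterion whose exact form is not reproduced here):

```python
import math
import random

def accept(delta, temp, rng):
    """Metropolis-style acceptance: non-positive deltas are always
    accepted; a positive delta is accepted with probability
    exp(-delta/temp), so cost increases become increasingly unlikely
    to be accepted as the temperature falls."""
    if delta <= 0.0:
        return True
    return rng.random() < math.exp(-delta / temp)
```

'Cooling' then amounts to reducing `temp` geometrically (e.g. `temp = 0.95 * temp` after each set of proposals), tightening the acceptance criterion over time.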
The second stage will terminate when any of the following occurs:
– a solution is obtained that is deemed acceptable (as defined by supplied values);
– the algorithm will accept no further positive deltas and a set of proposed changes has resulted in no improvement (the system has cooled);
– a number of consecutive sets of proposed changes has resulted in no improvement.
Applegate D L, Bixby R E, Chvátal V and Cook W J (2006) The Traveling Salesman Problem: A Computational Study Princeton University Press
Cook W J (2012) In Pursuit of the Traveling Salesman Princeton University Press
Johnson D S and McGeoch L A (1997) The traveling salesman problem: A case study in local optimization Local Search in Combinatorial Optimization 215–310
Press W H, Teukolsky S A, Vetterling W T and Flannery B P (2007) Numerical Recipes: The Art of Scientific Computing (3rd Edition) Cambridge University Press
Rego C, Gamboa D, Glover F and Osterman C (2011) Traveling salesman problem heuristics: leading methods, implementations and latest advances European Journal of Operational Research 211 (3) 427–441
Reinelt G (1994) The Travelling Salesman. Computational Solutions for TSP Applications, Volume 840 of Lecture Notes in Computer Science Springer–Verlag, Berlin Heidelberg New York
1: nc – Integer Input
On entry: the number of cities. In the trivial cases nc = 1, 2 or 3, the routine returns the optimal solution immediately (provided the relevant distance matrix entries are not negative).
2: dm – Real (Kind=nag_wp) array Input
On entry: the distance matrix; each element dm(i,j) is the effective cost or weight between nodes i and j. Only the strictly upper triangle of the matrix is referenced.
Constraint: dm(i,j) ≥ 0, for 1 ≤ i < j ≤ nc.
3: bound – Real (Kind=nag_wp) Input
On entry: a lower bound on the solution. If the optimum is unknown set bound to zero or a negative value; the routine will then calculate the minimum spanning tree for dm and use this as a lower bound (returned in alg_stats). If an optimal value for the cost is known then this should be used for the lower bound. A detailed discussion of relaxations for lower bounds, including the minimal spanning tree, can be found in Reinelt (1994).
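The minimum spanning tree bound mentioned above can be sketched with Prim's algorithm (illustrative Python; the routine's internal computation is not specified here, and the helper name is invented for this sketch):

```python
def mst_lower_bound(dm):
    """Total weight of a minimum spanning tree of the complete graph,
    computed with Prim's algorithm. Since deleting one edge from any
    closed tour leaves a spanning tree, this is a classical lower
    bound on the cost of the optimal tour."""
    n = len(dm)
    in_tree = [False] * n
    best = [float("inf")] * n   # cheapest edge connecting each node to the tree
    best[0] = 0.0
    total = 0.0
    for _ in range(n):
        # add the cheapest node not yet in the tree
        u = min((v for v in range(n) if not in_tree[v]), key=lambda v: best[v])
        in_tree[u] = True
        total += best[u]
        for v in range(n):
            if not in_tree[v] and dm[u][v] < best[v]:
                best[v] = dm[u][v]
    return total
```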
4: targc – Real (Kind=nag_wp) Input
On entry: a measure of how close an approximation needs to be to the lower bound. The routine terminates when a cost is found less than or equal to bound + targc. This argument is useful when an optimal value for the cost is known and supplied in bound. It may be sufficient to obtain a path that is close enough (in terms of cost) to the optimal path; this allows the algorithm to terminate at that point and avoid further computation in attempting to find a better path.
If targc is negative, a default value is assumed.
5: path – Integer array Output
On exit: the best path discovered by the simulation. That is, path contains the city indices in path order. If ifail ≠ 0 on exit, path contains the indices 1 to nc.
6: cost – Real (Kind=nag_wp) Output
On exit: the cost or weight of path. If ifail ≠ 0 on exit, cost contains the largest model real number (see x02alf).
7: tmode – Integer Output
On exit: the termination mode of the routine (if ifail ≠ 0 on exit, tmode is set to a negative value):
Optimal solution found (a cost less than or equal to bound + targc was attained).
System temperature cooled. The algorithm returns a path and associated cost that does not attain, nor lie within targc of, the bound. This could be a sufficiently good approximation to the optimal path, particularly when bound lies below the optimal cost.
Initial search failed to find a single improvement (the solution could be optimal).
8: alg_stats – Real (Kind=nag_wp) array Output
On exit: an array of metrics collected during the initial search. These could be used as a basis for future optimization. If ifail ≠ 0 on exit, the elements of alg_stats are set to zero; the first five elements are also set to zero in the trivial cases nc = 1, 2 or 3.
Standard deviation of deltas.
Cost at end of initial search phase.
Best cost encountered during search phase.
Initial system temperature. At the end of stage 1 of the algorithm, this is a function of the mean and variance of the deltas, and of the distance from best cost to the lower bound. It is a measure of the initial acceptance criteria for stage 2. The larger this value, the more iterations it will take to geometrically reduce it during stage 2 until the system is cooled (below a threshold value).
The lower bound used, which will be that computed internally when bound ≤ 0 on input. Subsequent calls with different random states can set bound to the value returned here to avoid recomputation of the minimal spanning tree.
9: state – Integer array Communication Array
Note: the actual argument supplied must be the array state supplied to the initialization routines g05kff or g05kgf.
On entry: a valid RNG state initialized by g05kff or g05kgf. Since the algorithm used is stochastic, a random number generator is employed; if the generator is initialized to a non-repeatable sequence (g05kgf) then different solution paths will be taken on successive runs, returning possibly different final approximate solutions.
On exit: contains updated information on the state of the generator.
10: ifail – Integer Input/Output
On entry: ifail must be set to 0, -1 or 1 to set behaviour on detection of an error; these values have no effect when no error is detected.
A value of 0 causes the printing of an error message and program execution will be halted; otherwise program execution continues. A value of -1 means that an error message is printed while a value of 1 means that it is not.
If halting is not appropriate, the value -1 or 1 is recommended. If message printing is undesirable, then the value 1 is recommended. Otherwise, the value -1 is recommended. When the value -1 or 1 is used it is essential to test the value of ifail on exit.
On exit: ifail = 0 unless the routine detects an error or a warning has been flagged (see Section 6).
6 Error Indicators and Warnings
If on entry ifail = 0 or -1, explanatory error messages are output on the current error message unit (as defined by x04aaf).
Errors or warnings detected by the routine:
On entry, nc = ⟨value⟩. Constraint: nc ≥ 1.
On entry, the strictly upper triangle of dm had a negative element.
On entry, state vector has been corrupted or not initialized.
An unexpected error has been triggered by this routine. Please contact NAG.
See Section 7 in the Introduction to the NAG Library FL Interface for further information.
Your licence key may have expired or may not have been installed correctly.
See Section 8 in the Introduction to the NAG Library FL Interface for further information.
Dynamic memory allocation failed.
See Section 9 in the Introduction to the NAG Library FL Interface for further information.
The routine will not perform well when the average change in cost caused by switching two cities is small relative to the cost; this can happen when many of the values in the distance matrix are relatively close to each other.
The quality of results from this routine can vary quite markedly when different initial random states are used. It is, therefore, advisable to compute a number of approximations using different initial random states. The best cost and path can then be taken from the set of approximations obtained. If no change in results is obtained after such trials then it is unlikely that any further improvement can be made by this routine.
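The multi-start pattern recommended above can be sketched as follows (illustrative Python; `solve` is a hypothetical stand-in for a wrapper that initializes a fresh RNG state, for example via g05kgf, and calls h03bbf):

```python
def best_of(solve, dm, seeds):
    """Run a stochastic TSP solver once per seed and keep the cheapest
    result. `solve(dm, seed)` is assumed to return a (path, cost)
    pair; it stands in for one call to the annealing routine with an
    independently initialized random state."""
    best_path, best_cost = None, float("inf")
    for seed in seeds:
        path, cost = solve(dm, seed)
        if cost < best_cost:
            best_path, best_cost = path, cost
    return best_path, best_cost
```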
8 Parallelism and Performance
Running many instances of the routine in parallel with independent random number generator states can yield a set of possible solutions from which a best approximate solution may be chosen.
Memory is internally allocated for integers and real values.
In the case of two cities that are not connected, a suitably large number should be used as the distance (cost) between them so as to deter solution paths which directly connect the two cities.
Solutions which contain an artificial link (i.e., a connection assigned a large distance to indicate that no actual link exists) may be patched using the shortest path algorithm h03adf.
If a city is to be visited more than once (or more than twice for the home city), the distance matrix should contain multiple entries for that city, with zero entries for the distances between its copies and identical distances to the other cities.
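The construction described above can be sketched as follows (illustrative Python helper, not part of the library):

```python
def duplicate_city(dm, c):
    """Return a copy of the distance matrix with city c duplicated:
    the new city has zero distance to c and distances to every other
    city identical to those of c."""
    n = len(dm)
    new = [row[:] + [row[c]] for row in dm]   # new column mirrors column c
    new.append(dm[c][:] + [0.0])              # new row mirrors row c
    new[c][n] = 0.0                           # zero distance between the copies
    new[n][c] = 0.0
    return new
```

For cities that are not connected, the corresponding entries of the matrix can then be set to a suitably large value, as discussed above.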
An approximation to the best path through cities in the United Kingdom and Ireland, beginning and ending in Oxford, is sought. A lower bound is calculated internally.