The Sparse Travelling Salesman Problem (Sparse TSP), a variant of the classical Travelling Salesman Problem (TSP), is the problem of finding the shortest route for a salesman who must visit every city in a region at least once and return home at the end. In the Sparse TSP, the distances between cities need not obey the triangle inequality; as a result, algorithms and formulations designed for the TSP require modifications in order to produce near-optimal results.
When solving the Sparse TSP, our main interest is in computing feasible tours at a reasonable computational cost. In addition, because it is usually not possible to obtain an optimal solution, we would like some guarantee on the quality of the tours (solutions) found. Such guarantees can usually be provided if a lower bound on the length of a shortest possible tour is known.
It is also the case that most algorithms for finding exact solutions for large Standard TSP instances are based on methods for finding upper and lower bounds and an enumeration scheme. For a given instance, lower and upper bounds are computed. In most cases, these bounds will not be equal, and therefore, only a quality guarantee for the feasible solution can be given, but optimality cannot be proved. If the upper and lower bounds coincide, a proof of optimality is achieved.
Therefore, the determination of good tours and the derivation of tight lower bounds are keys to a successful search for optimal solutions. Whereas there has been a great deal of work and progress in designing heuristic methods that produce upper bounds, the situation for lower bounds is not as satisfactory.
In general, for the Standard TSP, lower bounds are obtained by solving relaxations of the original problem, in the sense that one optimizes over some set containing all feasible solutions of the original problem as a (proper) subset. This means, for example, that the optimal solution of the relaxed problem gives a lower bound on the value of the optimal solution of the original problem. In practice, the methods usually used for computing lower bounds for the Standard TSP are the Held-Karp lower bound (Johnson et al., 1996) and Lagrangian relaxation (Reinelt, 1994).
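This relationship can be illustrated on a toy minimisation problem; the following is a minimal sketch, assuming SciPy is available, on an instance invented purely for illustration (it is not a TSP):

```python
from itertools import product
from scipy.optimize import linprog

# Toy integer programme:  min x1 + x2  s.t.  2*x1 + 2*x2 >= 3,  x >= 0 integer.
# linprog expects A_ub @ x <= b_ub, so the >= constraint is negated.
res = linprog(c=[1, 1], A_ub=[[-2, -2]], b_ub=[-3], bounds=[(0, None)] * 2)
lp_bound = res.fun  # LP relaxation optimum: 1.5

# Brute-force the integer optimum over a small grid: it is 2.
ilp_opt = min(x1 + x2 for x1, x2 in product(range(4), repeat=2)
              if 2 * x1 + 2 * x2 >= 3)

# The relaxed problem optimises over a superset of the feasible set,
# so its optimum (1.5) is a valid lower bound on the integer optimum (2).
print(lp_bound, ilp_opt)
```

The gap between 1.5 and 2 is exactly the kind of integrality gap that cutting planes (such as the arc-cutset constraints discussed later) are designed to close.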
Since the Sparse TSP is an NP-hard combinatorial optimization problem (Fleischmann, 1985), the standard technique for solving it to optimality is based on an enumeration scheme, which for large problems is computationally expensive. A natural alternative is to use Sparse TSP heuristics to obtain a near-optimal solution. Solutions obtained by heuristics for the Sparse TSP provide upper bounds. Heuristics produce feasible solutions, but without any guarantee as to how far they may be from the optimal feasible solution. In order to assess the performance of heuristics, we need to find a lower bound for the problem.
Therefore, in this chapter we are interested in exploring methods for computing tight lower bounds for the Sparse TSP. We do not have the luxury of comparing our results with those of other researchers, since most work on the TSP has focused on the Standard TSP. For example, for the Standard TSP there are sample instances with optimal solutions provided in TSPLIB for most of the problems (see (Reinelt, 1992)). The results given in TSPLIB include a provable optimal solution if available, or an interval given by the best known lower and upper bounds. As far as we are aware, there are no such benchmark results for the Sparse TSP studied in this chapter.
A lower bound gives us a quality guarantee for the near-optimal solution obtained by heuristic methods. The most widely used procedure for finding a lower bound for the Standard TSP is the Held and Karp lower bound (Held & Karp, 1970). Johnson et al (Johnson et al., 1996) provide empirical evidence in support of using the Held and Karp (HK) lower bound as a stand-in for the optimal tour length when evaluating the quality of near-optimal tours. They show that for a wide variety of randomly generated instances the optimal tour length averages less than 0.8% over the HK lower bound, and for the real world instances in TSPLIB the gap is always less than 2%. A tight lower bound for the Sparse TSP will play a key role in developing and assessing the performance of Sparse TSP heuristic methods.
In this chapter, we propose and give computational experience on two methods for finding lower bounds for the Sparse TSP. The chapter is organised as follows. In section 2, we give the background and related research on the Travelling Salesman Problem, and specifically how this work relates to the Sparse TSP. In section 3, we discuss methods for finding the lower bound of the Sparse TSP, and give the formulation and relaxation for the Linear Programming relaxation of the Sparse TSP. Further, we introduce the Arc-cutset Partial Enumeration Strategy as a strategy for finding the lower bound of the Sparse TSP. Finally, section 4 gives the conclusions and summary.
2. Related works
The Standard TSP has been an area of intensive research since the late 1950s, as demonstrated in (Lawler et al., 1985). The first major breakthrough came in 1954, in the seminal paper of Dantzig et al (Dantzig et al., 1954), where a method for solving the Standard TSP was described and its power illustrated by solving an instance of 49 cities. Since then, a great deal of research has been published on the Standard TSP and its variants.
Most of the methods used for solving the Standard TSP to optimality are of the branch and bound variety where, at each node of the branch and bound tree, lower bounds are computed by solving related problems which are relaxations of the original TSP (see (Camerini et al., 1975), (Gabovich, 1970), Held and Karp (Held & Karp, 1970), Padberg and Hong (Padberg & Hong, 1977), and Rubinshtein (Rubinshtein, 1971)). As for all branch and bound methods, the quality of the lower bounds computed at each node has a much greater influence on the effectiveness of the algorithm than any branching rules that may be used to generate the subproblems during the search. Branch and bound techniques have been used successfully in optimization problems since the late 1950s. Several different branch and bound algorithms are possible for the travelling salesman problem; a survey of these is given in Bellmore and Nemhauser (Bellmore & Nemhauser, 1968). A variation of branch and bound using cutting plane techniques, called Branch-and-Cut by Padberg and Rinaldi ((Padberg & Rinaldi, 1989), (Padberg & Rinaldi, 1991)), is a much more powerful technique for computing solutions for the Standard TSP.
Held and Karp (Held & Karp, 1970), (Held & Karp, 1971) pioneered an iterative approach which uses a technique called Lagrangian Relaxation to produce a sequence of connected graphs which increasingly resemble tours. This technique is based on the notion of a 1-tree and the bound generated is called the Held-Karp lower bound (HK lower bound). Formulating the Standard TSP as an integer linear programming problem (see Dantzig et al (Dantzig et al., 1954), Miller et al (Miller et al., 1960), Fox et al (Fox et al., 1980), and Claus (Claus, 1984) and systematically solving its relaxations is another way of obtaining a lower bound for TSP. Bounds from the solutions of the assignment problem, the matching problem, and the shortest n-path problem have also been suggested and explored by Christofides (Christofides, 1979 ) who also gave a brief survey and references. Johnson et al (Johnson et al., 1996) use the HK lower bound as a stand-in for the optimal tour length when evaluating the quality of near-optimal tours in a number of their studies in which they solve problems of up to one million cities.
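The 1-tree underlying the HK bound is a minimum spanning tree on all nodes but one, joined to the remaining node by its two cheapest edges; since every tour is a 1-tree, the minimum 1-tree weight is a valid lower bound. A minimal sketch (with zero node penalties), on a toy symmetric distance matrix of our own invention:

```python
from itertools import permutations

def one_tree_bound(d):
    """Held-Karp 1-tree bound with zero node penalties: the minimum
    spanning tree on nodes {1, ..., n-1} plus the two cheapest edges
    incident to node 0.  Every tour is a 1-tree, so this is a lower bound."""
    n = len(d)
    # Prim's algorithm for the MST on nodes {1, ..., n-1}.
    in_tree, cost = {1}, 0.0
    best = {v: d[1][v] for v in range(2, n)}
    while len(in_tree) < n - 1:
        v = min(best, key=best.get)
        cost += best.pop(v)
        in_tree.add(v)
        for u in best:
            best[u] = min(best[u], d[v][u])
    # Join node 0 to the tree by its two cheapest edges.
    cost += sum(sorted(d[0][v] for v in range(1, n))[:2])
    return cost

def brute_force_tour(d):
    """Exact optimal tour length by enumeration (tiny instances only)."""
    n = len(d)
    return min(sum(d[p[i]][p[(i + 1) % n]] for i in range(n))
               for p in permutations(range(n)))

d = [[0, 2, 9, 10, 7],
     [2, 0, 6, 4, 3],
     [9, 6, 0, 8, 5],
     [10, 4, 8, 0, 6],
     [7, 3, 5, 6, 0]]
print(one_tree_bound(d), "<=", brute_force_tour(d))  # 21.0 <= 26
```

On this instance the plain 1-tree bound is 21 against an optimal tour of 26; the node penalties of the full Held-Karp scheme exist precisely to tighten this gap.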
When the theory of NP-completeness was developed, the Standard TSP was one of the problems shown to be NP-hard by Karp (Karp, 1972). This remains true even when additional assumptions such as the triangle inequality or Euclidean distances are made (see Garey (Garey, 1976)). These results imply that a polynomially bounded exact algorithm for the TSP is unlikely to exist. Nevertheless, ingenious algorithms for the TSP have been proposed by Little et al (Little et al., 1963), Held and Karp (Held & Karp, 1970), (Held & Karp, 1971), Miliotis (Miliotis, 1976), and Crowder and Padberg (Crowder & Padberg, 1980). Over the past four decades the TSP has remained the prototype of a "hard" combinatorial problem. Since the introduction of NP-completeness theory in 1971 and the subsequent inclusion of the TSP in the NP-complete class, some of the mystery has gone out of the TSP. A complete treatment of NP-completeness theory and its relationship to the TSP is given in Garey and Johnson (Garey & Johnson, 1979).
The NP-complete results have given a new impetus to heuristic methods for solving “hard” combinatorial problems. Due to the difficulty of the TSP, many heuristic procedures have been proposed and developed. These heuristics have been compared analytically in the ground-breaking paper of Rosenkrantz, Stearns and Lewis (Rosenkrantz et al., 1977) by studying their worst-case behaviour. Alternatively, computational experimentation may be used to compare the performance of these heuristics, as in the work of Golden et al (Golden et al., 1980) and Stewart (Stewart Jr, 1987). For example, Stewart (Stewart Jr, 1987) describes a number of new algorithms designed specifically to perform well on Euclidean problems.
Much work has been done on fast heuristic algorithms for the Standard TSP. There is a trade-off between the speed of an algorithm and its ability to yield tours which are close to the optimal. The following studies show the enormous interest which researchers have shown in heuristic algorithms: Adrabinski and Syslo (Adrabinski & Syslo, 1983), Golden et al (Golden et al., 1980), Johnson and McGeoch (Johnson & McGeoch, 1995), Bentley (Bentley, 1993), Reinelt (Reinelt, 1992), and Júnger et al (Júnger et al., 1995). There are two broad classes of heuristics for the travelling salesman problem: constructive and improvement heuristics. A typical tour construction heuristic starts with a node and adds others one by one until the tour is complete. Many other variants are described in Bellmore and Nemhauser (Bellmore & Nemhauser, 1968), Rosenkrantz et al (Rosenkrantz et al., 1977), Johnson (Johnson, 1990), Golden et al (Golden et al., 1980), Golden and Stewart (Golden & Stewart, 1985), Gendreau et al (Gendreau et al., 1992), and Bentley (Bentley, 1993).
The arc-exchange strategy was first applied to the Standard TSP by Croes (Croes, 1958), who suggested the 2-optimal algorithm for the symmetric TSP. About the same time and independently, a 3-optimal strategy was suggested by Bock (Bock, 1958). However, it was Lin (Lin, 1965) who truly established, through extensive empirical study, that the 3-optimal algorithm was indeed an excellent approximation algorithm for the Standard TSP. A substantial improvement in the implementation of the 3-optimal (and, in general, k-optimal) algorithm was later achieved by Lin and Kernighan (Lin & Kernighan, 1973).
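The 2-optimal move itself is simple: replace two arcs of the tour by two others, which amounts to reversing a segment. A minimal Python sketch of our own (an illustration, not Croes' implementation), on a toy distance matrix:

```python
def tour_length(tour, d):
    return sum(d[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def two_opt(tour, d):
    """Croes-style 2-optimal improvement: repeatedly reverse a segment
    whenever replacing two arcs by two others shortens the tour."""
    tour = tour[:]
    n = len(tour)
    improved = True
    while improved:
        improved = False
        for i in range(n - 1):
            # Skip the (i=0, j=n-1) pair: it touches the same two arcs.
            for j in range(i + 2, n if i > 0 else n - 1):
                a, b = tour[i], tour[i + 1]
                c, e = tour[j], tour[(j + 1) % n]
                if d[a][c] + d[b][e] < d[a][b] + d[c][e]:
                    tour[i + 1:j + 1] = reversed(tour[i + 1:j + 1])
                    improved = True
    return tour

d = [[0, 2, 9, 10, 7],
     [2, 0, 6, 4, 3],
     [9, 6, 0, 8, 5],
     [10, 4, 8, 0, 6],
     [7, 3, 5, 6, 0]]
start = [0, 1, 2, 3, 4]
better = two_opt(start, d)
print(tour_length(start, d), "->", tour_length(better, d))  # 29 -> 26
```

Each accepted exchange strictly shortens the tour, so the procedure terminates at a 2-optimal (locally optimal) tour; on this tiny instance that local optimum happens to be globally optimal.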
The ejection chains method introduced by Glover (Glover, 1992) has been used by Rego (Rego, 1998) to generate compound neighbourhood structures for the Standard TSP. It should be noted that this neighbourhood has been used to solve problems taken from the Travelling Salesman Problem LIBrary (TSPLIB) and performed better than the best iterated LK, though it required more time.
In order for heuristic algorithms to work well, effort should be spent on designing efficient data structures. Data structures play an important role in the design and implementation of Standard TSP algorithms; see, for example, Fredman et al (Fredman et al., 1995) and Bentley (Bentley, 1990).
We have witnessed remarkable progress in Mathematical Programming, Polyhedral Theory, and significant advances in computer technology that have greatly enhanced the chances of finding exact solutions to reasonably large combinatorial optimization problems. Nevertheless, these problems still pose a real challenge in the sense that it is hard to solve realistically large problem instances in reasonable computational times. This challenge has led to the development of many approximate algorithms and has made the science of heuristics a fast growing area of research. For more details on combinatorial optimization problems we refer the reader to Nemhauser and Wolsey (Nemhauser & Wolsey, 1988), Papadimitriou and Steiglitz (Papadimitriou & Steiglitz, 1982), Garey and Johnson (Garey & Johnson, 1979).
Metaheuristics are a class of approximate solution methods which have developed dramatically since their inception in the early 1980s. They are designed to solve complex optimization problems where classical heuristics and other optimization methods have failed to be effective and efficient.
The solutions produced by local search procedures can be very far from optimal. In order to avoid this disadvantage while maintaining the simplicity and generality of the approach, the following concepts, which form the basis of most metaheuristics, are considered (see Reeves (Reeves, 1993)). Start from good initial solutions, which can be generated intelligently using the greedy randomized adaptive search of Feo and Resende (Feo & Resende, 1995) or the space-search methods of Storer, Wu and Vaccari (Storer et al., 1995). Use learning strategies, such as the neural networks discussed in Sharda (Sharda, 1994) and the Tabu search of Glover (Glover, 1995), that gather information during the algorithm's execution in order to guide the search towards possibly better solutions. Employ non-monotonic search strategies that sample or accept neighbours based on hybrid modifications of simulated annealing and Tabu search methods (see Glover (Glover, 1995), Hu, Khang and Tsao (Hu et al., 1995), Osman (Osman, 1993), (Osman, 1995), and Osman and Christofides (Osman & Christofides, 1994)).
Sahni and Gonzales (Sahni & Gonzales, 1976) showed that if the triangle inequality is not satisfied, the problem of finding an approximate solution to the TSP within any fixed ratio of the optimum is as hard as finding an exact solution. A comprehensive treatment of various aspects of the TSP can be found in the collection of research papers in Lawler et al (Lawler et al., 1985).
3. Methods for finding lower bounds for the Sparse TSP
The standard technique for obtaining lower bounds on the Standard TSP is to use a relaxation that is easier to solve than the original problem. These relaxations can have either discrete or continuous feasible sets. Several relaxations have been considered over the years for the Standard TSP. We introduce modifications to these relaxations so that they can be used to find lower bounds for the Sparse TSP at a reasonable computational effort.
We can illustrate the relaxation of an optimization problem by figure 1, in which the region ABCD is the feasible region of the relaxed problem, while the region MNOPQRS is the feasible region of the original "true" problem, which in this case contains the integer points. The region ABCD is obtained when we relax the integrality constraints. Relaxation of combinatorial optimization problems has become popular because in most cases the relaxed problem can be solved with reasonable computational effort, often much more easily than the original problem.
The HK lower bound is the solution to the LP relaxation of the integer programming formulation of the Standard TSP (see Dantzig et al (Dantzig et al., 1954), Reinelt (Reinelt, 1994) and Johnson et al (Johnson et al., 1996)). That is, it is the Integer Linear Program with the integrality constraints relaxed. The HK lower bound provides a very good estimate of the optimal tour length for the Standard TSP. This measure has enormous practical value when evaluating the quality of near-optimal solutions for large problems where the true optimal solutions are not known or are computationally expensive to find. The HK lower bound has been used as a stand-in for the optimal tour length when evaluating the quality of near-optimal tours in many studies (for example, in Johnson et al (Johnson et al., 1996)).
Although the HK lower bound can be evaluated exactly by Linear Programming techniques, code for doing this efficiently for problems larger than a few hundred cities is not readily available or easy to produce (see Valenzuela and Jones (Valenzuela & Jones, 1996)). In addition, linear programming implementations (even efficient ones) do not scale well and rapidly become impractical for problems with many thousands of cities. To be able to find the HK lower bound, a procedure for finding violated inequalities must be provided, and automatically generating violated inequalities is not a simple matter. It is because of these difficulties that most researchers have preferred to use the iterative estimation approach for finding a lower bound for the Standard TSP proposed by Held and Karp (Held & Karp, 1970), (Held & Karp, 1971). In this chapter we use this method and modify it to solve Sparse TSP problems.
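The iterative Held and Karp scheme can be sketched as subgradient ascent over node penalties: compute a minimum 1-tree under penalised costs, then raise the penalty of nodes with degree above two and lower it for degree-one nodes. The sketch below is self-contained; the step-size schedule is an illustrative choice of ours, not the one used in the chapter.

```python
def one_tree(d, pi):
    """Minimum 1-tree under node penalties pi; returns (weight, degrees)."""
    n = len(d)  # assumes n >= 3 and a symmetric matrix d
    w = [[d[i][j] + pi[i] + pi[j] for j in range(n)] for i in range(n)]
    deg = [0] * n
    # Prim's MST on nodes {1, ..., n-1}, tracking tree edges via parent.
    parent = {v: 1 for v in range(2, n)}
    best = {v: w[1][v] for v in range(2, n)}
    total = 0.0
    while best:
        v = min(best, key=best.get)
        total += best.pop(v)
        deg[v] += 1
        deg[parent.pop(v)] += 1
        for u in best:
            if w[v][u] < best[u]:
                best[u], parent[u] = w[v][u], v
    # Join node 0 by its two cheapest penalised edges.
    for v in sorted(range(1, n), key=lambda t: w[0][t])[:2]:
        total += w[0][v]
        deg[0] += 1
        deg[v] += 1
    return total, deg

def held_karp_estimate(d, iters=100, step=1.0):
    """Subgradient ascent: every iterate total - 2*sum(pi) is a valid
    lower bound on the optimal tour length; return the best one seen."""
    n = len(d)
    pi = [0.0] * n
    best_bound = float("-inf")
    for _ in range(iters):
        total, deg = one_tree(d, pi)
        best_bound = max(best_bound, total - 2 * sum(pi))
        if all(deg[v] == 2 for v in range(n)):
            break  # the 1-tree is itself a tour: the bound is optimal
        pi = [pi[v] + step * (deg[v] - 2) for v in range(n)]
        step *= 0.95
    return best_bound

d = [[0, 2, 9, 10, 7],
     [2, 0, 6, 4, 3],
     [9, 6, 0, 8, 5],
     [10, 4, 8, 0, 6],
     [7, 3, 5, 6, 0]]
print(held_karp_estimate(d))
```

On this toy instance (optimal tour length 26, plain 1-tree bound 21), the penalties can only improve on the first, unpenalised iterate, and every iterate remains a valid lower bound.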
3.1. The LP Relaxations for the Sparse TSP
The Integer Linear Programming (ILP) formulation of the Sparse TSP is given as:
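A formulation in the spirit of the graphical TSP (Cornuéjols et al., 1985), which likewise allows each city to be visited more than once, can be sketched as follows; here $x_e$ counts how often arc $e$ is traversed, and the chapter's own constraints (1.1)-(1.5) may differ in detail:

```latex
\begin{aligned}
\min \quad & \sum_{e \in E} c_e x_e \\
\text{s.t.} \quad
  & \sum_{e \in \delta(v)} x_e \ \text{is even}
    && \forall v \in V && \text{(evenness conditions)} \\
  & \sum_{e \in \delta(S)} x_e \ge 2
    && \forall S \subset V,\ \emptyset \neq S \neq V && \text{(connectivity / arc-cutsets)} \\
  & x_e \ge 0,\ x_e \ \text{integer}
    && \forall e \in E .
\end{aligned}
```

Here $\delta(S)$ denotes the set of arcs with exactly one endpoint in $S$, matching the arc-cutset constraints used later in the chapter.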
By relaxing the integrality constraint (1.5) we get the Linear Programming (LP) relaxation of the ILP Sparse TSP, where equations (1.1, 1.2, 1.3 and 1.4) remain the same and constraint (1.5) becomes,
Note that any integral solution to the LP relaxation is a tour.
We solved the LP relaxation of the modified ILP Sparse TSP formulation for our test problems; the results given in figure 7 were obtained by using violated arc-cutset constraints.
3.2. The LP relaxation for the EFF for the Sparse TSP
Starting from the single-commodity flow formulation, we present a modification of it which we call the Embedded Flow Formulation (EFF) for the Sparse TSP. This formulation involves a polynomial number of constraints, even though the number of variables is increased considerably. The EFF for the Sparse TSP is given as:
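For reference, the classical single-commodity flow model for the Standard TSP, on which the EFF builds, can be sketched as follows; here node 1 acts as the depot sending $n-1$ units of flow, $x_{ij}$ indicates whether arc $(i,j)$ is used and $y_{ij}$ is the flow it carries. The chapter's EFF (ending with integrality constraint (1.13)) modifies this model, and its exact constraints may differ:

```latex
\begin{aligned}
\min \quad & \sum_{(i,j) \in A} c_{ij} x_{ij} \\
\text{s.t.} \quad
  & \sum_{j} x_{ij} = 1, \qquad \sum_{j} x_{ji} = 1 && \forall i \in V \\
  & \sum_{j} y_{ji} - \sum_{j} y_{ij} = 1 && \forall i \in V \setminus \{1\} \\
  & 0 \le y_{ij} \le (n-1)\, x_{ij} && \forall (i,j) \in A \\
  & x_{ij} \in \{0,1\} && \forall (i,j) \in A .
\end{aligned}
```

The flow-balance constraints force every node to be reachable from the depot, which is why only a polynomial number of constraints is needed, at the cost of the extra $y$ variables.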
The LP relaxation of the EFF for the Sparse TSP is obtained by relaxing the integrality constraint (1.13). All other equations remain the same; constraint (1.13) changes to:
It was interesting to know how close the lower bound obtained from this relaxation was to the length of an optimal tour.
The results we obtained for the EFF for the Sparse TSP are presented in figure 8. In general the LP relaxation value is not equal to the minimum tour length, but it is very close. The LP relaxation of the ILP Sparse TSP gives a much tighter lower bound than the LP relaxation of the EFF for the Sparse TSP, and requires fewer iterations.
3.3. An Arc-cutset Partial Enumeration Strategy (APES)
In this section we propose a strategy for quickly generating some of the violated arc-cutset constraints, which we call an Arc-cutset Partial Enumeration Strategy (APES).
Our algorithm for the APES is given below.
An Arc-cutset Partial Enumeration Strategy algorithm
Step 1: Formulate the problem using the evenness condition constraints and the integrality constraints only.
        Let nodes be the number of nodes in the starting path.
        Let nodesv be the number of nodes to be visited.
        Let k := 2.
Step 2: For nodes := k to n do
            For nodesv := 3 to n - k do
                List all arcs incident to the path with end nodes 1 and nodesv;
                add the arc-cutset constraint to the formulation.
Step 3: Solve the new formulation using any LP solver.
Step 4: Stop.
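The constraint-generation core of Step 2 can be sketched in a few lines of Python; the graph representation (a set of edge tuples) and the names `arc_cutset` and `apes_cutsets` are our own illustrative choices, not the chapter's:

```python
def arc_cutset(edges, path):
    """delta(S) for S = the nodes on the path: every arc with exactly one
    endpoint on the path.  Arcs joining two path nodes (including the
    path's own arcs) are excluded, as in the chapter's construction."""
    s = set(path)
    return {e for e in edges if (e[0] in s) != (e[1] in s)}

def apes_cutsets(edges, path):
    """Grow the starting path one node at a time and emit one arc-cutset
    constraint  sum(x_e for e in cut) >= 2  per prefix of length >= 2."""
    return [arc_cutset(edges, path[:k]) for k in range(2, len(path) + 1)]

# A small sparse graph: a 5-cycle with one chord.
edges = {(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (1, 3)}
print(arc_cutset(edges, [1, 2]))  # {(0, 1), (2, 3), (1, 3)} in some order
```

Each generated set would be added to the LP as the constraint that the arcs crossing the cut sum to at least 2, which is what forces disconnected components to join up.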
In forming the arc-cutset constraints, we first used a subtour component consisting of only two end nodes i and j, with arc (i,j) as a connected component. A violated arc-cutset constraint was constructed by listing all arcs incident to node i and node j, i.e., all arcs incident with the subtour component. The arc connecting node i and node j was not included in the arc-cutset constraint. Figure 2 shows how an arc-cutset constraint was formed using component (i,j). This is what takes place in step 2 of the APES algorithm.
The arc-cutset constraints generated by the APES are used to connect components, since the APES starts from the evenness condition constraints and integrality constraints alone, omitting the connectivity constraints. The twenty-node problem shown in figure 3 is used to demonstrate how the APES works.
Solving the twenty-node problem before adding the violated arc-cutset constraints generated by the APES gives the disconnected subtours illustrated in figure 4, whose objective function value is 524.
After adding the violated arc-cutset constraints generated by the APES, we obtained the tour illustrated in figure 5, with objective function value 765, which in this case happened to be the optimal tour.
For all the problems reported in this study, we used the APES to generate violated arc-cutset constraints from each arc. This strategy was used to ensure that disconnected components were connected to form a tour. The APES was able to produce optimal tours for all the small problems we solved, of up to 1000 nodes. It is interesting to note how the algorithm performed on some graphs. For example, for the 30-node problem we had to add eleven violated arc-cutset constraints before getting a tour, while we had to add only two violated arc-cutset constraints for the 67-node problem to get a tour. As pointed out earlier, the number of violated arc-cutset constraints that must be identified seems to be a function of the nature of the graph and not its size.
We then extended this technique of identifying violated arc-cutset constraints. These new violated arc-cutset constraints were formed by visiting a path of three or more nodes together. The first violated arc-cutset constraint was formed by visiting any three nodes which formed a path. When forming these constraints, we included all arcs incident to node 1, 2, or 3 and ignored the arcs connecting nodes 1, 2, and 3. The next constraint was formed by adding a fourth node to the path; the violated arc-cutset constraint was identified by listing all arcs incident to the path of four nodes, ignoring the arcs which form the path. The process continued until we obtained a tour. Figure 6 shows how these violated arc-cutset constraints were formed.
From figure 6, the following violated arc-cutset constraint (1.17) is formed:
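Generically, for a starting path whose node set is $S$, an arc-cutset constraint of this kind (our reading of the form of (1.17)) requires the tour to cross the cut at least twice:

```latex
\sum_{e \in \delta(S)} x_e \;\ge\; 2, \qquad S = \{\text{nodes on the path}\},
```

where $\delta(S)$ contains every arc with exactly one endpoint in $S$; the arcs forming the path itself are automatically excluded, since both their endpoints lie in $S$.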
The results in figure 6 show how the APES method performed when the starting path contained 3, 4, …, 10 nodes. In other words, at first the starting path had 3 nodes and we extended it by adding one node at a time; the second time the starting path had 4 nodes and we again extended it one node at a time; and so on, until the starting path had 10 nodes. In a good number of cases we obtained substantial improvements in the lower bound as we increased the number of nodes in the starting path. However, as the number of nodes in the starting path went beyond 10, we obtained only marginal improvement.
The order of complexity of our APES is
4. Summary and future work
When optimization problems arise in practice we want to have confidence in the quality of the solutions. Quality guarantees are required because in most cases, and especially for large problems, it is not always possible to find an optimal solution. Quality guarantees become possible by being able to compute tight lower bounds at a reasonable computational cost. In this chapter we have proposed two methods for finding tight lower bounds for the Sparse TSP: the modified Linear Programming relaxation method and the Arc-cutset Partial Enumeration Strategy.
When the LP relaxation method is used to find a lower bound for the ILP Sparse TSP, finding arc-cutset constraints is difficult, especially for large problems. There are procedures, such as separation routines, for identifying violated arc-cutset constraints automatically in practice. These procedures are computationally expensive and therefore were not used in this study.
The Arc-cutset Partial Enumeration Strategy proposed here is a simple and fast way of getting a lower bound without spending time in a separation algorithm. Computational results show that the lower bounds obtained by this method are comparable to those obtained using separation algorithms.
A lower bound on the optimal value (assuming a minimization problem) is obtained from a relaxation of the integer program. In the past fifteen years attention has shifted from Lagrangian relaxation to Linear Programming relaxation, since the latter type of relaxation can be strengthened more easily by using cutting planes; combining cutting planes and Lagrangian relaxation usually causes convergence problems, as discussed by Aardal et al (Aardal et al., 1997). Utilising various LP relaxations modified to suit the Sparse TSP has given us the tightest lower bounds of all the lower bound techniques we have discussed in this chapter.
Aardal K. et al. (1997). A decade of Combinatorial Optimization. Working paper, Utrecht University, Computer Science Department, Netherlands.
Adrabinski A. & Syslo M. M. (1983). Computational experiments with some approximation algorithms for the travelling salesman problem. 18(1), 91–95.
Bellmore M. & Nemhauser G. L. (1968). The Traveling salesman problem: A survey. 16, 538–558.
Bentley J. J. (1993). Fast Algorithms for Geometric Traveling Salesman Problems. 4(4), 387–411.
Bentley J. L. (1990). Experiments on traveling salesman heuristics. In: Proceedings of the 1st Annual ACM-SIAM Symposium on Discrete Algorithms, SIAM, Philadelphia, PA, 91–99.
Bentley J. L. (1990). K-d trees for semidynamic point sets. 187–197.
Bentley J. L. (1975). Multidimensional binary search trees used for associative search. Comm. ACM, 18, 509–517.
Bock F. (1958). An algorithm for solving Traveling Salesman and related network optimization problems. St. Louis, Mo.
Camerini P. et al. (1975). Travelling salesman problem: Heuristically guided search and modified gradient techniques.
Christofides N. (1979). The Travelling Salesman Problem. In: N. Christofides, A. Mingozzi, P. Toth and C. Sandi (eds.), 131–149, John Wiley and Sons.
Christofides N. & Eilon S. (1979). Algorithm for large-scale Traveling Salesman Problems. 23(4), 511–518.
Claus A. (1984). A new formulation for the traveling salesman problem. 5, 21–25.
Cornuéjols G. et al. (1985). The travelling salesman problem on a graph and some related integer polyhedra. 33, 1–27.
Croes G. A. (1958). A method for solving traveling salesman problems. 6, 791–812.
Crowder H. & Padberg M. (1980). Solving large-scale symmetric travelling salesman problems to optimality. 26, 495–509.
Dantzig G. et al. (1954). Solution of a large-scale Traveling-Salesman Problem. 2, 393–410.
Feo T. A. & Resende M. G. C. (1995). Greedy randomized adaptive search procedures. 6, 109–118.
Fleischmann B. (1985). A cutting plane procedure for the travelling salesman problem on road networks. 21, 307–317.
Fox K. et al. (1980). An n-constraint formulation of the (time-dependent) traveling salesman problem. 28, 1018–1021.
Fredman M. L. et al. (1995). Data Structures for Travelling Salesmen. 18, 432–475.
Gabovich E. J. (1970). 19, 27–51.
Garey M. (1976). Some NP-complete geometric problems. 10–22.
Garey M. R. & Johnson D. S. (1979). Computers and Intractability: A Guide to the Theory of NP-Completeness, W.H. Freeman, San Francisco.
Gendreau M. et al. (1992). New Insertion and Postoptimization procedures for the traveling salesman problem. 40, 1086–1094.
Glover F. (1995). Tabu thresholding: improved search by non-monotonic trajectories. 7, 426–436.
Glover F. (1995). Tabu search fundamentals and uses. Graduate School of Business, University of Colorado, Boulder.
Glover F. (1992). New ejection chain and alternating path methods for the traveling salesman problem. In: O. Balci, R. Sharda and S. A. Zenios (eds.), 27–49, Pergamon Press, Oxford.
Goemans M. X. (1995). Worst-case Comparison of valid Inequalities for the TSP. 69, 335–349.
Golden B. et al. (1980). Approximate travelling salesman algorithms. 28(3), 694–711.
Golden B. L. & Stewart W. R. (1985). Empirical analysis of heuristics. In: E. L. Lawler, J. K. Lenstra, A. H. G. Rinnooy Kan and D. B. Shmoys (eds.), 207–249, John Wiley and Sons Ltd.
Held M. & Karp R. M. (1970). The Travelling Salesman Problem and Minimum Spanning Trees, Part I. 18, 1138–1162.
Held M. & Karp R. M. (1971). The Travelling Salesman Problem and Minimum Spanning Trees, Part II. 1, 6–25.
Hu T. C. et al. (1995). Old bachelor acceptance: A new class of non-monotonic threshold accepting methods. 7, 417–425.
Johnson D. S. & McGeoch L. A. (1995). The Traveling Salesman Problem: A case study. In: E. Aarts and J. K. Lenstra (eds.), 215–310, John Wiley and Sons Ltd.
Johnson D. S. et al. (1996). Asymptotic Experimental Analysis for the Held-Karp Traveling Salesman Bound. 341–350, January 1996.
Johnson D. S. (1990). Local optimization and the traveling salesman problem. ICALP '90, 446–461, Springer-Verlag.
Júnger M. et al. (1995). The traveling salesman problem. In: M. O. Ball, T. L. Magnanti, C. L. Monma and G. L. Nemhauser (eds.), 225–330, North-Holland, Amsterdam.
Karp R. M. (1972). Reducibility among Combinatorial Problems. In: R. E. Miller and J. W. Thatcher (eds.), 85–103, Plenum Press, New York.
Lawler E. L. et al. (1985). The Traveling Salesman Problem: A Guided Tour of Combinatorial Optimization, John Wiley and Sons Ltd.
Lin S. (1965). Computer solutions of the traveling salesman problem. 44, 2245–2269.
Lin S. & Kernighan B. W. (1973). An effective heuristic algorithm for the traveling salesman problem. 21, 498–516.
Little J. et al. (1963). An algorithm for the traveling salesman problem. 11(6), 972–989.
Miliotis P. (1976). Integer programming approaches to the traveling salesman problem. 15, 367–378.
Miller C. et al. (1960). Integer programming formulations and traveling salesman problems. 7, 326–329.
Naddef D. (1992). The binested inequalities for the symmetric traveling salesman polytope. 17(4), 882–900.
Nemhauser G. L. & Wolsey L. A. (1988). Integer and Combinatorial Optimization, John Wiley and Sons Ltd.
Osman I. H. & Kelly J. P. (1996). Meta-heuristics: Theory and Applications, Kluwer Academic Publishers.
Osman I. H. (1995). An Introduction to Meta-Heuristics. In: M. Lawrence and C. Wilson (eds.), 92–122, Operational Research Society Press, Birmingham, U.K.
Osman I. H. (1993). Metastrategy simulated annealing and Tabu search for the vehicle routing problem. 41, 421–432.
Osman I. H. (1995). Heuristics for the generalized assignment problem: simulated annealing and Tabu search approaches. OR Spektrum, 17, 211–228.
Osman I. H. & Christofides N. (1994). Capacitated clustering problems by hybrid simulated annealing and tabu search. 1, 317–329.
Padberg M. & Rinaldi G. (1989). A branch-and-cut approach to a traveling salesman problem with side constraints. 35(11), 1393–1414.
Padberg M. & Rinaldi G. (1991). A branch-and-cut algorithm for the resolution of large-scale symmetric traveling salesman problems. 33(1), 60–100.
Padberg M. W. & Hong S. (1977). On the symmetric travelling salesman problem: A computational study.
Papadimitriou C. H. & Steiglitz K. (1982). Combinatorial Optimization: Algorithms and Complexity, Prentice-Hall, Inc.
Reeves C. R. (1993). Modern Heuristic Techniques for Combinatorial Problems, McGraw-Hill Book Company.
Rego C. (1998). Relaxed tours and path ejections for the traveling salesman problem. 2, 522–538.
Reinelt G. (1992). Fast Heuristics for Large Geometric Traveling Salesman Problems. 4(2), 206–217.
Reinelt G. (1994). The traveling salesman: Computational solutions for TSP applications, Springer-Verlag, ISBN 0-387-58334-3.
Rosenkrantz D. J. et al. (1977). An analysis of several heuristics for the travelling salesman problem. 6(3), 1977.
Rubinshtein M. I. (1971). On the symmetric TSP. 6, 126–126.
Sahni S. & Gonzales T. (1976). P-complete approximation problems. 23, 555–565.
Sharda R. (1994). Neural networks for the MS/OR analyst: An application bibliography. 24, 116–125.
Shmoys D. A. & Williamson D. P. (1990). Analyzing the Held-Karp TSP bound: A monotonicity property with application. 35, 281–285.
Stewart Jr W. R. (1987). Accelerated branch exchange heuristics for symmetric travelling salesman problems. 17, 423–437.
Storer R. H. et al. (1995). Local search in problem and heuristic space for job shop scheduling. 7, 453–458.
Swamy M. N. S. & Thulasiraman K. (1981). Graphs, Networks, and Algorithms, John Wiley and Sons Ltd.
Valenzuela C. L. & Jones A. J. (1996). Estimating the Held-Karp lower bound for the geometric TSP. 102, 157–175.
Wolsey L. A. (1980). Heuristic analysis, linear programming and Branch and Bound. 13, 121–134.