Experiment Results
1. Introduction
Intelligent heuristic optimization methods, such as the genetic algorithm, ant colony algorithm, particle swarm optimization, and simulated annealing, have increasingly attracted the attention and interest of many scholars in recent years.
Bionics appeared in the mid-1950s, when people, inspired by the mechanism of organic evolution, put forward many new methods for solving complex optimization problems. Among these methods, evolutionary computation, which includes evolution strategies, evolutionary programming, and genetic algorithms, is the most remarkable. With research into biological group behavior and bio-sociality, algorithms based on swarm intelligence theory have appeared, including Ant Colony Optimization (ACO). Since the Ant System (AS), the first algorithm in line with the ACO framework, was put forward, researchers have attempted to improve its design. The first improvement was the Elitist Strategy for Ant System (EAS), which gives a special pheromone deposit to the artificial ants that have performed best so far in constructing solutions. It was followed by the Ant-Q algorithm, which combines the ant colony algorithm with the Q-learning algorithm and exploits the synergy of the artificial ants. Then appeared the Ant Colony System (ACS), the rank-based version of AS (ASrank), and the Max-Min Ant System (MMAS). These three improvements have greatly improved the performance of AS; in particular, MMAS has been extended in many ways and has become a highly practical algorithm and one of the best ACO algorithms at present. In recent years, there have been further improvements of ACO, such as Approximate Nondeterministic Tree Search (ANTS). ANTS was later extended to a deterministic algorithm, and it performs well in solving the Quadratic Assignment Problem (QAP). Another new improved algorithm is the Hyper-Cube Framework for ACO, whose purpose is to adjust the pheromone values automatically so that the pheromone trails always lie in the interval [0,1].
The current study of ACO has extended from the TSP to many other fields, and it has developed from solving static one-dimensional optimization problems into solving multi-dimensional and dynamic combinatorial optimization problems. The research on ACO has also developed from the discrete domain into the continuous domain. Fruitful research results have been obtained in improving the performance of ACO and in hybridizing it with bionic natural evolutionary algorithms or local search algorithms.
This chapter is divided into three parts. In part one, in order to overcome the shortcoming of being easily trapped in local minimum points, we introduce mutation and the simulated annealing (SA) algorithm (Kang et al., 1998) into the PSO, and propose a hybrid algorithm that combines the strong global search ability of PSO with the good local search ability of SA. The proposed hybrid algorithm is applied to solve the Chinese Traveling Salesman Problem with 31 cities (C-TSP). A comparative study of the experimental results against SA, elementary PSO (Zhong et al., 2007; Xiao et al., 2004; Li et al., 2008), and PSO with mutation is given. In part two, the mechanisms and properties of five ant colony algorithms are synthesized, compared, and analyzed: the basic ant colony algorithm (Ant System, AS), the elitist strategy for ant system (EAS), the rank-based version of the ant system (ASrank), the max-min ant system (MMAS), and the ant colony system (ACS). The efficiency of the five algorithms is also compared through their application to the C-TSP, and their performance is investigated with respect to the effects of different parameters and the relations between the parameters of the algorithms. The third part gives the conclusions.
2. PSO and its application in C-TSP
2.1. PSO with mutation
In the PSO, each single solution is a “bird” in the search space. The particles fly through the problem space by following the current optimum particles. In every iteration, each particle is updated by following two “best” values. The first one is the best solution the particle has achieved so far, called the personal best (pbest); the second one is the best value obtained so far by any particle in the population, called the global best (gbest).
Suppose that the searching space is D-dimensional and m particles form the colony. In each iteration, the velocity and position of every particle are updated by

v_id^(k+1) = v_id^k + c1·rand()·(p_id − x_id^k) + c2·rand()·(p_gd − x_id^k)    (1)

x_id^(k+1) = x_id^k + v_id^(k+1)    (2)
in which i = 1,2,…,m; d = 1,2,…,D; k is the iteration number; c1 and c2 are the acceleration coefficients, which are used to adjust the maximum flight step toward the personal best and the global best; rand() returns a random number in (0,1); p_id and p_gd denote the personal best of particle i and the global best in dimension d, respectively.
In Zhong et al. (2007), a large number of experiments proved that once
The steps of the basic PSO algorithm can be described as follows:
Step 1. Initialize the generation and all particles, viz. set the initial position X and the initial velocity V of each particle randomly.
Step 2. For each particle, calculate the fitness value.
Step 3. For each particle, if the fitness value is better than the best fitness value (pbest) found so far, set the current value as the new pbest.
Step 4. Choose the particle with the best fitness value of all the particles as the gbest.
Step 5. For each particle, calculate the particle velocity according to Eq. (1), and update the particle position according to Eq. (2).
Step 6. If neither the maximum iteration number nor the minimum error criterion is attained, return to Step 2.
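The loop above can be sketched in a few lines of Python. This is a minimal illustrative implementation, not the chapter's Matlab code: the sphere test function, the velocity clamp `vmax`, and all default parameter values are assumptions made here for the sake of a runnable example.

```python
# Minimal sketch of the basic PSO loop (Steps 1-6 above); the sphere function
# and parameter values are illustrative assumptions, not the chapter's settings.
import random

def pso(fitness, dim=2, m=20, iters=200, c1=2.0, c2=2.0, vmax=1.0, bound=5.0):
    # Step 1: random initial positions X and velocities V
    X = [[random.uniform(-bound, bound) for _ in range(dim)] for _ in range(m)]
    V = [[random.uniform(-vmax, vmax) for _ in range(dim)] for _ in range(m)]
    pbest = [x[:] for x in X]                      # personal best positions
    pbest_val = [fitness(x) for x in X]            # Step 2: fitness values
    g = min(range(m), key=lambda i: pbest_val[i])  # Step 4: global best index
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(m):
            for d in range(dim):
                # Step 5: velocity update (Eq. 1) and position update (Eq. 2)
                V[i][d] = (V[i][d]
                           + c1 * random.random() * (pbest[i][d] - X[i][d])
                           + c2 * random.random() * (gbest[d] - X[i][d]))
                V[i][d] = max(-vmax, min(vmax, V[i][d]))  # clamp the flight step
                X[i][d] += V[i][d]
            f = fitness(X[i])
            if f < pbest_val[i]:                   # Step 3: update pbest
                pbest[i], pbest_val[i] = X[i][:], f
                if f < gbest_val:                  # Step 4: update gbest
                    gbest, gbest_val = X[i][:], f
    return gbest, gbest_val

sphere = lambda x: sum(v * v for v in x)  # toy continuous objective
best, val = pso(sphere)
print(val)  # close to 0 for the sphere function
```

Note that this basic form has no inertia weight; the velocity clamp plays the role of bounding the flight step, as discussed for c1 and c2 above.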
The PSO has been successfully applied to many continuous optimization problems. The Traveling Salesman Problem (TSP), however, is a typical discrete combinatorial problem, so to solve the TSP with PSO, some improvements to the basic PSO must be made. In Huang et al. (2003), the concepts of the swap operator and the swap sequence were introduced for solving the TSP. Suppose that the solution sequence of the TSP with n nodes is
Based on the vectors, functions, and operations defined above, the traditional updating equations are changed into the following versions (Huang et al., 2003):
in which,
The steps of the PSO algorithm for solving the TSP can be described as follows:
Step 1. Initialize the generation and all the particles; set for each particle a random initial solution and a random swap sequence.
Step 2. If the maximum iteration number or the minimum error criterion is met, turn to Step 5.
Step 3. For each particle, according to the particle's current position, do the following:
a) Calculate the difference between the personal best pbest and the current position, which is a basic swap sequence;
b) Calculate the difference between the global best gbest and the current position, which is also a basic swap sequence;
c) Calculate the velocity according to Eq. (3) and transform it into a swap sequence;
d) Calculate the new solution according to Eq. (4).
Step 4. Choose the particle with the best fitness value of all the particles as the gbest, then turn to Step 2.
Step 5. Show the result obtained.
When solving the TSP, the basic PSO generates new individuals through Eq. (3) and Eq. (4). One can see that a basic swap sequence generated by Eq. (3) is in fact equivalent to applying a swap operator to a route; since the partial route between the two swapped cities does not change, it is easy to generate crossing routes, which are illegal solutions. To deal with this problem, inspired by the mutation operator of evolutionary algorithms, we add a mutation operator to the PSO. The specific approach is: after generating a new route by the basic PSO approach during each iteration, the mutation operator is applied to the new route. More specifically, the mutation step is the following:
Step 4. Generate two mutation cities randomly, then reverse the order of all the cities between the two mutation cities. If the length of the new route is less than that of the original route, set the new route as the particle's new solution.
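The mutation step above can be sketched as follows. The distance matrix and city coordinates here are toy data invented for illustration; only the reverse-and-compare logic reflects the operator described in the text.

```python
# Sketch of the mutation operator: pick two cities at random, reverse the
# segment between them, and keep the new route only if it is shorter.
import random

def route_length(route, dist):
    n = len(route)
    # closed tour: the last city connects back to the first
    return sum(dist[route[i]][route[(i + 1) % n]] for i in range(n))

def mutate(route, dist):
    i, j = sorted(random.sample(range(len(route)), 2))   # two mutation cities
    candidate = route[:i] + route[i:j + 1][::-1] + route[j + 1:]  # reverse segment
    # accept the mutated route only if it shortens the tour
    if route_length(candidate, dist) < route_length(route, dist):
        return candidate
    return route

# toy example: 5 cities on a line; distance matrix d[i][j] = |i - j|
d = [[abs(i - j) for j in range(5)] for i in range(5)]
bad = [0, 2, 1, 3, 4]             # cities 1 and 2 visited out of order
improved = mutate(bad, d)
print(route_length(improved, d) <= route_length(bad, d))  # always True
```

Because the segment reversal is only kept when it shortens the tour, the operator can never make the current solution worse; it is essentially a randomized 2-opt move.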
2.2. A hybrid algorithm of PSO and SA
The SA algorithm derives from the principle of solid annealing: first heat the solid to a sufficiently high temperature, and then cool it slowly. This process is based on an analogy with thermodynamics, where a system is slowly cooled in order to reach its lowest energy state. According to the Metropolis criterion, the probability that a particle reaches equilibrium at temperature T is exp(−ΔE/(kT)), where ΔE is the increase of the energy and k is Boltzmann's constant.
The SA algorithm used to solve the TSP can be described as follows:
The SA algorithm in the proposed hybrid algorithm of PSO and SA alternates between two steps:
1. Generate a new solution by stochastic perturbation and calculate the change of the objective function.
2. Decide whether the new solution is accepted. At a high temperature, a solution that increases the objective function may also be accepted; by decreasing the temperature slowly, the search can avoid being trapped in local minima, and in this way the algorithm can converge to the global best solution.
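The acceptance decision above is the standard Metropolis rule; a minimal sketch (the function name and test temperatures are illustrative choices):

```python
# Sketch of the Metropolis acceptance rule: a worse solution (delta > 0) is
# accepted with probability exp(-delta / T), so uphill moves are common at a
# high temperature and rare once the system has cooled.
import math, random

def accept(delta, T):
    if delta <= 0:              # improving move: always accept
        return True
    return random.random() < math.exp(-delta / T)   # uphill move: Metropolis rule

# at a high temperature a cost increase of 1 is accepted about 90% of the time,
# at a low temperature almost never
random.seed(0)
hot = sum(accept(1.0, 10.0) for _ in range(1000)) / 1000
cold = sum(accept(1.0, 0.1) for _ in range(1000)) / 1000
print(hot, cold)
```

With delta = 1, exp(−1/10) ≈ 0.905 while exp(−1/0.1) ≈ 4.5·10⁻⁵, which is exactly the behavior the two steps above rely on.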
The nature of the basic PSO is the use of the personal and global best values to guide the position of the next iteration, which gives fast convergence in the early iterations. However, after several iterations the current solution, the personal best value, and the global best value tend to become the same, which leads to the loss of population diversity and traps the solution in local minima. In order to avoid this phenomenon, inspired by the SA algorithm, we redesign the algorithm's framework: when the basic PSO converges to a solution
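One possible realization of this redesigned framework is sketched below. This is a guess at the control flow only, not the authors' exact implementation: the stagnation counter, the cooling schedule, and all parameter values are assumptions made for illustration.

```python
# Hypothetical skeleton of the hybrid PSO-SA framework: run a PSO-style search
# until it stagnates, then hand the best solution to an SA phase that can
# accept worse candidates and escape the local minimum.
import math, random

def hybrid(step_pso, perturb, cost, x0, T0=100.0, alpha=0.95,
           stall_limit=20, rounds=50):
    best = x0
    stalled = 0
    for _ in range(rounds):
        candidate = step_pso(best)            # one PSO-style iteration (assumed)
        if cost(candidate) < cost(best):
            best, stalled = candidate, 0
        else:
            stalled += 1
        if stalled >= stall_limit:            # search has stagnated: SA phase
            x, T = best, T0
            while T > 1e-3:
                y = perturb(x)                # stochastic perturbation
                delta = cost(y) - cost(x)
                if delta < 0 or random.random() < math.exp(-delta / T):
                    x = y                     # Metropolis acceptance
                T *= alpha                    # cool down slowly
            if cost(x) < cost(best):
                best = x
            stalled = 0
    return best

# toy 1-D demonstration (illustrative objective, not the C-TSP)
random.seed(0)
cost = lambda x: (x - 3.0) ** 2
step = lambda x: x + random.uniform(-0.5, 0.5)
print(hybrid(step, step, cost, x0=10.0))
```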
2.3. The C-TSP application and results analysis
The TSP is a well-known NP-hard combinatorial optimization problem and is often used to verify the superiority of intelligent heuristic algorithms. The mathematical description of the TSP is: given a list of n cities in order of visiting as
The Chinese Traveling Salesman Problem (C-TSP) is a typical symmetric TSP. Its simple description is: given a list of 31 Chinese capital cities and their pairwise distances, the task is to find the shortest possible tour that visits each city exactly once. The C-TSP is a medium-scale TSP, which has (31 − 1)!/2 ≈ 1.33 × 10^32 possible routes.
The algorithms for solving the TSP are divided into two categories: exact algorithms and approximation algorithms. The frequently used exact algorithms include branch and bound algorithms, linear programming, and the exhaustion method; the running time of this approach lies within a polynomial factor of O(2^n).
The proposed hybrid algorithm is applied to the C-TSP. At the same time, the basic PSO, SA, and PSO with mutation are also applied for comparison. The programming language is Matlab 7.0, running on Windows XP with an Intel Core2 Duo 2.10 GHz CPU. We use the discrete PSO (DPSO) proposed by Huang et al. (2003) as the basic PSO; the parameter settings are as follows: particle number
One can see from Table 1 that the result obtained by DPSO is not satisfactory. It is unable to find the optimal solution of 15404, and its average solution value is also far from the optimal solution, because the current solution
Table 1. Comparison of the results of the four algorithms for the C-TSP.
Algorithm | Worst | Best | Average | Optimal rate |
DPSO | 20152 | 16665 | 18035.3 | 0 |
PSO With Mutation | 16194 | 15404 | 15662.3 | 0.1 |
SA | 15606 | 15404 | 15467.8 | 0.15 |
Hybrid Algorithm | 15587 | 15404 | 15453.4 | 0.2 |
From Figure 1 it is clear that SA converges to the optimal solution in about 500 iterations. Figure 2 indicates that the PSO with mutation algorithm oscillates in the early iterations and converges to the optimal solution in about 1000 iterations. From Figure 3 one can see that the hybrid algorithm converges to the optimal solution in about 100 iterations. Not only is the problem of local optimal solutions solved, but the convergence speed is also greatly increased. Figure 4 shows the optimal route obtained by the hybrid algorithm for the C-TSP. The optimal route is: Beijing - Harbin - Changchun - Shenyang - Tianjin - Jinan - Hefei - Nanjing - Shanghai - Hangzhou - Taipei - Fuzhou - Nanchang - Wuhan - Changsha - Guangzhou - Haikou - Nanning - Guiyang - Kunming - Chengdu - Lhasa - Urumchi - Xining - Lanzhou - Yinchuan - Xian - Zhengzhou - Shijiazhuang - Taiyuan - Hohhot - Beijing. The total length of the optimal route is 15404 km.
3. Ant colony optimization algorithms and their improvements to the C-TSP application
3.1. Ant colony optimization algorithms
Artificial ants of the ACO algorithms build solutions by performing random walks, governed by certain stochastic rules, on a completely connected graph
It is important to note that the artificial ants move in parallel and independently. Although each ant is complex enough to find a (probably poor) solution to the problem under consideration, good-quality solutions can only emerge as the result of the collective interaction among the ants. This collaborative interaction is obtained via indirect communication, mediated by the information the ants read or write in the variables storing the pheromone trail values. To some extent, this is a distributed learning process in which the single ant is not adaptive itself but, on the contrary, adaptively modifies the way the problem is represented and perceived by the other ants. Informally, an ACO algorithm can be imagined as the interplay of three procedures (Dorigo & Stützle, 2004): Construction of Ants Solutions, Pheromone Updating, and Daemon Actions.
1. Construction of Ants Solutions manages a colony of ants that concurrently and asynchronously visit adjacent states of the problem considered by moving through neighbor nodes of the problem's construction graph
2. Pheromone Updating is the procedure in which the pheromone trails are modified. The trail values either increase, as ants deposit pheromone on the components or connections they use, or decrease, due to pheromone evaporation. From a practical point of view, the deposit of new pheromone increases the probability of choosing those components/connections that are used by either many ants or at least one ant producing a very good solution. Conversely, pheromone evaporation implements a form of forgetting: it avoids a too-rapid convergence toward a sub-optimal region and thus favors the exploration of new search areas.
3. The Daemon Actions procedure is used to centralize the actions that cannot be performed by single ants. Examples of daemon actions are the activation of a local optimization procedure, or the collection of global information used to decide whether or not it is useful to deposit additional pheromone to bias the search process.
These three procedures should interact and take into account the characteristics of the problem considered. The TSP can be represented by a complete weighted graph
In the TSP,
The ACO can be applied to the TSP in a straightforward way. The construction graph
3.2. Ant System (AS)
The Ant System was created by Marco Dorigo and colleagues in 1991 (Dorigo & Stützle, 2004; Dorigo et al., 1991; Colorni et al., 1992; Dorigo, 1992), and it is the first algorithm in line with the ACO framework. Initially, three different versions of AS were proposed, called ant-density, ant-quantity, and ant-cycle.
3.2.1. Tour construction
In AS, an ant k located at city i chooses the next city j to visit according to the probability

p_ij^k = [τ_ij]^α [η_ij]^β / Σ_{l ∈ N_i^k} [τ_il]^α [η_il]^β,   if j ∈ N_i^k,

where η_ij = 1/d_ij is a heuristic value that is available a priori, α and β are two parameters which determine the relative influence of the pheromone trail and the heuristic information, and N_i^k is the feasible neighborhood of ant k at city i, that is, the set of cities that ant k has not visited yet.
By this probabilistic rule, the probability of choosing a particular arc (i, j) increases with the value of the associated pheromone trail τ_ij and of the heuristic information value η_ij.
3.2.2. Pheromone trails updating
After all the m ants have constructed their tours, the pheromone trails are updated. First, the pheromone on all arcs is lowered by evaporation:

τ_ij ← (1 − ρ) τ_ij.

The parameter ρ (0 < ρ ≤ 1) is the pheromone evaporation rate; it prevents the unlimited accumulation of the pheromone trails and enables the algorithm to forget bad decisions previously taken. After pheromone evaporation, all the m ants deposit pheromone on the arcs they have crossed in their tours:

τ_ij ← τ_ij + Σ_{k=1}^{m} Δτ_ij^k,

where Δτ_ij^k = 1/C^k if arc (i, j) belongs to the tour of ant k and Δτ_ij^k = 0 otherwise, and where C^k is the length of the tour constructed by ant k.
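The random-proportional choice rule and the two-phase pheromone update above can be sketched as follows. The matrix representation, the default parameter values, and the function names are illustrative choices, not the chapter's Matlab code.

```python
# Sketch of the two AS mechanisms: roulette-wheel city selection driven by
# tau^alpha * eta^beta, and evaporation followed by per-ant pheromone deposit.
import random

def choose_next(i, allowed, tau, eta, alpha=1.0, beta=2.0):
    # random-proportional rule: p_ij ~ tau_ij^alpha * eta_ij^beta over allowed cities
    weights = [tau[i][j] ** alpha * eta[i][j] ** beta for j in allowed]
    r = random.uniform(0, sum(weights))
    acc = 0.0
    for j, w in zip(allowed, weights):
        acc += w
        if r <= acc:
            return j
    return allowed[-1]           # guard against floating-point rounding

def update_pheromone(tau, tours, lengths, rho=0.5):
    n = len(tau)
    for i in range(n):           # evaporation: tau <- (1 - rho) * tau
        for j in range(n):
            tau[i][j] *= (1.0 - rho)
    for tour, C in zip(tours, lengths):   # deposit: each ant adds 1/C^k on its arcs
        for a, b in zip(tour, tour[1:] + tour[:1]):
            tau[a][b] += 1.0 / C
            tau[b][a] += 1.0 / C          # symmetric TSP: update both directions

# tiny 3-city demonstration
tau = [[1.0] * 3 for _ in range(3)]
eta = [[1.0] * 3 for _ in range(3)]
update_pheromone(tau, [[0, 1, 2]], [3.0])
```

Shorter tours deposit more pheromone per arc (1/C is larger), which is what biases subsequent choices toward good solutions.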
3.3. Elitist Ant System (EAS)
The Elitist Ant System is the first improvement on the initial AS, called the elitist strategy for Ant System (EAS) (Dorigo & Stützle, 2004; Dorigo et al., 1991; Colorni et al., 1992; Dorigo, 1992). EAS gives the elite label to the ant that has constructed the best-so-far solution, and the best-so-far path receives an additional pheromone reinforcement in every iteration.
In tour construction, the method in EAS is the same as in AS. In pheromone updating, the pheromone evaporation formula is the same as in Eq. (3.4). The additional reinforcement of the best-so-far tour T^bs is achieved by adding the quantity e/C^bs to its arcs:

τ_ij ← τ_ij + e Δτ_ij^bs,   with Δτ_ij^bs = 1/C^bs for every arc (i, j) ∈ T^bs,

where e is a parameter that determines the weight given to the best-so-far tour T^bs, and C^bs is its length.
The key is that the EAS adopts a daemon action, namely the additional incentive for the elite ant. Although this operation belongs to the pheromone updating step, it is a kind of additional guidance for the overall operation. One can imagine it like this: after all the ants, including the elite ant, have deposited pheromone on the arcs they crossed in their tours, the elite ant deposits an extra amount of pheromone on the arcs of the best-so-far tour.
Compared with EAS, the pheromone updating mechanism in AS is indeed weak. Sometimes the optimal path differs only very slightly from paths that are not so satisfactory, and the mechanism in AS cannot distinguish them well. This is because of the simple form of the pheromone depositing formula, which gives all ants the same weight for depositing pheromone; usually, the resulting ability of the algorithm to explore the overall optimal solution is not sufficient. In EAS, the parameter e determines the weight given to the best-so-far tour: whenever the best-so-far solution is improved in the course of the search, the algorithm attempts to search for a better solution around it. Seen from the other side, EAS concentrates on a smaller search space compressed from the original one, and such a smaller search space may contain better solutions. This mechanism increases the probability of finding the overall optimal solution and at the same time speeds up convergence. Experiments later in this chapter show that the parameter e needs to be set to a reasonable value: an appropriate value of e allows EAS both to find better tours and to need a lower number of iterations. If e is too small, the elitist strategy has little effect because better ants are hardly distinguished; if e is too large, the algorithm depends too much on the initial best-so-far tour and converges rapidly to a small number of local optimal solutions, which weakens the algorithm's ability of exploration.
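A minimal sketch of the elitist reinforcement described above, assuming the pheromone is stored as a symmetric matrix; the function name and the default weight are illustrative.

```python
# Sketch of the EAS daemon action: on top of the normal AS deposit, the arcs
# of the best-so-far tour receive an extra amount e / C_bs, where e is the
# elitist weight discussed in the text.
def elitist_deposit(tau, best_tour, best_length, e=5.0):
    for a, b in zip(best_tour, best_tour[1:] + best_tour[:1]):
        tau[a][b] += e / best_length
        tau[b][a] += e / best_length    # symmetric TSP

# tiny demonstration on a 3-city pheromone matrix
tau = [[0.0] * 3 for _ in range(3)]
elitist_deposit(tau, [0, 1, 2], 10.0, e=5.0)
```

Raising e strengthens this extra deposit and makes the search greedier, which mirrors the trade-off on e discussed above.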
3.4. Rank-Based Ant System (ASrank)
Another improvement over AS is the rank-based version of AS (Dorigo & Stützle, 2004; Bullnheimer et al., 1997): ASrank. In ASrank, before updating the pheromone trails, the ants are sorted by increasing tour length, and the quantity of pheromone deposited by an ant is weighted according to the rank of the ant. In each iteration, only the (w − 1) best-ranked ants and the ant that produced the best-so-far tour (this ant does not necessarily belong to the set of ants of the current iteration) are allowed to deposit pheromone. The best-so-far tour gives the strongest feedback, with weight w; the r-th best ant of the current iteration contributes to the pheromone updating with weight w − r.
In tour construction, the method in ASrank is the same as in AS. In pheromone updating, the pheromone evaporation formula is the same as in Eq. (8). The ASrank pheromone update rule is:

τ_ij ← τ_ij + Σ_{r=1}^{w−1} (w − r) Δτ_ij^r + w Δτ_ij^bs,

where Δτ_ij^r = 1/C^r if arc (i, j) belongs to the tour of the r-th ranked ant of the current iteration (C^r being the length of that tour) and 0 otherwise, and Δτ_ij^bs = 1/C^bs for the arcs of the best-so-far tour.
Compared with AS, ASrank selects w ants to deposit pheromone according to the rank of the quality of their solutions, which is a new, improved formula; it completely abolishes the pheromone deposit of the remaining ants.
ASrank and EAS differ in the reward strategy for elitist ants. EAS only gives the best-so-far solution an additional incentive; although it can find good solutions, it indirectly weakens or even abandons the second-best solutions, whose neighborhoods may contain better solutions. In ASrank, the algorithm rewards a group of elitist ants, with the reward given according to the rank of the solutions. On the one hand, this mechanism takes into account the importance of the best-so-far solution, which ensures that the experience of the leading elitist ant is retained; on the other hand, it considers the neighborhoods of the sub-optimal solutions, which increases the ability to explore the optimal solution.
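The rank-based update can be sketched as follows; again, the matrix representation and default w are illustrative assumptions.

```python
# Sketch of the ASrank update: sort the iteration's ants by tour length, let
# only the (w - 1) best plus the best-so-far ant deposit, weight the r-th
# ranked ant by (w - r), and weight the best-so-far tour by w.
def rank_based_deposit(tau, tours, lengths, best_tour, best_length, w=6):
    ranked = sorted(zip(lengths, tours))            # ascending tour length
    for r, (C, tour) in enumerate(ranked[:w - 1], start=1):
        for a, b in zip(tour, tour[1:] + tour[:1]):
            tau[a][b] += (w - r) / C                # r-th best ant: weight (w - r)
            tau[b][a] += (w - r) / C
    for a, b in zip(best_tour, best_tour[1:] + best_tour[:1]):
        tau[a][b] += w / best_length                # best-so-far tour: weight w
        tau[b][a] += w / best_length

# tiny demonstration with a single iteration ant on 3 cities
tau = [[0.0] * 3 for _ in range(3)]
rank_based_deposit(tau, [[0, 1, 2]], [10.0], [0, 1, 2], 10.0, w=6)
```

Note how the best-so-far tour always receives the largest weight w, while lower-ranked ants receive linearly decreasing weights, exactly the graded reward contrasted with EAS above.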
3.5. MAX-MIN Ant System (MMAS)
The Max-Min Ant System is a constructive amendment of AS (Dorigo & Stützle, 2004; Stützle & Hoos, 1996; Stützle & Hoos, 1997; Stützle & Hoos, 2000), and it is one of the best ACO algorithms. There are four main modifications with respect to AS:
1. It strongly exploits the best tours found: only either the iteration-best ant, that is, the ant that produced the best tour in the current iteration, or the best-so-far ant is allowed to deposit pheromone.
2. It limits the possible range of pheromone trail values to the interval [τ_min, τ_max].
3. The pheromone trails are initialized to the upper pheromone trail limit, together with a small pheromone evaporation rate.
4. Pheromone trails are reinitialized when the system approaches stagnation or when no improved tour has been generated for a certain number of consecutive iterations.
The first point strongly exploits the best tours found, but may lead to a stagnation situation in which all the ants follow the same tour. To counteract this effect, the second modification is introduced by MMAS: it ensures that the pheromone on each arc neither accumulates too much nor decays too much, so that the sustainability of exploration is guaranteed. The third point gives MMAS a stronger ability to explore at the initial stage. The fourth point introduces a new mechanism that can be applied to all ACO algorithms: through the restoration of the initial pheromone and a renewed search, it increases the possibility of finding the optimal solution.
Since only one ant, either the iteration-best or the best-so-far ant, is allowed to deposit pheromone, the pheromone depositing formula is very concise:

τ_ij ← τ_ij + Δτ_ij^best,   with Δτ_ij^best = 1/C^best,

where C^best is the length of the tour of the ant allowed to deposit, that is, of either the iteration-best ant or the best-so-far ant.
In general, in MMAS implementations both the iteration-best and the best-so-far update rules are used, in alternation. Obviously, the relative frequency with which the two pheromone update rules are applied influences how greedy the search is: when the pheromone update is always performed by the best-so-far ant, the search focuses very quickly around the best-so-far solution, whereas when it is the iteration-best ant that updates the pheromone, the number of arcs that receive pheromone is larger and the search is less directed.
In MMAS, there are two default daemon actions: one is the enforcement of the pheromone trail limits, and the other is the pheromone trail re-initialization. Lower and upper limits τ_min and τ_max are imposed on all pheromone trail values, and whenever the search stagnates the trails are re-initialized to τ_max.
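The two daemon actions can be sketched in a few lines; the function names and matrix representation are illustrative.

```python
# Sketch of the two MMAS daemon actions: clamping every trail value into
# [tau_min, tau_max], and re-initializing all trails to tau_max when the
# search stagnates.
def clamp_trails(tau, tau_min, tau_max):
    n = len(tau)
    for i in range(n):
        for j in range(n):
            tau[i][j] = min(tau_max, max(tau_min, tau[i][j]))

def reinitialize(tau, tau_max):
    n = len(tau)
    for i in range(n):
        for j in range(n):
            tau[i][j] = tau_max     # restart exploration from the upper limit

# tiny demonstration
tau = [[0.001, 50.0], [5.0, 0.2]]
clamp_trails(tau, 0.01, 10.0)
```

Because no trail can fall below τ_min, every arc keeps a nonzero selection probability, which is precisely how the limits sustain exploration.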
3.6. Ant Colony System (ACS)
The Ant Colony System (ACS) is an extension of AS (Dorigo & Stützle, 2004; Dorigo & Gambardella, 1997). The ACS exploits the search experience accumulated by the ants more strongly than AS does, through the so-called pseudorandom proportional rule: when located at city i, ant k chooses the next city j by

j = arg max_{l ∈ N_i^k} { τ_il [η_il]^β }   if q ≤ q0,

and otherwise according to the random proportional rule of AS,
where q is a random variable uniformly distributed in [0,1], and q0 (0 ≤ q0 ≤ 1) is a parameter that determines the relative importance of exploitation versus exploration.
This pseudorandom proportional rule offers strong operational control, because the parameter q0 can be set to guide the algorithm's preference: by tuning q0, one can modify the degree of exploration and choose whether to concentrate the search around the best-so-far solution or to explore other tours.
In ACS, only the best-so-far ant is allowed to update the pheromone after each iteration; this global update includes both the pheromone deposit and the pheromone evaporation. In addition, each time an ant uses an arc to move from city i to city j, it performs a local pheromone update, which removes some pheromone from the arc in order to increase the exploration of alternative paths. The global pheromone trail update is described by the following equation:
τ_ij ← (1 − ρ) τ_ij + ρ Δτ_ij^bs,   for all arcs (i, j) of the best-so-far tour,

where Δτ_ij^bs = 1/C^bs and C^bs is the length of the best-so-far tour.
The local pheromone trail updating is described as the following equation:
τ_ij ← (1 − ε) τ_ij + ε τ_0,

where ε (0 < ε < 1) and τ_0 are two parameters, with τ_0 set equal to the initial value of the pheromone trails.
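Both ACS rules can be sketched as follows; the matrix representation and default parameter values are illustrative assumptions.

```python
# Sketch of the ACS rules: with probability q0 the ant greedily takes the arc
# maximizing tau_ij * eta_ij^beta (exploitation), otherwise it falls back to
# the AS random-proportional rule (exploration); each traversed arc is then
# locally evaporated toward tau0.
import random

def acs_choose(i, allowed, tau, eta, beta=2.0, q0=0.9):
    if random.random() <= q0:     # exploitation: deterministic greedy choice
        return max(allowed, key=lambda j: tau[i][j] * eta[i][j] ** beta)
    weights = [tau[i][j] * eta[i][j] ** beta for j in allowed]  # exploration
    r = random.uniform(0, sum(weights))
    acc = 0.0
    for j, w in zip(allowed, weights):
        acc += w
        if r <= acc:
            return j
    return allowed[-1]

def local_update(tau, i, j, tau0, eps=0.1):
    tau[i][j] = (1 - eps) * tau[i][j] + eps * tau0   # local evaporation toward tau0
    tau[j][i] = tau[i][j]                            # symmetric TSP
```

Because the local update pulls a traversed arc back toward τ_0, an arc becomes less attractive to the ants that follow, which is what pushes them onto alternative paths within the same iteration.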
3.7. Application of five ACO algorithms in C-TSP and results analysis
In order to demonstrate and verify the properties of the five ACO algorithms mentioned above, in this section we apply them to the Chinese TSP with 31 capital cities and compare their advantages and disadvantages on this medium-scale discrete optimization problem.
The important parameters of Ant System are as follows.
A candidate list is built first to restrict the number of available choices considered at each construction step. In general, candidate lists contain a number of the best-rated choices according to some heuristic criterion. First, configure for each city a nearest-neighbor list which records the other cities sorted in ascending order of distance; second, build the candidate list for each city from the nearest entries of this list, and set the parameter
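The candidate-list construction described above can be sketched as follows; the parameter name `cl` for the list length is an assumption made here, since the original parameter name is not preserved in the text.

```python
# Sketch of the candidate-list construction: for each city, sort all other
# cities in ascending order of distance and keep only the cl nearest ones.
def build_candidate_lists(dist, cl=5):
    n = len(dist)
    lists = []
    for i in range(n):
        # nearest-neighbor list: the other cities in ascending order of distance
        neighbors = sorted((j for j in range(n) if j != i),
                           key=lambda j: dist[i][j])
        lists.append(neighbors[:cl])     # truncate to the cl nearest cities
    return lists

# toy example: 4 cities on a line at coordinates 0, 1, 2, 10
d = [[abs(a - b) for b in (0, 1, 2, 10)] for a in (0, 1, 2, 10)]
print(build_candidate_lists(d, cl=2))    # e.g. city 0 -> [1, 2]
```

At each construction step an ant then considers only the unvisited cities in the current city's candidate list, falling back to the full neighborhood only when the list is exhausted; this is what produces the large running-time reductions reported for the "_C" variants in Table 2.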
The steps of the Ant System (AS) algorithm are as follows (in the case of no candidate lists):
Step 1. Enter an actual TSP instance, get the scale n of the problem, and transform the instance into a symmetric distance matrix distance (set the diagonal elements to small values to prevent NaN situations).
Step 2. Initialize all the parameters, including m, α, β, ρ, q,
Step 3. Initialize the storage variables, including the best-so-far solution
Step 4. Begin the loop, and set the iteration number nc = nc + 1.
Step 5. The starting cities of the ants are randomly distributed as begin_city = randperm (
Step 6. Increase the ant walking step counter.
Step 7. Increase the artificial ants' label k.
Step 8. Choose the next city in accordance with the probability formula, taboo the selected city in the ant taboo list tabu(k, :), and add the chosen city to the ant path sequence
Step 9. If k < m, then turn to Step 7; otherwise turn to Step 10.
Step 10. According to the pheromone updating formula of AS, update the pheromone trails τ, and update the optimal solution as the best-so-far solution nowbest_opt and the optimal path as the best-so-far tour nowbest_path, which includes the cities of the best-so-far tour.
Step 11. If the iteration number nc < NC, then turn to Step 4; otherwise end the operation and export the data and image results.
Remark: The steps described above are just for the AS algorithm. The improved algorithms mentioned above require some changes, mainly in the steps of parameter setting, solution construction, and pheromone updating; in the process of building a solution, they use a parallel mechanism. To add the candidate lists to the algorithm, one first needs to add a step to one of the first two steps of the algorithm: list the neighbor cities in ascending order of distance, configure the nearest-neighbor list for each city, and set the parameter
Table 2 summarizes the experimental results of the five ACO algorithms, including the variants with the mechanisms of the candidate list and pheromone re-initialization. Each system runs 100 times. In Table 2, an algorithm marked with "_C" uses the candidate-list mechanism, and an algorithm marked with "_R" uses the pheromone re-initialization mechanism. "Best" and "Worst" are the best and worst of the 100 solutions; "Opt. rate" is the rate of reaching the global optimal solution; "Average" is the average of all solutions; "Average converg." is the average convergence iteration number; and "Average time" is the average running time over the 100 runs. The relative error is defined as

Relative error = (Average − C*) / C* × 100%,

where C* = 15404 is the global optimal solution of the C-TSP.
Table 2. Experimental results of the five ACO algorithms and their variants on the C-TSP (100 runs).
Algorithm | Best | Worst | Opt. rate | Average | Relative error | Average converg. | Itera. No. | Average time |
AS | 15420 | 15669 | 0 | 15569.05 | 1.071% | 1405.95 | 3000 | 12.221 |
AS_C | 15420 | 15620 | 0 | 15548.3 | 0.937% | 1380.11 | 3000 | 6.044 |
EAS | 15404 | 15625 | 48% | 15447.4 | 0.282% | 1620.48 | 4000 | 16.409 |
EAS_C | 15404 | 15593 | 52% | 15437.62 | 0.218% | 1606.95 | 4000 | 7.954 |
ASrank | 15404 | 15593 | 63% | 15413.74 | 0.063% | 1857.19 | 4000 | 16.662 |
ASrank_C | 15404 | 15520 | 65% | 15408.05 | 0.026% | 1747.07 | 4000 | 8.349 |
MMAS | 15404 | 15593 | 55% | 15428.54 | 0.159% | 2371.96 | 5000 | 20.386 |
MMAS_C | 15404 | 15593 | 57% | 15424.32 | 0.132% | 2166.56 | 5000 | 10.384 |
MMAS_R | 15404 | 15593 | 68% | 15418.48 | 0.094% | 2984.5 | 8000 | 23.951 |
MMAS_C_R | 15404 | 15520 | 73% | 15418.75 | 0.096% | 2655.68 | 8000 | 10.799 |
ACS | 15404 | 15779 | 40% | 15442.42 | 0.249% | 2889.67 | 10000 | 5.546 |
ACS_C | 15404 | 15745 | 40% | 15445.51 | 0.269% | 2708.9 | 10000 | 4.817 |
One can obtain from Table 2 the following results:
1. Among all 12 tested algorithms, the "Best" column shows that, except for AS and AS_C (AS with the candidate-list mechanism), the other 10 algorithms can find the global optimal solution 15404.
2. Over all the algorithms, the "Opt. rate" column shows that the max-min ant system with both the candidate-list and pheromone re-initialization mechanisms, MMAS_C_R, has the highest optimal rate, 73%, followed by MMAS_R. Leaving aside AS and AS_C, ACS and its candidate-list variant ACS_C have the worst performance, 40%.
3. Comparing the five algorithms without any extra mechanism, the "Opt. rate" column shows that the rank-based version of AS, ASrank, has the highest optimal rate, 63%, followed by MMAS, EAS, ACS, and AS.
4. Comparing each of the five algorithms without mechanisms to its variant with the candidate-list mechanism, that is, comparing XXX with XXX_C, the optimal rate and the average running time show that, except for ACS versus ACS_C where the difference is not obvious, the candidate-list mechanism slightly improves the performance of the algorithms and greatly reduces the running time.
5. Comparing the four cases of MMAS, that is, MMAS, MMAS_C, MMAS_R and MMAS_C_R, the optimal rate shows that each mechanism improves the performance of the algorithm, and the max-min ant system with only the pheromone re-initialization mechanism performs better than the one with only the candidate-list mechanism. Of course, the max-min ant system with both mechanisms at the same time has the best performance.
4. Conclusions
This chapter has analyzed the characteristics of the SA and PSO algorithms for solving the C-TSP. By combining the fast convergence speed of PSO with the good local search ability of SA, a hybrid algorithm has been proposed. Numerical simulations show that the proposed algorithm is more efficient on the C-TSP than PSO or SA alone. Generally speaking, when the ACO algorithms are applied to the TSP instance C-TSP, the elitist strategy for ant system, the rank-based version of AS, the max-min ant system, and the ant colony system show better performance: they find the global optimal solution in a certain percentage of runs, whereas the ant system fails to find the global optimal solution. Among all these systems, the max-min ant system with the candidate-list and pheromone re-initialization mechanisms added shows the best performance on the C-TSP.
Acknowledgments
This work was supported by the National Natural Science Foundation of China under Grant No. 60774098.
References
- 1. Bullnheimer, B., Hartl, R. F. & Strauss, C. (1997). A new rank based version of the Ant System: A computational study. Central European Journal for Operations Research and Economics, 7(1), 25-38.
- 2. Colorni, A., Dorigo, M. & Maniezzo, V. (1992). Distributed optimization by ant colonies. In F. J. Varela & P. Bourgine (Eds.), Proceedings of the First European Conference on Artificial Life, pp. 134-142. Cambridge, MA: MIT Press.
- 3. Dorigo, M. & Gambardella, L. M. (1997). Ant colonies for the traveling salesman problem. BioSystems, 43(2), 73-81.
- 4. Dorigo, M. & Gambardella, L. M. (1997). Ant Colony System: A cooperative learning approach to the traveling salesman problem. IEEE Transactions on Evolutionary Computation, 1(1), 53-66.
- 5. Dorigo, M. (1992). Optimization, Learning and Natural Algorithms [in Italian]. PhD thesis, Politecnico di Milano, Milan.
- 6. Dorigo, M. & Stützle, T. (2004). Ant Colony Optimization. MIT Press, ISBN 978-0-262-04219-2.
- 7. Dorigo, M., Maniezzo, V. & Colorni, A. (1991). Positive feedback as a search strategy. Technical Report 91-016, Politecnico di Milano, Milan.
- 8. Eberhart, R. C. & Kennedy, J. (1995). A New Optimizer Using Particle Swarm Theory. Proceedings of the Sixth International Symposium on Micro Machine and Human Science, pp. 39-43, ISBN 0-7803-2676-8, Nagoya, Japan, Oct. 4-6.
- 9. Huang, L., Wang, K. P. & Zhou, C. G. (2003). Particle Swarm Optimization for Traveling Salesman Problems. 41(10), 477-480.
- 10. Kang, L. H. & Xie, Y. (1998). Non-numerical Parallel Algorithms: Simulated Annealing Algorithm. Science Press, ISBN 703003736, Beijing.
- 11. Li, L. L., Zhu, Z. K. & Wang, W. F. (2008). A Reinforced Self-Escape Discrete Particle Swarm Optimization for TSP. pp. 467-470, ISBN 978-0-7695-3334-6, Jinzhou, China, Sept. 25-26.
- 12. Lovbjerg, M., Rasmussen, T. K. & Krink, T. (2001). Hybrid Particle Swarm Optimizers with Breeding and Subpopulations. pp. 469-476, San Francisco, USA.
- 13. Stützle, T. & Hoos, H. H. (1996). Improving the Ant System: A detailed report on the MAX-MIN Ant System. Technical Report AIDA-96-12, FG Intellektik, FB Informatik, TU Darmstadt, Germany.
- 14. Stützle, T. & Hoos, H. H. (2000). MAX-MIN Ant System. Future Generation Computer Systems, 16(8), 889-914.
- 15. Stützle, T. & Hoos, H. H. (1997). The MAX-MIN Ant System and local search for the traveling salesman problem. In T. Bäck, Z. Michalewicz & X. Yao (Eds.), pp. 309-314. Piscataway, NJ: IEEE Press.
- 16. Xiao, J. M., Li, J. J. & Wang, X. H. (2004). A Modified Particle Swarm Optimization for Traveling Salesman Problems. 40(35), 50-52.
- 17. Xu, Y. H., Wang, Q. W. et al. (2008). An Improved Discrete Particle Swarm Optimization Based on Cooperative Swarms. pp. 79-82, ISBN 978-0-7695-3496-1, Sydney, Australia, Dec. 9-12.
- 18. Yuan, Z. L., Yang, L. L. & Liao, L. (2007). Chaotic Particle Swarm Optimization Algorithm for Traveling Salesman Problem. pp. 1121-1124, ISBN 978-1-4244-1531-1, Jinan, China, Aug. 18-21.
- 19. Zhong, W. L., Zhang, J. & Chen, W. N. (2007). A Novel Discrete Particle Swarm Optimization to Solve Traveling Salesman Problem. pp. 3283-3287, ISBN 978-1-4244-1339-3, Singapore, Sept. 25-28.