Evolutionary Algorithm based on the Automata Theory for the Multi-objective Optimization of Combinatorial Problems

This paper presents a novel Evolutionary Metaheuristic Based on Automata Theory (EMODS) for the multi-objective optimization of combinatorial problems. The proposed algorithm uses natural selection theory to explore the feasible solution space of a combinatorial problem; because of this, local optima are often avoided. EMODS also exploits the optimization process of the Metaheuristic of Deterministic Swapping to avoid unfeasible solutions. The proposed algorithm was tested using well-known multi-objective TSP instances from the TSPLIB, and its results were compared against other Automata Theory inspired algorithms using metrics from the specialized literature. In every case the EMODS results on the metrics were better, and in some cases the distance from the true solutions was 0.89%.


Introduction
In this chapter we study metaheuristics based on the Automata Theory for the multi-objective optimization of combinatorial problems. As is well known, Combinatorial Optimization is a branch of optimization. Its domain is optimization problems where the set of feasible solutions is discrete or can be reduced to a discrete one, and the goal is to find the best possible solution (Yong-Fa & Ming-Yang, 2004). In this field there are many problems classified as NP-Hard, meaning that no polynomial-time algorithm is known to solve them. For instance, problems such as the Multi-depot Vehicle Routing Problem (Lim & Wang, 2005), the Delivery and Pickup Vehicle Routing Problem with Time Windows (Wang & Lang, 2008), the Multi-depot Vehicle Routing Problem with Weight-related Costs (Fung et al., 2009), the Railway Traveling Salesman Problem (Hu & Raidl, 2008), the Heterogeneous, Multiple Depot, Multiple Traveling Salesman Problem (Oberlin et al., 2009) and the Traveling Salesman Problem with Multi-agent (Wang & Xu, 2009) are categorized as NP-Hard problems.
One of the most classical problems in the Combinatorial Optimization field is the Traveling Salesman Problem (TSP); it has been analyzed for years (Sauer & Coelho, 2008) in both mono- and multi-objective versions. It is defined as follows: "Given a set of cities and a departure city, visit each city only once and go back to the departure city with the minimum cost". In other words, the goal is to find an optimal tour that visits each city exactly once; an instance of the TSP can be seen in figure 1. Formally, the TSP is defined as follows:

min ∑_{i=1}^{n} ∑_{j=1}^{n} C_ij X_ij (1)

Subject to:

∑_{j=1}^{n} X_ij = 1, ∀i = 1,...,n (2)

∑_{i=1}^{n} X_ij = 1, ∀j = 1,...,n (3)

∑_{i∈κ} ∑_{j∈κ} X_ij ≤ |κ| − 1, ∀κ ⊂ {1,...,n} (4)

X_ij ∈ {0, 1}, ∀i, j = 1,...,n (5)

Where C_ij is the cost of the arc X_ij and κ is any nonempty proper subset of the cities 1,...,n.
(1) is the objective function; the goal is to minimize the overall cost of the tour.
(2), (3) and (5) fulfill the constraint of visiting each city exactly once. Lastly, equation (4) eliminates subtours, avoiding cycles that do not cover all the cities.

Fig. 1. TSP instance of ten cities
TSP has an important impact on different sciences and fields, for instance Operations Research and Theoretical Computer Science; many problems in those fields are based on the TSP definition. For instance, problems such as Heterogeneous Machine Scheduling (Kim & Lee, 1998), Hybrid Scheduling and Dual Queue Scheduling (Shah et al., 2009), Project Management (de Pablo, 2009), Scheduling for Multichannel EPONs (McGarry et al., 2008), Single Machine Scheduling (Chunyue et al., 2009), Distributed Scheduling Systems (Yu et al., 1999), Relaxing Scheduling Loop Constraints (Kim & Lipasti, 2003), Distributed Parallel Scheduling, Scheduling for Grids (Huang et al., 2010), Parallel Scheduling for Dependent Task Graphs (Mingsheng et al., 2003), Dynamic Scheduling on Multiprocessor Architectures (Hamidzadeh & Atif, 1996), Advanced Planning and Scheduling Systems (Chua et al., 2006), Tasks and Messages in Distributed Real-Time Systems (Manimaran et al., 1997), Production Scheduling (You-xin et al., 2009), Cellular Networks for Quality of Service Assurance (Wu & Negi, 2003), Net Based Scheduling (Wei et al., 2007), the Spring Scheduling Co-processor (Niehaus et al., 1993), Multiple-resource Periodic Scheduling (Zhu et al., 2003), Real-Time Query Scheduling for Wireless Sensor Networks (Chipara et al., 2007), Multimedia Computing with Real-time Constraints (Chen et al., 2003), Pattern Driven Dynamic Scheduling (Yingzi et al., 2009), Security-assured Grid Job Scheduling (Song et al., 2006), Cost Reduction and Customer Satisfaction (Grobler & Engelbrecht, 2007), MPEG-2 TS Multiplexers in CATV Networks (Jianghong et al., 2000), Contention Awareness (Shanmugapriya et al., 2009) and Hard Scheduling Optimization (Niño, Ardila, Perez & Donoso, 2010) have been derived from the TSP. Although several algorithms have been implemented to solve the TSP, none of them solves it optimally in polynomial time.
For this reason, this chapter discusses novel metaheuristics based on the Automata Theory to solve the Multi-objective Traveling Salesman Problem.
This chapter is structured as follows: Section 2 presents important definitions for understanding Multi-objective Combinatorial Optimization and the metaheuristic approach. Sections 3, 4 and 5 discuss Evolutionary Metaheuristics based on the Automata Theory for the Multi-objective Optimization of Combinatorial Problems. Finally, Sections 6 and 7 discuss the experimental results of each proposed algorithm using multi-objective metrics from the specialized literature.

Multi-objective optimization
Multi-objective optimization involves two or more objective functions to optimize and a set of constraints. Mathematically, the multi-objective optimization model is defined as follows:

min F(X) = [f_1(X), f_2(X), ..., f_n(X)] (6)

Subject to:

G(X) ≤ 0, H(X) = 0, X_l ≤ X ≤ X_u

Where F(X) is the set of objective functions, H(X) and G(X) are the constraints of the problem, and X_l and X_u are the lower and upper bounds for the set of variables X. Figure 2 shows a Pareto Front for a particular tri-objective problem.

Tabu search
Tabu Search (Glover & Laguna, 1997) is a basic local search strategy for the optimization of combinatorial problems. It is defined as follows, given S as the initial solution set:
Step 1. Selection. Select x ∈ S. Step 2. Perturbation. Perturb the solution x in order to explore its neighborhood N(x). Perturbing a solution means modifying x to obtain a new solution x'_i. The solutions found are called neighbors, and together they form the neighborhood.
For instance, figure 3 shows three perturbations of a solution x and the new solutions found.
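The two steps above can be sketched in Python. This is a minimal, illustrative version only: the solution encoding (a permutation), the swap neighborhood and the `cost` function are assumptions for the example, not the chapter's implementation, and the tabu list here stores recently swapped position pairs.

```python
import random

def neighborhood(x, size=5):
    """Perturb x by swapping two positions; each swap yields one neighbor."""
    neighbors = []
    for _ in range(size):
        i, j = random.sample(range(len(x)), 2)
        y = list(x)
        y[i], y[j] = y[j], y[i]
        neighbors.append((tuple(y), frozenset((i, j))))
    return neighbors

def tabu_search(cost, x0, iterations=100, tenure=7):
    """Minimal Tabu Search: move to the best neighbor whose move is not tabu."""
    best = current = tuple(x0)
    tabu = []  # recently used moves (pairs of swapped positions)
    for _ in range(iterations):
        candidates = [(y, m) for y, m in neighborhood(current) if m not in tabu]
        if not candidates:
            continue
        current, move = min(candidates, key=lambda c: cost(c[0]))
        tabu.append(move)
        if len(tabu) > tenure:
            tabu.pop(0)          # forget the oldest move
        if cost(current) < cost(best):
            best = current
    return best
```

The tabu tenure controls how long a move stays forbidden, which is what lets the search escape shallow local optima.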

Genetic algorithms
Genetic Algorithms are algorithms based on the theory of natural selection (Wijkman, 1996; Fisher, 1930). Thus, Genetic Algorithms mimic natural evolution through three basic steps, given a set of initial solutions S: Step 1. Selection. Select solutions from a population; in pairs, select two solutions x, y ∈ S. Step 2. Crossover. Cross the selected solutions, avoiding local optima.
Step 3. Mutation. Perturb the new solutions found in order to increase the population. The perturbation can be done according to the representation of the solution. In this step, good solutions are added to S. Figure 4 shows the basic steps of a Genetic Algorithm. The best-known Genetic Algorithms from the literature (Dukkipati & Narasimha Murty, 2002) are the Non-Dominated Sorting Genetic Algorithm (NSGA-II) (Deb et al., 2002; Gacto et al., 2011) and SPEA. Problems such as Real-coded Quantum Clones (Xiawen & Yu, 2011), optimization problems with correlated objectives (Ishibuchi et al., 2011), production planning, optical and dynamic network design (Araujo et al., 2011; Wismans et al., 2011), benchmark multi-objective optimization (McClymont & Keedwell, 2011) and vendor-managed inventory (Azuma et al., 2011) have been solved using SPEA and NSGA-II.
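The three steps can be sketched as one generation in Python. This is a hedged illustration: the encoding (fixed-length tuples), one-point crossover, swap mutation and truncation survival are common textbook choices, not the specific operators of the algorithms cited above.

```python
import random

def genetic_step(population, fitness):
    """One generation: pairwise selection, one-point crossover, swap
    mutation, then keep the fittest individuals (minimization)."""
    random.shuffle(population)                    # Step 1: random pairing
    offspring = []
    for x, y in zip(population[::2], population[1::2]):
        k = random.randrange(1, len(x))
        child = list(x[:k] + y[k:])               # Step 2: crossover
        i, j = random.sample(range(len(child)), 2)
        child[i], child[j] = child[j], child[i]   # Step 3: mutation
        offspring.append(tuple(child))
    merged = population + offspring
    merged.sort(key=fitness)                      # survival of the fittest
    return merged[:len(population)]
```

Because the parents survive alongside the offspring, the best fitness in the population never gets worse from one generation to the next.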

Simulated Annealing algorithms
Simulated Annealing (Kirkpatrick et al., 1983) is a generic probabilistic metaheuristic based on annealing in metallurgy. Similar to Tabu Search, Simulated Annealing explores the neighborhood of solutions, but it is flexible with non-improving solutions; that means it accepts bad solutions as well as good ones, but mostly in the first iterations. The acceptance of a bad solution is based on the Boltzmann probability distribution:

z = e^(−ΔE / T_i)

Where ΔE is the change of the energy and T_i is the temperature at moment i. At the first levels of the temperature, bad solutions are accepted as well; however, when the temperature goes down, Simulated Annealing behaves like Tabu Search (it only accepts good solutions).
Recently, similar to Genetic Algorithms and Tabu Search, many problems have been solved using the Simulated Annealing metaheuristic, for instance Neuro-Fuzzy Systems (Czabaski, 2006).
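The Boltzmann acceptance rule above can be written in a few lines of Python. A sketch, assuming a minimization problem where `delta_e` is the cost increase of the candidate solution:

```python
import math
import random

def accept(delta_e, temperature):
    """Boltzmann acceptance: always keep improvements; accept a worse
    solution (delta_e > 0) with probability exp(-delta_e / T)."""
    if delta_e <= 0:
        return True
    return random.random() < math.exp(-delta_e / temperature)
```

At a high temperature exp(−ΔE/T) is close to 1, so worsening moves pass easily; as T approaches 0 the probability vanishes and the rule degenerates into the greedy acceptance of Tabu Search, exactly the behavior described above.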

Deterministic Finite Automata
Formally, a Deterministic Finite Automaton (DFA) is a quintuple A = (Q, Σ, δ, q_0, F), where Q is the set of states, Σ is the input alphabet, q_0 ∈ Q is the initial state, F ⊆ Q is the set of accepting states and δ is the set of transitions. The set of transitions δ describes the behavior of the automaton: let a ∈ Σ and q, r ∈ Q, then the transition function is defined as δ(q, a) = r. Given the set of transitions in table 3, the representation of A using a state diagram can be derived as shown in figure 5. Notice that each state of the DFA has transitions for all the elements of Σ. Table 3. Set of transitions for the DFA of example 1
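A DFA is straightforward to simulate with a transition table. The sketch below uses a hypothetical two-state automaton over Σ = {0, 1} accepting words with an even number of 1s; it is not the automaton of table 3, which is not reproduced here.

```python
def run_dfa(transitions, start, accepting, word):
    """Simulate a DFA: transitions maps (state, symbol) -> state.
    Returns True when the word ends in an accepting state."""
    state = start
    for symbol in word:
        state = transitions[(state, symbol)]  # total function: Q x Sigma -> Q
    return state in accepting

# Hypothetical DFA: parity of the number of 1s read so far.
delta = {('even', '0'): 'even', ('even', '1'): 'odd',
         ('odd', '0'): 'odd', ('odd', '1'): 'even'}
```

Note that the dictionary plays the role of table 3: every (state, symbol) pair has exactly one entry, which is what makes the automaton deterministic.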

Metaheuristic Of Deterministic Swapping (MODS)
Metaheuristic Of Deterministic Swapping (MODS) (Niño et al., 2011) is a local search strategy that explores the feasible solution space of a combinatorial problem, supported by a data structure named Multi Objective Deterministic Finite Automata (MDFA) (Niño, Ardila, Donoso & Jabba, 2010). A MDFA is a Deterministic Finite Automaton that allows the representation of the feasible solution space of a combinatorial problem. Formally, in a MDFA, Q represents the set of states of the automaton (the feasible solution space), Σ is the input alphabet used by δ (the transition function) to explore the feasible solution space, Q_0 contains the initial set of states (initial solutions) and F(X) are the objectives to optimize.

Example 1. MDFA for a Scheduling Parallel Machine Problem:
A company has three machines. It is necessary to schedule three processes in parallel, P_1, P_2 and P_3, with durations of 5, 10 and 50 minutes respectively. If the processes can be executed on any of the machines, in how many ways can the processes be assigned to the machines? Given the bi-objective function in (10), what is the optimal Pareto Front?
First of all, we need to build the MDFA. To do this, we must define the states of the MDFA by setting the structure of the solution for each state. Therefore, if we state that X_q = (P_k, P_i, P_j) represents the solution for state q (machine 1 executes process P_k, machine 2 executes process P_i and machine 3 executes process P_j), then the solution arrays for each state will be X_q0 = (P_1, P_2, P_3), X_q1 = (P_1, P_3, P_2), X_q2 = (P_2, P_1, P_3), X_q3 = (P_2, P_3, P_1), X_q4 = (P_3, P_1, P_2) and X_q5 = (P_3, P_2, P_1). Now we have six states q_0, q_1, q_2, q_3, q_4 and q_5, which represent the feasible solution space of the proposed scheduling problem. The set of states for the MDFA of this problem can be seen in figure 6. Once the set of states is defined, the input alphabet (Σ) and the transition function (δ) must be defined. It is very important to take into account that the combination of both allows perturbing the solutions in all the possible manners; in other words, we can change state using the combination of Σ and δ, and doing this we avoid unfeasible solutions. Regarding the proposed problem, we propose Σ as the set of process swaps. Hence, δ(q_0, (P_1, P_2)) = q_2, δ(q_0, (P_1, P_3)) = q_5, ..., δ(q_5, (P_2, P_3)) = q_3. At this point the transitions have been defined, therefore the MDFA can be seen in figure 7.
Finally, the solution of each state is replaced in (10). The results can be seen in table 4 and the Optimal Pareto Front is shown in figure 8.
State  Solution (M1, M2, M3)   Intermediate values   F_1(X)   F_2(X)
q_0    (P_1, P_2, P_3)         10, 50, 5             125      36.66
q_1    (P_1, P_3, P_2)         10, 5, 50             170      29.16
q_2    (P_2, P_1, P_3)         50, 10, 5             85       56.66
q_3    (P_2, P_3, P_1)         50, 5, 10             90       55.83
q_4    (P_3, P_1, P_2)         5, 10, 50             175      26.66
q_5    (P_3, P_2, P_1)         5, 50, 10             135      33.33

Table 4. Values of F(X) for the states of example 1

As can be seen in figure 7, the feasible solution space of this problem was described using a MDFA. Also, unfeasible solutions are not allowed because of the definition of Σ. Nevertheless, the general problem was not solved; only a particular case of three variables (machines) was worked, which is why it was easy to draw the entire MDFA. Problems like this become intractable for a large number of variables; in other words, when the number of variables grows, the feasible solution space grows exponentially. In that case it is not a good idea to draw the entire feasible solution space and pick the best solutions. Thus, what should we do in order to solve any combinatorial problem, regardless of its size, using a MDFA? Looking for an answer to this question, MODS was proposed.
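For a toy example like this, the optimal Pareto Front can be extracted with a plain dominance filter. A sketch in Python, assuming all objectives are to be minimized; since the chapter's objective function (10) is not reproduced here, the test points are hypothetical:

```python
def dominates(a, b):
    """a dominates b (minimization): a is no worse in every objective
    and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep only the non-dominated points."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]
```

Applying `pareto_front` to the (F_1, F_2) pairs of table 4 (with the correct objective senses from (10)) yields the Optimal Pareto Front of figure 8.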
MODS explores the feasible solution space represented through a MDFA using a search direction given by an elitist set of solutions Q*. The elitist solutions are states whose solution, when visited, dominated at least one solution of an element in Q_φ. Q_φ contains all the states with non-dominated solutions. Due to this, it can be inferred that the elements of Q* are contained in Q_φ; for this reason it is true that Q* ⊆ Q_φ. Lastly, the template algorithm of MODS is defined as follows: Step 1. Create the initial set of solutions Q_0 using a heuristic relative to the problem to solve.
Step 2. Set Q_φ as Q_0 and Q* as ∅.
Step 3. Select a random state q ∈ Q_φ or q ∈ Q*. Step 4. Explore the neighborhood of q using δ and Σ. Add to Q_φ the solutions found that are not dominated by elements of Q_φ. In addition, add to Q* those solutions found that dominate at least one element of Q_φ.
Step 5. Check stop condition, go to 3.
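The five steps can be condensed into a Python skeleton. This is a sketch, not the published implementation: `neighbors(q)` stands in for the exploration via δ and Σ, `f` maps a state to its objective vector, and minimization of all objectives is assumed.

```python
import random

def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def mods(initial, neighbors, f, iterations=50):
    """MODS template: Q_phi holds the non-dominated states, Q_star the
    elite states whose visit dominated a member of Q_phi."""
    q_phi = set(initial)                              # Step 2: Q_phi = Q_0
    q_star = set()                                    #         Q_star = empty
    for _ in range(iterations):                       # Step 5: stop condition
        q = random.choice(sorted(q_star or q_phi))    # Step 3: pick a state
        for r in neighbors(q):                        # Step 4: explore via delta/Sigma
            if not any(dominates(f(s), f(r)) for s in q_phi):
                if any(dominates(f(r), f(s)) for s in q_phi):
                    q_star.add(r)                     # r improves on Q_phi: elite
                q_phi.add(r)
        q_phi = {s for s in q_phi                     # drop dominated states
                 if not any(dominates(f(t), f(s)) for t in q_phi if t != s)}
    return q_phi
```

Because new states are only reachable through `neighbors` (the analogue of δ and Σ), the search never visits an unfeasible solution, which is the key property of the MDFA representation.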

Simulated Annealing Metaheuristic Of Deterministic Swapping (SAMODS)
Simulated Annealing & Metaheuristic Of Deterministic Swapping (SAMODS) (Niño, 2012) is a hybrid local search strategy based on the MODS theory and the Simulated Annealing algorithm for the multi-objective optimization of combinatorial problems. Its main purpose is to optimize a combinatorial problem using a search direction and an angle improvement. SAMODS is based on the automaton M = (Q, Σ, δ, Q_0, F(X), P(q), A(n)). Like MODS, Q_0 is the set of initial solutions, Q is the feasible solution space and F(X) are the objective functions of the combinatorial problem. P(q) and A(n) are defined as follows. P(q) is the permutation function: P receives a solution q ∈ Q, perturbs it and returns a new solution r_i ∈ Q. The perturbation can be done based on the representation of the solutions; an example of some perturbations based on the representation of the solution can be seen in figure 15. A(n) is the weight function, where n is the number of objectives of the problem.
Function A receives a natural number as a parameter and returns a vector of weights.
The weight values are randomly generated with a uniform distribution. They represent the weight assigned to each function of the combinatorial problem. The weight values returned by the function fulfill the next constraint:

∑_{i=1}^{n} α_i = 1, α_i ≥ 0 (20)

Where α_i is the weight assigned to function i. But what is the importance of those weights? The weights, in an implicit manner, allow setting the angle direction of the solutions. The angle direction is the course followed by the solutions while optimizing F(X). Hence, when the weight values are changed, the angle of optimization is changed and a new search direction is obtained. For instance, different search directions for different weight values are shown in figure 16 for a bi-objective combinatorial problem. Due to this, (6) is rewritten as follows:

F(X) = ∑_{i=1}^{n} α_i f_i(X)

Where n is the number of objectives of the problem and α_i is the weight assigned to function i. The weights fulfill the constraint established in (20).
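Function A and the weighted objective can be sketched as follows; normalizing uniform random numbers is one simple way (an assumption here, not necessarily the chapter's) to satisfy the constraint in (20):

```python
import random

def weights(n):
    """A(n): n uniform random weights, normalized so they sum to 1."""
    raw = [random.random() for _ in range(n)]
    total = sum(raw)
    return [w / total for w in raw]

def weighted_objective(alphas, fx):
    """Scalarized objective: sum of alpha_i * f_i(X)."""
    return sum(a * f for a, f in zip(alphas, fx))
```

Redrawing the weights changes which corner of the objective space the scalarized function pulls toward, which is exactly the "angle direction" described above.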
SAMODS's main idea is simple: it takes advantage of the search direction given by MODS and it proposes an angle direction given by the function A(n). Thus, there are two directions; the first helps in the convergence of the Pareto Front and the second helps the solutions to find neighborhoods where F(X) is optimized. Due to this, the SAMODS template is defined as follows: Step 1. Setting sets. Set Q_0 as the set of initial solutions. Set Q_φ and Q* as Q_0. Step 2. Setting parameters. Set T as the initial temperature, n as the number of objectives of the problem and ρ as the cooling factor.
Step 4. Perturbing solutions. Set s' = P(s); add to Q_φ and Q* according to the next rules: if Q_φ has at least one element that dominates s', go to step 5, otherwise go to step 7.
Step 5. Guessing with dominated solutions. Randomly generate a number n ∈ [0, 1]. Set z as follows:

z = e^(−γ / T_i)

Where T_i is the temperature value at moment i and γ is defined as follows:

γ = ∑_{i=1}^{n} w_i (F_i(s'_X) − F_i(s_X))

Where s_X is the vector X of solution s, s'_X is the vector X of solution s', w_i is the weight assigned to function i and n is the number of objectives of the problem. If n < z then set s as s' and go to step 4, else go to step 6.
Step 6. Change the search direction. Randomly select a solution s ∈ Q * and go to step 4.
Step 7. Removing dominated solutions. Remove the dominated solutions from each set (Q* and Q_φ). Go to step 3.
Step 8. Finishing. Q φ has the non-dominated solutions.
As can be seen in figure 11, like MODS, SAMODS removes the dominated solutions when the new solution found is non-dominated. Besides, if the new solution found dominates at least one element of the solution set Q_φ, then it is added to the elitist set Q*, which works as a search direction for the Pareto Front. So far SAMODS could sound like a simple local search strategy, but it is not: when a new solution found is dominated, SAMODS tries to improve it by guessing. Guessing means accepting dominated solutions as good solutions. Like other Simulated Annealing inspired algorithms, the dominated solutions are accepted under the Boltzmann probability distribution, assigning weights to the objectives of the problem. It is probable that by perturbing a dominated solution a non-dominated solution can be found, as can be seen in figure 12; due to this, local optima are avoided. When the temperature is low, bad solutions are rejected because the z value is low, therefore SAMODS accepts only non-dominated solutions; however, by that time Q_φ will be led on by Q*. Fig. 11. Behavior of SAMODS when the new solution found is non-dominated. Once a new non-dominated solution is found, it is added to the elitist set Q* and the dominated solutions of Q_φ are removed.
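Step 5's guessing rule can be sketched in Python. The form of γ as the weighted worsening of the objective values is a reconstruction from the surrounding text, so treat it as an assumption:

```python
import math
import random

def guess_accept(s_fx, s2_fx, alphas, temperature):
    """SAMODS step 5 (sketch): accept a dominated perturbation s' with
    probability z = exp(-gamma / T_i), where gamma is the weighted
    worsening of the objective values (minimization assumed)."""
    gamma = sum(a * (f2 - f1) for a, f1, f2 in zip(alphas, s_fx, s2_fx))
    z = math.exp(-gamma / temperature) if gamma > 0 else 1.0
    return random.random() < z
```

An improving perturbation (negative γ) is always accepted; a worsening one survives only while the temperature is high, which is the guessing behavior described above.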

Simulated Annealing, Genetic Algorithm & Metaheuristic Of Deterministic Swapping (SAGAMODS)
Simulated Annealing, Genetic Algorithm & Metaheuristic Of Deterministic Swapping (SAGAMODS) (Niño, 2012) is a hybrid search strategy based on the Automata Theory, Simulated Annealing and Genetic Algorithms. SAGAMODS is an extension of the SAMODS theory. It comes up as the result of the next question: could SAMODS avoid local optima more quickly? Although SAMODS avoids local optima by guessing, it can take a lot of time accepting dominated solutions before finding non-dominated ones. Thus, the answer to this question is based on the Evolutionary Theory: SAGAMODS adds a crossover step before the SAMODS template is executed. Due to this, SAGAMODS supports SAMODS in exploring distant regions of the solution space.
Formally, SAGAMODS extends the SAMODS automaton with a cross function C(q, r, k), where Q is the feasible solution space, Q_S is the set of initial solutions and F(X) are the objectives of the problem. In the cross function, q, r ∈ Q and k ∈ N; q and r are named parent solutions and k is the cross point. The main idea of this function is to cross two solutions at the same point and return a new solution. For instance, two solutions of 4 variables are crossed in figure 13. Obviously, the crossover is made regarding the representation of the solutions. Fig. 13. Crossover between two solutions. Solutions of the states q_k and q_j are crossed in order to get state q_i. Lastly, the SAGAMODS template is defined as follows: Step 1. Setting parameters. Set Q_S as the solution set and x as the number of solutions to cross in each iteration.
Step 2. Selection. Set Q_C (crossover set) as a selection of x solutions from Q_S, Q_M (mutation set) as ∅ and k as a random value.
Step 3. Crossover. For each s_i, s_{i+1} ∈ Q_C, 1 ≤ i < |Q_C|, add C(s_i, s_{i+1}, k) to Q_M. Step 4. Mutation. Set Q_0 as Q_M. Execute SAMODS as a local search strategy.
Step 5. Check stop conditions. Go to 2.
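The cross function C(q, r, k) and step 3 can be sketched as follows. Note that a plain splice is only valid for representations such as the machine-assignment vectors of example 1; permutation representations (e.g. TSP tours) would need a repair step or an order-preserving operator:

```python
import random

def cross(q, r, k):
    """C(q, r, k): one-point crossover, splicing parents q and r at point k."""
    return q[:k] + r[k:]

def crossover_step(q_c):
    """SAGAMODS step 3 (sketch): cross consecutive pairs of the crossover
    set Q_C at one random point k, producing the mutation set Q_M."""
    k = random.randrange(1, len(q_c[0]))
    return [cross(q_c[i], q_c[i + 1], k) for i in range(len(q_c) - 1)]
```

Because the parents may come from distant regions of the solution space, the children land in regions that the local perturbation of SAMODS alone would take a long time to reach.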

Evolutionary Metaheuristic Of Deterministic Swapping (EMODS)
Evolutionary Metaheuristic of Deterministic Swapping (EMODS) is a novel framework that allows the multi-objective optimization of combinatorial problems. Its framework is based on the MODS template, therefore its steps are the same: create initial solutions, improve the solutions (optional) and execute the core algorithm. Unlike SAMODS and SAGAMODS, EMODS avoids the slow convergence of the Simulated Annealing method: it explores different regions of the feasible solution space and searches for non-dominated solutions using Tabu Search.
The core algorithm is defined as follows: Step 1. Set θ as the maximum number of iterations, β as the maximum number of states selected in each iteration, ρ as the maximum number of perturbations per state and Q_φ as Q_0. Step 2. Selection. Randomly select a state q ∈ Q_φ or q ∈ Q*. Step 3. Mutation (Tabu Search). Set N as the new solutions found as a result of perturbing q. Add to Q_φ the solutions of N that are not dominated by elements of Q_φ, and add to Q* those that dominate at least one element of Q_φ. Remove the states with dominated solutions from each set.
Step 4. Crossover. Randomly select states from Q φ and Q * . Generate a random point of cross.
Step 5. Check stop condition, go to 3.
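The core algorithm can be condensed into a Python skeleton. As with the MODS sketch, this is an illustration under assumptions: `perturb` abstracts the tabu-style mutation, `f` maps a state to its objective vector, minimization is assumed, and θ and ρ follow step 1.

```python
import random

def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def emods(q0, perturb, f, theta=20, rho=3):
    """EMODS core (sketch): tabu-style mutation plus random-point crossover."""
    q_phi, q_star = set(q0), set()
    for _ in range(theta):                          # Step 5: theta iterations
        q = random.choice(sorted(q_star or q_phi))  # Step 2: selection
        for r in [perturb(q) for _ in range(rho)]:  # Step 3: mutation
            if not any(dominates(f(s), f(r)) for s in q_phi):
                if any(dominates(f(r), f(s)) for s in q_phi):
                    q_star.add(r)
                q_phi.add(r)
        if len(q_phi) >= 2:                         # Step 4: crossover
            a, b = random.sample(sorted(q_phi), 2)
            k = random.randrange(1, len(a))         # random cross point
            child = a[:k] + b[k:]
            if not any(dominates(f(s), f(child)) for s in q_phi):
                q_phi.add(child)
        q_phi = {s for s in q_phi                   # keep non-dominated only
                 if not any(dominates(f(t), f(s)) for t in q_phi if t != s)}
    return q_phi
```

The random cross point k is redrawn on every iteration, which is what keeps the offspring diverse.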
Steps 2 and 3 support the algorithm in removing dominated solutions from the solution set Q_φ, as can be seen in figure 3. However, one of the most important steps of the EMODS algorithm is step 4: there, similar to SAGAMODS, the algorithm applies an evolutionary strategy based on the crossover step of Genetic Algorithms in order to avoid local optima. Because the crossover is not always made at the same point (the k-value is randomly generated for each state analyzed), the solutions found are diverse, which avoids local optima. An overview of the EMODS behavior for a tri-objective combinatorial optimization problem can be seen in figure 14.

Experimental analysis

Experimental settings
The algorithms were tested using well-known instances of the multi-objective Traveling Salesman Problem taken from the TSPLIB (Heidelberg, n.d.). The instances worked are shown in table 6 and the input parameters of the algorithms are shown in table 7. The tests were run on a dual-core computer with 2 GB of RAM. The optimal solutions were constructed from the best non-dominated solutions of all the compared algorithms for each instance.

Performance metrics
There are metrics that allow measuring the quality of a set of optimal solutions and the performance of an algorithm (Corne & Knowles, 2003). Most of them use two Pareto Fronts: the first one is PF_true, the real optimal solutions of a combinatorial problem; the second is PF_know, the optimal solutions found by an algorithm.

Generation of Non-dominated Vectors (GNDV). It measures the number of non-dominated solutions generated by an algorithm. A value closer to |PF_true| is desired for this metric.
Generational Distance (GD). This metric measures the distance between PF_know and PF_true. It allows determining the error rate in terms of the distance of a set of solutions relative to the real solutions:

GD = (∑_{i=1}^{|PF_know|} d_i^p)^{1/p} / |PF_know|

Where d_i is the smallest Euclidean distance between solution i of PF_know and the solutions of PF_true, and p is the dimension of the combinatorial problem, that is, the number of objective functions. A value closer to 0 is desired.

Inverse Generational Distance (IGD). This is another distance measurement between PF_know and PF_true:

IGD = (∑_{i=1}^{|PF_true|} d_i^p)^{1/p} / |PF_true|

Where d_i is the smallest Euclidean distance between solution i of PF_true and the solutions of PF_know.

Spacing (S). It measures the range variance of neighboring solutions in PF_know:

S = sqrt( (1 / (|PF_know| − 1)) ∑_{i=1}^{|PF_know|} (d̄ − d_i)^2 )

Where d_i is the smallest Euclidean distance between solution i of PF_know and the rest of the solutions of PF_know, and d̄ is the mean of all d_i. A value closer to 0 is desired for this metric; a value of 0 means that all the solutions are equidistant.
Error Rate (ε). It estimates the error rate with respect to the precision of the algorithm's solutions. A value of 0% in this metric means that the values of the real Pareto Front are constructed from the values of the algorithm's Pareto Front.
Lastly, notice that no single metric makes sense by itself; it is necessary to rely on the other metrics for a real judgment about the quality of the solutions. For instance, if a Pareto Front has a higher value in GNDV but a lower value in ReGNDV, then the solutions have poor quality.
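Generational Distance and Spacing are easy to compute from the two fronts. A sketch with the exponent fixed at 2 (the common convention in the literature; the chapter instead uses p, the number of objectives):

```python
import math

def gd(pf_know, pf_true):
    """Generational Distance: for each point of PF_know, take the distance
    to its closest point of PF_true, then average (p = 2 assumed)."""
    dists = [min(math.dist(a, b) for b in pf_true) for a in pf_know]
    return math.sqrt(sum(d * d for d in dists)) / len(pf_know)

def spacing(pf_know):
    """Spacing: standard deviation of each point's distance to its nearest
    neighbor in PF_know; 0 means the front is evenly spread."""
    dists = [min(math.dist(a, b) for b in pf_know if b != a) for a in pf_know]
    mean = sum(dists) / len(dists)
    return math.sqrt(sum((mean - d) ** 2 for d in dists) / (len(dists) - 1))
```

Swapping the roles of the two fronts in `gd` gives the Inverse Generational Distance described above.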

Experimental results
The tests made with bi-objective, tri-objective, quad-objective and quin-objective TSP instances are shown in tables 8, 9, 10 and 11 respectively. The averages of the measurements are shown in table 12. Furthermore, a graphical comparison for the bi-objective and tri-objective instances worked is shown in figures 15 and 16 respectively.

Analysis
It can be concluded that, in the case of two and three objectives, metrics such as S, IGD, GD and ε determine the best algorithm. In this case, the measurements of the metrics are similar for SAMODS and SAGAMODS. On the other hand, MODS has the poorest-quality measurements for the metrics used and EMODS has the best-quality measurements for the same metrics.
Lastly, why are the results of the metrics similar for the quin-objective instances? In this case, all the solutions of each solution set are in the optimal set. The answer to this question is based on the angle improvement. MODS, as a local search strategy, explores a part of the feasible solution space using its search direction (Q*). SAMODS and SAGAMODS, in addition, use a search direction given by the change of the search angle.

Conclusion
SAMODS, SAGAMODS and EMODS are algorithms based on the Automata Theory for the multi-objective optimization of combinatorial problems. All of them are derived from the MODS metaheuristic, which is inspired by the Theory of Deterministic Finite Swapping. SAMODS is a Simulated Annealing inspired algorithm. It uses a search direction to optimize a set of solutions (Pareto Front) through a linear combination of the objective functions. On the other hand, SAGAMODS, in addition to the advantages of SAMODS, is an evolutionary inspired algorithm. It implements a crossover step for exploring far regions of the solution space; due to this, SAGAMODS tries to avoid local optima because it takes a general look at the solution space. Lastly, in order to avoid slow convergence, EMODS is proposed. Unlike SAMODS and SAGAMODS, EMODS does not explore the neighborhood of a solution using Simulated Annealing; this step is done using Tabu Search. Thus, EMODS obtains optimal solutions faster than SAGAMODS and SAMODS. The algorithms were tested using well-known instances from the TSPLIB and metrics from the specialized literature. The results show that for instances of two, three and four objectives, the proposed algorithm has the best performance, as the metric values corroborate. For the last instance worked, the quin-objective one, the behavior of MODS, SAMODS and SAGAMODS tends to be the same (they have similar error rates), but EMODS still has the best performance. In all cases EMODS shows the best performance; however, for the last test, all the algorithms produce different sets of non-dominated solutions, and together those form the optimal solution set.

Acknowledgment
First of all, I want to thank God for being with me my entire life; He made this possible. Secondly, I want to thank my parents, Elias Niño and Arely Ruiz, and my sister, Carmen Niño, for their enormous love and support. Finally, and not less important, I thank my beautiful wife, Maria Padron, and our baby for being my inspiration.