Open access peer-reviewed chapter

A Hybrid Genetic, Differential Evolution Optimization Algorithm

Written By

Peter Stubberud

Reviewed: 30 June 2022 Published: 07 August 2022

DOI: 10.5772/intechopen.106204

From the Edited Volume

Ubiquitous and Pervasive Computing - New Trends and Opportunities

Edited by Rodrigo da Rosa Righi


Abstract

This chapter presents a heuristic evolutionary optimization algorithm that is loosely based on the principles of evolution and natural genetics. In particular, this chapter describes an evolutionary algorithm that is a hybrid of a genetic algorithm and a differential evolution algorithm. This algorithm uses an elitist, ranking, random selection method, several mutation methods and both two level and three level Taguchi crossover. This algorithm is applied to 13 commonly used global numerical optimization test functions, including a spherical, three hyper-ellipsoid, the sum of different powers, Rastrigin’s, Schwefel’s, Griewank’s, Rosenbrock’s valley, Styblinski-Tang, Ackley’s Path, Price-Rosenbrock, and Eggholder’s functions. This algorithm is applied 1000 times to each of the 13 test functions, and the results show that this algorithm always converges to each of the 13 test function’s global minimum.

Keywords

  • optimization algorithm
  • differential evolution
  • genetic algorithm
  • hybrid
  • Taguchi crossover

1. Introduction

Optimization algorithms are systems that determine an optimal set of parameters that minimize or maximize a cost, or objective, function subject to constraints. Optimization applications are common in engineering and other scientific and mathematical fields. For a typical engineering optimization application, a cost, or objective, function mathematically describes a metric of the error between a desired performance and actual performance over a constrained solution space. For such applications, optimization algorithms would determine an optimal set of parameters that minimize the cost function subject to physical constraints, such as requiring that the optimal parameters result in a stable system. As computing power has increased, many multi-modal optimization problems have been solved using heuristic evolutionary optimization algorithms. An evolutionary algorithm is an optimization search algorithm that is loosely based on the principles of evolution and natural genetics and uses operators such as reproduction, selection, recombination and mutation [1]. Popular evolutionary algorithms include genetic algorithms [2], differential evolution [3], particle swarm optimization [4], simulated annealing [5] and ant colony optimization [6, 7]. Although no algorithm can solve all types of optimization problems [8, 9], genetic algorithms and differential evolution algorithms have become popular in engineering optimization applications because these methods are simple, effective and flexible.

Because no algorithm can solve all types of optimization problems [8, 9], hybrid algorithms that combine the elements of an evolutionary algorithm with one or more evolutionary algorithms or search algorithms have been developed and have been shown to be effective search algorithms [10]. Because genetic algorithms and differential evolution algorithms have become popular in engineering optimization applications, this chapter presents a hybrid genetic, differential evolution algorithm. The algorithm uses an elitist, ranking, random selection method. Elitist selection methods assure the survival of the fittest individual, which is the candidate solution with the best optimization criterion cost, during the selection process. The fittest individual is also assured selection in all recombination and mutation operations. Except for the fittest individual which is guaranteed selection, the candidate solutions that survive the selection process are randomly selected for a differential evolution operator to improve convergence, a differential evolution mutation operator to improve diversity and a recombination operator that improves both convergence and diversity. The selection probabilities for the mutation and recombination operators are dynamic and change each generation, or algorithm iteration, to maintain a constant population size. After generating new candidate solutions using these operators, the new candidate solutions are added to the set of candidate solutions that survived the selection process. Except for the fittest individual (which is guaranteed selection), candidate solutions are randomly selected for Taguchi crossover [11] which is an effective recombination operator that creates near optimal new candidate solutions from two or more parent candidate solutions. Section 2 of this chapter describes the basic elements of genetic and differential evolution algorithms. Section 3 describes this chapter’s algorithm in detail. 
In Section 4, this algorithm is applied to 13 commonly used global numerical optimization test functions, including a spherical, three hyper-ellipsoid, the sum of different powers, Rastrigin’s, Schwefel’s, Griewank’s, Rosenbrock’s valley, Styblinski-Tang, Ackley’s Path, Price-Rosenbrock, and Eggholder’s functions.


2. Elements of genetic and differential evolution algorithms

Genetic algorithms and differential evolution algorithms are evolutionary algorithms that typically define objective functions so that the set of parameters being optimized are represented in a vector [3]. Parameter constraints are implemented by restricting the available solution spaces for each parameter in the vector. The basic design strategy for such genetic and differential evolution algorithms is to determine evolutionary operators that balance the algorithm’s ability to both effectively search the solution space and converge to an optimal solution.

Genetic algorithms and differential evolution algorithms typically begin by randomly selecting K_0 vectors, called candidate solutions or individuals, in the solution space. This initial set of candidate solutions is called the initial population. A subset of M_1 of these initial candidate solutions, or individuals, are selected as a function of their fitness, or cost, when evaluated with respect to the optimization criterion. Although many selection operators exist [12, 13], selection operators are typically designed to select a subset of M_1 candidate solutions from the population in a manner that should improve the overall fitness of the population. The candidate solutions that survive the selection operator form the initial mating pool of M_1 individuals where 1 ≤ M_1 ≤ K_0. Candidate solutions from the mating pool are then selected for recombination and mutation. Recombination operators create new candidate solutions by combining vector elements from two or more of the candidate solutions from the mating pool. Mutation operators in genetic algorithms create new candidate solutions by altering elements of candidate solution vectors. In differential evolution, mutation operators create new candidate solutions by using vector operations, such as addition and scaling, on two or more of the candidate solutions from the mating pool. Regardless of the algorithm, the new candidate solutions created by recombination and mutation are added to the mating pool to create a new population of K_1 candidate solutions. This process of selection, recombination and mutation forms one generation, or iteration, of a genetic or differential evolution algorithm. This process iterates until a convergence criterion is met and an optimal solution is determined. A generic genetic algorithm or differential evolution algorithm can be summarized as follows:

Initialize population {K_0 candidate solutions}

n ← 0

   repeat

      n ← n + 1

      Select Mn candidate solutions using a selection operator

      Generate new candidate solutions using a mutation operator

      Generate new candidate solutions using a recombination operator

      Add the new K_n − M_n candidate solutions to the mating pool

   until convergence condition is met

where n is the algorithm’s iteration number.
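As a concrete illustration of this generic loop, the following Python sketch combines truncation selection, a differential-evolution-style mutation and an averaging recombination on a simple sphere cost function. It is a minimal illustration and not this chapter's algorithm; the function names, the fixed mating-pool size and the toy cost function are assumptions made here:

```python
import random

def generic_evolutionary_search(cost, init_pop, max_gens=200, tol=1e-12):
    """Sketch of the generic selection/mutation/recombination loop."""
    population = [list(x) for x in init_pop]
    pool_size = max(2, len(population) // 2)        # a crude, fixed M_n
    for _ in range(max_gens):                       # one pass = one generation
        population.sort(key=cost)                   # best candidate first
        if cost(population[0]) <= tol:              # convergence condition
            break
        pool = population[:pool_size]               # truncation selection (simplified)
        new = []
        for x in pool:
            a, b = random.sample(pool, 2)
            new.append([xi + random.random() * (ai - bi)         # DE-style mutation
                        for xi, ai, bi in zip(x, a, b)])
            m = random.choice(pool)
            new.append([(xi + mi) / 2 for xi, mi in zip(x, m)])  # recombination
        population = pool + new                     # add new candidates to the pool
    return min(population, key=cost)

random.seed(1)
sphere = lambda x: sum(v * v for v in x)
start = [[random.uniform(-5.12, 5.12), random.uniform(-5.12, 5.12)] for _ in range(20)]
best = generic_evolutionary_search(sphere, start)
```

On a unimodal cost such as the sphere, this skeleton converges quickly; the chapter's algorithm replaces each of these simplified operators with the ones described in Section 3.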

Genetic operators such as recombination and mutation generate a combination of new candidate solutions that can be either similar to or diverse from the candidate solutions in the mating pool. Controlling the ratio of the diversity and similarity of new candidate solutions added to a population each generation is a fundamental design parameter of any search algorithm [14]. Creating diversity, or exploration, is the process of generating candidate solutions that lie in previously unevaluated regions of the search space. Creating similarity, or exploitation, on the other hand, is the process of generating candidate solutions within a neighborhood of previously visited points so as to converge to an optimal point in the neighborhood. Exploration and exploitation are typically conflicting processes of a search algorithm because a lack of diversity can result in a population converging to a local minimum or maximum and a lack of similarity can impede convergence. Therefore, every search algorithm needs to design an effective ratio of exploration and exploitation of a search space. In general, an optimal ratio of diversity and similarity is not only dependent on the search algorithm but also the cost, or objective, function. For example, determining the optimal solution of a unimodal objective function typically requires less exploration than determining an optimal solution of a multi-modal objective function, which typically requires more exploration. Also, different generations, or iterations, of a search algorithm typically have different optimal ratios of exploration and exploitation. For example, a search algorithm’s early generations require more exploration than exploitation until the neighborhood of the optimal solution is found. After the neighborhood of the optimal solution is found, a search algorithm’s generations require more exploitation and less exploration.
Therefore, the goal of any search algorithm is to design a ratio of adding diverse new candidate solutions and similar new candidate solutions to each generation so that the algorithm can effectively determine optimal solutions for different types of objective functions.

Genetic algorithms and differential evolution algorithms typically use three operators, selection, mutation and recombination, for controlling the ratio of adding diverse and similar new candidate solutions to their populations. Selection operators control the ratio of exploration and exploitation by varying the selection process. A selection operator that is designed to select the most fit candidate solutions, the candidate solutions with the best costs when evaluated with respect to the optimization criterion, biases the selection process away from exploration and towards exploitation. A selection operator that is designed to select the least fit candidate solutions biases the selection process away from exploitation and towards exploration.

Mutation operators for a typical genetic algorithm or differential evolution algorithm randomly modify individuals from the mating pool to increase the diversity of a population. As a result, a typical mutation operator increases the exploration of unevaluated regions of the search space. However, some mutation operators only slightly alter individuals from the mating pool. In such cases, these types of mutation operators can be classified as an exploitation operator because most of the mutated individual is preserved and the mutated individual still remains in the neighborhood of the parent candidate solution. A recombination operator for a typical genetic algorithm or differential evolution algorithm combines two or more parent individuals, or candidate solutions, from the mating pool to generate a new and possibly more fit candidate solution. As a result, a typical recombination operator generates new candidate solutions within a neighborhood of the parent candidate solutions. From this perspective, a recombination operator increases exploitation and improves the convergence of the algorithm. However, when recombination uses a mutated candidate solution, recombination can create a new candidate solution in a previously unevaluated region of the search space. In such cases, recombination operators can improve exploration of the search space. In most cases, mutation operators improve a population’s diversity and recombination operators improve an algorithm’s convergence rate; however, in practice, the combination of selection, mutation and recombination determines the ratio of exploration and exploitation in both genetic algorithms and differential evolution algorithms.

Advertisement

3. A hybrid genetic, differential evolution algorithm

This hybrid genetic, differential evolution algorithm determines a best solution with respect to an optimization criterion whose solution space is a subset of an N dimensional hyper-rectangular solution space, although the algorithm can be adapted for other types of N dimensional spaces. The algorithm generates an initial population of K_0 candidate solutions by randomly selecting K_0 candidate solutions within the solution space. Each candidate solution in the initial population is evaluated by an optimization criterion and ranked. Except for the top ranked candidate solution, which is assured selection, a subset of these candidate solutions is randomly selected as a function of their rank. The surviving M_1 candidate solutions form the initial mating pool, where 1 ≤ M_1 ≤ K_0, and are randomly selected and employed in creating new candidate solutions using a differential evolution operator to improve convergence, a differential evolution mutation operator to improve diversity and a crossover operator that improves both convergence and diversity. To ensure that the new candidate solutions lie in the solution space, one of two methods is used to move any infeasible candidate solutions into the solution space. All the new candidate solutions are then added to the mating pool, and candidate solutions from this set are randomly selected for Taguchi crossover, a type of recombination operator. After Taguchi crossover, the population should have an average of approximately K_0 candidate solutions. After the Taguchi crossover operation, each candidate in the population is evaluated, ranked, and randomly selected for the next iteration. The algorithm can be summarized as follows:

Initialize population {K_0 candidate solutions}

n ← 0

   repeat

      n ← n + 1

      Ranking and stochastic selection {Mn candidate solutions}

      Differential evolution operator to improve convergence

      Differential evolution mutation operator to improve diversity

      Recombination operator to improve convergence and diversity

      Ensure new solutions are in the solution space

      Taguchi crossover {Kn candidate solutions}

   until convergence condition is met

where n is the algorithm’s iteration number.

3.1 Population initialization

To generate the initial population of K0 candidate solutions, candidate solutions are randomly selected so that the population is uniformly distributed over the solution space. For a hyper-rectangular solution space, an initial population of candidate solutions can be generated in Matlab using

G0 = kron(Xc, ones(1,K0)) + diag(Xs)*(rand(N,K0) - 0.5);   (1)

where G0 is a matrix containing the K_0 initial candidate solutions as its columns, Xc is a vector containing the solution space’s center in rectangular coordinates, Xs is a vector containing the solution space’s size for each dimension in rectangular coordinates, and N is the solution space’s dimension. For example, for a two dimensional (N = 2) hyper-rectangular solution space of [0 −1]^T ≤ x ≤ [6 1]^T, Xc = [3 0]^T and Xs = [6 2]^T.
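For readers without Matlab, the effect of Eq. (1) can be mirrored in Python. The function name and the list-based representation below are illustrative assumptions, not the chapter's code:

```python
import random

def init_rect_population(Xc, Xs, K0):
    """Mirror of Eq. (1): each candidate is Xc + Xs .* (rand - 0.5) elementwise,
    so the population is uniform over the hyper-rectangle centered at Xc
    with side lengths Xs."""
    return [[c + s * (random.random() - 0.5) for c, s in zip(Xc, Xs)]
            for _ in range(K0)]

random.seed(0)
# The chapter's example space [0 -1]^T <= x <= [6 1]^T: Xc = [3 0]^T, Xs = [6 2]^T
pop = init_rect_population([3.0, 0.0], [6.0, 2.0], 50)
```

Every generated candidate lies inside the example rectangle, matching the role of the kron and diag terms in Eq. (1).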

This selection of initial candidate solutions can be adapted for other types of N dimensional spaces. For example, the initial population of K0 candidate solutions for a hyper-ellipsoid solution space can be generated in Matlab using

G(1:2:N,:) = kron(Xr, ones(1,K0)).*rand(ceil(N/2),K0);
G(2:2:N,:) = 2*pi*rand(floor(N/2),K0);   (2)

where Xr is a vector containing the hyper-elliptical solution space’s radii, the terms G(1:2:N,:) represent the magnitudes of each candidate solution and the terms G(2:2:N,:) represent their respective phases. If the solution space is not centered at the origin, then candidate solutions of the form [r θ]^T centered at [0 0]^T can be moved to [R Θ]^T centered at [r0 γ]^T using the transformations,

R = sqrt(r² + r0² + 2 r r0 cos(θ − γ)),  Θ = arctan[(r sin θ + r0 sin γ)/(r cos θ + r0 cos γ)].   (3)

If appropriate, these initial candidate solutions could be converted to rectangular coordinates using

x_k = R cos Θ and x_{k+1} = R sin Θ.   (4)
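A small Python sketch of the polar-coordinate machinery in Eqs. (2)–(4); `init_ellipse_2d` and `shift_polar` are hypothetical names, only the two dimensional case is shown, and the quadrant-correct atan2 is used in place of the arctan ratio in Eq. (3):

```python
import math, random

def init_ellipse_2d(rmax, K0):
    """Eq. (2) for N = 2: a random magnitude in [0, rmax] and a random
    phase in [0, 2*pi] per candidate; each candidate is an (r, theta) pair."""
    return [(rmax * random.random(), 2 * math.pi * random.random())
            for _ in range(K0)]

def shift_polar(r, theta, r0, gamma):
    """Eq. (3): recenter a polar point from the origin to (r0, gamma),
    then Eq. (4): return the result in rectangular coordinates."""
    R = math.sqrt(r * r + r0 * r0 + 2 * r * r0 * math.cos(theta - gamma))
    Theta = math.atan2(r * math.sin(theta) + r0 * math.sin(gamma),
                       r * math.cos(theta) + r0 * math.cos(gamma))
    return R * math.cos(Theta), R * math.sin(Theta)   # Eq. (4)
```

For example, shifting the origin point (r = 0) by a center (r0, γ) returns that center itself, which is a quick sanity check of Eq. (3).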

3.2 Ranking and stochastic selection

This algorithm uses an elitist, linear ranking, random selection method. Because the selection operator is elitist, the fittest individual, or the candidate solution vector with the best optimization criterion cost, is guaranteed to survive the selection process. Elitist selection algorithms can increase an algorithm’s exploitation and therefore increase the algorithm’s ability to converge, especially when steady-state misadjustment is significant [15]. Linear ranking selection methods evaluate each candidate solution by the cost function and rank the candidate solutions according to their costs [16, 17]. Starting with the candidate solution with the best cost, each candidate solution is assigned a selection probability in linearly decreasing increments so that all candidate solutions have a nonzero probability of selection. This method of selection allows diverse candidate solutions that might contain useful vector elements but have a poor cost to survive the selection process. This can improve an algorithm’s exploration and prevent the algorithm from converging to a local minimum or maximum.

The selection operator is the first operation performed for each generation, or iteration, of the algorithm. At the start of the algorithm’s nth iteration, the selection operator evaluates the K_n candidate solutions, x_k^n, with respect to the cost function, J, and ranks the candidate solutions according to their costs. For a minimization problem, the ranked candidate solutions are sorted from highest cost to lowest cost and are assigned consecutive integers from 1 to K_n so that x_1^n is the candidate solution with the highest, or worst, cost and x_{K_n}^n is the candidate solution with the lowest, or best, cost. After ranking, each candidate solution is assigned a selection probability, P(x_k^n), so that

P(x_k^n) = Σ_{m=1}^{k} Δp_m   (5)

where

Δp_m = (1/K_n)[η− + (η+ − η−)(m − 1)/(K_n − 1)],   (6)

η+ is a constant, and η− is a constant that is selected so that P(x_1^n) = η−/K_n, which is the selection probability of the worst candidate solution [16, 17].

Because this algorithm uses an elitist selection method, the best candidate solution is assured survival during the selection process which implies that

P(x_{K_n}^n) = 1.   (7)

Substituting Eq. (6) into Eq. (5), and the resulting equation into Eq. (7),

Σ_{m=1}^{K_n} (1/K_n)[η− + (η+ − η−)(m − 1)/(K_n − 1)] = 1.   (8)

Solving Eq. (8), an elitist selection method requires that

η+ = 2 − η−   (9)

where 0 < η− < η+.

The set of surviving candidate solutions is referred to as the mating pool. After this selection method, the mating pool’s mean size is

E[M_n] = Σ_{k=1}^{K_n} P(x_k^n) = ((2η− + η+)/6)(K_n + 1)   (10)

where E is the expectation operator and M_n is the number of candidate solutions in the mating pool after selection during the nth iteration. Because this algorithm uses an elitist selection method, Eq. (10) can be simplified by substituting Eq. (9) into Eq. (10), which results in

E[M_n] = ((2 + η−)/6)(K_n + 1)   (11)

which is the expected number of candidate solutions that survive this elitist linear ranking selection process during the nth iteration. Ref. [17] shows that setting η− ≈ 0.9 often provides an adequate balance between selective pressure, which allows for exploitation of the objective function, and population diversity, which allows for exploration of the objective function.
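Eqs. (5)–(9) can be exercised with a short sketch. This Python version (the function name and index-based bookkeeping are mine) returns the indices of the surviving candidates and always keeps the best one:

```python
import random

def ranking_selection(costs, eta_minus=0.9):
    """Elitist linear-ranking selection, after Eqs. (5)-(9).
    costs: list of J(x) values. Returns indices of survivors."""
    K = len(costs)
    eta_plus = 2.0 - eta_minus                      # Eq. (9)
    order = sorted(range(K), key=lambda i: costs[i], reverse=True)  # rank 1 = worst
    survivors = []
    for rank, i in enumerate(order, start=1):
        # cumulative selection probability P(x_k^n), Eqs. (5)-(6)
        p = sum(eta_minus + (eta_plus - eta_minus) * (m - 1) / (K - 1)
                for m in range(1, rank + 1)) / K
        if rank == K or random.random() < p:        # elitism: P(x_Kn^n) = 1, Eq. (7)
            survivors.append(i)
    return survivors

random.seed(5)
costs = list(range(50))                 # candidate 0 has the lowest (best) cost
pools = [ranking_selection(costs) for _ in range(300)]
avg_size = sum(len(p) for p in pools) / len(pools)
```

With η− = 0.9 and K_n = 50, the average pool size should sit near the value (2.9/6)(51) ≈ 24.6 predicted by Eq. (11).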

3.3 Differential evolution operator to improve convergence

Differential evolution algorithms generate new candidate solutions by adding a weighted difference between two randomly selected candidate solutions to a third randomly selected candidate solution. For this algorithm, the differential evolution operator to improve convergence generates a new candidate solution, v, using

v = x_k^n + R(x_m^n − x_j^n)   (12)

where x_k^n is the candidate solution randomly selected for differential evolution, x_m^n and x_j^n are two randomly selected candidate solutions from the mating pool, and R is a uniformly distributed random number from the interval [0,1]. The two candidate solutions, x_m^n and x_j^n, should be distinct and chosen so that x_m^n ≠ x_j^n; however, this can become difficult when the algorithm is converging.

Because this algorithm is an elitist algorithm, the best candidate solution, x_{K_n}^n, is always selected for this differential evolution operator. The other candidate solutions are selected randomly for differential evolution with a probability of P_DE1^n. Because R in Eq. (12) is a uniformly distributed random number from the interval [0,1], the value of R attenuates the difference between the two randomly selected candidate solutions. When this attenuated difference is added to the candidate solution selected for differential evolution, it creates a new candidate solution within a neighborhood of the candidate solution selected for differential evolution. As a result, this differential evolution operator improves the algorithm’s ability to converge to an optimal point in the neighborhood. Figure 1 shows a plot of the contours of a two dimensional cost function, J, and three candidate solutions selected for differential evolution. The figure illustrates how this differential evolution operator creates candidate solutions within a neighborhood of the candidate solution, x_k^n, selected for differential evolution.

Figure 1.

A plot showing an example of the differential evolution operator that improves convergence for a two dimensional cost function, J. The plot shows the contour lines of the cost function and the candidate solution vectors involved in the differential evolution operation.

On average, this operator creates

(M_n − 1)P_DE1^n + 1   (13)

new candidate solutions.
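As an illustration, Eq. (12) is straightforward to mirror outside of Matlab. The Python sketch below (the function name is mine) draws a single scalar R per new vector, which is what keeps the result inside a neighborhood of x_k^n:

```python
import random

def de_convergence(xk, xm, xj):
    """Eq. (12): v = x_k^n + R (x_m^n - x_j^n) with one scalar R ~ U[0,1]
    shared by every element, so v stays near x_k^n."""
    R = random.random()
    return [a + R * (b - c) for a, b, c in zip(xk, xm, xj)]
```

When x_m^n = x_j^n the difference vanishes and the operator simply returns x_k^n, which is why the two donors should be distinct.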

3.4 Differential evolution mutation operator to improve diversity

Because differential evolution algorithms generate new candidate solutions by adding a weighted difference between two randomly selected candidate solutions to a third randomly selected candidate solution, the differential evolution operator creates new candidate solution vectors that contain elements that are different from the candidate solutions that formed the new solution vector. As a result, the differential evolution operator is often referred to as a mutation operator whether the operator creates similarity or diversity. In this chapter, the differential evolution operator that increases diversity is referred to as the differential evolution mutation operator.

For this algorithm, the differential evolution mutation operator generates a new candidate solution, v, using

v = x_k^n + (1/4)diag(R)diag(Xs)(x_m^n − x_j^n)   (14)

where x_k^n is the candidate solution randomly selected for differential evolution mutation, x_m^n and x_j^n are two randomly selected candidate solutions from the mating pool, R is a vector whose elements are uniformly distributed random numbers from the interval [0,1], and Xs is a vector containing the solution space’s size for each dimension in rectangular coordinates or the diameters of an elliptic solution space. Again, the randomly selected candidate solutions, x_m^n and x_j^n, should be distinct and chosen so that x_m^n ≠ x_j^n.

Because this algorithm is an elitist algorithm, the best candidate solution, x_{K_n}^n, is always selected for this differential evolution mutation operator. The other candidate solutions are selected randomly for differential evolution mutation with a probability of P_DE2^n. Because the term (1/4)diag(R)diag(Xs) in Eq. (14) is a diagonal matrix of uniformly distributed random numbers ranging from zero to one quarter of the size of each dimension of the solution space, this term typically increases each dimension of the difference between the two randomly selected candidate solutions. As the entire population begins to converge and the differences between any two randomly selected candidate solutions begin to decrease, the term (1/4)diag(R)diag(Xs) increases these small differences, and these increased differences are added to the candidate solutions selected for differential evolution mutation. Therefore, the new candidate solutions typically lie outside the neighborhood of the candidate solutions selected for differential evolution. As a result, this differential evolution operator improves the algorithm’s diversity until the entire population converges to within very small differences.

The mean number of mutant solutions created by this process is

(M_n − 1)P_DE2^n + 1.   (15)
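Eq. (14) differs from Eq. (12) in that each dimension gets its own random scale tied to the solution space size. A Python sketch (illustrative name, elementwise lists in place of diag matrices):

```python
import random

def de_mutation(xk, xm, xj, Xs):
    """Eq. (14): v = x_k^n + (1/4) diag(R) diag(Xs)(x_m^n - x_j^n), with an
    independent R ~ U[0,1] per dimension, so even a small donor difference
    is rescaled relative to the solution space size Xs."""
    return [a + 0.25 * random.random() * s * (b - c)
            for a, s, b, c in zip(xk, Xs, xm, xj)]
```

Each element of the new vector deviates from x_k^n by at most one quarter of that dimension's solution-space size times the donor difference.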

3.5 A recombination operator to improve convergence and diversity

Taguchi crossover can greatly increase convergence rates [11, 18]. As a result, when the differential evolution operators discussed earlier are combined with Taguchi crossover, this algorithm can converge too quickly. To prevent this algorithm from converging prematurely to a local minimum or maximum, a recombination operator that creates a pair of new candidate solutions is added to this algorithm. To improve convergence, this recombination operator generates a new candidate solution, v, by averaging the selected candidate solution, x_k^n, with another randomly selected candidate solution, x_m^n, from the mating pool so that

v = (x_k^n + x_m^n)/2.   (16)

To improve diversity, this recombination operator generates another candidate solution by circularly shifting the elements of the newly formed candidate solution, v, by a uniformly distributed random integer and then randomly changing the signs of the elements. In Matlab, this new vector, w, can be created by

w = sign(randi(2,N,1) - 1.5).*circshift(v, randi(N));   (17)

Because this algorithm is an elitist algorithm, the best candidate solution, x_{K_n}^n, is always selected for this recombination operator. The other candidate solutions are selected randomly with a probability of P_cr^n. On average, this operator creates

2(M_n − 1)P_cr^n + 2   (18)

new candidate solutions.
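The pair of children defined by Eqs. (16) and (17) can be sketched in Python as follows; the function name is mine, and the shift/sign-flip steps mirror the Matlab circshift and sign(randi(2,N,1) − 1.5) expressions:

```python
import random

def recombine(xk, xm):
    """Eq. (16): v is the parents' elementwise average; Eq. (17): w is v
    circularly shifted by a random amount with randomly flipped signs."""
    v = [(a + b) / 2 for a, b in zip(xk, xm)]
    shift = random.randrange(1, len(v) + 1)            # randi(N): 1..N
    shifted = v[-shift:] + v[:-shift]                  # circshift(v, shift)
    w = [random.choice((-1.0, 1.0)) * e for e in shifted]
    return v, w
```

Whatever the random shift and signs, w is always a rearrangement of v with some signs flipped, so the magnitudes of its elements are preserved.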

3.6 Solution space

A candidate solution is considered infeasible if it does not lie within the solution space. If a new candidate solution is infeasible, that solution is made feasible by one of two methods. If a convergence operator, such as the differential evolution operator for convergence or the recombination operator for convergence, creates an infeasible candidate solution, the infeasible solution vector is moved to the nearest edge of the solution space by changing the vector’s elements that lie outside the solution space to the nearest edge of the solution space. This method attempts to generate feasible solutions within the neighborhood of the original infeasible solution so that the intent of the convergence operator that created the infeasible solution is maintained.

If a diversity operator, such as the differential evolution mutation for diversity operator or the recombination operator for diversity, creates an infeasible solution, the infeasible solution vector is moved into the solution space by performing a spatially circular shift of the infeasible solution vector’s elements. For example, if an infeasible solution, v, is created by a diversity operator in a hyper-rectangular solution space, the infeasible solution vector is moved into the solution space using

v = mod(v - (Xc - 0.5*Xs), Xs) + Xc - 0.5*Xs   (19)

where Xc is a vector containing the center of the solution space in rectangular coordinates, and Xs is a vector containing the size of each dimension of the solution space in rectangular coordinates. Similarly, if an infeasible solution, v, is created by a diversity operator in an elliptical solution space centered at the origin, the infeasible solution vector, v, expressed in polar coordinates, [r θ]^T, is moved into the solution space using

r_k = rem(r_k, r_k^max),  θ_k = θ_k + π   (20)

where rem is the remainder function, r_k is a radius that places the candidate solution outside of the solution space, r_k^max is the maximum value of r_k that keeps the candidate solution inside the solution space, and θ_k is the angle associated with the radius r_k. This method attempts to generate feasible solutions away from the neighborhood of the original infeasible solution so that the intent of the diversity operator that created the infeasible solution is maintained.
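The two repair methods of this section, clamping for convergence operators and the circular wrap of Eq. (19) for diversity operators, can be sketched for the hyper-rectangular case as follows (function names are mine):

```python
def clamp_to_box(v, Xc, Xs):
    """Repair for convergence operators: move each out-of-range element of v
    to the nearest edge of the hyper-rectangle [Xc - Xs/2, Xc + Xs/2]."""
    return [min(max(vi, c - 0.5 * s), c + 0.5 * s) for vi, c, s in zip(v, Xc, Xs)]

def wrap_into_box(v, Xc, Xs):
    """Repair for diversity operators, Eq. (19): circularly wrap v into the box
    (Matlab: mod(v - (Xc - 0.5*Xs), Xs) + Xc - 0.5*Xs)."""
    lo = [c - 0.5 * s for c, s in zip(Xc, Xs)]
    return [((vi - l) % s) + l for vi, l, s in zip(v, lo, Xs)]
```

For the example space Xc = [3 0]^T, Xs = [6 2]^T, an element at 7 clamps to the edge value 6 but wraps to 1, illustrating how the wrap deliberately lands far from the original infeasible point.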

3.7 Taguchi crossover

A crossover operator is a recombination operator that combines the elements from two or more parent candidate solutions to generate a new offspring candidate solution. Taguchi crossover generates new candidate solutions by intelligently selecting elements from the two or more parent solution vectors [11]. Taguchi crossover is a simple design of experiments method that creates a near optimal candidate solution from the parent candidate solutions. Consequently, Taguchi crossover can greatly increase an algorithm’s rate of convergence [11, 18].

Before selecting candidate solutions for Taguchi crossover, all new candidate solutions created by the other operators are added to the mating pool. Adding M_n to the sum of Eq. (13), Eq. (15) and Eq. (18) implies that the mean number of candidate solutions in the mating pool at this stage is

M_n + (M_n − 1)P_3^n + 4   (21)

where P_3^n = P_DE1^n + P_DE2^n + 2P_cr^n.

Because this is an elitist algorithm, the best candidate solution is always selected for Taguchi crossover. The other candidate solutions from the mating pool are selected randomly for Taguchi crossover with a probability of P_Tc. For two level Taguchi crossover, crossover involving two parent solutions, one other candidate solution is selected randomly from the mating pool. For three level Taguchi crossover, crossover involving three parent solutions, two other candidate solutions are selected randomly from the mating pool.

On average, the Taguchi crossover operator creates

[(M_n − 1)(1 + P_3^n) + 4]P_Tc + 1   (22)

new candidate solutions.
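The chapter does not list code for Taguchi crossover, so the sketch below shows one standard way to realize a two level version for two parents: build a two level orthogonal array over the N vector elements (here via the common parity-based construction, an assumption on my part), evaluate each experiment, and take each element from whichever parent's level has the lower mean cost:

```python
def taguchi_crossover(cost, p1, p2):
    """Two-level Taguchi crossover (sketch). Each of the N elements is a
    two-level factor: level 0 takes the element from p1, level 1 from p2."""
    N = len(p1)
    m = 1
    while (1 << m) - 1 < N:          # need 2^m - 1 >= N two-level columns
        m += 1
    rows = 1 << m
    # Parity construction of an orthogonal array: entry(i, j) = parity(i & j)
    levels = [[bin(i & j).count("1") % 2 for j in range(1, N + 1)]
              for i in range(rows)]
    costs = [cost([p2[k] if lv[k] else p1[k] for k in range(N)]) for lv in levels]
    child = []
    for k in range(N):               # pick the better level per factor (main effects)
        c0 = [c for c, lv in zip(costs, levels) if lv[k] == 0]
        c1 = [c for c, lv in zip(costs, levels) if lv[k] == 1]
        child.append(p2[k] if sum(c1) / len(c1) < sum(c0) / len(c0) else p1[k])
    return child

sphere3 = lambda x: sum(v * v for v in x)
best_child = taguchi_crossover(sphere3, [3.0, 0.0, 2.0], [0.0, 4.0, 0.0])
```

For the two parents above, the factor-wise analysis assembles the element combination [0, 0, 0], which neither parent contains, illustrating how Taguchi crossover can produce a child that is better than both parents.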

3.8 Managing population size

Because the selection operator, the differential evolution operators, the recombination operators and the Taguchi crossover operator generate a random number of new candidate solutions, the population size and mating pool size vary each generation, or iteration, of the algorithm. After the Taguchi crossover operator, the average number, E[K_n], of candidate solutions can be calculated by adding Eq. (21) and Eq. (22), which results in

E[K_n] = [(M_n − 1)(1 + P_3^n) + 4](1 + P_Tc) + 2.   (23)

To maintain the population’s size, Kn, at the population’s initial size, K0, the probabilities of at least one of the operators must vary so that

E[K_n] = K_0.   (24)

Substituting Eq. (23) into Eq. (24) and solving for P3n,

P_3^n = (K_0 − 6 − 4P_Tc)/((1 + P_Tc)(M_n − 1)) − 1   (25)

where P_Tc is assumed to be fixed and P_3^n varies. In this algorithm, the requirement in Eq. (25) is met by fixing P_DE1^n and P_DE2^n, and letting

P_cr^n = (P_3^n − P_DE1^n − P_DE2^n)/2.   (26)
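This population-size bookkeeping is easy to check numerically. The Python sketch below (function names are mine) computes P_3^n from Eq. (25), P_cr^n from Eq. (26), and verifies that the expected population in Eq. (23) comes back to K_0:

```python
def p3_required(K0, Mn, P_Tc):
    """Eq. (25): the P3 value that keeps the expected population size at K0."""
    return (K0 - 6 - 4 * P_Tc) / ((1 + P_Tc) * (Mn - 1)) - 1

def crossover_probability(K0, Mn, P_Tc, P_DE1, P_DE2):
    """Eq. (26): solve for Pcr given fixed P_DE1, P_DE2 and P_Tc."""
    return (p3_required(K0, Mn, P_Tc) - P_DE1 - P_DE2) / 2

def expected_population(Mn, P3, P_Tc):
    """Eq. (23): mean population size after Taguchi crossover."""
    return ((Mn - 1) * (1 + P3) + 4) * (1 + P_Tc) + 2
```

With the Section 4 settings (K_0 = 50, P_Tc = 0.22, P_DE1 = 0.16, P_DE2 = 0.2) and a pool of M_n = 25, the recovered P_cr^n is a small positive probability and the expected population size is exactly 50.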

4. Optimization test function results

To evaluate this algorithm’s ability to solve optimization problems, the algorithm was applied to 13 commonly used global numerical optimization test functions. Table 1 lists these 13 cost functions, J_1(x) through J_13(x), where x = [x_1 x_2 … x_N]^T, and their solution spaces. These functions include a spherical, three hyper-ellipsoid, the sum of different powers, Rastrigin’s, Schwefel’s, Griewank’s, Rosenbrock’s valley, Styblinski-Tang, Ackley’s Path, Price-Rosenbrock, and Eggholder’s functions. The first 11 functions, J_1(x) through J_11(x), are multidimensional functions and are tested for two dimensions (N = 2) and 35 dimensions (N = 35). Functions J_12(x) and J_13(x) are two dimensional functions and were only tested for N = 2. For all 13 functions, J_min = 0, where J_min is the global minimum value of the cost function. Figure 2 shows a plot of the two dimensional Schwefel function and Figure 3 shows a plot of the two dimensional Eggholder function.

Function                                                                                          Solution space x_k

J1(x) = Σ_{k=1}^{N} x_k^2                                                                         [−5.12, 5.12]
J2(x) = Σ_{k=1}^{N} k x_k^2                                                                       [−5.12, 5.12]
J3(x) = Σ_{k=1}^{N} k (x_k − 5k)^2                                                                [−500, 500]
J4(x) = Σ_{m=1}^{N} (Σ_{k=1}^{m} x_k)^2                                                           [−65.536, 65.536]
J5(x) = Σ_{k=1}^{N} |x_k|^{k+1}                                                                   [−1, 1]
J6(x) = 10N + Σ_{k=1}^{N} (x_k^2 − 10 cos(2π x_k))                                                [−5.12, 5.12]
J7(x) = 418.9828872724338 N − Σ_{k=1}^{N} x_k sin(√|x_k|)                                         [−500, 500]
J8(x) = 1 + Σ_{k=1}^{N} x_k^2 / 4000 − Π_{k=1}^{N} cos(x_k / √k)                                  [−600, 600]
J9(x) = Σ_{k=1}^{N−1} [100 (x_k^2 − x_{k+1})^2 + (x_k − 1)^2]                                     [−5, 10]
J10(x) = 39.16616570377142 N + (1/2) Σ_{k=1}^{N} (x_k^4 − 16 x_k^2 + 5 x_k)                       [−5, 5]
J11(x) = 20 + e − 20 exp(−0.2 √((1/N) Σ_{k=1}^{N} x_k^2)) − exp((1/N) Σ_{k=1}^{N} cos(2π x_k))    [−32.768, 32.768]
J12(x) = 100 (x_2 − x_1^2)^2 + (6.4 (x_2 − 0.5)^2 − x_1 − 0.6)^2                                  [−5, 5]
J13(x) = 959.640662720851 − x_1 sin(√|x_1 − x_2 − 47|) − (x_2 + 47) sin(√|x_1/2 + x_2 + 47|)      [−512, 512]

Table 1.

Optimization test functions and their solution spaces.

Figure 2.

A plot of the two dimensional Schwefel’s function.

Figure 3.

A plot of the two dimensional Eggholder’s function.
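To make Table 1 concrete, a few of its cost functions can be written out directly from their standard definitions. The sketch below is mine (the function names are not the chapter's); each function evaluates to approximately zero at its known global minimizer:

```python
import math

def sphere(x):                     # J1: minimum 0 at x = 0
    return sum(xk ** 2 for xk in x)

def rastrigin(x):                  # J6: minimum 0 at x = 0
    return 10 * len(x) + sum(xk ** 2 - 10 * math.cos(2 * math.pi * xk) for xk in x)

def schwefel(x):                   # J7: minimum ~0 at x_k = 420.9687 (literature value)
    return (418.9828872724338 * len(x)
            - sum(xk * math.sin(math.sqrt(abs(xk))) for xk in x))

def eggholder(x):                  # J13 (2-D only): minimum ~0 at (512, 404.2319)
    x1, x2 = x
    return (959.640662720851
            - x1 * math.sin(math.sqrt(abs(x1 - x2 - 47)))
            - (x2 + 47) * math.sin(math.sqrt(abs(x1 / 2 + x2 + 47))))
```

The additive constants in J7 and J13 shift the usual Schwefel and Eggholder functions so that the global minimum value is 0, matching the chapter's convention that J_min = 0 for all 13 functions.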

For all functions and dimensions, the initial population size, K_0, was set to 50, η = 0.9, P_DE1(n) = 0.16, P_DE2(n) = 0.2, P_Tc = 0.22 and P_cr(n) was set each iteration according to Eq. (18). The algorithm was applied 1000 times to each function, and the algorithm was assumed to have converged when a solution, x_opt, was determined such that J(x_opt) ≤ J_tol, where J_tol for each function is listed in Table 2. Table 2 lists the means and standard deviations of the number of iterations that the algorithm required to converge with two level (L = 2) and three level (L = 3) Taguchi crossover for each function. Table 2 also lists the means and standard deviations of the number of cost function evaluations that the algorithm required to converge with two level and three level Taguchi crossover for each function.

Each entry below is the mean with the standard deviation in parentheses, for two level (L=2) / three level (L=3) Taguchi crossover.

Cost Fn (J_tol): N=2 iterations; N=2 J evaluations; N=35 iterations; N=35 J evaluations

J1 (10^−300): 43 (22) / 30 (17); 3950 (2000) / 4250 (2400); 130 (4) / 105 (3); 58,000 (2600) / 90,300 (4200)
J2 (10^−300): 45 (22) / 31 (17); 4150 (2050) / 4300 (2400); 137 (4) / 110 (3); 61,000 (2500) / 94,400 (4300)
J3 (10^−8): 44 (15) / 39 (20); 4000 (1350) / 5450 (2800); 1170 (1540) / 975 (1650); 5.3e5 (7.0e5) / 8.3e5 (1.4e6)
J4 (10^−300): 68 (34) / 71 (33); 6200 (3150) / 9950 (4700); 6917 (1520) / 2990 (765); 3.1e6 (6.9e5) / 2.5e6 (6.6e5)
J5 (10^−300): 42 (17) / 33 (15); 3800 (1600) / 4700 (2200); 922 (108) / 343 (47); 4.1e5 (48,550) / 2.9e5 (40,200)
J6 (10^−300): 31 (12) / 24 (10); 2800 (1150) / 3300 (1450); 84 (4) / 65 (3); 37,500 (2420) / 55,400 (3450)
J7 (10^−10): 40 (6) / 33 (5); 3650 (550) / 4500 (650); 155 (12) / 100 (7); 69,350 (6200) / 86,350 (6600)
J8 (10^−300): 39 (19) / 35 (18); 3550 (1750) / 4900 (2600); 90 (4) / 77 (4); 40,300 (2500) / 65,700 (4650)
J9 (10^−10): 65 (82) / 67 (88); 5950 (7500) / 9500 (12,500); 14,325 (6920) / 5776 (4804); 6.5e6 (3.1e6) / 4.9e6 (4.1e6)
J10 (10^−11): 31 (3) / 26 (3); 2850 (300) / 3650 (450); 105 (6) / 70 (3); 47,000 (2650) / 60,000 (3250)
J11 (10^−15): 40 (22) / 28 (16); 3700 (2050) / 3900 (2250); 110 (17) / 72 (16); 49,600 (7400) / 61,400 (14,200)
J12 (10^−2): 19 (24) / 18 (19); 1750 (2200) / 2550 (2700); NA; NA
J13 (10^−11): 869 (1068) / 881 (1162); 7.9e4 (9.7e4) / 1.2e5 (1.6e5); NA; NA

Table 2.

Results for the optimization of the 2-D and 35-D test functions where K0=50, η=0.9, PDE1=0.16, PDE2=0.2 and PTc=0.22 for two level (L=2) and three level (L=3) Taguchi crossover. Results are averages over 1000 runs.

The number of cost function evaluations that the algorithm required to converge can also be calculated as a function of the algorithm's average population size, average mating pool size and operator probabilities. For example, for the 2-D functions, each two level Taguchi crossover requires four cost function evaluations and each three level Taguchi crossover requires nine. Therefore, the average number of cost function evaluations per algorithm iteration is

K̄ + 4 + 4 P_Tc ((M̄ − 1)(1 + P̄_3) + 4)    (27)

for two level Taguchi crossover and

K̄ + 9 + 9 P_Tc ((M̄ − 1)(1 + P̄_3) + 4)    (28)

for three level Taguchi crossover, where K̄ is the average population size, M̄ is the average mating pool size after selection and P̄_3 is the average of P_3(n).

Similarly, for the 35-D functions, each two level Taguchi crossover requires 40 cost function evaluations and each three level Taguchi crossover requires 81. Therefore, the average number of cost function evaluations per algorithm iteration is

K̄ + 40 + 40 P_Tc ((M̄ − 1)(1 + P̄_3) + 4)    (29)

for two level Taguchi crossover and

K̄ + 81 + 81 P_Tc ((M̄ − 1)(1 + P̄_3) + 4)    (30)

for three level Taguchi crossover.
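The four expressions in Eqs. (27) through (30) differ only in the per-crossover evaluation count, so they can be sketched as a single function. The function name is mine, and the grouping follows the equations above:

```python
def evals_per_iteration(k_bar, m_bar, p3_bar, p_tc, evals_per_crossover):
    """Average cost-function evaluations per algorithm iteration.

    evals_per_crossover is 4 (2-D, L=2), 9 (2-D, L=3),
    40 (35-D, L=2) or 81 (35-D, L=3).  k_bar, m_bar and p3_bar are the
    average population size, average mating pool size after selection
    and average of P3(n), respectively.
    """
    # One crossover for the best candidate plus an expected
    # p_tc * ((m_bar - 1)(1 + p3_bar) + 4) crossovers for the rest.
    return k_bar + evals_per_crossover * (1 + p_tc * ((m_bar - 1) * (1 + p3_bar) + 4))
```

For any fixed population and operator settings, the 2-D two level count (4 evaluations per crossover) is necessarily smaller than the three level count (9 per crossover), which matches the trade-off reported in Table 2.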

Although no algorithm can solve all types of optimization problems [8, 9], the data in Table 2 shows that the algorithm converged below the specified J_tol in 100% of the 1000 runs for every test function. The data in Table 2 also shows that this algorithm requires significantly more iterations to converge for Eggholder's function, J13, and for Rosenbrock's valley, J9, when N=35, which implies that the algorithm has not been optimized for all types of cost functions. The data also shows that although the three level Taguchi crossover algorithm typically converges in fewer iterations than the two level Taguchi crossover algorithm, the two level algorithm typically requires fewer cost function evaluations than the three level algorithm.


5. Summary and conclusions

This chapter presents a hybrid genetic, differential evolution algorithm that represents the set of parameters being optimized in a vector. The algorithm uses an elitist, ranking, random selection method to generate a mating pool. Candidate solutions from the mating pool are randomly selected for two differential evolution operators, and two recombination operators. The new candidate solutions generated by these operators are added to the mating pool. Candidate solutions from this expanded mating pool are selected randomly for Taguchi crossover.

To evaluate this algorithm's ability to solve optimization problems, the algorithm was applied to 13 commonly used global numerical optimization test functions, including a spherical, three hyper-ellipsoid, the sum of different powers, Rastrigin's, Schwefel's, Griewank's, Rosenbrock's valley, Styblinski-Tang, Ackley's Path, Price-Rosenbrock, and Eggholder's functions. The algorithm was evaluated using both two and three level Taguchi crossover. In both cases, the algorithm converged below the specified J_tol in 100% of the 1000 runs for every test function. Although the three level Taguchi crossover algorithm typically converged in fewer iterations than the two level Taguchi crossover algorithm, the two level algorithm typically required fewer cost function evaluations than the three level algorithm.

Although this algorithm required significantly more iterations to converge for Eggholder's function and for the 35-D Rosenbrock's valley function, ref. [19] shows that this algorithm has been used successfully to design digital infinite impulse response (IIR) filters with arbitrary magnitude responses. As a result, the simple optimization algorithm described in this chapter can be expected to perform well in similar engineering optimization applications.

References

  1. Fogel DB. What is evolutionary computation? IEEE Spectrum. 2000;37(2):26-32. DOI: 10.1109/6.819926
  2. Goldberg D. Genetic Algorithms in Search, Optimization and Machine Learning. Reading, MA: Addison Wesley; 1989. p. 372. DOI: 10.5860/9780201157673
  3. Storn R, Price K. Differential evolution - a simple and efficient heuristic for global optimization over continuous spaces. Journal of Global Optimization. 1997;11:341-359. DOI: 10.1023/a:1008202821328
  4. Bonyadi MR, Michalewicz Z. Particle swarm optimization for single objective continuous space problems: A review. Evolutionary Computation. 2017;25(1):1-54. DOI: 10.1162/evco_a_00180
  5. Chen S, Istepanian R, Luk BL. Digital IIR filter design using adaptive simulated annealing. Digital Signal Processing. 2001;11(3):241-251. DOI: 10.1006/dspr.2000.0384
  6. Karaboga D. An Idea Based on Honey Bee Swarm for Numerical Optimization. Technical Report TR06. Erciyes University, Computer Engineering Department; 2005
  7. Niu B, Wang H. Bacterial colony optimization. Discrete Dynamics in Nature and Society. 2012;2012:1-28. DOI: 10.1155/2012/698057
  8. He Y, Yuen SY, Lou Y, Zhang X. A sequential algorithm portfolio approach for black box optimization. Swarm and Evolutionary Computation. 2019;44:559-570. DOI: 10.1016/j.swevo.2018.07.001
  9. Wolpert DH, Macready WG. No free lunch theorems for optimization. IEEE Transactions on Evolutionary Computation. 1997;1(1):67-82. DOI: 10.1109/4235.585893
  10. Grosan C, Abraham A. Hybrid evolutionary algorithms: Methodologies, architectures, and reviews. Studies in Computational Intelligence. 2007;75:1-17. DOI: 10.1007/978-3-540-73297-6_1
  11. Tsai J-T, Liu T-K, Chou J-H. Hybrid Taguchi-genetic algorithm for global numerical optimization. IEEE Transactions on Evolutionary Computation. 2004;8(4):365-377. DOI: 10.1109/tevc.2004.826895
  12. Blickle T, Thiele L. A comparison of selection schemes used in evolutionary algorithms. Evolutionary Computation. 1996;4(4):361-394. DOI: 10.1162/evco.1996.4.4.361
  13. Goldberg DE, Deb K. A comparative analysis of selection schemes used in genetic algorithms. Foundations of Genetic Algorithms. 1991;1:69-93. DOI: 10.1016/b978-0-08-050684-5.50008-2
  14. Črepinšek M, Liu S-H, Mernik M. Exploration and exploitation in evolutionary algorithms. ACM Computing Surveys. 2013;45(3):1-33. DOI: 10.1145/2480741.2480752
  15. Reeves CR, Rowe JE. Genetic Algorithms - Principles and Perspectives. In: Operations Research/Computer Science Interfaces Series. US: Springer; 2002. DOI: 10.1007/b101880
  16. Corus D, Lissovoi A, Oliveto PS, Witt C. On steady-state evolutionary algorithms and selective pressure: Why inverse rank-based allocation of reproductive trials is best. ACM Transactions on Evolutionary Learning and Optimization. 2021;1(1):1-38. DOI: 10.1145/3427474
  17. Bäck T. Optimization by Means of Genetic Algorithms. Technical University of Ilmenau; 1989. pp. 163-169. Available from: citeseer.ist.psu.edu/71967.html [Accessed: 2022-06-01]
  18. Leung Y-W, Wang Y. An orthogonal genetic algorithm with quantization for global numerical optimization. IEEE Transactions on Evolutionary Computation. 2001;5(1):41-53. DOI: 10.1109/4235.910464
  19. Stubberud P. Digital IIR filter design using a differential evolution algorithm with polar coordinates. In: 2022 IEEE 12th Annual Computing and Communication Workshop and Conference (CCWC). IEEE; 2022. DOI: 10.1109/ccwc54503.2022.9720786
