
Metaheuristics and Chaos Theory

Written By

Rui Tang, Simon Fong and Nilanjan Dey

Submitted: 07 May 2017 Reviewed: 31 October 2017 Published: 28 March 2018

DOI: 10.5772/intechopen.72103

From the Edited Volume

Chaos Theory

Edited by Kais A. Mohamedamen Al Naimee


Abstract

Chaos theory is a novel approach that has been widely applied in various fields. One of its most famous applications is the introduction of chaos theory into optimization. Chaos is highly sensitive to initial conditions and has the feature of randomness. Because chaos combines randomness with dynamical properties, it can accelerate the convergence of optimization algorithms and enhance their diversity. In this work, we integrated 10 chaotic maps into several metaheuristic algorithms in order to extensively investigate the effectiveness of chaos theory for improving the search capability. Extensive experiments have been carried out, and the results show that chaotic optimization can be a very promising tool for solving optimization problems.

Keywords

  • chaos
  • optimization
  • metaheuristic algorithms

1. Introduction

Chaos theory is a novel approach that has been widely applied in various fields [1]. One of the most famous applications is the introduction of chaos theory into optimization. Note that chaos is highly sensitive to initial conditions and has the feature of randomness [2]. Most metaheuristic optimization algorithms are stochastic algorithms, whose randomness is obtained from probability distributions such as the uniform and Gaussian distributions. There is another source of randomness used in the optimization field, called chaotic optimization (CO), which has the properties of dynamism, nonrepetition, and ergodicity. The dynamic property ensures that the algorithm produces diverse solutions and searches different modes of the objective search space, even on complex multimodal landscapes. Moreover, because of its ergodicity, CO can perform searches at higher speeds than stochastic algorithms that rely on probability distributions. Since chaos combines randomness with these dynamical properties, it can accelerate the convergence of an optimization algorithm and enhance its diversity.

Since chaos shares these properties with metaheuristic algorithms, it is natural that numerous metaheuristic algorithms have been combined with chaos theory. Generally, most metaheuristic optimization algorithms are considered stochastic approaches. Compared to deterministic methods, stochastic algorithms are much more flexible and universal. The basic idea of a metaheuristic optimization algorithm is to use a greedy strategy to search the promising solution areas and find the optimum. There are three categories of metaheuristic optimization algorithms. Evolutionary algorithms, which mimic the evolution process, are the most popular kind in this class; they include the genetic algorithm (GA) [3], differential evolution (DE) [4], and evolution strategies (ES) [5]. The second category is swarm intelligence (SI), the population-based algorithms; the particle swarm optimization algorithm (PSO) [6], the wolf search algorithm (WSA) [7], and cuckoo search (CS) [8] are well-known algorithms in this category. Algorithms that belong neither to evolutionary algorithms nor to SI, such as dynamic group optimization (DGO) [9, 10], can be considered the third category. In most cases, a metaheuristic algorithm has two phases: exploration and exploitation. Simply put, the exploration phase occurs when the algorithm discovers promising search areas, and the exploitation phase refines the most promising solutions obtained from the exploration phase as quickly as possible.

Although many metaheuristic algorithms can accelerate the search, they still have one major drawback: premature convergence. If the search space has many local optima, it is very easy to become trapped in one of them. To deal with this problem, researchers have proposed many methods, for example, adaptive methods that adjust parameters and hybrid methods that enhance the search capability. However, balancing global exploration and local exploitation is still difficult, because better global exploration capability is usually accompanied by worse local exploitation, and vice versa. Introducing chaos is a particularly suitable approach to solving these problems. Chaos has the properties of nonrepetition, ergodicity, and dynamism: the dynamic property ensures that the solutions produced by the algorithms are varied and search different landscapes of the search space, while the ergodicity and nonrepetition increase the search speed. Chaotic optimization not only accelerates the algorithm but also enhances the variety of its movement patterns. In this work, we integrated 10 chaotic maps into several metaheuristic algorithms to extensively investigate the effectiveness of chaos theory for improving the search capability. The performance of the approach is tested on a suite of benchmark functions from the CEC2009 competition, which contains both unimodal and multimodal problems.


2. Methods

In reality, optimization problems are often very hard to solve; many of them are NP-hard. To solve such problems, optimization algorithms have to be used. According to the no-free-lunch theorem, no single algorithm is efficient for all problems. As a result, many optimization algorithms have been developed, with various improvement techniques to enhance their search capability and help them cope with these challenging optimization problems. Chaos can be described as a bounded nonlinear system with ergodic and stochastic properties that is very sensitive to its initial conditions and parameters. Recently, numerous improvements relying on the chaos approach have been proposed for metaheuristic algorithms.

2.1. Metaheuristic algorithms

In this section, we introduce three of the most well-known chaos-based metaheuristic optimization algorithms.

2.1.1. Particle swarm optimization algorithms (PSO)

The original particle swarm optimization algorithm is a population-based heuristic method that finds optimal solutions by mimicking the social behavior of bird flocking and swarming. It was proposed by Kennedy and Eberhart in 1995 [6].

The position of the ith particle can be described as x_i = (x_{i1}, x_{i2}, …, x_{iD}), where D represents the number of dimensions, and its velocity can be written as v_i = (v_{i1}, v_{i2}, …, v_{iD}). Each particle coexists and evolves simultaneously based on knowledge shared with neighboring particles; it makes use of its own memory and the knowledge gained by the swarm as a whole to find the best solution. The best position previously encountered by the ith particle is denoted by its individual best position p_i = (p_{i1}, p_{i2}, …, p_{iD}), and the best position found by the whole swarm by the global best g = (g_1, g_2, …, g_D). At each iteration/generation, the position and velocity of the ith particle are updated using p and g. The update equations can be formulated as:

v_i^{t+1} = w v_i^t + c_1 r_1 (p_i − x_i^t) + c_2 r_2 (g − x_i^t)        (1)
x_i^{t+1} = x_i^t + v_i^{t+1}        (2)

where r_1 and r_2 are random numbers drawn uniformly from (0, 1), and c_1 and c_2 are acceleration constants that control the step size. v_i^{t+1} denotes the velocity of the ith particle at generation t + 1, and w is an inertia weight that controls the impact of the previous velocity on the current one.

In the chaotic particle swarm optimization algorithm [1], the random generator is replaced by a sequence generated by a chaotic map: r_1 and r_2 are replaced by chaotic values, which can be described as follows:

C_{t+1} = k C_t (1 − C_t)        (3)

where C_t is the value of the chaotic sequence at step t of each independent run, and k is the driving parameter that controls the behavior of C_t. As k increases, C_t goes through further bifurcations, eventually resulting in chaos. The updated velocity formula is as follows:

v_i^{t+1} = w v_i^t + c_1 C (p_i − x_i^t) + c_2 (1 − C) (g − x_i^t)        (4)

In Eq. (4), C is the value produced by the chaotic map, lying between 0 and 1.
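To make the substitution concrete, the following minimal Python sketch shows one CPSO update step, assuming the logistic map of Eq. (3) with k = 4 drives C; the function names and the inertia weight w = 0.7 are illustrative choices, while c1 = c2 = 1.49 matches the settings used in the experiments of Section 3.

import numpy as np

def logistic_step(c, k=4.0):
    # One step of Eq. (3): C_{t+1} = k * C_t * (1 - C_t)
    return k * c * (1.0 - c)

def cpso_update(x, v, p_best, g_best, c, w=0.7, c1=1.49, c2=1.49):
    # Eq. (4): the chaotic value c replaces the uniform random numbers r1, r2
    v_new = w * v + c1 * c * (p_best - x) + c2 * (1.0 - c) * (g_best - x)
    x_new = x + v_new                      # Eq. (2)
    return x_new, v_new, logistic_step(c)  # also advance the chaotic sequence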

2.1.2. Krill herd algorithm (KH)

The krill herd (KH) algorithm mimics the behavior of krill individuals in krill herds [11]. This algorithm was proposed by Gandomi and Alavi in 2012. There are three main actions in KH, described as follows:

Motion induction: this activity refers to the density maintenance of the herd. It can be described as follows:

N_i^{t+1} = N^{max} α_i + ω_n N_i^t        (5)

where N^{max} is the maximum induced speed, α_i = α_i^{local} + α_i^{target}, and ω_n is the inertia weight. α_i^{local} and α_i^{target} are the local effect and the target effect, respectively, and α_i^{target} is calculated as follows:

α_i^{target} = C^{best} K_{i,best} X_{i,best}        (6)

where C^{best} is the coefficient, which can be defined as follows:

C^{best} = 2 (r + I / I_{max})        (7)

where r is a random number in (0, 1), I is the current iteration number, and I_{max} is the maximum number of iterations.

The second activity is foraging, and the mathematical equation is as follows:

F_i^{t+1} = v_f β_i + ω_f F_i^t        (8)

where β_i = β_i^{food} + β_i^{best}, v_f is the foraging speed, ω_f is the inertia weight of the foraging movement, β_i^{food} describes the attraction of food, and β_i^{best} is the effect of the best solution obtained so far.

The third activity is diffusion, which is a random activity, and it can be defined as follows:

D_i^{t+1} = D^{max} (1 − I / I_{max}) δ        (9)

where D^{max} is the maximum diffusion speed and δ is a random vector in [−1, 1].

The position of krill i is updated from t to t + Δt, which can be formulated as follows:

X_i(t + Δt) = X_i(t) + Δt (dX_i / dt)        (10)

Note: Δt can be regarded as a scale factor of the speed vector.

In the chaotic KH [12], researchers used chaotic maps in tuning the random vector, which improves the ability of KH to avoid local optima. In the classical KH, the most important random value is the one calculated in Eq. (7); therefore, the parameter r is substituted by a chaotic map as follows:

C^{best} = 2 (C(t) + I / I_{max})        (11)

where C(t) is the value of the chaotic map at the tth iteration.
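As a small illustration, the sketch below contrasts the classical coefficient of Eq. (7) with its chaotic replacement of Eq. (11); the function names are hypothetical, and the chaotic value c_t is assumed to come from any of the maps of Section 2.3.

import random

def cbest_classical(iteration, max_iter):
    # Eq. (7): C_best = 2 * (r + I / I_max), with r ~ U(0, 1)
    return 2.0 * (random.random() + iteration / max_iter)

def cbest_chaotic(c_t, iteration, max_iter):
    # Eq. (11): the chaotic value C(t) replaces the random number r
    return 2.0 * (c_t + iteration / max_iter)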

2.1.3. Biogeography-based optimization algorithm (BBO)

The biogeography-based optimization algorithm (BBO) was inspired by biogeography [13]. It simulates the relations between species located in different habitats, such as immigration, mutation, and emigration. BBO can be summarized by three rules.

  • Individuals living in habitats with a high habitat suitability index (HSI) are more likely to emigrate to habitats with a low HSI.

  • Habitats with a low HSI are more likely to accept immigrants from habitats with a high HSI.

  • The HSI value may change randomly.

Each habitat has three rates: immigration λ, emigration μ, and mutation m. These three rates can be calculated with the following equations:

μ = E n / N        (12)
λ = I (1 − n / N)        (13)

where n is the number of habitants, N is the maximum number of habitants, E is the maximum emigration rate, and I is the maximum immigration rate. The mutation rate is defined as follows:

m = M (1 − p / P)        (14)

where M is defined by the user, p is the mutation probability, and P = max(p).

In chaotic BBO [14], researchers used a chaotic map to provide chaotic behavior for the selection, emigration, and mutation operators.

For the selection operator, the random value is substituted by the value from the chaotic map. The pseudo code is as follows:

if C(t) < λ_i then

emigrate from H_i to H_j, chosen with probability proportional to λ_i

end if

where C(t) is the value from the chaotic map in the tth iteration.

For emigration, it can be calculated as follows:

if C(t) < μ_i then

select a random habitant in x_i and replace it with one from x_j

end if

The probability of mutation is defined by the chaotic map as follows:

for i = 1 to number of habitants in the kth habitat

if C(t) < mutation_rate(k) then

mutate the ith habitant

end if

end for

where mutation_rate(k) denotes the mutation rate of the kth habitat.
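Putting the three fragments together, a minimal Python sketch of the chaos-gated migration and mutation might look as follows. The helper names are illustrative; the roulette-wheel choice of the source habitat and the re-initialization used as mutation are standard BBO conventions assumed here, and the chaotic generator stands in for the uniform RNG.

import numpy as np

def logistic_gen(c0=0.7, k=4.0):
    # Endless logistic-map sequence used in place of a uniform RNG
    c = c0
    while True:
        c = k * c * (1.0 - c)
        yield c

def cbbo_step(pop, lam, mu, mut_rate, lower, upper, chaos):
    # pop: (n_habitats, dim) array; lam, mu, mut_rate: per-habitat rates, Eqs. (12)-(14)
    n, dim = pop.shape
    new_pop = pop.copy()
    for i in range(n):
        for d in range(dim):
            if next(chaos) < lam[i]:                      # chaos-gated immigration
                j = np.random.choice(n, p=mu / mu.sum())  # emigrating habitat
                new_pop[i, d] = pop[j, d]
            if next(chaos) < mut_rate[i]:                 # chaos-gated mutation
                new_pop[i, d] = lower + next(chaos) * (upper - lower)
    return new_pop

chaos = logistic_gen()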

2.2. Phases in chaos-embedded metaheuristic algorithms

From Section 2.1, we can see that most chaos-embedded metaheuristic algorithms apply chaos in three key phases: initialization, operators, and the random generator. In this section, we describe them as follows:

Initialization: the starting positions in metaheuristic algorithms are generated randomly, and the diversity of the initial population is very important for helping the population spread over the search space. Therefore, the initial population is generated by chaotic maps, which produce a well-spread distribution thanks to the randomness and ergodicity of chaos. The chaotic sequence can accelerate convergence and enhance the global search capability. The pseudo code of the initialization is as follows:

for i = 1 to size of population

x_i = ω_i C_t (U − L)        (15)

end for

where ω_i is the weight of the ith individual, U and L are the upper and lower boundaries, respectively, and C_t is the chaotic sequence generated by a chaotic map.
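A minimal sketch of such a chaotic initialization, assuming the logistic map and taking the weight ω_i as 1 so that the chaotic value is simply mapped onto [L, U]:

import numpy as np

def chaotic_init(pop_size, dim, lower, upper, c0=0.7):
    # Eq. (15)-style initialization: a logistic-map sequence replaces the RNG
    c = c0
    pop = np.empty((pop_size, dim))
    for i in range(pop_size):
        for d in range(dim):
            c = 4.0 * c * (1.0 - c)             # logistic map step
            pop[i, d] = lower + c * (upper - lower)
    return pop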

Operators: generally, metaheuristic algorithms have several operators, such as selection, crossover, and mutation operators, most of which are controlled by probabilities. In order to improve the capability of finding the optimum, these probabilities can be substituted by a chaotic sequence. The mathematical formulas can be described as follows:

For a crossover operator:

x_i^{t+1} = x_i^t + C_t (x_i^t − x_j^t),   if C_t < 0
x_i^{t+1} = x_i^t,   otherwise        (16)

where C_t is the chaotic sequence produced by the chaotic map.

For a mutation operator:

x_{i,k}^{t+1} = C_t x_{i,k}^t,   if C_t < 0
x_{i,k}^{t+1} = x_{i,k}^t,   otherwise        (17)
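A sketch of both operators, assuming a chaotic map whose range includes negative values (for example the Chebyshev or iterative map of Section 2.3), so that the condition C_t < 0 can actually trigger; the helper names are illustrative.

import math

def chebyshev_step(x, k):
    # Eq. (19): Chebyshev map, range (-1, 1)
    return math.cos(k * math.acos(x))

def chaotic_crossover(x_i, x_j, c_t):
    # Eq. (16): perturb x_i relative to x_j when C_t < 0
    if c_t < 0:
        return [a + c_t * (a - b) for a, b in zip(x_i, x_j)]
    return list(x_i)

def chaotic_mutation(x_i, k, c_t):
    # Eq. (17): rescale the kth component when C_t < 0
    y = list(x_i)
    if c_t < 0:
        y[k] = c_t * y[k]
    return y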

Random generator: random parameters in metaheuristic algorithms, for instance in polynomial mutation, are replaced by chaotic sequences. For a solution x_i, the polynomial mutation is described as

x_i^{t+1} = x_i^t + C_t (x_i^t − x_j^t)        (18)

In the random-generator phase, C_t is recalculated from the chaotic map at every iteration. For example, if the chaotic map is the logistic map, then at the (t + 1)th iteration, C(t + 1) = 4 C(t) (1 − C(t)).
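In code, this phase amounts to swapping the pseudo-random generator for a chaotic iterator; a minimal sketch with the logistic map (names illustrative):

def chaotic_rng(c0=0.7):
    # Drop-in replacement for a uniform generator on (0, 1)
    c = c0
    while True:
        c = 4.0 * c * (1.0 - c)   # C(t+1) = 4 C(t) (1 - C(t))
        yield c

rng = chaotic_rng()
# Eq. (18)-style perturbation of a solution x_i using a partner x_j:
# x_i_new = x_i + next(rng) * (x_i - x_j)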

2.3. Chaotic maps

In this section, we present the chaotic maps used to generate chaotic sequences during the run of the algorithms. All ten chaotic maps are one-dimensional.

The first is the Chebyshev map, a common chaotic map that is widely used in digital communications and neural networks. It can be defined as follows:

x_{k+1} = cos(k cos^{−1}(x_k))        (19)

where the range is (−1, 1). Note that x_k is the kth chaotic number, with k denoting the iteration number.

Circle map is a simplified model of a driven mechanical rotor. Furthermore, it is a one-dimensional map which maps a circle onto itself. The circle map is presented as follows:

x_{k+1} = (x_k + b − (a / 2π) sin(2π x_k)) mod 1        (20)

where a = 0.5 and b = 0.2, the range is (0, 1), and the parameters a and b can be regarded as the strength of the nonlinearity and an externally applied frequency, respectively. The circle map produces much unexpected behavior as its parameters change.

Gauss/Mouse map can be described as follows:

x_{k+1} = 0,   if x_k = 0
x_{k+1} = (1 / x_k) mod 1,   otherwise        (21)

This map also generates chaotic sequences in (0,1).

Iterative map with infinite collapses can be presented as follows:

x_{k+1} = sin(a π / x_k)        (22)

where a = 0.7 and the chaotic sequence lies in (−1, 1).

Logistic map can be written as follows:

x_{k+1} = a x_k (1 − x_k)        (23)

where a = 4 and the range is (0, 1). The logistic map is one of the simplest maps that evidences chaotic behavior, appearing in the nonlinear dynamics of biological populations.

Piecewise map is governed by the following equation:

x_{k+1} = x_k / P,   0 ≤ x_k < P
x_{k+1} = (x_k − P) / (0.5 − P),   P ≤ x_k < 1/2
x_{k+1} = (1 − P − x_k) / (0.5 − P),   1/2 ≤ x_k < 1 − P
x_{k+1} = (1 − x_k) / P,   1 − P ≤ x_k < 1        (24)

where P = 0.4 and the range is (0, 1).

The sine map is a unimodal map similar to the logistic map, and it can be described as follows:

x_{k+1} = (a / 4) sin(π x_k)        (25)

where a = 4 and the chaotic sequence lies in (0, 1).

Singer map is a one-dimensional system of the following form:

x_{k+1} = μ (7.86 x_k − 23.31 x_k^2 + 28.75 x_k^3 − 13.3 x_k^4)        (26)

where μ = 1.07 and the range is (0,1).

Sinusoidal map can be defined as follows:

x_{k+1} = a x_k^2 sin(π x_k)        (27)

where a = 2.3 and the range is (0,1).

Tent map is very similar to the logistic map and displays specific chaotic effects. It can be described as follows:

x_{k+1} = x_k / 0.7,   x_k < 0.7
x_{k+1} = (10 / 3)(1 − x_k),   otherwise        (28)

In order to get an unbiased result, we set the initial point to 0.7 for all chaotic maps in this work. The ten chaotic maps are shown in Figure 1.

Figure 1.

Visualization of the 10 employed chaotic maps in one-dimensional space.
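For reference, the ten update rules of Eqs. (19)-(28) can be written compactly as follows (a sketch; each function returns x_{k+1} from x_k, with the iteration index k passed in only because the Chebyshev map needs it):

import math

def chebyshev(x, k):             # Eq. (19), range (-1, 1)
    return math.cos(k * math.acos(x))

def circle(x, k, a=0.5, b=0.2):  # Eq. (20), range (0, 1)
    return (x + b - (a / (2 * math.pi)) * math.sin(2 * math.pi * x)) % 1.0

def gauss(x, k):                 # Eq. (21), range (0, 1)
    return 0.0 if x == 0 else (1.0 / x) % 1.0

def iterative(x, k, a=0.7):      # Eq. (22), range (-1, 1)
    return math.sin(a * math.pi / x)

def logistic(x, k, a=4.0):       # Eq. (23), range (0, 1)
    return a * x * (1.0 - x)

def piecewise(x, k, P=0.4):      # Eq. (24), range (0, 1)
    if x < P:
        return x / P
    if x < 0.5:
        return (x - P) / (0.5 - P)
    if x < 1.0 - P:
        return (1.0 - P - x) / (0.5 - P)
    return (1.0 - x) / P

def sine(x, k, a=4.0):           # Eq. (25), range (0, 1)
    return (a / 4.0) * math.sin(math.pi * x)

def singer(x, k, mu=1.07):       # Eq. (26), range (0, 1)
    return mu * (7.86 * x - 23.31 * x**2 + 28.75 * x**3 - 13.3 * x**4)

def sinusoidal(x, k, a=2.3):     # Eq. (27), range (0, 1)
    return a * x**2 * math.sin(math.pi * x)

def tent(x, k):                  # Eq. (28), range (0, 1)
    return x / 0.7 if x < 0.7 else (10.0 / 3.0) * (1.0 - x)

def sequence(step, n, x0=0.7):
    # Iterate a map n times from the initial point 0.7 used in this chapter
    xs, x = [], x0
    for k in range(1, n + 1):
        x = step(x, k)
        xs.append(x)
    return xs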


3. Experiments

In this section, we evaluate the performance of the chaotic metaheuristic algorithms; several experiments were carried out to test their efficiency. Twenty-three benchmark functions were used in our experiments. In order to obtain an unbiased result, all experiments were performed in the same environment.

In our experiments, we used the average and standard deviation (Std) of the function value to compare the performance of the algorithms, focusing on the comparison between the chaotic algorithms and the other well-known algorithms under two evaluation criteria. The maximum number of fitness evaluations (FEs) is 10,000 × D, where D is the dimension of the problem. Moreover, the Wilcoxon rank-sum test was used in our experiments to test the significance of the differences between algorithms; a small sketch of this test is given after the criteria below. The fitness evaluation criteria are as follows:

Objective function value: each algorithm was run 50 times on each benchmark function, and the average and standard deviation were calculated.

The number of function evaluations (FEs): we also record the FEs required for the function error to fall below ε. ε is fixed at 10^{−6}, which is smaller and harder to reach than the threshold used by Noman and Iba. The notation CNT indicates the number of runs in which an algorithm could reach ε. The maximum number of FEs is 10,000 × D.
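As a sketch of the significance test mentioned above, the rank-sum comparison of two algorithms' 50-run results could be computed as follows; the arrays here are synthetic placeholders, and the 0.05 threshold together with the +/=/− convention (relative to CBBO) are assumptions consistent with Table 1.

import numpy as np
from scipy.stats import ranksums

rng = np.random.default_rng(0)
runs_cbbo = rng.normal(0.0, 1e-3, 50)   # placeholder for 50-run results
runs_other = rng.normal(5.0, 1e-1, 50)

stat, p = ranksums(runs_other, runs_cbbo)
if p >= 0.05:
    mark = "="                           # no significant difference
else:
    mark = "+" if runs_other.mean() > runs_cbbo.mean() else "-"
print(f"p-value = {p:.2e}, mark = {mark}")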

To obtain an unbiased result, we compared the chaotic algorithms with five well-known optimization algorithms of different types, such as evolutionary and swarm-based algorithms, which several studies have shown to perform well on optimization problems. These algorithms include the particle swarm optimization algorithm, the krill herd algorithm, and the biogeography-based optimization algorithm, together with their chaos-based variants. The experiments were carried out on a PC with a 3.60-GHz processor and 8.0 GB of RAM in MATLAB R2014b.

The global optimization toolbox used in our experiments includes the PSO algorithm. We used the standard PSO algorithm, with c1 and c2 set to the default value of 1.49; for the CPSO, we used the same parameter settings. For the BBO and CBBO, the probability of modification is 1 and the initial mutation probability is 0.005; the maximum immigration and emigration rates for each island are 1. For the KH and CKH, the foraging speed is 0.02 and the maximum diffusion speed is 0.005.

This group contains twenty-three benchmark functions, f1–f23, covering low-dimensional, high-dimensional, and multimodal cases. From Tables 1 and 2, we find that the chaotic algorithms reach the global best result on these benchmark functions with ease, whereas the other algorithms cannot obtain results as good as those of the chaotic algorithms. For example, the KH and PSO algorithms both obtain very few global best results on this set of functions. Table 2 shows that the convergence of the chaotic algorithms also outperforms that of the other algorithms, requiring very few FEs to reach the ε level. For example, on function f18, the mean FEs of the CBBO algorithm is 8.59E+02, which is significantly lower than the others. Table 3 shows the ranks over all functions.

Function | Result | PSO | CPSO | CKH | KH | BBO | CBBO
f1 | Mean | 2.54E−19 | 4.18E−77 | 3.84E−29 | 1.01E−05 | 2.40E−03 | 2.91E−117
 | Std | 1.44E−19 | 2.95E−77 | 8.06E−30 | 1.16E−06 | 5.47E−04 | 1.30E−116
 | p-value | 5.01E−11 | 5.01E−11 | 5.01E−11 | 5.01E−11 | 5.01E−11 | N/A
 | +/=/− | + | + | + | + | + | N/A
f2 | Mean | 1.07E−07 | 5.03E+02 | 5.20E−02 | 1.91E−01 | 5.25E−05 | 3.81E−67
 | Std | 6.31E−08 | 2.10E+03 | 3.68E−01 | 2.52E−01 | 1.44E−05 | 1.70E−66
 | p-value | 5.01E−11 | 5.01E−11 | 5.01E−11 | 5.01E−11 | 5.01E−11 | N/A
 | +/=/− | + | + | + | + | + | N/A
f3 | Mean | 8.83E−08 | 7.00E−04 | 7.84E−27 | 2.52E−05 | 1.96E−02 | 2.90E−21
 | Std | 6.56E−08 | 4.50E−03 | 1.86E−27 | 4.91E−06 | 8.50E−03 | 1.27E−20
 | p-value | 6.41E−04 | 5.01E−11 | 5.01E−11 | 5.01E−11 | 2.83E−10 | N/A
 | +/=/− | + | + | − | + | + | N/A
f4 | Mean | 7.41E−02 | 7.43E+00 | 2.71E−15 | 5.60E−03 | 1.02E+01 | 2.27E−02
 | Std | 2.22E−02 | 1.40E+01 | 3.55E−16 | 1.59E−02 | 5.87E+00 | 1.17E−02
 | p-value | 5.01E−11 | 5.01E−11 | 5.01E−11 | 7.41E−09 | 5.01E−11 | N/A
 | +/=/− | + | + | − | − | + | N/A
f5 | Mean | 1.80E+01 | 1.28E+03 | 1.60E−01 | 5.34E+00 | 1.92E+01 | 4.05E−01
 | Std | 9.88E+00 | 2.40E+03 | 7.97E−01 | 3.43E+00 | 5.15E+00 | 3.49E−01
 | p-value | 5.01E−11 | 5.01E−11 | 5.01E−11 | 5.01E−11 | 5.01E−11 | N/A
 | +/=/− | + | + | − | + | + | N/A
f6 | Mean | 7.74E−17 | 1.12E−30 | 4.50E−29 | 1.06E−05 | 2.40E−03 | 5.53E−19
 | Std | 2.04E−17 | 1.58E−30 | 1.25E−29 | 1.11E−06 | 4.97E−04 | 2.45E−18
 | p-value | 5.01E−11 | 5.01E−11 | 1.50E−09 | 5.01E−11 | 6.03E−01 | N/A
 | +/=/− | + | − | − | + | + | N/A
f7 | Mean | 3.47E−01 | 1.80E+00 | 9.21E−02 | 1.45E+00 | 6.30E−03 | 4.60E−03
 | Std | 9.30E−02 | 1.15E+00 | 4.53E−02 | 2.77E−01 | 1.80E−03 | 4.90E−03
 | p-value | 5.01E−11 | 1.06E−16 | 7.41E−09 | 5.01E−11 | 5.01E−11 | N/A
 | +/=/− | + | + | + | + | = | N/A
f8 | Mean | −8.01E+02 | −1.23E+03 | −8.73E+03 | −8.52E+09 | −1.45E+02 | −1.26E+04
 | Std | 1.72E+02 | 5.80E+03 | 1.70E+03 | −4.57E+07 | 1.04E+01 | 5.47E−12
 | p-value | 5.01E−11 | 5.01E−11 | 5.01E−11 | 5.01E−11 | 5.01E−11 | N/A
 | +/=/− | + | + | + | + | + | N/A
f9 | Mean | 1.21E+01 | 2.59E+02 | 2.10E+02 | 3.86E+01 | 1.20E+02 | 1.32E−09
 | Std | 5.64E+00 | 1.40E+02 | 8.35E+01 | 1.39E+01 | 2.02E+01 | 4.85E−09
 | p-value | 5.01E−11 | 5.01E−11 | 5.01E−11 | 5.01E−11 | 5.01E−11 | N/A
 | +/=/− | + | + | + | + | + | N/A
f10 | Mean | 4.67E−01 | 2.02E+01 | 1.95E+01 | 2.17E+00 | 5.35E+00 | 1.15E−11
 | Std | 7.59E−01 | 2.05E−01 | 1.05E−01 | 3.00E−01 | 1.04E+00 | 3.32E−11
 | p-value | 5.01E−11 | 5.01E−11 | 5.01E−11 | 5.01E−11 | 5.01E−11 | N/A
 | +/=/− | + | + | + | + | + | N/A
f11 | Mean | 2.70E−03 | 3.08E−02 | 2.96E−04 | 4.97E−07 | 1.84E−04 | 1.73E−15
 | Std | 8.20E−03 | 5.97E−02 | 1.50E−03 | 6.69E−08 | 4.18E−05 | 6.89E−15
 | p-value | 5.01E−11 | 5.01E−11 | 5.01E−11 | 5.01E−11 | 5.01E−11 | N/A
 | +/=/− | + | + | + | + | + | N/A
f12 | Mean | 3.32E−02 | 9.35E−01 | 1.16E−29 | 4.10E−03 | 7.64E−05 | 1.17E−16
 | Std | 4.94E−02 | 1.39E+00 | 2.65E−30 | 2.07E−02 | 3.18E−05 | 5.22E−16
 | p-value | 5.01E−11 | 5.01E−11 | 5.01E−11 | 5.01E−11 | 5.01E−11 | N/A
 | +/=/− | + | + | − | + | + | N/A
f13 | Mean | 7.90E−03 | 9.31E−01 | 1.80E−28 | 4.86E−01 | 2.88E+00 | 5.56E−17
 | Std | 5.90E−03 | 1.17E+00 | 3.65E−29 | 4.76E−01 | 5.90E−01 | 2.45E−16
 | p-value | 1.50E−09 | 1.82E−02 | 1.50E−09 | 2.83E−10 | 1.48E−07 | N/A
 | +/=/− | + | + | − | + | + | N/A
f14 | Mean | 1.13E+01 | 1.89E+00 | 1.09E+01 | 1.24E+01 | 1.27E+01 | 9.98E−01
 | Std | 6.50E−01 | 1.83E+00 | 3.86E+00 | 1.29E+00 | 2.34E−15 | 2.84E−16
 | p-value | 2.26E−06 | 2.64E−05 | 2.83E−10 | 7.41E−09 | 1.91E−01 | N/A
 | +/=/− | + | + | + | + | + | N/A
f15 | Mean | 8.96E−04 | 1.30E−03 | 1.13E−02 | 3.70E−03 | 3.52E−04 | 3.07E−04
 | Std | 2.56E−04 | 3.55E−04 | 2.75E−02 | 1.09E−02 | 8.14E−05 | 1.19E−05
 | p-value | 8.16E−05 | 2.64E−05 | 5.01E−11 | 1.50E−09 | 4.34E−01 | N/A
 | +/=/− | + | + | + | + | = | N/A
f16 | Mean | −9.91E−01 | −9.50E−01 | −1.03E+00 | −1.03E+00 | −1.03E+00 | −1.03E+00
 | Std | 1.83E−01 | 2.51E−01 | 2.10E−16 | 5.06E−11 | 2.58E−10 | 6.07E−15
 | p-value | 8.68E−03 | 1.63E−03 | 1.16E−01 | 6.03E−01 | 5.59E−01 | N/A
 | +/=/− | + | + | = | = | = | N/A
f17 | Mean | 3.98E−01 | 3.98E−01 | 3.98E−01 | 3.98E−01 | 3.98E−01 | 3.98E−01
 | Std | 0.00E+00 | 0.00E+00 | 2.56E−11 | 1.99E−11 | 1.47E−10 | 5.45E−12
 | p-value | 8.42E−02 | 6.42E−02 | 5.75E−02 | 8.80E−02 | 5.65E−02 | N/A
 | +/=/− | = | = | = | = | = | N/A
f18 | Mean | 3.00E+00 | 8.40E+00 | 3.00E+00 | 4.35E+00 | 3.00E+00 | 3.00E+00
 | Std | 5.39E−16 | 1.88E+01 | 2.10E−14 | 6.04E+00 | 1.14E−08 | 8.07E−13
 | p-value | 1.16E−01 | 6.41E−04 | 1.16E−01 | 1.91E−01 | 9.51E−02 | N/A
 | +/=/− | = | + | = | = | = | N/A
f19 | Mean | −3.86E+00 | −3.86E+00 | −3.86E+00 | −3.79E+00 | −3.86E+00 | −3.86E+00
 | Std | 2.13E−15 | 5.23E−09 | 1.85E−15 | 2.38E−01 | 1.80E−07 | 1.85E−12
 | p-value | 8.68E−03 | 1.16E−01 | 2.96E−01 | 8.68E−03 | 1.82E−01 | N/A
 | +/=/− | + | = | = | + | = | N/A
f20 | Mean | −3.24E+00 | −3.28E+00 | −3.32E+00 | −3.27E+00 | −3.32E+00 | −3.32E+00
 | Std | 5.82E−02 | 7.81E−02 | 4.56E−16 | 5.98E−02 | 2.33E−04 | 6.07E−05
 | p-value | 8.68E−03 | 3.59E−02 | 2.96E−01 | 3.88E−03 | 1.16E−01 | N/A
 | +/=/− | + | + | = | + | = | N/A
f21 | Mean | −5.46E+00 | −5.13E+00 | −4.79E+00 | −5.31E+00 | −7.35E+00 | −1.02E+01
 | Std | 2.20E+00 | 2.80E+00 | 3.26E+00 | 1.14E+00 | 2.60E+00 | 6.80E−07
 | p-value | 5.01E−11 | 5.01E−11 | 5.01E−11 | 5.01E−11 | 2.26E−06 | N/A
 | +/=/− | + | + | + | + | + | N/A
f22 | Mean | −3.96E+00 | −6.67E+00 | −8.80E+00 | −5.02E+00 | −9.87E+00 | −1.04E+01
 | Std | 1.19E+00 | 3.23E+00 | 2.86E+00 | 3.05E−01 | 1.64E+00 | 2.88E−06
 | p-value | 5.01E−11 | 5.01E−11 | 5.97E−07 | 5.01E−11 | 3.59E−02 | N/A
 | +/=/− | + | + | + | + | + | N/A
f23 | Mean | −4.49E+00 | −5.77E+00 | −9.36E+00 | −5.26E+00 | −1.03E+01 | −1.05E+01
 | Std | 2.41E+00 | 3.71E+00 | 2.87E+00 | 1.38E+00 | 1.21E+00 | 2.12E−07
 | p-value | 5.01E−11 | 5.01E−11 | 6.41E−04 | 5.01E−11 | 1.63E−03 | N/A
 | +/=/− | + | + | + | + | + | N/A
Sum (+/=/−) | | 21/2/0 | 20/2/1 | 12/6/5 | 19/3/1 | 16/7/0 | N/A

Table 1.

The average function values obtained by PSO, CPSO, KH, CKH, BBO, and CBBO at D = 30.

Function | Result | PSO | CPSO | CKH | KH | BBO | CBBO
f1 | Mean | 8.45E+03 | 2.23E+04 | 9.36E+03 | 8.55E+04 | N/A | 1.68E+04
 | Std | 7.83E+02 | 7.53E+02 | 1.45E+02 | 5.47E+03 | N/A | 7.21E+02
 | CNT | 50 | 50 | 50 | 31 | N/A | 50
f2 | Mean | 1.51E+05 | 6.05E+04 | N/A | N/A | 1.50E+05 | 8.56E+03
 | Std | 1.35E+04 | 5.64E+03 | N/A | N/A | 8.94E+03 | 9.66E+01
 | CNT | 5 | 3 | N/A | N/A | 29 | 50
f3 | Mean | 5.72E+05 | 2.18E+05 | 3.75E+04 | 2.65E+05 | N/A | 2.50E+03
 | Std | 3.67E+04 | 4.46E+04 | 1.17E+03 | 1.16E+05 | N/A | 1.25E+02
 | CNT | 50 | 8 | 50 | 45 | N/A | 50
f4 | Mean | N/A | N/A | 1.76E+04 | N/A | N/A | 6.95E+03
 | Std | N/A | N/A | 3.79E+02 | N/A | N/A | 4.96E+02
 | CNT | N/A | N/A | 50 | N/A | N/A | 50
f5 | Mean | N/A | N/A | N/A | N/A | N/A | N/A
 | Std | N/A | N/A | N/A | N/A | N/A | N/A
 | CNT | N/A | N/A | N/A | N/A | N/A | N/A
f6 | Mean | 9.81E+04 | 3.45E+04 | 9.36E+03 | 4.54E+04 | N/A | 2.89E+04
 | Std | 7.35E+03 | 1.95E+03 | 1.30E+02 | 5.03E+03 | N/A | 2.51E+03
 | CNT | 50 | 50 | 50 | 9 | N/A | 50
f7 | Mean | 1.81E+05 | 3.58E+04 | N/A | N/A | N/A | N/A
 | Std | 1.02E+05 | 4.20E+03 | N/A | N/A | N/A | N/A
 | CNT | 11 | 2 | N/A | N/A | N/A | N/A
f8 | Mean | 7.81E+00 | N/A | N/A | N/A | N/A | 5.43E+03
 | Std | 1.07E+04 | N/A | N/A | N/A | N/A | 1.32E+03
 | CNT | 1 | N/A | N/A | N/A | N/A | 50
f9 | Mean | 1.34E+05 | N/A | N/A | N/A | N/A | 3.77E+03
 | Std | 4.12E+03 | N/A | N/A | N/A | N/A | 7.38E+01
 | CNT | 2 | N/A | N/A | N/A | N/A | 50
f10 | Mean | 1.51E+05 | 4.27E+03 | 1.24E+04 | N/A | N/A | 6.89E+03
 | Std | 8.90E+03 | 1.56E+03 | 4.53E+03 | N/A | N/A | 8.73E+01
 | CNT | 50 | 13 | 44 | N/A | N/A | 50
f11 | Mean | N/A | N/A | 1.19E+04 | 2.91E+04 | 1.84E+05 | 1.54E+03
 | Std | N/A | N/A | 5.24E+03 | 4.92E+03 | 4.21E+03 | 7.03E+01
 | CNT | N/A | N/A | 48 | 49 | 19 | 50
f12 | Mean | N/A | N/A | 1.08E+04 | 4.18E+04 | N/A | 1.21E+04
 | Std | N/A | N/A | 4.08E+02 | 7.18E+03 | N/A | 4.78E+01
 | CNT | N/A | N/A | 50 | 45 | N/A | 50
f13 | Mean | N/A | N/A | 1.12E+04 | 2.23E+05 | N/A | 2.90E+05
 | Std | N/A | N/A | 2.80E+02 | 0.00E+00 | N/A | 4.08E+01
 | CNT | N/A | N/A | 50 | 1 | N/A | 50
f14 | Mean | N/A | N/A | N/A | N/A | N/A | 1.55E+04
 | Std | N/A | N/A | N/A | N/A | N/A | 8.78E+03
 | CNT | N/A | N/A | N/A | N/A | N/A | 50
f15 | Mean | 1.02E+04 | 2.44E+03 | 3.72E+04 | N/A | 1.12E+05 | 2.55E+05
 | Std | 6.25E+03 | 1.09E+03 | 8.14E+03 | N/A | 5.55E+04 | 3.21E+04
 | CNT | 29 | 18 | 42 | N/A | 40 | 40
f16 | Mean | 3.60E+03 | 3.62E+04 | 5.70E+04 | 8.56E+04 | 9.31E+04 | 3.45E+04
 | Std | 2.80E+02 | 1.98E+02 | 2.36E+03 | 4.69E+03 | 2.90E+03 | 1.10E+03
 | CNT | 48 | 46 | 50 | 50 | 50 | 50
f17 | Mean | 3.65E+04 | 1.01E+04 | 8.63E+03 | 5.94E+04 | 1.09E+05 | 7.53E+02
 | Std | 4.75E+02 | 1.83E+02 | 1.47E+02 | 4.73E+03 | 8.85E+03 | 5.04E+02
 | CNT | 50 | 50 | 50 | 50 | 50 | 50
f18 | Mean | 4.00E+04 | 1.34E+04 | 3.52E+03 | 1.06E+04 | 2.04E+03 | 8.59E+02
 | Std | 3.64E+02 | 2.75E+02 | 2.11E+03 | 8.55E+03 | 4.01E+03 | 1.05E+03
 | CNT | 50 | 41 | 50 | 49 | 50 | 50
f19 | Mean | 4.58E+04 | 6.60E+03 | 1.12E+03 | 6.98E+03 | 1.08E+04 | 3.90E+03
 | Std | 4.48E+02 | 1.12E+03 | 8.49E+02 | 4.88E+02 | 8.05E+03 | 3.45E+02
 | CNT | 50 | 50 | 50 | 44 | 50 | 50
f20 | Mean | 2.48E+05 | 1.32E+04 | 3.75E+03 | 1.82E+05 | 5.13E+04 | 8.55E+04
 | Std | 3.45E+03 | 5.41E+03 | 7.78E+03 | 1.03E+05 | 3.66E+04 | 5.66E+04
 | CNT | 27 | 42 | 50 | 32 | 50 | 50
f21 | Mean | 4.89E+04 | 3.76E+04 | 1.85E+05 | 4.56E+04 | 8.50E+04 | 4.33E+03
 | Std | 3.93E+02 | 4.48E+02 | 8.80E+03 | 4.57E+02 | 1.29E+03 | 9.19E+02
 | CNT | 25 | 22 | 19 | 5 | 46 | 50
f22 | Mean | 4.91E+04 | 3.70E+04 | 5.79E+04 | 8.94E+04 | 1.84E+05 | 5.80E+04
 | Std | 5.22E+02 | 7.49E+02 | 5.46E+03 | 8.95E+03 | 8.79E+03 | 1.32E+03
 | CNT | 8 | 17 | 47 | 34 | 12 | 50
f23 | Mean | 4.91E+04 | 3.73E+04 | 1.52E+05 | 2.55E+05 | 2.13E+05 | 8.79E+04
 | Std | 4.67E+02 | 4.35E+02 | 2.35E+04 | 1.79E+04 | 5.65E+04 | 2.13E+03
 | CNT | 11 | 7 | 32 | 16 | 41 | 50

Table 2.

The average FEs obtained by PSO, CPSO, KH, CKH, BBO and CBBO at D = 30.

Algorithm | PSO | CPSO | CKH | KH | BBO | CBBO
Average ranking | 3.30 | 4.13 | 2.6 | 3.52 | 3.39 | 1.39
Final ranking | 3 | 6 | 2 | 5 | 4 | 1

Table 3.

The average rank of PSO, CPSO, KH, CKH, BBO and CBBO.

From the results on the 23 benchmark functions, we find that, in general, the chaotic algorithms outperform the other algorithms in terms of both the average function value and the number of function evaluations. The results presented in this section confirm that the chaotic algorithms exhibit a higher convergence velocity and greater robustness than the other algorithms.


4. Discussion and conclusion

The convergence properties of metaheuristic algorithms are strongly related to their stochastic nature: they consume a random sequence for their parameters while running. Generating random sequences with a long period and good uniformity is very important for simulating complex phenomena, sampling, numerical analysis, and decision-making, and especially so in heuristic optimization, where the quality of the sequence determines how much storage and computation time are needed to reach a desired accuracy. Chaos has the properties of randomness, nonrepetition, and ergodicity, which match the stochastic nature of metaheuristic optimization algorithms perfectly. Chaotic optimization not only accelerates the algorithm but can also enhance the variety of its movement patterns.

Chaos has been widely observed in various applications. In this chapter, we combined chaos theory with recent algorithms to analyze its properties. The first advantage of chaotic algorithms is the use of chaotic maps to enhance the search capability. Secondly, chaotic optimization performs its search at higher speed compared to stochastic searches that rely on probability. Moreover, chaotic optimization has a simple structure and is easy to implement. For future studies, it may be well worth employing chaotic algorithms for solving real-world engineering problems.

References

  1. Assarzadeh Z, Naghsh-Nilchi AR. Chaotic particle swarm optimization with mutation for classification. Journal of Medical Signals and Sensors. 2015;5(1):12
  2. Saremi S, Mirjalili S, Lewis A. Biogeography-based optimisation with chaos. Neural Computing and Applications. 2015;25(5):1077-1097
  3. Goldberg DE, Holland JH. Genetic algorithms and machine learning. Machine Learning. 1988;3(2):95-99
  4. Storn R, Price K. Differential evolution: A simple and efficient heuristic for global optimization over continuous spaces. Journal of Global Optimization. 1997;11(4):341-359
  5. Hansen N, Ostermeier A. Completely derandomized self-adaptation in evolution strategies. Evolutionary Computation. 2001;9(2):159-195
  6. Kennedy J. Particle swarm optimization. In: Encyclopedia of Machine Learning. Springer US; 2011. pp. 760-766
  7. Tang R, Fong S, Yang XS, Deb S. Wolf search algorithm with ephemeral memory. In: Digital Information Management (ICDIM); 2012 Aug 22; IEEE; 2012
  8. Gandomi AH, Yang XS, Alavi AH. Cuckoo search algorithm: A metaheuristic approach to solve structural optimization problems. Engineering with Computers. 2013;29(1):17-35
  9. Tang R, Fong S, Deb S, Raymond W. Dynamic group search algorithm. In: Computational and Business Intelligence (ISCBI); 2016
  10. Tang R, Fong S, Deb S, Wong R. Dynamic group search algorithm for solving an engineering problem. Operational Research
  11. Gandomi AH, Alavi AH. Krill herd: A new bio-inspired optimization algorithm. Communications in Nonlinear Science and Numerical Simulation. 2012
  12. Wang GG, Guo L, Gandomi AH, Hao GS, Wang H. Chaotic krill herd algorithm. Information Sciences. 2014
  13. Simon D. Biogeography-based optimization. IEEE Transactions on Evolutionary Computation. 2008
  14. Guo W, Li W, Kang Q, Wang L, Wu Q. Chaotic biogeography-based optimisation. International Journal of Computing Science and Mathematics. 2014
