
Applications of the Differential Ant-Stigmergy Algorithm on Real-World Continuous Optimization Problems

Written By

Peter Korosec and Jurij Silc

Published: October 1st, 2009

DOI: 10.5772/9604


1. Introduction

Ants have always fascinated human beings. What particularly strikes the occasional observer as well as the scientist is the high degree of societal organization that these insects can achieve in spite of very limited individual capabilities (Dorigo et al., 2000). Ants have also inspired a number of optimization algorithms. These algorithms are increasingly successful among researchers in computer science and operational research (Blum, 2005; Cordón et al., 2002; Dorigo & Stützle, 2004).

A particularly successful metaheuristic—Ant Colony Optimization (ACO)—was proposed in the early nineties by Marco Dorigo (Dorigo, 1992) as a common framework for the existing applications and algorithmic variants of a variety of ant algorithms. ACO takes inspiration from the foraging behavior of some ant species. These ants deposit pheromone on the ground in order to mark favorable paths that should be followed by other members of the colony. ACO exploits a similar mechanism for solving combinatorial optimization problems.

In recent years ACO algorithms have been applied to more challenging and complex problem domains. One such domain is continuous optimization. However, a direct application of ACO to continuous optimization problems is difficult.

The first algorithm designed for continuous function optimization was continuous ant colony optimization (Bilchev & Parmee, 1995) which comprises two levels: global and local; it uses the ant colony framework to perform local searches, whereas global search is handled by a genetic algorithm. Up to now, there are few other adaptations of ACO algorithm to continuous optimization problems: continuous interacting ant colony (Dréo & Siarry, 2002), ACO for continuous and mixed-variable (Socha, 2004), aggregation pheromone system (Tsutsui, 2006), and multilevel ant-stigmergy algorithm (Korošec & Šilc, 2008).

In this chapter we present the so-called Differential Ant-Stigmergy Algorithm (DASA), a new approach to the continuous optimization problem (Korošec, 2006). We start with a description of the DASA, followed by three case studies that show real-world applications of the proposed optimization approach. Finally, we conclude with a discussion of the obtained results.

Figure 1.

Stigmergy is an organizing principle in emergent systems in which the individual parts of the system communicate with one another indirectly by modifying their local environment. Ant colonies are a classic example. The ants communicate indirectly. Information is exchanged through modifications of the environment (local gradients of pheromone).


2. A differential ant-stigmergy approach

2.1. Continuous optimization

The general continuous optimization problem is to find a set of parameter values, p = (p_1, p_2, …, p_D), that minimizes a function f_cost(p) of D real variables, i.e.,

Find:

p* | f_cost(p*) ≤ f_cost(p), ∀ p ∈ ℝ^D    (E1)

To solve this problem, we created a fine-grained discrete form of the continuous domain. With it we were able to represent the problem as a graph, which enabled us to use an ant-based approach for solving numerical optimization problems.

2.2. The fine-grained discrete form of continuous domain

Let p'_i be the current value of the i-th parameter. During the search for the optimal parameter value, a new value, p_i, is assigned to the i-th parameter as follows:

p_i = p'_i + δ_i    (E2)

Here, δ_i is the so-called parameter difference and is chosen from the set

Δ_i = Δ_i^− ∪ {0} ∪ Δ_i^+    (E3)

where

Δ_i^− = {δ_i^(k−) | δ_i^(k−) = −b^(k+L_i−1), k = 1, 2, …, d_i}    (E4)

and

Δ_i^+ = {δ_i^(k+) | δ_i^(k+) = +b^(k+L_i−1), k = 1, 2, …, d_i}    (E5)

Here

d_i = U_i − L_i + 1    (E6)

Therefore, for each parameter p_i, the parameter difference δ_i has a range from b^(L_i) to b^(U_i), where b is the so-called discrete base,

L_i = ⌊lg_b(ε_i)⌋    (E7)

and

U_i = ⌈lg_b(max(p_i) − min(p_i))⌉    (E8)

The parameter ε_i sets the maximum precision of the parameter p_i; the precision is limited by the computer's floating-point arithmetic. To enable a more flexible movement over the search space, the weight ω is added to Eq. 2:

p_i = p'_i + ω δ_i    (E9)

where ω = RandomInteger(1, b − 1).
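The discretization of Eqs. 3-8 can be sketched in a few lines of Python. This is a minimal illustration, not code from the original implementation (which was written in Delphi); the helper name and the rounding of the base-b logarithms are our assumptions:

```python
import math

def difference_set(p_min, p_max, eps, b=10):
    L = round(math.log(eps, b))            # Eq. 7: L_i = lg_b(eps_i)
    U = round(math.log(p_max - p_min, b))  # Eq. 8: U_i = lg_b(max - min)
    d = U - L + 1                          # Eq. 6
    neg = [-(b ** (k + L - 1)) for k in range(1, d + 1)]  # Eq. 4
    pos = [+(b ** (k + L - 1)) for k in range(1, d + 1)]  # Eq. 5
    return neg + [0.0] + pos               # Eq. 3: union of negatives, zero, positives

# a parameter on [0, 100] with precision 0.01 and base 10
deltas = difference_set(0.0, 100.0, 0.01, b=10)
```

With these settings d_i = 5, so the set contains 2·d_i + 1 = 11 candidate differences, from −100 down to −0.01, zero, and 0.01 up to 100.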

2.3. Graph representation

From all the sets Δ_i, 1 ≤ i ≤ D, where D represents the number of parameters, a so-called search graph G = (V, E) with a set of vertices, V, and a set of edges, E, between the vertices is constructed (see Fig. 2). Each set Δ_i is represented by the set of vertices

V_i = {v_i,1, v_i,2, …, v_i,2d_i+1}    (E10)

and

V = ∪_{i=1}^{D} V_i    (E11)

Then we have that

Δ_i = {δ_i^(d_i−), …, δ_i^((d_i−j+1)−), …, δ_i^(1−), 0, δ_i^(1+), …, δ_i^(j+), …, δ_i^(d_i+)}    (E12.1)

corresponds to

V_i = {v_i,1, …, v_i,j, …, v_i,d_i+1, …, v_i,d_i+1+j, …, v_i,2d_i+1}    (E12.2)

where j = 1, 2, …, d_i and the middle vertex v_i,d_i+1 represents the zero difference.

Figure 2.

Search graph representation of a discrete multi-parameter optimization problem.

Each vertex of the set V_i is connected to all the vertices that belong to the set V_{i+1}. Therefore, this is a directed graph, where each path ν from the start vertex to any of the ending vertices is of equal length and can be defined with the v_i as:

ν = (v_1, v_2, …, v_D)    (E13)

where v_i ∈ V_i, 1 ≤ i ≤ D.

The optimization task is to find a path ν such that f_cost(p) ≤ f_cost(p'), where p' is the currently best solution and p = p' + Δ(ν) (using Eq. 9). Additionally, if f_cost(p) is smaller than f_cost(p'), then the p' values are replaced with the p values.

2.4. The differential ant stigmergy algorithm

The optimization consists of an iterative improvement of the currently best solution, p ' by constructing an appropriate path ν that uses Eq. 9 and returns a new best solution. This is done as follows (see Fig. 3):

Figure 3.

The Differential Ant-Stigmergy Algorithm (DASA).

  Step 1. A solution p' is manually set or randomly chosen.

  Step 2. A search graph is created and an initial amount of pheromone, τ⁰_{V_i}, is deposited on all the vertices of each set V_i ⊆ V, 1 ≤ i ≤ D, according to the Cauchy probability density function

    C(x) = 1 / (sπ [1 + ((x − l)/s)²])    (E14)

    where l is the location offset and s = s_global − s_local is the scale factor. For the initial pheromone distribution the standard Cauchy distribution (l = 0, s_global = 1, and s_local = 0) is used, and the vertices of each parameter are equidistantly arranged over z ∈ [−4, 4].

  Step 3. There are m ants in a colony, all of which begin simultaneously from the start vertex. The ants use a probability rule to determine which vertex will be chosen next. More specifically, ant α in step i moves from a vertex in set V_i to a vertex v_i,j ∈ {v_i,1, v_i,2, …, v_i,2d_i+1} with a probability given by:

    prob_j(α, i) = τ(v_i,j) / Σ_{1 ≤ k ≤ 2d_i+1} τ(v_i,k)    (E15)

    where τ(v_i,k) is the amount of pheromone on vertex v_i,k.

    The ants repeat this action until they reach the ending vertex. For each ant a path ν is constructed. If for some predetermined number of tries we get ν = 0, the search process is reset by randomly choosing a new p_temporary_best and re-initializing the pheromone. A new solution p is constructed (see Eq. 9) and evaluated by calculating f_cost(p).

    The current best solution, p_current_best, out of the m solutions is compared to the temporary best solution, p_temporary_best. If p_current_best is better than p_temporary_best, then the p_temporary_best values are replaced with the p_current_best values; in this case s_global is increased (in our case by 1 %) and the pheromone amount is redistributed according to the associated path ν_best. Furthermore, if the new p_temporary_best is better than p_best, then the p_best values are replaced with the p_temporary_best values; in this way the global best solution is stored. If no better solution is found, s_local is decreased (in our case by 3 %).

  Step 4. Pheromone dispersion is defined by some predetermined percentage, χ. The probability density function C(x) is changed in the following way:

    l ← (1 − χ) l    (E16)

    s_local ← (1 − χ) s_local    (E17)

    Pheromone dispersion has an effect similar to pheromone evaporation in the classical ACO algorithm.

  Step 5. The whole procedure is repeated until some ending condition is met.

    It is well known that ant-based algorithms have problems with convergence. This happens when, at each step of the walk, there is a large number of possible vertices from which an ant can choose. This is not the case with the DASA, where a Cauchy distribution of pheromone over each parameter is used: such a distribution reduces the width of the graph to only a few dominant parameter values (i.e., vertices). In addition, a proper selection of the discrete base, b, can further improve the algorithm's convergence (a larger b reduces the search-graph size).
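The core of Step 3 can be sketched as follows: a minimal Python illustration of the probability rule of Eq. 15, with the Cauchy density of Eq. 14 providing the initial pheromone over the equidistant grid z ∈ [−4, 4]. The function names and the roulette-wheel loop are our assumptions, not code from the original Delphi implementation:

```python
import math
import random

def cauchy(x, l=0.0, s=1.0):
    # Eq. 14: Cauchy probability density with location l and scale s
    return 1.0 / (s * math.pi * (1.0 + ((x - l) / s) ** 2))

def initial_pheromone(n_vertices):
    # standard Cauchy (l = 0, s = 1) evaluated on an equidistant grid in [-4, 4]
    zs = [-4.0 + 8.0 * k / (n_vertices - 1) for k in range(n_vertices)]
    return [cauchy(z) for z in zs]

def walk(pheromone_per_param, rng=random):
    # one ant walk: per parameter, pick a vertex with probability
    # proportional to its pheromone amount (Eq. 15, roulette wheel)
    path = []
    for tau in pheromone_per_param:
        total = sum(tau)
        r, acc = rng.random() * total, 0.0
        for j, t in enumerate(tau):
            acc += t
            if r <= acc:
                path.append(j)
                break
    return path

pher = [initial_pheromone(11) for _ in range(3)]  # 3 parameters, 11 vertices each
path = walk(pher)                                 # one vertex index per parameter
```

Because the Cauchy density peaks at the middle vertex (the zero difference), most probability mass concentrates on a few dominant vertices, which is exactly the convergence argument made above.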

2.5. Software implementation and parameter settings

The DASA was implemented in the Borland® Delphi™ programming language (see Fig. 4). The computer platform used to perform the optimizations was based on AMD Opteron™ 2.6-GHz processors, 2 GB of RAM, and the Microsoft® Windows® XP 32-bit operating system.

The DASA parameters were set as follows: the number of ants, m, was set to 10; the pheromone dispersion factor, χ, was set to 0.2; and the maximum parameter precision, ε, depended on the discrete step of each parameter.

Figure 4.

GUI used for setting the DASA options.


3. Case studies

Case studies presented in this section are related to the development of a new dry vacuum cleaner (Korošec et al., 2007; Tušar et al., 2007). In particular, the efficiency of the turbo-compressor unit (see Fig. 5) was improved.

3.1. An electric motor power losses minimization

Home appliances, such as vacuum cleaners and mixers, are generally powered by a universal electric motor (UM). These appliances need as low an energy consumption (input power) as possible, while still satisfying the needs of the user by providing sufficient output power. The optimization task is to find the geometrical parameter values that will generate the rotor and the stator geometries resulting in the minimum power losses.

Figure 5.

Turbo-compressor unit of a dry vacuum cleaner.

Figure 6.

Parameters that define the rotor and stator geometries.

There are several invariable and variable parameters that define the rotor and stator geometries. The invariable parameters are fixed; they cannot be altered, either for technical reasons (e.g., the air gap) or because of the physical constraints on the motor (e.g., the radius of the rotor's shaft). The variable parameters do not have predefined optimum values. Some variable parameters are mutually independent and without any constraints. Others are dependent, either on some invariable parameters or on mutually independent ones.

In our case, 11 mutually independent variable parameters defining the rotor and stator geometries are subject to optimization (see Fig. 6): rotor yoke thickness, p_1; rotor external radius, p_2; rotor pole width, p_3; stator yoke horizontal thickness, p_4; stator yoke vertical thickness, p_5; stator middle-part length, p_6; stator internal edge radius, p_7; stator teeth radius, p_8; stator slot radius, p_9; stator width, p_10; and stator height, p_11. We optimized only 10 parameters of the UM rotor and stator geometries, since the ratio between p_10 and p_11 was kept constant.

Power losses, P_losses, are the main factor affecting the efficiency of a UM. The efficiency of a UM, η, is defined as the ratio of the output power, P_out, to the input power, P_inp:

η = P_out / P_inp = P_out / (P_out + P_losses)    (E18)

and it strongly depends on the various power losses (Sen, 1997):

P_losses = P_Cu + P_Fe + P_brush + P_vent + P_frict    (E19)

The overall copper losses, P_Cu, occurring in the rotor and the stator slots are as follows:

P_Cu = Σ_i (J² A ρ l_turn)_i    (E20)

where i stands for each slot, J is the current density, A is the slot area, ρ is the copper's specific resistance, and l_turn is the length of the winding turn. The calculation of the iron losses, P_Fe, is less exact because iron has non-linear magnetic characteristics. The iron losses are therefore separated into two components: the hysteresis losses and the eddy-current losses. This means the motor's iron losses can be expressed with the formula

P_Fe = c_e B² f_r² m_r + c_e B² f_s² m_s + c_h B² f_s m_s    (E21)

where c_e is the eddy-current material constant at 50 Hz, c_h is the hysteresis material constant at 50 Hz, B is the maximum magnetic flux density, f_r is the frequency of the magnetic field density in the rotor, f_s is the frequency of the magnetic field density in the stator, m_r is the mass of the rotor, and m_s is the mass of the stator. Three additional types of losses occur in the UM: the brush losses, P_brush; the ventilation losses, P_vent; and the friction losses, P_frict. These depend mainly on the speed of the motor. When optimizing the geometries of the rotor and the stator, the motor's speed is assumed to be fixed; hence these losses are not significantly affected by the geometries of the rotor and the stator and have no impact on the comparison of candidate designs. Therefore, in our case, the overall copper and iron losses can be used to estimate the cost function:

f_c(p) = P_Cu + P_Fe    (E22)
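The cost function of Eqs. 20-22 is straightforward to evaluate once the physical quantities are known. A small Python sketch; all numeric values in the example call are unit placeholders, not motor data from the chapter:

```python
def copper_losses(slots):
    # Eq. 20: sum over slots i of (J^2 * A * rho * l_turn)_i
    return sum(J ** 2 * A * rho * l_turn for (J, A, rho, l_turn) in slots)

def iron_losses(c_e, c_h, B, f_r, f_s, m_r, m_s):
    # Eq. 21: eddy-current terms for rotor and stator, plus a hysteresis term
    return (c_e * B**2 * f_r**2 * m_r
            + c_e * B**2 * f_s**2 * m_s
            + c_h * B**2 * f_s * m_s)

def f_cost(slots, iron_args):
    # Eq. 22: overall copper plus iron losses
    return copper_losses(slots) + iron_losses(*iron_args)

# illustrative call with unit placeholder values
losses = f_cost([(1.0, 1.0, 1.0, 1.0)], (1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0))
```

In the real case study this evaluation is not analytic: each candidate geometry is simulated in ANSYS Multiphysics to obtain the field quantities entering Eqs. 20 and 21.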

Our goal is to find such parameter-value settings, where f c ( p ) is minimized.

The DASA starts its search from the existing solution. The stopping criterion was set to 1,400 calculations. The calculation of a single solution via the ANSYS Multiphysics simulation takes approximately two minutes, which means that the execution of 1,400 calculations takes about two days. The optimization method was run 20 times. The obtained results, in terms of the UM power losses, are presented statistically in Table 1.

Existing solution   Optimized solutions
                    Worst   Mean    Best
177.9               136.7   129.5   113.8

Table 1.

Optimized UM's power losses in Watts after 1,400 calculations.

Figure 7.

Laminations of the original engineering design with power losses of 177.9 W (left) and laminations of an optimized design acceptable from the production point of view with power losses of 129.1 W (right).

The engineering rotor and stator design (the existing solution) results in power losses of 177.9 W and can be seen in Fig. 7 (left). The figure shows the magnetic flux density in the laminations (a higher magnetic flux density, B, causes higher power losses). Figure 7 (right) presents a typical example of a feasible rotor and stator geometry, with power losses of 129.1 W. This solution has very low iron losses in the rotor due to its small size and in spite of its high magnetic saturation. The small rotor and its saturation are compensated by large stator poles that ensure a large enough magnetic flux. This design is completely feasible from the technical and production points of view.

3.2. An electric motor casing stiffness maximization

The casing is a part of the dry vacuum cleaner motor which is built into vacuum cleaners. The casing is basically an axisymmetric shell structure made of steel suitable for forming. For this forming procedure, which consists of eleven different phases, it is important that the radii grow or remain unchanged as the height grows. This is an important rule which must not be broken during the geometry optimization. The goal of the optimization is to preserve the stiffness while using a thinner shell structure and consequently save material and reduce costs.

The computational model of the casing comprises 26 different parameters which generate the geometry variations. The classical parameters are: radius on the top, p_1; radius on the side, p_2; radius at the groove for brushes, p_3; deviation of the hole for diffuser fixation, p_4; radius of roundness at the beginning of the bearing groove, p_5; radius on the top of the bearing groove, p_6; radius on the top of the air culverts, p_7; radius at the bottom of the air culverts, p_8; roundness of the air culverts, p_9; angle of slope, p_22; angle of culvert span, p_23; slope of the bearing groove, p_24; height of the bearing groove, p_25; and slope of the brushes groove, p_26.

Besides the above-listed 14 classical parameters there are 12 more (p_10 to p_21), with which the ribs on the areas A, B, and C are defined (Fig. 8). These are essential for improving the stiffness, so we will search for the values which maximize the cost function. With these parameters we can generate a large number of combinations of different rib shapes, depending on the number of given paths and the accuracy of the intervals for point deviations, positions, and the number of ribs. In our case there were several billion combinations for the ribs. Instead of a defined path at position j we could, of course, define displacements for individual points, but this would result in many more parameters and a time-consuming calculation procedure, with questionable benefit. From previous designs there exist some reasonable rib shapes which were defined intuitively; these then change their position, magnitude, and number, which is probably sufficient for a good result, as over ten different rib shapes had been defined earlier.

Figure 8.

Parameters of the casing and ribs. Ribs A (top), B (center), and C (bottom); basic (left) and modified forms (center and right).

Besides the parameters for the creation of the ribs there are other classical parameters which also have an influence on their shape. Namely, if the radii p_1 or p_2 vary, this is reflected in the shape of the ribs of model A; if the height of the bearing groove, p_25, and the slope of the groove, p_24, are modified, this is reflected in the ribs of model C, etc.

The cost function is relatively simple to define when the loadings are deterministic. Usually we optimize such cases by minimizing displacements or stresses, maximizing elasticity, etc. (Dražumerič & Kosel, 2005). It is more difficult to determine the cost function for a dynamically loaded assembly where the loads are stochastic. The question is then how to define the stiffness of a shell structure if we do not know the lateral loads. Let us consider an arbitrarily shaped plane shell. We can write its potential energy, W_p, as:

W_p = ½ ∫_V σ_ij ε_ij dV    (E23)

If Poisson's ratio is neglected, the stress tensor, σ_ij, and the strain tensor, ε_ij, are the following:

σ_ij = z {∂²w(x,y)/∂x², ∂²w(x,y)/∂y², ∂²w(x,y)/∂x∂y}^T    (E24)

and

ε_ij = (z/E) {∂²w(x,y)/∂x², ∂²w(x,y)/∂y², ∂²w(x,y)/∂x∂y}^T    (E25)

Here w(x,y) is the displacement in the z direction and E is the elasticity modulus. The potential energy can then be written as:

W_p = (1/2E) ∫_V (σ_xx² + σ_yy² + 2τ_xy²) dV    (E26)

We can see that in this case the potential energy is an integral of the squares of the stresses over the volume. We also define the kinetic energy, W_k, for plate vibrations:

W_k = (ρω²/2) ∫_V w²(x,y) dV    (E27)

where ρ represents the density of the material. Here the second derivative of the displacement with respect to time is approximated by the product of the squared displacement and the eigenfrequency, ω.

For conservative systems the maximal potential energy is equal to the maximal kinetic energy, and from this follows the expression for the cost function, which in our case is the stiffness itself:

f_c(p) = ∫_V (σ_xx² + σ_yy² + 2τ_xy²) dV / ∫_V w²(x,y) dV    (E28)

The DASA was run 20 times and each run consisted of 2,000 calculations. The cost function was calculated by the ANSYS Multiphysics simulation tool. The obtained results are presented statistically in Table 2.

Existing solution Optimized solutions
Worst Mean Best
Existing solution   Optimized solutions
                    Worst       Mean        Best
5.87·10⁻³           6.13·10⁻³   6.81·10⁻³   7.34·10⁻³

Table 2.

Optimized casing's stiffness after 2,000 calculations.

The results of the optimization were quite surprising (Fig. 9, bottom row). It was expected that ribs would form over all the given surfaces, but they did not. The ribs were most distinct where the vertical part of the surface turns into the horizontal one (ribs A). There were no ribs on the surface where the stator is in a tight fit, probably because of the pre-stressing of the casing through the stator. Also in the groove for the brushes (ribs B) there were no distinct ribs, probably because the groove itself acts as a kind of rib; instead, there is now a slope given to the groove which was not there before. The radii of the roundings were in most cases bigger than before, as can clearly be seen at the air culverts (Fig. 9, middle bottom).

Figure 9.

Existing (top) and optimized (bottom) casings under different loading cases.

3.3. A turbo-compressor aerodynamic power maximization

Radial air impellers are the basic components of many turbo-machines. In the following we will concentrate on relatively small impellers and subsonic speeds used in a dry vacuum cleaner. Our main aim was to find an impeller shape that has a higher efficiency, i.e., greater aerodynamic power, than the one currently used in production.

An impeller is constructed from blades and an upper and a lower side. The sides enclose the blades and keep them together. The blades, which are all the same, were the main subject of the optimization. The geometry of a blade is shown in Fig. 10, where the gray color represents the blade. The method of modeling is as follows: we construct the points at specific locations, draw splines through them, and spread the area on the splines. Once a blade is made, an air channel must be constructed in a similar way.

In Fig. 10a the point 1 has two parameters: the radius p_1 and the angle p_2. Similarly, the points 2, 5, and 6 have the parameter pairs p_3 and p_4, p_5 and p_6, and p_7 and p_8. The points 3 and 4 are fixed on the x axis. This is because the impeller must have a constant outer radius, p_9, and the outer side of the blade must be parallel to the z axis. On the other hand, the outer angle of the blade, p_10, and the angle of the spline at the points 3 and 4 can be varied. Analogously, the angles p_11 and p_12 are the inner-blade angles for the upper and lower edges of the blade at the input, respectively.

In Fig. 10c the points 1, 2, and 3 form the upper spline, and the points 4, 5, and 6 the lower spline. Between the points 1 and 6 is the point 7, which defines the spline of the input shape of the blade. In this figure, the points 1, 2, 5, and 6 have the parameters p_13, p_14, p_15, and p_16, respectively, describing their heights.

Figure 10.

Parametric modeling: a) top view; b) 3D view, c) side view.

Point 3 stays on the x axis and point 4 has a constant height, p_17. In other words, the designer of the impeller must know at least the outer diameter, p_9, and the height, p_17. The parameters p_18 and p_19 describe the input angles of the lower and upper parts of the blade with respect to the r-z plane. Similarly, the parameters p_20 and p_21 describe the outer blade angle with respect to the same plane.

In Fig. 10b the meaning of the point 7 is explained more precisely. The parameters p_22, p_23, and p_24 define its radius, height, and angle, respectively. The radius and the angle dictate where the point should appear with respect to the x-y plane, and the height where it should appear with respect to the r-z plane. Similarly, the angles p_25, p_26, p_27, and p_28 are needed to define the starting and ending angles of the spline constructed between the points 1, 7, and 6.

If we look closely at Fig. 10c, we can see the contour surrounding the blade. This is the air channel, with the following parameters: the inner radius, p_29 (see Fig. 10a), which is needed for the hexahedral mesh (explained later); the air-intake radius, p_30; the air-outflow radius, p_31; the bolt radius, p_32; the bolt height, p_33; and the impeller height, p_34.

In this way we have successfully modeled the impeller geometry with 34 parameters. For each parameter we have a predefined search interval with a given discrete step. Therefore, the size of the search space is obtained as the product of the number of possible settings over all the parameters. It turns out that there are approximately 3·10³⁴ possible solutions.
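The search-space size is just this product of per-parameter setting counts; a tiny Python illustration (the example intervals are invented, not the impeller's actual parameter ranges):

```python
def search_space_size(params):
    # params: list of (lower bound, upper bound, discrete step) triples;
    # each parameter contributes (hi - lo)/step + 1 possible settings
    size = 1
    for lo, hi, step in params:
        size *= int(round((hi - lo) / step)) + 1
    return size

# e.g. one parameter with 11 settings and one with 101 settings
n = search_space_size([(0, 10, 1), (0.0, 1.0, 0.01)])  # 11 * 101 = 1111
```

With 34 parameters, even modest per-parameter counts multiply out to the order of 10³⁴ quoted above, which is why exhaustive search is out of the question.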

The mesh of the air channel between two blades is constructed with more than 6,000 hexahedral elements. The boundary conditions are zero velocity at all the solid regions and symmetry boundary conditions at the fluid regions. At the influx and the outflux, the intake velocity, v_in, and the reference pressure, p_ref, are defined, respectively. The intake velocity is parabolically distributed, because we expect the intake flow to be laminar, and so:

v_in = v(Φ(t)) · 6 (r/r_up) ((r/r_up) − 1)    (E29)

Here, v(Φ(t)) is a velocity dependent on the stream, which in turn depends on time, as we shall see later; r_up is the upper radius, defined before; and r is the radius within the limits from r_s to r_up. The reference pressure, p_ref, is set to zero.

With respect to the maximum time, t_max, the flux is:

Φ(t) = v A_in (t / t_max)    (E30)

where A_in is the influx area and t is the current time.
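Eqs. 29 and 30 can be sketched directly in Python. As written, the profile factor of Eq. 29 is negative for 0 < r < r_up, which we read as a direction convention for the inflow; the function names and example values are ours:

```python
def intake_velocity(v_phi, r, r_up):
    # Eq. 29: parabolic profile, zero at r = 0 and at r = r_up,
    # with its extremum (largest magnitude) at r = r_up / 2
    return v_phi * 6.0 * (r / r_up) * (r / r_up - 1.0)

def flux(v, A_in, t, t_max):
    # Eq. 30: flux ramps linearly with time up to t_max
    return v * A_in * t / t_max
```

The parabolic shape is the standard fully developed laminar profile, consistent with the laminar-intake assumption stated above.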

Figure 11.

The existent impeller (left), the worst optimized impeller (middle), and the best optimized impeller (right).

The distribution of the relative pressure can be used to estimate the cost function. The average pressure, p_in, is taken from the air-intake area. Finally, the aerodynamic power, which represents the cost function, is as follows:

f_c(p) = P_air = (p_out − p_in) Φ(t_opt)    (E31)

where p_out is the output pressure at the radius r_out and Φ(t_opt) = 40 l/s is the flux near the desired optimum performance of the impeller. Our goal is to find the parameter-value settings for which P_air is maximized.

The DASA was run 10 times and each run consisted of 2,000 CFD calculations. For the CFD calculations we used the ANSYS Multiphysics package. A single CFD calculation takes approximately seven minutes. The obtained results, in terms of aerodynamic power, are presented statistically in Table 3.

Existing solution   Optimized solutions
                    Worst    Mean     Best
411.00              432.00   472.00   483.00

Table 3.

Optimized impeller's aerodynamic power in watts after 2,000 CFD calculations.

The results show that we were able to increase the aerodynamic power by approximately 20 %. Figure 11 shows a 3D view of the existing impeller and two optimized impellers (the best and the worst of the 10 runs).


4. Discussion

In this chapter the Differential Ant-Stigmergy Algorithm (DASA) was presented; the main goal was an evaluation of the DASA on some real-world engineering applications. The case studies were selected from an R&D project in which a more efficient turbo-compressor for a dry vacuum cleaner was developed. The DASA was used to improve the efficiency of an electric motor, to increase the casing stiffness, and to increase the impeller's aerodynamic power. In all these cases the improvement was evident.

Figure 12.

The Bonferroni-Dunn post-hoc statistical test.

In (Korošec & Šilc, 2009a) we also compared the DASA to the eleven state-of-the-art stochastic algorithms for continuous optimization that were presented at the CEC'2005 Special Session on Real-Parameter Optimization. The algorithms are:

  1. BLX-GL50, a hybrid real-coded genetic algorithm with female and male differentiation (García-Martínez & Lozano, 2005),

  2. BLX-MA, a real-coded memetic algorithm with adaptive local search parameters (Molina et al., 2005),

  3. CoEVO, an evolutionary algorithm with mutation step co-evolution (Pošík, 2005),

  4. DE, a differential evolution (Rönkkönen et al., 2005),

  5. EDA, a continuous estimation of distribution algorithm (Yuan & Gallagher, 2005),

  6. FEA, a flexible evolutionary algorithm (Alonso et al., 2005),

  7. G-CMA-ES, a restart covariance matrix adaptation evolution strategy with increasing population size (Auger & Hansen, 2005a),

  8. G-DE, a guided differential evolution (Bui et al., 2005),

  9. K-PCX, a population-based steady-state procedure (Sinha et al., 2005),

  10. L-CMA-ES, an advanced local search evolutionary algorithm (Auger & Hansen, 2005b), and

  11. SPC-PNX, a steady-state real-parameter genetic algorithm (Ballester et al., 2005).

The obtained results confirmed that the DASA is comparable to the above algorithms and is therefore generally applicable to global optimization problems.

The experimental results have shown that the DASA ranks among the best algorithms for real-life-like applications. Its main advantage is its consistent problem solving, indicated by 19 rankings in the top third, 4 rankings in the middle third, and only 2 in the bottom third.

A statistical analysis following the Bonferroni-Dunn post-hoc test with α = 0.10 showed that the DASA is significantly better than 8 out of the 11 compared algorithms.

The algorithm was also applied to the dynamic optimization problems with continuous variables proposed for the CEC'2009 Special Session on Evolutionary Computation in Dynamic and Uncertain Environments (Korošec & Šilc, 2009b). If we compare the DASA to:

  1. a self-adaptive differential evolution (Brest et al., 2009),

  2. a dynamic artificial immune algorithm (de França & Von Zuben, 2009),

  3. an evolutionary programming (Yu & Suganthan, 2009), and

  4. a clustering particle swarm optimizer (Li & Yang, 2009),

the results show that the DASA can find reasonable solutions for all of the problems. The DASA performs not much worse than the self-adaptive differential evolution and much better than the other three algorithms. One obvious advantage is that no changes to the original algorithm were needed, so it can be used as such for both cases of numerical optimization, static and dynamic. Furthermore, the algorithm is insensitive to different types of changes and can be used with very limited knowledge about the problem; only the maximal dimension and the input problem parameters are needed.

References

  1. 1. Alonso S. Jimenez J. Carmona H. Galvan B. Winter G. 2005 Performance of a flexible evolutionary algorithm. www.ntu.edu.sg/home/EPNSugan.
  2. 2. Auger A. Hansen N. 2005a A restart CMA evolution strategy with increasing population size, Proceedings of the IEEE Congress on Evolutionary Computation, 1769 1776 , 0-78039-363-5 UK, September 2005, IEEE.
  3. 3. Auger A. Hansen N. 2005a Performance evaluation of an advanced local search evolutionary algorithm, Proceedings of the IEEE Congress on Evolutionary Computation, 1777 1784 , 0-78039-363-5 UK, September 2005, IEEE.
  4. 4. Ballester P. J. Stephenson J. Carter J. N. Gallagher K. 2005 Real-parameter optimization performance study on the CEC-2005 benchmark with SPC-PNX, Proceedings of the IEEE Congress on Evolutionary Computation, 498 505, 0-78039-363-5 UK, September 2005, IEEE.
  5. 5. Bilchev G. Parmee I. C. 1995 The ant colony metaphor for searching continuous design spaces, In: Evolutionary Computing, L. Fogarty (Ed.), 25-39, Springer, 0302-9743 Lecture Notes in Computer Science, 993
  6. 6. Blum C. 2005 Ant colony optimization: Introduction and recent trends, Physics of Life Reviews, 2 4 (December 2005) 35325-39373, 1571-0645
  7. 7. Brest J. Zamuda A. Bosković B. Sepesy Maučec. M. Žumer V. 2009 Dynamic optimization using self-adaptive differential evolution, Proceedings of the IEEE Congress on Evolutionary Computation, 415 422 , 978-1-42442-959-2 Trondheim, Norway, May 2009, IEEE.
  8. 8. Bui L. T. Shan Y. Qi F. Abbass H. A. 2005 Comparing two versions of differential evolution in real parameter optimization, www.ntu.edu.sg/home/EPNSugan.
  9. 9. Cordón O. Herrera F. Stützle T. 2002 A review on the ant colony optimization metaheuristic: Basis, models and new trends, Mathware and Soft Computing, 9 3 (2002) 141-175, 1134-5632
  10. 10. de França F. O. Von Zuben. F. J. 2009 A dynamic artificial immune algorithm applied to challenging benchmarking problems, Proceedings of the IEEE Congress on Evolutionary Computation, 423 430 , 978-1-42442-959-2 Trondheim, Norway, May 2009, IEEE.
11. Dorigo M. 1992 Optimization, learning and natural algorithms, Ph.D. Thesis, Politecnico di Milano, Italy.
12. Dorigo M. Bonabeau E. Theraulaz G. 2000 Ant algorithms and stigmergy, Future Generation Computer Systems, 16 8 (June 2000) 851-871, 0167-739X.
13. Dorigo M. Stützle T. 2004 Ant Colony Optimization, The MIT Press, 978-0-262-04219-2, Cambridge, Massachusetts.
14. Dražumerič R. Kosel F. 2005 Optimization of geometry for lateral buckling process of a cantilever beam in the elastic region, Thin-Walled Structures, 43 3 (March 2005) 515-529, 0263-8231.
15. Dréo J. Siarry P. 2002 A new ant colony algorithm using the heterarchical concept aimed at optimization of multiminima continuous functions, In: Ant Algorithms, M. Dorigo et al. (Eds.), 216-227, Springer, 0302-9743, Lecture Notes in Computer Science, 2463.
16. García-Martínez C. Lozano M. 2005 Hybrid real-coded genetic algorithm with female and male differentiation, Proceedings of the IEEE Congress on Evolutionary Computation, 896-903, 0-7803-9363-5, Edinburgh, UK, September 2005, IEEE.
17. Korošec P. 2006 Stigmergy as an approach to metaheuristic optimization, Ph.D. Thesis, Jožef Stefan International Postgraduate School, Ljubljana, Slovenia.
18. Korošec P. Oblak K. Kosel F. Šilc J. 2007 Multi-parameter numerical optimization of selected thin-walled machine elements using a stigmergic optimization algorithm, Thin-Walled Structures, 45 12 (December 2007) 991-1001, 0263-8231.
19. Korošec P. Šilc J. 2008 Using stigmergy to solve numerical optimization problems, Computing and Informatics, 27 3 (2008) 377-402, 1335-9150.
20. Korošec P. Šilc J. 2009a A stigmergy-based algorithm for continuous optimization tested on real-life-like environment, In: EvoWorkshops 2009, M. Giacobini et al. (Eds.), 675-684, Springer, 0302-9743, Lecture Notes in Computer Science, 5484.
21. Korošec P. Šilc J. 2009b The differential ant-stigmergy algorithm applied to dynamic optimization problems, Proceedings of the IEEE Congress on Evolutionary Computation, 407-414, 978-1-4244-2959-2, Trondheim, Norway, May 2009, IEEE.
22. Li C. Yang S. 2009 A clustering particle swarm optimizer for dynamic optimization, Proceedings of the IEEE Congress on Evolutionary Computation, 439-446, 978-1-4244-2959-2, Trondheim, Norway, May 2009, IEEE.
23. Molina D. Herrera F. Lozano M. 2005 Adaptive local search parameters for real-coded memetic algorithms, Proceedings of the IEEE Congress on Evolutionary Computation, 888-895, 0-7803-9363-5, Edinburgh, UK, September 2005, IEEE.
24. Pošík P. 2005 Real-parameter optimization using the mutation step co-evolution, Proceedings of the IEEE Congress on Evolutionary Computation, 872-879, 0-7803-9363-5, Edinburgh, UK, September 2005, IEEE.
25. Rönkkönen J. Kukkonen S. Price K. V. 2005 Real-parameter optimization with differential evolution, Proceedings of the IEEE Congress on Evolutionary Computation, 506-513, 0-7803-9363-5, Edinburgh, UK, September 2005, IEEE.
26. Sen P. C. 1997 Principles of Electric Machines and Power Electronics, 2nd edition, John Wiley & Sons, 978-0-471-02295-4, New York.
27. Sinha A. Tiwari S. Deb K. 2005 A population-based, steady-state procedure for real-parameter optimization, Proceedings of the IEEE Congress on Evolutionary Computation, 514-521, 0-7803-9363-5, Edinburgh, UK, September 2005, IEEE.
28. Socha K. 2004 ACO for continuous and mixed-variable optimization, In: Ant Colony Optimization and Swarm Intelligence, M. Dorigo et al. (Eds.), 25-36, Springer, 0302-9743, Lecture Notes in Computer Science, 3172.
29. Tsutsui S. 2006 An enhanced aggregation pheromone system for real-parameter optimization in the ACO metaphor, In: Ant Colony Optimization and Swarm Intelligence, M. Dorigo et al. (Eds.), 60-71, Springer, 0302-9743, Lecture Notes in Computer Science, 4150.
30. Tušar T. Korošec P. Papa G. Filipič B. Šilc J. 2007 A comparative study of stochastic optimization methods in electric motor design, Applied Intelligence, 27 2 (2007) 101-111, 0924-669X.
31. Yu E. L. Suganthan P. N. 2009 Evolutionary programming with ensemble of external memories for dynamic optimization, Proceedings of the IEEE Congress on Evolutionary Computation, 431-438, 978-1-4244-2959-2, Trondheim, Norway, May 2009, IEEE.
32. Yuan B. Gallagher M. 2005 Experimental results for the special session on real-parameter optimization at CEC 2005: A simple, continuous EDA, Proceedings of the IEEE Congress on Evolutionary Computation, 1792-1799, 0-7803-9363-5, Edinburgh, UK, September 2005, IEEE.
