
Combination of Particle Swarm and Ant Colony Optimization Algorithms for Fuzzy Systems Design

By Chia-Feng Juang

Published: February 1st 2010

DOI: 10.5772/7226


1. Introduction

Fuzzy systems (FSs) have been extensively applied to automatic control, pattern recognition, and decision analysis. However, a common bottleneck is encountered in the derivation of fuzzy rules, which is often difficult, time consuming, and reliant on expert knowledge. To automate the design of FSs, many metaheuristic learning algorithms have been proposed. One major optimization category uses the swarm intelligence (SI) model (Kennedy et al., 2001). The SI technique studies collective behavior in decentralized systems. Its development was based on mimicking the social behavior of animals or insects in an effort to find the optima in the problem space. SI models are initialized with a population of random solutions. One well-known SI model is particle swarm optimization (PSO) (Kennedy & Eberhart, 1995). Many modified PSO models have been proposed and successfully applied to different optimization problems (Clerc & Kennedy, 2002; Bergh & Engelbrecht, 2004; Ratnaweera et al., 2004; Juang, 2004; Kennedy & Mendes, 2006; Parrott & Li, 2006; Chen & Li, 2007). FS design using PSO has also been proposed in several studies (Juang, 2004; Chatterjee et al., 2005; Juang et al., 2007; Araujo & Coelho, 2008; Sharma et al., 2009).

Another well-known SI model is ant colony optimization (ACO) (Dorigo & Stutzle, 2004). The ACO technique is inspired by observations of real ant colonies. It is a multi-agent approach that was originally proposed to solve difficult discrete combinatorial optimization problems, such as the traveling salesman problem (TSP) (Dorigo et al., 1996; Dorigo & Gambardella, 1997). In the original ACO meta-heuristic, artificial ant colonies cooperate to find good solutions to difficult discrete optimization problems. Different ACO models have been applied to FS design problems (Casillas et al., 2000; Casillas et al., 2005; Mucientes & Casillas, 2007; Juang & Lo, 2007; Juang et al., 2008; Juang & Lu, 2009). In (Casillas et al., 2000; Mucientes & Casillas, 2007; Juang et al., 2008; Juang & Lu, 2009), the FS input space was partitioned in a grid-type manner, with the antecedent part parameters of the FS manually assigned in advance. In (Juang & Lo, 2007), the FS input space was flexibly partitioned using a fuzzy clustering-like algorithm in order to reduce the total number of rules. In all of these studies, the consequent part parameters were optimized in discrete space using ACO. Since only the consequent part parameters are optimized, and the optimization space is restricted to be discrete, the designed FSs are unsuitable for problems where high accuracy is a major concern.

Several studies on the combination of PSO or ACO with other optimization algorithms have been proposed in order to improve the performance of the original optimization model. In (Juang, 2004), a hybrid of a genetic algorithm (GA) and PSO, called HGAPSO, was proposed. In HGAPSO, new individuals are created not only by PSO but also by the crossover and mutation operations of a GA. In (Fan et al., 2004; Juang & Hsu, 2005), the simplex method was introduced into PSO. In (Ling et al., 2008), a hybrid of PSO with wavelet mutation was proposed. To apply the ACO technique to continuous optimization problems, several studies on the combination of ACO with other continuous optimization methods have been performed (Feng & Feng, 2004; Ge et al., 2004). An ACO followed by an immune operation for optimization in a continuous space was proposed in (Feng & Feng, 2004). The incorporation of a deterministic search algorithm (the Powell method) into ACO for continuous optimization was proposed in (Ge et al., 2004).

This chapter studies the combination of PSO and ACO for FS design. One problem of PSO in FS design is that its performance is affected by the initial particle positions, which are usually randomly generated in a continuous search space. A poor initialization may result in poor performance. Searching in a discrete-space domain by ACO helps to find good solutions. However, the search constraint in a discrete-space domain restricts learning accuracy. The motivation for combining ACO and PSO is to compensate for the aforementioned weaknesses of each method in FS design problems. Two combination approaches, sequential and parallel, for PSO and ACO proposed in (Juang & Lo, 2008; Juang & Wang, 2009) are described and discussed in this chapter.

This chapter is organized as follows. Section 2 describes the FS to be designed. The rule generation algorithm and rule initialization are also described in this section. Section 3 describes PSO and how to apply it to FS design. Section 4 describes ACO and how to apply it to FS design. Section 5 describes the sequential combination of PSO and ACO for FS design. Section 6 describes the parallel combination of PSO and ACO for FS design. Finally, Section 7 draws conclusions.

2. Fuzzy systems

2.1. Fuzzy system functions

This subsection describes the FS to be designed. The FS is of zero-order Takagi-Sugeno-Kang (TSK) type. That is, the $i$th rule, denoted as $R_i$, in the FS is represented in the following form:

$$R_i:\ \text{If } x_1(k) \text{ is } A_{i1} \text{ And } \ldots \text{ And } x_n(k) \text{ is } A_{in}, \text{ Then } u(k) \text{ is } a_i \qquad (1)$$

where $k$ is the time step, $x_1(k),\ldots,x_n(k)$ are input variables, $u(k)$ is the system output variable, $A_{ij}$ is a fuzzy set, and $a_i$ is a crisp value. Fuzzy set $A_{ij}$ uses a Gaussian membership function

$$M_{ij}(x_j)=\exp\left\{-\left(\frac{x_j-m_{ij}}{b_{ij}}\right)^2\right\} \qquad (2)$$

where $m_{ij}$ and $b_{ij}$ represent the center and width of the fuzzy set $A_{ij}$, respectively. In the inference engine, the fuzzy AND operation is implemented by the algebraic product in fuzzy theory. Thus, given an input data set $x=(x_1,\ldots,x_n)$, the firing strength $\phi_i(x)$ of rule $i$ is calculated by

$$\phi_i(x)=\prod_{j=1}^{n}M_{ij}(x_j)=\exp\left\{-\sum_{j=1}^{n}\left(\frac{x_j-m_{ij}}{b_{ij}}\right)^2\right\} \qquad (3)$$

If there are $r$ rules in an FS, the output of the system calculated by the weighted average defuzzification method is

$$u=\frac{\sum_{i=1}^{r}\phi_i(x)\,a_i}{\sum_{i=1}^{r}\phi_i(x)} \qquad (4)$$

where $a_i$ is the rule consequent value in (1). There are a total of $D=r(2n+1)$ free parameters in an FS, all of which are optimized using the combination of PSO and ACO algorithms.
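As a concrete illustration of (2)-(4), the following minimal Python sketch computes the output of a zero-order TSK FS from given centers, widths, and consequent values. The function and array names are illustrative, not from the original chapter.

```python
import numpy as np

def tsk_output(x, m, b, a):
    """Zero-order TSK inference following (2)-(4).

    x: input vector, shape (n,)
    m: rule centers, shape (r, n)
    b: rule widths, shape (r, n)
    a: crisp consequent values, shape (r,)
    """
    # Firing strength of each rule: product of Gaussian memberships, eq. (3)
    phi = np.exp(-np.sum(((x - m) / b) ** 2, axis=1))   # shape (r,)
    # Weighted-average defuzzification, eq. (4)
    return np.sum(phi * a) / np.sum(phi)
```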

2.2. Rule generation and initialization

Most studies on SI-based FS design algorithms determine the number of rules by trial and error and assign the initial FS parameters randomly and uniformly in the domain of each free parameter. This subsection describes one promising rule generation and initialization algorithm based on a fuzzy clustering-like approach that has been used in an SI algorithm (Juang et al., 2007). It is assumed that there are initially no rules in the designed FS. The rule generation method generates fuzzy rules online upon receiving training data. Rules are generated in order to ensure that, for each input $x$, at least one rule is activated with a firing strength larger than a pre-defined threshold $\phi_{th}\in(0,1)$.

Figure 1.

Distributions of input data, generated fuzzy rules that properly cover the data, and initial shapes of the corresponding fuzzy sets in each input dimension.

Geometrically, as Fig. 1 shows, this threshold ensures that each input data point is properly covered by a rule in the input space. According to this concept, the firing strength $\phi_i(x)$ in (3) is used as the criterion to decide whether a new fuzzy rule should be generated. For each incoming piece of data $x(k)$, find

$$I=\arg\max_{1\le i\le r}\phi_i(x(k)) \qquad (5)$$

where $r$ is the number of existing rules at time step $k$. If $\phi_I\le\phi_{th}$ or $r=0$, then a new fuzzy rule is generated to cover $x(k)$ and $r\leftarrow r+1$. A smaller $\phi_{th}$ value generates a smaller number of rules. The generation of the $r$th rule also generates the $r$th new fuzzy set in each input variable. That is, the number of fuzzy sets in each input dimension is equal to the number of fuzzy rules in the designed FS. To reduce the number of fuzzy sets in each input dimension, the fuzzy set generation criterion proposed in (Juang et al., 2007; Juang & Wang, 2009) can be further employed, though it adds computation cost. For each newly generated fuzzy rule, the corresponding center and width of the Gaussian fuzzy set $A_{rj}$ in each input variable are assigned as follows:

$$m_{rj}=x_j(k),\quad b_{rj}=b_{fix},\quad j=1,\ldots,n \qquad (6)$$

where $b_{fix}$ is a pre-specified constant value. Since the centers and widths of all fuzzy sets can be further tuned by PSO, all of the initial widths are simply set to the same value of $b_{fix}$.
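A minimal sketch of the rule generation criterion (5) and initialization (6) is given below, assuming the FS is stored as lists of center and width vectors. The names `phi_th` and `b_fix` correspond to the threshold and pre-specified width; the default values are illustrative assumptions.

```python
import numpy as np

def maybe_add_rule(x, m, b, phi_th=0.5, b_fix=0.3):
    """Online rule generation: add a new rule if no existing rule covers
    input x with a firing strength above phi_th (eqs. (5)-(6)).
    m, b: lists of center and width vectors of the existing rules."""
    if len(m) > 0:
        phi = [np.exp(-np.sum(((x - mi) / bi) ** 2)) for mi, bi in zip(m, b)]
        if max(phi) > phi_th:
            return m, b  # input already covered; no new rule
    # New rule centered at x with a fixed initial width, eq. (6)
    m.append(np.array(x, dtype=float))
    b.append(np.full(len(x), b_fix))
    return m, b
```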

3. Particle swarm optimization (PSO) for FS design

This section first describes the basic concept of PSO. The application of PSO to optimize the FS generated in Section 2 is then described. The swarm in PSO is initialized with a population of random solutions (Kennedy & Eberhart, 1995). Each potential solution is called a particle. Each particle has a position, which is represented by a position vector $s_i$. A swarm of particles moves through the problem space, with the velocity of each particle represented by a velocity vector $v_i$. At each time step, a function $f$ is evaluated, using $s_i$ as an input. Each particle keeps track of its own best position, which is associated with the best fitness it has achieved so far, in a vector $p_i$. Furthermore, each particle is defined within the context of a topological neighborhood that is made up of itself and other particles in the swarm. The best position found by any member of the neighborhood is tracked in $p_i^g$. For a global version of PSO, $p_i^g$ is defined as the best position in the whole population. At each iteration $t$, a new velocity for particle $i$ is obtained by using the individual best position, $p_i(t)$, and the neighborhood best position, $p_i^g(t)$:

$$v_i(t+1)=w\,v_i(t)+c_1\phi_1\,(p_i(t)-s_i(t))+c_2\phi_2\,(p_i^g(t)-s_i(t)) \qquad (7)$$

where $w$ is the inertia weight, $c_1$ and $c_2$ are positive acceleration coefficients, and $\phi_1$ and $\phi_2$ are uniformly distributed random vectors in $[0,1]$, where a random value is sampled for each dimension. The limit of $v_i$ to the range $[-v_{max},v_{max}]$ is problem dependent. For some problems, if the velocity violates this limit, it is reset within its proper limits. Depending on its velocity, each particle changes its position according to the following equation:

$$s_i(t+1)=s_i(t)+v_i(t+1) \qquad (8)$$

Based on (7) and (8), the particle population tends to cluster around the best positions found so far.
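The velocity and position updates (7)-(8) can be sketched in a few lines; the hyperparameter values `w`, `c1`, `c2`, and `v_max` below are illustrative assumptions, not values from the chapter.

```python
import numpy as np

def pso_step(s, v, p_best, p_gbest, w=0.7, c1=1.5, c2=1.5, v_max=1.0):
    """One PSO iteration for a swarm of positions s and velocities v,
    following (7) and (8). Each row of s and v is one particle."""
    phi1 = np.random.rand(*s.shape)   # random vector phi_1 in [0,1]
    phi2 = np.random.rand(*s.shape)   # random vector phi_2 in [0,1]
    v = w * v + c1 * phi1 * (p_best - s) + c2 * phi2 * (p_gbest - s)
    v = np.clip(v, -v_max, v_max)     # velocity limit [-v_max, v_max]
    return s + v, v                    # eq. (8)
```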

The use of PSO for FS design, i.e., optimization of all free parameters in an FS, is described as follows. For the FS in (1) that consists of $n$ input variables and $r$ rules, all of its free parameters can be described by the following position vector

$$s=[m_{11},b_{11},\ldots,m_{1n},b_{1n},a_1,\ \ldots,\ m_{r1},b_{r1},\ldots,m_{rn},b_{rn},a_r]\in\mathbb{R}^D \qquad (9)$$

After the rule generation and initialization process described in Section 2, the initial antecedent part parameters are determined. Based on the solution vector representation in (9) and the antecedent part parameter initialization in (6), the $i$th solution vector $s_i$ is generated:

$$s_i=[s_{i1}\ s_{i2}\ \cdots\ s_{iD}]=[m_{11}+\Delta m_{11}^i,\ b_{fix}+\Delta b_{11}^i,\ \ldots,\ m_{1n}+\Delta m_{1n}^i,\ b_{fix}+\Delta b_{1n}^i,\ a_1,\ \ldots,\ m_{r1}+\Delta m_{r1}^i,\ b_{fix}+\Delta b_{r1}^i,\ \ldots,\ m_{rn}+\Delta m_{rn}^i,\ b_{fix}+\Delta b_{rn}^i,\ a_r] \qquad (10)$$

where $\Delta m_{ij}$ and $\Delta b_{ij}$ are small random numbers. The parameter $a_i$ is randomly and uniformly distributed in the FS output range. The evaluation function $f$ for a particle $s_i$ is computed according to the performance of the FS constituted by the parameters in (10).
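Under the same assumptions, a particle can be initialized from the generated rule base as in (9)-(10): perturb the antecedent centers and widths slightly and draw the consequents uniformly from the output range. The function name and perturbation scale `delta` below are illustrative.

```python
import numpy as np

def init_particle(m, b_fix, out_range, delta=0.05):
    """Build one initial position vector as in (9)-(10).
    m: (r, n) rule centers from the rule generation step
    b_fix: initial width; out_range: (low, high) of the FS output."""
    r, n = m.shape
    s = []
    for i in range(r):
        for j in range(n):
            s += [m[i, j] + np.random.uniform(-delta, delta),   # center + small noise
                  b_fix   + np.random.uniform(-delta, delta)]   # width  + small noise
        s.append(np.random.uniform(*out_range))                  # consequent a_i
    return np.array(s)   # length D = r(2n + 1)
```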

4. Ant colony optimization (ACO) for FS design

ACO is a meta-heuristic algorithm inspired by the behavior of real ants, in particular how they forage for food (Dorigo & Caro, 1999; Dorigo & Stutzle, 2004). It was first applied to the traveling salesman problem (TSP). In ACO, a finite-size colony of artificial ants is created. Each ant then builds a solution to the problem. While building its own solution, each ant collects information based on the problem characteristics and on its own performance. The performance measure is based on a quality function F(∙). ACO can be applied to problems that can be described by a graph, where the solutions to the optimization problem can be expressed in terms of feasible paths on the graph. Among the feasible paths, ACO is used to find an optimal one, which may be a locally or globally optimal solution. The information collected by the ants during the search process is stored in the pheromone trails, τ, associated with the connections of all edges. These pheromone trails play the role of a distributed long-term memory about the whole ant search process. The ants cooperate in finding a solution by exchanging information via the pheromone trails. Edges can also have an associated heuristic value, η, representing a priori information about the problem instance definition or run-time information provided by a source other than the ants. Ants act concurrently and independently, yet show cooperative behavior. Once all ants have computed their tours (i.e., at the end of each iteration), ACO algorithms update the pheromone trails using all the solutions produced by the ant colony. Each edge belonging to one of the computed solutions is modified by an amount of pheromone proportional to its solution value. The pheromone trails may be updated locally while an ant builds its tour or globally when all ants have built their tours.

Let $\tau_{ij}(t)$ be the pheromone level on edge $(i,j)$ at iteration $t$, and $\eta_{ij}$ be the corresponding heuristic value. The probability that an ant chooses $j$ as the next vertex when it is at vertex $i$ at iteration $t$ is given by

$$p_{ij}(t)=\begin{cases}\dfrac{\tau_{ij}(t)\,\eta_{ij}^{\beta}}{\sum_{z\in J(i)}\tau_{iz}(t)\,\eta_{iz}^{\beta}}, & \text{if } j\in J(i)\\ 0, & \text{otherwise}\end{cases} \qquad (11)$$

where $J(i)$ is the set of vertices that remain to be visited by the ant, and $\beta$ is a parameter that determines the relative influence of the pheromone trail and the heuristic information. After all ants have completed their tours, the pheromone level is updated by

$$\tau_{ij}(t+1)=\rho\,\tau_{ij}(t)+\Delta\tau_{ij}(t) \qquad (12)$$

where $\rho\in(0,1)$ is a parameter such that $1-\rho$ represents the evaporation coefficient. The update value $\Delta\tau_{ij}$ is related to the quality value F. Many updating rules for $\Delta\tau_{ij}$ have been studied (Dorigo & Stutzle, 2004), such as the ant system (AS) (Dorigo et al., 1996), the ant colony system (Dorigo & Gambardella, 1997), the MAX-MIN ant system (Stutzle & Hoos, 2000), and the hypercube framework ACO (Blum & Dorigo, 2004). The major differences between these ACO algorithms and the original AS lie in the probability selection technique or the pheromone update.
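A minimal sketch of the transition rule (11) and the generic pheromone update (12) follows; the data structures (numpy matrices for τ and η, a Python set for the visited vertices) are assumptions for illustration.

```python
import numpy as np

def choose_next(i, tau, eta, visited, beta=2.0):
    """Pick the next vertex from vertex i using the transition rule (11).
    tau, eta: pheromone and heuristic matrices; visited: set of visited vertices."""
    candidates = [j for j in range(tau.shape[1]) if j not in visited]
    weights = np.array([tau[i, j] * eta[i, j] ** beta for j in candidates])
    probs = weights / weights.sum()
    return np.random.choice(candidates, p=probs)

def update_pheromone(tau, delta_tau, rho=0.9):
    """Generic pheromone update (12); 1 - rho is the evaporation coefficient."""
    return rho * tau + delta_tau
```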

ACO can be applied to design the consequent part parameters in an FS. The general design approach is described as follows. Consider the FS whose structure and antecedent part parameters are determined according to (5) and (6) in Section 2. Suppose the consequent part is selected from a discrete set $U=\{u_1,\ldots,u_m\}$. For each rule, there are $m$ candidate actions to be chosen. Each rule with competing consequents may be written as

$$R_i:\ \text{If } x_1 \text{ is } A_{i1} \text{ And } \ldots \text{ And } x_n \text{ is } A_{in}, \text{ Then } u(k) \text{ is } u_1 \text{ Or } u_2 \text{ Or } \ldots \text{ Or } u_m \qquad (13)$$

That is, we have to decide on one from a total of $m^r$ combinations of consequent parts. This combinatorial problem is solved by ACO. To select the consequent value of each rule by ACO, we regard a combination of selected consequent values for a whole FS as the tour of an ant. For example, in Fig. 2, there are four rules, denoted by $R_1,\ldots,R_4$, in an FS and three candidate values, $u_1$, $u_2$, $u_3$, for each rule. Starting from the initial state, the nest, the ant moves through $R_1,\ldots,R_3$ and stops at $R_4$, where the tour of this ant is marked by a bold line. For each rule, the node visited by the ant is selected as the consequent part of the rule. For the whole FS constructed by the ant in Fig. 2, the consequent values in $R_1$, $R_2$, $R_3$, and $R_4$ are $u_2$, $u_3$, $u_1$, and $u_3$, respectively. Selection of the consequent value is partially based on the pheromone trails between rules. The size of the pheromone matrix is $r\times m$, and each entry in the matrix is denoted by $\tau_{ij}$, where $i=1,\ldots,r$ and $j=1,\ldots,m$. As shown in Fig. 2, when the ant arrives at rule $R_q$, the selection among the $m$ candidate values (denoted by nodes) of $R_{q+1}$ is partly based on $\tau_{q+1,j}$, $j=1,\ldots,m$. By using only the pheromone matrix, the transition probability is defined by

$$p_{ij}(t)=\frac{\tau_{ij}(t)}{\sum_{z=1}^{m}\tau_{iz}(t)} \qquad (14)$$

In an FS, different fuzzy rules may share the same consequent value. That is, when value $u_j$ is selected as the consequent of rule $i$, it may also be selected as the consequent of the following rules $i+1,\ldots,r$. For this reason, the set $J(i)$ in (11) is relaxed to the whole set $U$ for all $i$ in (14).
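When no heuristic values are used, an ant's tour therefore reduces to sampling one candidate consequent per rule from the pheromone matrix via (14). A sketch under that assumption:

```python
import numpy as np

def ant_tour(tau):
    """Select one consequent index per rule using (14).
    tau: (r, m) pheromone matrix; returns a list of r chosen indices."""
    choices = []
    for i in range(tau.shape[0]):
        probs = tau[i] / tau[i].sum()                     # transition probability (14)
        choices.append(np.random.choice(tau.shape[1], p=probs))
    return choices   # e.g. [1, 2, 0, 2] corresponds to u_2, u_3, u_1, u_3 in Fig. 2
```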

Figure 2.

The relationship between an ant path and the selected consequent values in an FS.

ACO can work without the use of heuristic values, in which case the consequent part is simply selected using (14). Heuristic values can be further employed to improve learning performance. However, determination of the heuristic value $\eta$ usually requires a priori information about the problem instance. For an unknown plant control problem using FSs, it is difficult to assign proper heuristic values in advance. For this problem, in (Juang & Lo, 2007), a new heuristic value assignment approach is proposed for controlling an unknown plant. For the control problems considered in that study, it is assumed that neither a priori knowledge of the plant model nor training data collected in advance are available. That study proposes an on-line heuristic value update approach according to the temporal difference error between the actual output $y(k)$ and the desired output $y_d(k)$. In (Juang et al., 2008), a simple heuristic value assignment approach is proposed for controlling a plant whose model is unknown except for information on the direction of output change with respect to the control input. That study assigns heuristic values to each candidate consequent value according to the corresponding fuzzy rule inputs, which are the control error $e(k)=y(k)-y_d(k)$ and its change with time $\Delta e(k)=e(k)-e(k-1)$. In (Juang & Lu, 2009), the q-values in a reinforcement fuzzy Q-learning algorithm are used as heuristic values for unknown plant control. Each candidate is assigned a q-value, which is updated using success and failure reinforcement signals during the reinforcement learning process.

5. Sequential combination of ACO and PSO

This section describes the sequential combination approach of ACO and PSO proposed in (Juang & Lo, 2008). In this sequential combination approach, the rule consequent of each on-line generated rule described in Section 2 is first learned by ACO. The advantage of using ACO for rule consequent learning is that it helps determine a good fuzzy rule base for subsequent learning. However, the search constraint in a discrete-space domain restricts learning accuracy, and the ants do not optimize the antecedent part parameters. Therefore, after ACO learning, PSO is employed to further optimize both the antecedent and consequent parameters, where the initial particles in PSO are generated according to the learning results from ACO.

Figure 3.

Flow chart of sequential combination of ACO and PSO for FS design.

Figure 3 shows the flow of the sequential combination of ACO and PSO. The formulation of consequent part learning by ACO is described in Section 4. Detailed function evaluation and the update of pheromone levels are described as follows. The pheromone trails $\tau_{ij}$ on the ant tour are updated according to the performance of the constructed FS. When an ant completes a tour, the corresponding FS is evaluated by a quality function F, which is defined as the inverse of the learning error. A higher F value indicates better performance. Let the population size be $P_s$, meaning that there are $P_s$ ants in a colony. For each iteration, after all the ants in the colony have completed their tours, i.e., after the construction of $P_s$ FSs, select the FS with the highest F found from the initial iteration up to the current one. If a new global best ant is found in this iteration, then the pheromone trails on the tour traveled by the global best ant are updated; otherwise, no pheromone update is performed in this iteration. Denote the global best ant as $q^*$ with the corresponding quality value $F_{q^*}$. The new pheromone trail $\tau_{ij}(t+1)$ is updated by

$$\tau_{ij}(t+1)=(1-\rho)\,\tau_{ij}(t)+\Delta\tau_{ij}(t),\quad \text{if } (i,j)\in \text{global-best-tour} \qquad (15)$$

where $0<\rho<1$ is the pheromone trail evaporation rate and

$$\Delta\tau_{ij}(t)=F_{q^*} \qquad (16)$$

The ACO learning iteration above repeats until the criterion for switching is met. The switching point from ACO to PSO learning is determined by the learning error convergence property of the global best ant. Let E(t) denote the error index of the global best ant at iteration t. For example, E(t) can be defined as root-mean-squared error (RMSE) or sum of absolute error (SAE). If

$$\frac{E(t)-E(t+50)}{E(t)}\le 1\% \qquad (17)$$

then ACO learning terminates and learning switches to PSO.
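The global-best pheromone update (15)-(16) and the switching test (17) can be sketched as follows; the helper names and the representation of the best tour as (rule, consequent) index pairs are illustrative assumptions.

```python
def update_global_best_pheromone(tau, best_tour, F_best, rho=0.1):
    """Pheromone update (15)-(16): only the edges on the global-best tour are
    updated, and Delta_tau equals the quality value F of the global best ant.
    tau: (r, m) numpy pheromone matrix; best_tour: list of (rule, consequent) pairs."""
    for (i, j) in best_tour:
        tau[i, j] = (1.0 - rho) * tau[i, j] + F_best
    return tau

def should_switch(E_history, window=50, tol=0.01):
    """Switching criterion (17): stop ACO when the relative improvement of the
    global-best error index over the last `window` iterations is at most 1%."""
    if len(E_history) <= window:
        return False
    E_old, E_new = E_history[-window - 1], E_history[-1]
    return (E_old - E_new) / E_old <= tol
```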

Using PSO releases the discrete-space constraint imposed on the consequent parameters when ACO is used, and searches for the best consequent parameters in continuous space. In addition to the consequent parameters, PSO also searches for the optimal antecedent part parameters. As in ACO, the population size in PSO is equal to $P_s$. The elements in a position $s$ are set as in (9). At iteration $t=0$, the initial positions $s_1(0),\ldots,s_{P_s}(0)$ are generated according to the best-performing FS found by ACO, denoted as $s_{ACO}$. Position $s_1(0)$ is set to be the same as $s_{ACO}$. The remaining $P_s-1$ particles, $s_2(0),\ldots,s_{P_s}(0)$, are generated by adding uniformly distributed random numbers to $s_{ACO}$. That is,

$$s_i(0)=s_{ACO}+w_i,\quad i=2,\ldots,P_s \qquad (18)$$

where $w_i$ is a random vector. The initial velocities $v_i(0)$, $i=1,\ldots,P_s$, of all particles are randomly generated. The performance of each particle is evaluated according to the FS it represents. The evaluation function $f$ is defined as the error index $E(t)$ described above. According to $f$, we can find the individual best position $p_i$ of each particle and the global best particle $p_i^g$ in the whole population. The velocity and position of each particle are updated using (7) and (8), respectively. The whole learning process ends when a predefined criterion is met; in (Juang & Lo, 2008), the criterion is the total number of iterations. In (Juang & Lo, 2008), the sequential combination of ACO and PSO for FS design has been applied to different control problems, including chaotic system regulation control, nonlinear plant tracking control, and water bath temperature control. The performance of the sequential combination approach has been shown to be better than that of ACO, PSO, and various existing FS design methods applied to the same problems.
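The seeding of the PSO swarm from the ACO result, eq. (18), can be sketched as follows; the noise range is an assumed illustrative value for the uniform random vector $w_i$.

```python
import numpy as np

def seed_swarm_from_aco(s_aco, pop_size, noise=0.1):
    """Initialize PSO positions from the best ACO solution, eq. (18).
    The first particle copies s_ACO; the rest add uniform random vectors."""
    s_aco = np.asarray(s_aco, dtype=float)
    swarm = [s_aco.copy()]                                  # s_1(0) = s_ACO
    for _ in range(pop_size - 1):
        swarm.append(s_aco + np.random.uniform(-noise, noise, size=s_aco.size))
    return np.array(swarm)                                  # shape (P_s, D)
```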

6. Parallel combination of ACO and PSO

This section describes the parallel combination approach of ACO and PSO for FS design (Juang & Wang, 2009). Like the sequential combination approach, the parallel combination approach uses a constant population size, denoted as $P_s=2N$. Each individual in the population represents a parameter solution of the FS as described in (9). An individual may be generated by an ant path in ACO or by a particle in PSO. Individuals generated by ants and particles are called ant individuals and particle individuals, respectively. Figure 4 shows a block diagram of the algorithm. Generation of the population individuals is described as follows.

Figure 4.

Parallel combination of ACO and PSO for FS design.

In the first iteration, the rule generation algorithm in Section 2 and N different ant paths generate half of the population. The other half is generated from particle individuals. The N ant individuals contain no rules initially. New rules are generated using the criterion in (5) during the evaluation of an ant individual (FS). If the number of rules after evaluation of the N ant individuals is $r_1$, then the number of rules in each of the N particle individuals is also equal to $r_1$. In the parallel combination approach, the objective of using PSO is to optimize both the antecedent and consequent parameters of existing fuzzy rules; therefore, no rules are generated during the performance evaluation of a particle (FS). The initial N particle position vectors are generated using (10).

The second and subsequent iterations generate a new population with 2N new individuals. For the generation of new individuals in each iteration, the ACO-based FS design approach in Section 4 is used. In this approach, N ant paths generate N ant individuals (FSs). During the new ant individual generation process, some ants may choose the same path and generate identical individuals. This phenomenon becomes more pronounced as more iterations are conducted, due to pheromone matrix convergence. To account for this phenomenon, suppose that $N_t$ ants share the same path at the $t$th iteration. Then only $N-N_t+1$ different ant individuals are reserved in the population. The performance of these $N-N_t+1$ individuals is evaluated and the pheromone matrix is updated according to their performance. The remaining $N+N_t-1$ new individuals in the population (the population size is fixed at 2N) in iteration $t$ are partly generated by particles from the last iteration. The original $N+N_{t-1}-1$ particle individuals from the last iteration are optimized by PSO. The performance of these optimized particle individuals is then evaluated. In addition to these optimized particle individuals, the $N-N_{t-1}+1$ ant individuals in the previous iteration (iteration $t-1$) also help generate auxiliary particles to improve particle search performance. Auxiliary particles are generated by adding small random values to these ant individuals. The purpose of adding small random values is to distinguish auxiliary particles from existing ant individuals and to improve the algorithm's exploration ability. Suppose an original individual has the form in (10); then the auxiliary particle takes the following form

$$[m_{11}+\Delta m_{11},\ b_{11}+\Delta b_{11},\ \ldots,\ m_{1n}+\Delta m_{1n},\ b_{1n}+\Delta b_{1n},\ a_1+\Delta a_1,\ \ldots,\ m_{rn}+\Delta m_{rn},\ b_{rn}+\Delta b_{rn},\ a_r+\Delta a_r] \qquad (19)$$

The performance of these $N-N_{t-1}+1$ auxiliary particles is then computed. These auxiliary particles together with the original $N+N_{t-1}-1$ particles constitute a total of 2N temporary particles. Only the best $N+N_t-1$ particles are reserved from among these 2N particles. In the next iteration, these reserved particles cooperate to find better solutions through PSO.
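A brief sketch of the auxiliary-particle step, eq. (19), and of the selection of the best particles follows; the function names, the perturbation scale, and the use of a generic error vector are illustrative assumptions.

```python
import numpy as np

def make_auxiliary_particles(ant_individuals, noise=0.05):
    """Auxiliary particles, eq. (19): ant individuals from the previous
    iteration perturbed by small random values."""
    return [s + np.random.uniform(-noise, noise, size=len(s))
            for s in ant_individuals]

def select_particles(particles, errors, keep):
    """Reserve the `keep` best particle individuals (lowest error index)
    from among the 2N temporary particles."""
    order = np.argsort(errors)
    return [particles[k] for k in order[:keep]]
```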

For the ant individual update by ACO at the end of each iteration, as in Section 4, a new ant path generates a new FS (individual) according to the pheromone levels and the transition probability in (14). The pheromone levels are updated using (15) and (16). For the particle individual update in iteration $t$, PSO updates all of the $N+N_{t-1}-1$ particles generated either from auxiliary particles or from original particle individuals in the previous iteration. Positions and velocities are updated based on (7) and (8) using a local version of PSO. For computation of the neighborhood best particle $p_i^g(t)$, the neighbors of a particle with $r_{t-1}$ rules are defined as the particles that also have $r_{t-1}$ rules. As in the sequential combination approach, the learning process ends when a pre-defined number of iterations is performed.

In (Juang & Wang, 2009), the parallel combination learning approach has been applied to two control examples: nonlinear plant tracking control and reversing a truck following a circular path. These examples generate training data only when fuzzy control starts. Simulation results show that the proposed method achieves a smaller control error than ACO, PSO, and the other SI learning algorithms used for comparison in each example.

7. Conclusion

This chapter describes the design of FSs using PSO, ACO, and their sequential and parallel combinations. The use of on-line rule generation not only helps to determine the number of fuzzy rules, but also helps to locate the initial antecedent parameters for subsequent parameter learning using PSO. For PSO, the incorporation of ACO helps to locate a good initial fuzzy rule base for further PSO learning. For ACO, the incorporation of PSO helps to find the parameters in a continuous space. The cooperative search of ACO and PSO compensates for the search disadvantages of each optimization method. Reported results show that the two combination approaches outperform various advanced PSO and ACO algorithms on FS design problems. The performance of these two combination approaches on other optimization problems will be studied in the future. Other combination approaches of ACO and PSO may also be studied for further performance improvement.
