Open access peer-reviewed chapter

A Gradient Multiobjective Particle Swarm Optimization

Written By

Hong-Gui Han, Lu Zhang and Jun-Fei Qiao

Submitted: 23 October 2017 Reviewed: 08 March 2018 Published: 05 September 2018

DOI: 10.5772/intechopen.76306

From the Edited Volume

Optimization Algorithms - Examples

Edited by Jan Valdman


Abstract

An adaptive gradient multiobjective particle swarm optimization (AGMOPSO) algorithm, based on a multiobjective gradient (MOG) method, is developed to improve computational performance. In this AGMOPSO algorithm, the MOG method is devised to update the archive, improving the convergence speed and the local exploitation in the evolutionary process. Owing to the MOG method, the AGMOPSO algorithm not only converges faster and more accurately, but its solutions also have better diversity. Additionally, the convergence is discussed to establish the prerequisite for any successful application of AGMOPSO. Finally, with regard to computational performance, the proposed AGMOPSO algorithm is compared with several other multiobjective particle swarm optimization (MOPSO) algorithms and two state-of-the-art multiobjective algorithms. The results demonstrate that the proposed AGMOPSO algorithm finds a better spread of solutions and converges faster to the true Pareto-optimal front.

Keywords

  • multiobjective particle swarm optimization
  • multiobjective problem
  • multiobjective gradient
  • convergence

1. Introduction

Many engineering and practical applications, such as wastewater treatment processes and aerodynamic design, have a multiobjective nature and require several conflicting objectives to be optimized simultaneously [1, 2, 3]. For these multiobjective optimization problems (MOPs), there is always a set of possible solutions representing the tradeoffs among the objectives, known as the Pareto-optimal set [4, 5]. Evolutionary multiobjective optimization (EMO) algorithms, a class of population-based stochastic optimization approaches, are widely used to solve MOPs because a set of Pareto-optimal solutions can be obtained in a single run [6, 7, 8, 9]. Multiobjective optimization algorithms strive to acquire a Pareto-optimal set with good diversity and convergence. The most typical EMO algorithms include the non-dominated sorting genetic algorithm (NSGA) [10] and NSGA-II [11], the strength Pareto evolutionary algorithm (SPEA) [12] and SPEA2 [13], the Pareto archived evolution strategy (PAES) [14], and the Pareto envelope-based selection algorithm (PESA) [15] and PESA-II [16].

The notable characteristic of particle swarm optimization (PSO) is the cooperation among all particles of a swarm, each attracted toward the global best (gBest) of the swarm and its own personal best (pBest), which gives PSO a strong global search ability [17, 18, 19]. Among EMO algorithms, multiobjective particle swarm optimization (MOPSO) algorithms have been widely used owing to their high convergence speed and ease of implementation [20, 21, 22, 23]. MOPSO can also be applied to many difficult optimization problems, such as noisy and dynamic problems. However, apart from archive maintenance, two issues in MOPSO still remain to be addressed. The first is the update of gBest and pBest: among mutually non-dominated solutions no absolute best solution can be selected, yet the choice of gBest and pBest determines the flight direction of each particle, which has an important effect on the convergence and diversity of MOPSO [24].

Zheng et al. introduced a novel MOPSO algorithm with a comprehensive learning strategy, which improves the diversity of the swarm and the performance of the evolving particles over some advanced MOPSO algorithms [25]. The experimental results illustrate that the proposed approach performs better than some existing methods on a real-world fire evacuation dataset. In [26], a multiobjective particle swarm optimization with preference-based sort (MOPSO-PS), in which the user's preference is incorporated into the evolutionary process to determine the relative merits of non-dominated solutions, was developed to choose suitable gBest and pBest. The results indicate that the user's preference is properly reflected in the optimized solutions without any loss of overall solution quality or diversity. Moubayed et al. proposed a MOPSO that incorporates dominance with decomposition (D2MOPSO) and introduces a novel archiving technique that balances diversity and convergence [27]. The comparative experiments demonstrate that D2MOPSO can handle a wide range of MOPs efficiently. Other methods for updating gBest and pBest can be found in [28, 29, 30, 31]. Although much research has been done, it remains a considerable challenge to select gBest and pBest so as to obtain suitable convergence and diversity [32, 33].

The second issue in MOPSO is how to achieve fast convergence to the Pareto front, well known as one of the most typical features of PSO. To meet this requirement, many different strategies have been put forward. In [34], Hu et al. proposed an adaptive parallel cell coordinate system (PCCS) for MOPSO. The PCCS selects gBest solutions and adjusts the flight parameters based on measurements of parallel cell distance, potential, and distribution entropy, accelerating the convergence of MOPSO by assessing the evolutionary environment. The comparative results show that this self-adaptive MOPSO outperforms the other methods considered. Li et al. proposed a dynamic MOPSO in which the number of swarms is adaptively adjusted throughout the search process [35]. The dynamic MOPSO algorithm allocates an appropriate number of swarms to support the convergence and diversity criteria; the results show that its performance is competitive with the selected algorithms on some standard benchmark problems. Daneshyari et al. introduced a cultural framework to design a flight parameter mechanism for updating the personalized flight parameters of mutated particles [36]. The results show that this mechanism efficiently explores solutions close to the true Pareto front. In the above MOPSO algorithms, the improved strategies are expected to achieve better performance; however, little work has been done to examine the convergence of these MOPSO algorithms [37].

Motivated by the above review and analysis, in this chapter, an adaptive gradient multiobjective particle swarm optimization (AGMOPSO) algorithm, based on a multiobjective gradient (MOG) method, is put forward. This novel AGMOPSO algorithm converges faster in the evolutionary process and deals with MOPs more efficiently. Its major contribution is as follows: a novel MOG method is proposed for updating the archive to improve the convergence speed and the local exploitation in the evolutionary process. Unlike existing gradient methods for single-objective optimization problems [38, 39, 40] and MOPs [41], much less is known about exploiting gradient information in MOPs. One of the key features of the MOG strategy is that the gradient information of the MOP is used to obtain a Pareto set of solutions that approximates the optimal Pareto set. Owing to the MOG strategy, the AGMOPSO algorithm can obtain a good Pareto set and reach a smaller testing error at a much faster speed, which makes the method well suited to MOPs.


2. Problem formulation

2.1. Multiobjective problems

A minimization MOP contains several conflicting objectives and is defined as:

minimize F(x) = [f_1(x), f_2(x), …, f_m(x)]^T, subject to x ∈ Ω,  (1)

where m is the number of objectives, x is the decision vector, and f_i(·) is the ith objective function. A decision vector y is said to dominate the decision vector z (written y ≺ z) if:

∀i: f_i(y) ≤ f_i(z) and ∃j: f_j(y) < f_j(z),  (2)

where i = 1, 2, …, m and j ∈ {1, 2, …, m}. A solution that no other solution dominates is a Pareto-optimal solution, and the set of all Pareto-optimal solutions forms the Pareto front.
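For concreteness, the dominance relation of Eq. (2) can be written as a few lines of Python. This is an illustrative sketch, not code from the chapter; the function name dominates and the NumPy encoding of objective vectors are our own choices:

```python
import numpy as np

def dominates(fy: np.ndarray, fz: np.ndarray) -> bool:
    """True if objective vector fy dominates fz under minimization, Eq. (2):
    fy is no worse in every objective and strictly better in at least one."""
    return bool(np.all(fy <= fz) and np.any(fy < fz))

# Example: (1.0, 2.0) dominates (1.5, 2.0), but not (0.5, 3.0) (incomparable).
assert dominates(np.array([1.0, 2.0]), np.array([1.5, 2.0]))
assert not dominates(np.array([1.0, 2.0]), np.array([0.5, 3.0]))
```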

2.2. Particle swarm optimization

PSO is a stochastic optimization algorithm in which a swarm contains a certain number of particles, and the position of each particle represents one candidate solution. The position of a particle is expressed by a vector:

x_i(t) = [x_i,1(t), x_i,2(t), …, x_i,D(t)],  (3)

where D is the dimensionality of the search space and i = 1, 2, …, s, with s the swarm size. Each particle also has a velocity, represented as:

v_i(t) = [v_i,1(t), v_i,2(t), …, v_i,D(t)].  (4)

During the movement, the best previous position of each particle is recorded as p_i(t) = [p_i,1(t), p_i,2(t), …, p_i,D(t)], and the best position obtained by the swarm is denoted as g(t) = [g_1(t), g_2(t), …, g_D(t)]. Based on p_i(t) and g(t), the new velocity of each particle is updated by:

v_i,d(t+1) = ω v_i,d(t) + c1 r1 (p_i,d(t) − x_i,d(t)) + c2 r2 (g_d(t) − x_i,d(t)),  (5)

where t denotes the tth iteration of the search process; d = 1, 2, …, D is the dimension index in the search space; ω is the inertia weight; c1 and c2 are the acceleration constants; and r1 and r2 are random values uniformly distributed in [0, 1]. The updating formula for the new position is then:

x_i,d(t+1) = x_i,d(t) + v_i,d(t+1).  (6)

At the beginning of the search process, the initial position of each particle is randomly generated. As the search proceeds, the particle swarm may become unevenly distributed in the evolutionary space.
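As an illustration, Eqs. (5) and (6) can be vectorized over the whole swarm as in the following sketch. The parameter values ω = 0.7 and c1 = c2 = 1.5 are common defaults assumed here for demonstration, not values prescribed by the chapter:

```python
import numpy as np

def pso_step(x, v, p, g, w=0.7, c1=1.5, c2=1.5, rng=None):
    """One swarm-wide PSO update. x, v, p: (s, D) positions, velocities and
    personal bests; g: (D,) global best position."""
    if rng is None:
        rng = np.random.default_rng()
    r1 = rng.random(x.shape)  # uniform [0, 1) draws, fresh per particle and dimension
    r2 = rng.random(x.shape)
    v_new = w * v + c1 * r1 * (p - x) + c2 * r2 * (g - x)  # Eq. (5)
    return x + v_new, v_new                                # Eq. (6)
```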


3. Multiobjective gradient method

The key difference between AGMOPSO and the original MOPSO is that the MOG method is taken into account. In AGMOPSO, a population of N particles searches for a set of non-dominated solutions, which are stored in an archive of predefined maximal size.

In MOPSO, the position of each particle represents a potential solution of the conflicting objectives, and gBest and pBest guide the evolutionary direction of the whole swarm. The position x_i and velocity v_i of the ith particle are D-dimensional vectors, x_i(0) ∈ R^D and v_i(0) ∈ R^D. Each particle updates its velocity and position along the motion trajectory of Eqs. (5) and (6). The external archive A(0) is initialized as an empty set. Meanwhile, the best previous position p_i(t) is computed by:

p_i(t) = { p_i(t−1), if p_i(t−1) ≺ x_i(t); x_i(t), otherwise },  (7)

where the second case applies when x_i(t) is not dominated by p_i(t−1). The archive A(t) is then updated based on the previous archive A(t−1) and the best previous positions p_i(t):

A(t) = { A(t−1) ∪ p_i(t), if a_j(t−1) ⊀ p_i(t) ∀j; Ā(t−1) ∪ p_i(t), otherwise },  (8)

where A(t) = [a_1(t), a_2(t), …, a_K(t)]^T, a_j(t) = [a_1,j(t), a_2,j(t), …, a_D,j(t)], and Ā(t−1) is the updated archive from which the solutions dominated by the best previous position p_i(t) have been removed; K is the number of solutions in archive A(t), which changes during the learning process; a_j(t−1) ⊀ p_i(t) means that a_j(t−1) is not dominated by p_i(t) and p_i(t) is not dominated by a_j(t−1). Moreover, g(t) is found according to [24].
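The archive maintenance of Eqs. (7) and (8) can be sketched as below, reusing the dominates helper from Section 2.1. The list-of-pairs representation is our own, and we adopt the common convention that a candidate dominated by any stored solution is rejected outright:

```python
def update_archive(archive, candidates):
    """Insert non-dominated candidates and drop stored solutions they
    dominate, in the spirit of Eq. (8). Both arguments are lists of
    (x, F(x)) pairs, with F(x) a NumPy objective vector."""
    for xc, fc in candidates:
        if any(dominates(fa, fc) for _, fa in archive):
            continue                                    # dominated: rejected
        archive = [(xa, fa) for xa, fa in archive
                   if not dominates(fc, fa)]            # remove dominated members
        archive.append((xc, fc))                        # duplicates not filtered here
    return archive
```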

In AGMOPSO, to enhance local exploitation, the archive A(t) is further updated by the MOG method, which uses gradient information to obtain a Pareto set of solutions that approximates the optimal Pareto set. Without loss of generality, assuming all objective functions are differentiable, the directional derivative of f_i(a_j(t)) in a direction ū_j(t) at the point a_j(t) is denoted as

∇_{ū_j(t)} f_i(a_j(t)) = lim_{δ→0} [f_i(a_j(t) + δ ū_j(t)) − f_i(a_j(t))] / δ,  (9)

where δ > 0, ū_j(t) = [ū_1,j(t), ū_2,j(t), …, ū_D,j(t)], i = 1, 2, …, m, and j = 1, 2, …, K. The directional derivative can be rewritten as:

∇_{ū_j(t)} f_i(a_j(t)) = ∇f_i(a_j(t)) · ū_j(t),  (10)

Then, the gradient direction of the MOP can be represented as:

∇_{ū_j(t)} F(a_j(t)) = [∇_{ū_j(t)} f_1(a_j(t)), ∇_{ū_j(t)} f_2(a_j(t)), …, ∇_{ū_j(t)} f_m(a_j(t))]^T.  (11)

According to Eq. (11), the minimum (steepest descent) direction of each objective is calculated as

û_i(t) = −∇f_i(a_j(t)) / ‖∇f_i(a_j(t))‖,
∇f_i(a_j(t)) = [∂f_i(a_j(t))/∂a_1,j(t), ∂f_i(a_j(t))/∂a_2,j(t), …, ∂f_i(a_j(t))/∂a_D,j(t)],  (12)

and ‖û_i(t)‖ = 1. In addition, the smooth objective functions f_i(a_j(t)) are said to be Pareto-stationary at the point a_j(t) if

∑_{i=1}^m α_i(t) û_i(t) = 0, ∑_{i=1}^m α_i(t) = 1, α_i(t) ≥ 0 ∀i.  (13)

The weight vector can be set as

α(t) = (1 / ‖Û(t)^T Û(t)‖²) [‖û_1(t)‖², ‖û_2(t)‖², …, ‖û_m(t)‖²]^T,  (14)

where Û(t) = [û_1(t), û_2(t), …, û_m(t)], α_i(t) = ‖û_i(t)‖² / ‖Û(t)^T Û(t)‖², and ∑_i α_i(t) = 1.

To find the set of Pareto-optimal solutions of MOPs, the multi-gradient descent direction is given as follows:

∇F(a_j(t)) = ∑_{i=1}^m α_i(t) û_i(t), ∑_{i=1}^m α_i(t) = 1, α_i(t) ≥ 0 ∀i.  (15)

This multi-gradient descent direction is utilized to evaluate the full set of unit directions, and the archive A(t) is updated as follows:

ā_j(t) = a_j(t) + h ∇F(a_j(t)),  (16)

where h is the step size, and a_j(t) and ā_j(t) are the jth archive member before and after the MOG step at time t; the fitness values are updated at the same time.
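A compact sketch of one MOG update, Eqs. (12)-(16), is given below. It assumes analytic gradients are available as an (m × D) array, takes the descent directions as negative normalized gradients for minimization (our reading of Eq. (12)), and uses an illustrative step size h = 0.01. Note that with exactly unit directions the weights of Eq. (14) reduce to the uniform values 1/m:

```python
import numpy as np

def mog_step(a, grads, h=0.01):
    """One multiobjective-gradient update of an archive member.
    a: (D,) archive point; grads: (m, D) objective gradients at a."""
    # Eq. (12): unit descent directions (negative normalized gradients)
    norms = np.linalg.norm(grads, axis=1, keepdims=True)
    u = -grads / np.maximum(norms, 1e-12)
    # Eqs. (13)-(14): nonnegative weights that sum to one
    sq = np.sum(u * u, axis=1)        # squared norms of the unit directions
    alpha = sq / sq.sum()
    # Eq. (15): combined multi-gradient descent direction
    d = alpha @ u
    # Eq. (16): move the archive member along the combined direction
    return a + h * d
```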

Moreover, the archive A(t) stores the non-dominated solutions of AGMOPSO, but the number of non-dominated solutions gradually increases during the search. Therefore, to improve the diversity of the solutions, a fixed-size archive is used in AGMOPSO to record the good particles (non-dominated solutions). In each iteration, the new solutions are compared with the existing solutions in the archive using the dominance relation. A new solution that is not dominated by any existing solution in the archive is added to the archive; dominated new solutions are rejected. When the capacity of the archive reaches its limit, a novel pruning strategy is applied to delete redundant non-dominated solutions and maintain a uniform distribution among the archive members.

Assume that K points are to be selected from the archive. The maximum distance D_max, the length of the line segment between the first and last points (the whole Euclidean distance), is obtained. Then the average spacing of the remaining K−2 points is set as

d = D_max / (K − 1),  (17)

where d is the average distance between adjacent points. This average distance d guides the selection of non-dominated solutions toward a more uniform distribution. For three objectives, all solutions (except the first and the last) are projected onto the segment of length D_max, giving the projected points and the equally spaced target points. However, most projected distances between adjacent points are not equal to the average distance; thus, the point whose projected distance lies closest to the corresponding target is selected next. Once the search terminates, the solutions in the archive form the final Pareto front. Taking DTLZ2 as an example, Figure 1 illustrates this strategy for three objectives in detail.
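One possible reading of this pruning rule is sketched below. Our interpretation involves assumptions, since the chapter describes the projection informally: the end points are taken as the extremes along the first objective, every member is projected onto the end-to-end segment, and for each equally spaced target the member whose projection lies closest is kept:

```python
import numpy as np

def prune_archive(F, K):
    """Equidistant pruning per Eq. (17). F: (n, m) archive objective values,
    n > K; returns the sorted indices of the K members to keep."""
    i0 = int(np.argmin(F[:, 0]))          # end points: extremes along f1
    i1 = int(np.argmax(F[:, 0]))          # (assumed distinct, so Dmax > 0)
    seg = F[i1] - F[i0]
    dmax = np.linalg.norm(seg)            # whole Euclidean distance Dmax
    t = (F - F[i0]) @ seg / dmax          # scalar projections onto the segment
    keep = {i0, i1}
    # K-2 interior targets spaced d = Dmax / (K - 1) apart, Eq. (17)
    for tgt in dmax * np.arange(1, K - 1) / (K - 1):
        order = np.argsort(np.abs(t - tgt))
        keep.add(next(int(j) for j in order if int(j) not in keep))
    return sorted(keep)
```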

Figure 1.

Illustration of the point-selection procedure. (a) The original points; (b) the selection result of the proposed strategy.

Local search is a heuristic method for improving PSO performance: it repeatedly tries to improve the current solution by replacing it with a neighboring solution. In the proposed MOG method, the set of unit directions is described by the normalized combination of the unit directions that map to the intersection points, as in Eq. (12). Each single run of the algorithm can then yield a set of Pareto solutions. Experiments demonstrate that these improvements make AGMOPSO effective.

In MOPSO, an algorithm should maintain a good spread among the non-dominated solutions as well as convergence to the Pareto-optimal set. In AGMOPSO, a density estimate is designed to evaluate how densely each solution is surrounded: the overall Euclidean distances between solutions are calculated, and then the average distance of the solutions along each objective is computed. This method can achieve a good spread in some situations and improves the search ability. The pseudocode of AGMOPSO is presented in Table 1.

Initializing the flight parameters, the population size, and the particles' positions x(0) and velocities v(0)
Loop
Calculating the fitness value
Getting the non-dominated solutions % Eq. (8)
Storing the non-dominated solutions in archive A(t)
Updating the archive using MOG method % Eq. (16)
If (the number of archive solutions exceed capacity)
Pruning the archive
End
Selecting the gBest from the archive A(t)
Calculating the flight parameters
Updating the velocity vi(t) and position xi(t) % Eqs. (5)-(6)
End loop

Table 1.

AGMOPSO algorithm.
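Read as code, Table 1 corresponds roughly to the following skeleton, which ties together the sketches given above (dominates, pso_step, update_archive, mog_step, prune_archive). Two simplifications are ours: gBest is drawn at random from the archive, whereas the chapter selects it according to [24], and the adaptive flight-parameter calculation is omitted:

```python
import numpy as np

def agmopso(F, grad_F, D, lo, hi, s=100, K=100, T=3000, seed=0):
    """Simplified AGMOPSO loop. F: x -> (m,) objective values;
    grad_F: x -> (m, D) objective gradients; lo, hi: box bounds."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (s, D))
    v = np.zeros((s, D))
    pbest = [(xi.copy(), F(xi)) for xi in x]
    archive = []
    for _ in range(T):
        fit = [F(xi) for xi in x]
        for i in range(s):                        # pBest update, Eq. (7)
            if not dominates(pbest[i][1], fit[i]):
                pbest[i] = (x[i].copy(), fit[i])
        archive = update_archive(archive, pbest)  # Eq. (8)
        archive = [(an, F(an)) for an in          # MOG refinement, Eq. (16)
                   (mog_step(a, grad_F(a)) for a, _ in archive)]
        if len(archive) > K:                      # pruning, Eq. (17)
            idx = prune_archive(np.array([fa for _, fa in archive]), K)
            archive = [archive[j] for j in idx]
        g = archive[rng.integers(len(archive))][0]    # simplified gBest choice
        p = np.array([pb for pb, _ in pbest])
        x, v = pso_step(x, v, p, g, rng=rng)          # Eqs. (5)-(6)
        x = np.clip(x, lo, hi)                        # keep particles in bounds
    return archive
```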


4. Simulation results and analysis

In this section, three ZDT and two DTLZ benchmark functions are employed to test the proposed AGMOPSO. The proposed AGMOPSO is compared with four representative multiobjective algorithms: adaptive gradient MOPSO (AMOPSO) [41], crowding-distance MOPSO (cdMOPSO) [32], pdMOPSO [31], and NSGA-II [11].

4.1. Performance metrics

To demonstrate the performance of the proposed AGMOPSO algorithm, two different quantitative performance metrics are employed in the experimental study.

1. Inverted generational distance (IGD):

IGD(F*, F) = ∑_{x∈F*} mindis(x, F) / |F*|,  (18)

where mindis(x, F) is the minimum Euclidean distance between the solution x and the solutions in F, F* is a set of points uniformly sampled along the true Pareto-optimal front, and |F*| is its size. A smaller value of IGD(F*, F) indicates better convergence and diversity with respect to the Pareto-optimal front. A short code sketch of both metrics follows the definition of SP below.

2. Spacing (SP):

SP = √( (1/(K−1)) ∑_{i=1}^K (d̄ − d_i)² ),  (19)

where d_i is the minimum Euclidean distance between the ith solution and the other solutions, K is the number of non-dominated solutions, and d̄ is the average of all the distances d_i.
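As a sketch, both metrics can be computed with a few lines of NumPy, assuming the reference set F* is available as an array of points sampled from the true Pareto front (the array names are ours):

```python
import numpy as np

def igd(F_star, F_approx):
    """Inverted generational distance, Eq. (18). F_star: (n, m) reference
    points on the true front; F_approx: (k, m) obtained solutions."""
    d = np.linalg.norm(F_star[:, None, :] - F_approx[None, :, :], axis=2)
    return d.min(axis=1).mean()       # mean of mindis(x, F) over F*

def spacing(F):
    """Spacing metric, Eq. (19). F: (K, m) non-dominated objective vectors."""
    K = F.shape[0]
    d = np.linalg.norm(F[:, None, :] - F[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)       # exclude each solution's self-distance
    di = d.min(axis=1)                # nearest-neighbor distance d_i
    return np.sqrt(np.sum((di.mean() - di) ** 2) / (K - 1))
```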

4.2. Parameter settings

All the algorithms have three common parameters: the population size N, the maximum number of non-dominated solutions K, and the maximum number of iterations T. Here, N = 100, K = 100, and T = 3000.

4.3. Experimental results

The experimental comparisons between AGMOPSO and the cdMOPSO algorithm on the ZDT and DTLZ functions are shown in Figures 2-6. As seen from Figures 2-6, the non-dominated solutions obtained by the proposed AGMOPSO algorithm approach the Pareto front closely and maintain greater diversity than the compared algorithm. The results in Figures 2-4 show that the proposed AGMOPSO algorithm is superior to the cdMOPSO algorithm in diversity and approaches the Pareto front well. In addition, the results in Figures 5 and 6 show that the proposed AGMOPSO algorithm also achieves accurate convergence and preferable diversity on the three-objective benchmark problems.

Figure 2.

The Pareto front with non-dominated solutions obtained by the two multiobjective algorithms for the ZDT3 function.

Figure 3.

The Pareto front with non-dominated solutions obtained by the two multiobjective algorithms for the ZDT4 function.

Figure 4.

The Pareto front with non-dominated solutions obtained by the two multiobjective algorithms for the ZDT6 function.

Figure 5.

The Pareto front with non-dominated solutions obtained by the two multiobjective algorithms for the DTLZ2 function.

Figure 6.

The Pareto front with non-dominated solutions obtained by the two multiobjective algorithms for the DTLZ7 function.

To show the experimental performance in detail, the best, worst, mean, and standard deviation of IGD and SP on the two-objective ZDT and the three-objective DTLZ functions are listed in Tables 2 and 3, respectively.

Function | Index | AGMOPSO | AMOPSO | pdMOPSO | cdMOPSO | NSGA-II
---------|-------|---------|--------|---------|---------|--------
ZDT3 | Best | 0.00149 | 0.00425 | 0.2019 | 0.003109 | 0.005447
ZDT3 | Worst | 0.00697 | 0.00832 | 0.4265 | 0.028986 | 0.006105
ZDT3 | Mean | 0.00433 | 0.00632 | 0.3052 | 0.003063 | 0.005834
ZDT3 | std | 0.00297 | 0.00527 | 0.1003 | 0.007131 | 0.000202
ZDT4 | Best | 3.0194 | 2.7133 | 3.3980 | 4.9760 | 0.00462
ZDT4 | Worst | 5.1522 | 5.0543 | 4.9760 | 6.3610 | 0.11166
ZDT4 | Mean | 3.7933 | 3.8943 | 4.0330 | 5.9120 | 0.016547
ZDT4 | std | 1.5133 | 2.7401 | 1.6510 | 4.5180 | 0.031741
ZDT6 | Best | 0.2046 | 0.0936 | 2.2310 | 0.000897 | 0.01119
ZDT6 | Worst | 0.7834 | 0.9154 | 2.8790 | 0.003627 | 0.01498
ZDT6 | Mean | 0.4878 | 0.5433 | 2.4690 | 0.002988 | 0.01286
ZDT6 | std | 0.0242 | 0.0236 | 0.8169 | 0.0001543 | 0.001004
DTLZ2 | Best | 0.0477 | 0.0519 | 0.1330 | 0.0322 | 0.07830
DTLZ2 | Worst | 0.3913 | 0.3425 | 0.3690 | 0.2067 | 0.2740
DTLZ2 | Mean | 0.1058 | 0.1878 | 0.2070 | 0.1015 | 0.1059
DTLZ2 | std | 0.0060 | 0.0132 | 0.0413 | 0.0134 | 0.008383
DTLZ7 | Best | 0.05766 | 0.02044 | 0.00796 | 0.00701 | 0.00614
DTLZ7 | Worst | 0.32803 | 0.10295 | 0.07678 | 0.05439 | 0.03208
DTLZ7 | Mean | 0.01985 | 0.04573 | 0.04831 | 0.02856 | 0.01799
DTLZ7 | std | 0.00139 | 0.00312 | 0.00289 | 0.00165 | 0.00129

Table 2.

Comparisons of different algorithms for IGD.

Function | Index | AGMOPSO | AMOPSO | pdMOPSO | cdMOPSO | NSGA-II
---------|-------|---------|--------|---------|---------|--------
ZDT3 | Best | 0.023475 | 0.097811 | 0.099654 | 0.10356 | 0.081569
ZDT3 | Worst | 0.087874 | 0.416626 | 0.487126 | 0.87449 | 0.106568
ZDT3 | Mean | 0.067451 | 0.245931 | 0.198551 | 0.59684 | 0.092216
ZDT3 | std | 0.012873 | 0.050937 | 0.079442 | 0.22468 | 0.008415
ZDT4 | Best | 0.030914 | 0.039825 | 0.069564 | 0.139577 | 0.031393
ZDT4 | Worst | 0.078011 | 0.193765 | 0.233794 | 0.300951 | 0.044254
ZDT4 | Mean | 0.049923 | 0.078821 | 0.186698 | 0.204573 | 0.038378
ZDT4 | std | 0.001092 | 0.004517 | 0.063757 | 0.095562 | 0.003837
ZDT6 | Best | 0.010981 | 0.008739 | 0.009935 | 0.012396 | 0.006851
ZDT6 | Worst | 0.100551 | 0.088535 | 0.023766 | 0.040205 | 0.010127
ZDT6 | Mean | 0.034127 | 0.040251 | 0.010683 | 0.034569 | 0.008266
ZDT6 | std | 0.009756 | 0.007341 | 0.003021 | 0.003884 | 0.000918
DTLZ2 | Best | 0.1438 | 0.0943 | 0.0569 | 0.0932 | 0.021456
DTLZ2 | Worst | 0.6893 | 0.8947 | 0.6991 | 0.5897 | 0.7314
DTLZ2 | Mean | 0.0398 | 0.4631 | 0.4721 | 0.3562 | 0.4162
DTLZ2 | std | 0.00764 | 0.03401 | 0.02964 | 0.01772 | 0.03655
DTLZ7 | Best | 0.1958 | 0.1047 | 0.0932 | 0.1347 | 0.0632
DTLZ7 | Worst | 0.9032 | 0.9355 | 0.8361 | 0.9307 | 0.7466
DTLZ7 | Mean | 0.0502 | 0.0493 | 0.4459 | 0.5972 | 0.4191
DTLZ7 | std | 0.01097 | 0.03201 | 0.00896 | 0.2133 | 0.00796

Table 3.

Comparisons of different algorithms for SP.

Moreover, the experimental results in Tables 2 and 3 give the details for all five algorithms. To illustrate the significance of the findings, the comparison results for each performance index are analyzed as follows:

1. Comparison of the IGD index: From Table 2, the proposed AGMOPSO algorithm is superior to the other MOPSO algorithms in terms of IGD. First, on the two-objective ZDT instances, AGMOPSO attains better mean IGD values than the other four evolutionary algorithms on ZDT3 and ZDT4, which indicates that the MOG method plays a vital role in the algorithm. Meanwhile, compared with NSGA-II [11], the proposed AGMOPSO has better IGD accuracy and stability on the two-objective ZDT functions (except ZDT4). Second, on the three-objective DTLZ instances, AGMOPSO is superior to the other four algorithms in terms of mean IGD. These comparisons demonstrate that the proposed AGMOPSO comes closest to the true front and nearly encloses the entire front, that is, it achieves the best convergence and diversity.

2. Comparison of the SP index: The comparison of SP between the proposed AGMOPSO algorithm and the other algorithms is shown in Table 3. First, on the two-objective ZDT instances, AGMOPSO attains better mean and best SP values than the other four evolutionary algorithms on ZDT3 and ZDT4. Meanwhile, compared with NSGA-II [11], the proposed AGMOPSO has better SP (diversity) performance on the two-objective ZDT functions (except ZDT6). The SP comparisons in Table 3 illustrate that the MOG method has a better effect on diversity than the other existing methods. Second, on the three-objective DTLZ instances, the proposed AGMOPSO algorithm has the best SP performance on DTLZ2 and DTLZ7 among the compared algorithms. In addition, confirming the effect of the MOG method, the proposed AGMOPSO obtains a set of non-dominated solutions with greater diversity and convergence than NSGA-II on all instances except ZDT4 and ZDT6. Therefore, the proposed AGMOPSO algorithm obtains more accurate solutions with better diversity on most of the ZDT and DTLZ functions.


5. Conclusion

A novel method, named AGMOPSO, is proposed to solve MOPs; it relies on the MOG method to accelerate convergence and deletes redundant solutions in the archive by an equidistant partition principle. Meanwhile, the convergence analysis and the convergence conditions of AGMOPSO are carefully investigated as a prerequisite for successful applications. Based on the theoretical analysis and the experimental results, the proposed AGMOPSO algorithm with the MOG local search strategy is an effective method for solving MOPs. The comparisons of the different indexes also demonstrate that the proposed AGMOPSO algorithm is superior to the other algorithms on most of the ZDT and DTLZ functions.

References

  1. Wang Y, Li HX, Yen GG, Song W. MOMMOP: Multiobjective optimization for locating multiple optimal solutions of multimodal optimization problems. IEEE Transactions on Cybernetics. 2015;45(4):830-843
  2. Yuan W, You XG, Xu J, et al. Multiobjective optimization of linear cooperative spectrum sensing: Pareto solutions and refinement. IEEE Transactions on Cybernetics. 2016;46(1):96-108
  3. Brockhoff D, Zitzler E. Objective reduction in evolutionary multiobjective optimization: Theory and applications. Evolutionary Computation. 2009;17(2):135-166
  4. Hu XB, Wang M, Paolo ED. Calculating complete and exact Pareto front for multiobjective optimization: A new deterministic approach for discrete problems. IEEE Transactions on Cybernetics. 2013;43(3):1088-1101
  5. Martin D, Rosete A, Alcala FJ, Francisco H. A new multiobjective evolutionary algorithm for mining a reduced set of interesting positive and negative quantitative association rules. IEEE Transactions on Evolutionary Computation. 2014;18(1):54-69
  6. Jiang SY, Yang SX. An improved multiobjective optimization evolutionary algorithm based on decomposition for complex Pareto fronts. IEEE Transactions on Cybernetics. 2016;46(2):421-437
  7. Nag K, Pal NR. A multiobjective genetic programming-based ensemble for simultaneous feature selection and classification. IEEE Transactions on Cybernetics. 2016;46(2):499-510
  8. He D, Wang L, Yu L. Multi-objective nonlinear predictive control of process systems: A dual-mode tracking control approach. Journal of Process Control. 2015;25:142-151
  9. Zhang X, Tian Y, Jin Y. A knee point-driven evolutionary algorithm for many-objective optimization. IEEE Transactions on Evolutionary Computation. 2015;19(6):761-776
  10. Srinivas N, Deb K. Multiobjective optimization using nondominated sorting in genetic algorithms. Evolutionary Computation. 1994;2(3):221-248
  11. Deb K, Pratap A, Agarwal S, et al. A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Transactions on Evolutionary Computation. 2002;6(2):182-197
  12. Zitzler E, Thiele L. An evolutionary algorithm for multiobjective optimization: The strength Pareto approach. Tech. Rep. TIK-Report. Zurich, Switzerland: Swiss Federal Institute of Technology (ETH); 1998
  13. Zitzler E, Laumanns M, Thiele L. SPEA2: Improving the strength Pareto evolutionary algorithm. Tech. Rep. TIK-Report 103. Zurich, Switzerland: Swiss Federal Institute of Technology (ETH); 2001
  14. Knowles JD, Corne DW. Approximating the nondominated front using the Pareto archived evolution strategy. Evolutionary Computation. 2000;8(2):149-172
  15. Corne DW, Knowles JD, Oates MJ. The Pareto envelope-based selection algorithm for multiobjective optimization. International Conference on Parallel Problem Solving from Nature. 2000:839-848
  16. Qu BY, Suganthan P, Das S. A distance-based locally informed particle swarm model for multimodal optimization. IEEE Transactions on Evolutionary Computation. 2013;17(3):387-402
  17. Ali H, Shahzad W, Khan FA. Energy-efficient clustering in mobile ad-hoc networks using multi-objective particle swarm optimization. Applied Soft Computing. 2012;12(7):1913-1928
  18. AlRashidi MR, El-Hawary ME. A survey of particle swarm optimization applications in electric power systems. IEEE Transactions on Evolutionary Computation. 2009;13(4):913-918
  19. Elhossini A, Areibi S, Dony R. Strength Pareto particle swarm optimization and hybrid EA-PSO for multi-objective optimization. Evolutionary Computation. 2010;18(1):127-156
  20. Zhang Y, Gong D, Zhang J. Robot path planning in uncertain environment using multi-objective particle swarm optimization. Neurocomputing. 2013;103:172-185
  21. Li J, Zhang JQ, Jiang CJ, Zhou MC. Composite particle swarm optimizer with historical memory for function optimization. IEEE Transactions on Cybernetics. 2015;45(10):2350-2363
  22. Hu WW, Tan Y. Prototype generation using multiobjective particle swarm optimization for nearest neighbor classification. IEEE Transactions on Cybernetics. 2016;46(12):2719-2731
  23. Xue B, Zhang MJ, Browne WN. Particle swarm optimization for feature selection in classification: A multi-objective approach. IEEE Transactions on Cybernetics. 2013;43(6):1656-1671
  24. Hu M, Weir JD, Wu T. An augmented multi-objective particle swarm optimizer for building cluster operation decisions. Applied Soft Computing. 2014;25:347-359
  25. Zheng YJ, Ling HF, Xue JY, Chen SY. Population classification in fire evacuation: A multiobjective particle swarm optimization approach. IEEE Transactions on Evolutionary Computation. 2014;18(1):70-81
  26. Lee KB, Kim JH. Multiobjective particle swarm optimization with preference-based sort and its application to path following footstep optimization for humanoid robots. IEEE Transactions on Evolutionary Computation. 2013;17(6):755-766
  27. Al Moubayed N, Petrovski A, McCall J. D2MOPSO: MOPSO based on decomposition and dominance with archiving using crowding distance in objective and solution spaces. Evolutionary Computation. 2014;22(1):47-77
  28. Zhu Q, Lin Q, Chen W, et al. An external archive-guided multiobjective particle swarm optimization algorithm. IEEE Transactions on Cybernetics. To be published. DOI: 10.1109/TCYB.2017.2710133
  29. Zheng YJ, Chen SY. Cooperative particle swarm optimization for multiobjective transportation planning. Applied Intelligence. 2013;39(1):202-216
  30. Agrawal S, Panigrahi BK, Tiwari MK. Multiobjective particle swarm algorithm with fuzzy clustering for electrical power dispatch. IEEE Transactions on Evolutionary Computation. 2008;12(5):529-541
  31. Yen GG, Leong WF. Dynamic multiple swarms in multiobjective particle swarm optimization. IEEE Transactions on Systems, Man and Cybernetics, Part A: Systems and Humans. 2009;39(4):890-911
  32. Helwig S, Branke J, Mostaghim S. Experimental analysis of bound handling techniques in particle swarm optimization. IEEE Transactions on Evolutionary Computation. 2013;17(2):259-271
  33. Zhang Y, Gong DW, Ding Z. A bare-bones multi-objective particle swarm optimization algorithm for environmental/economic dispatch. Information Sciences. 2012;192(1):213-227
  34. Hu W, Yen GG. Adaptive multiobjective particle swarm optimization based on parallel cell coordinate system. IEEE Transactions on Evolutionary Computation. 2015;19(1):1-18
  35. Li X, Yao X. Cooperatively coevolving particle swarms for large scale optimization. IEEE Transactions on Evolutionary Computation. 2012;16(2):210-224
  36. Daneshyari M, Yen GG. Cultural-based multiobjective particle swarm optimization. IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics. 2011;41(2):553-567
  37. Britto A, Pozo A. Using reference points to update the archive of MOPSO algorithms in many-objective optimization. Neurocomputing. 2014;127(1):78-87
  38. Chakraborty P, Das S, Roy GG. On convergence of the multi-objective particle swarm optimizers. Information Sciences. 2011;181(8):1411-1425
  39. Garcia NJ, Olivera AC, Alba E. Optimal cycle program of traffic lights with particle swarm optimization. IEEE Transactions on Evolutionary Computation. 2013;17(6):823-839
  40. Gong M, Cai Q, Chen X. Complex network clustering by multiobjective discrete particle swarm optimization based on decomposition. IEEE Transactions on Evolutionary Computation. 2014;18(1):82-97
  41. Mousa AA, El-Shorbagy MA, Abd-El-Wahed WF. Local search based hybrid particle swarm optimization algorithm for multiobjective optimization. Swarm and Evolutionary Computation. 2012;3(1):1-14
