An adaptive gradient multiobjective particle swarm optimization (AGMOPSO) algorithm, based on a multiobjective gradient (MOG) method, is developed to improve computational performance. In this AGMOPSO algorithm, the MOG method is devised to update the archive so as to improve the convergence speed and the local exploitation in the evolutionary process. Owing to the MOG method, the AGMOPSO algorithm not only converges faster and more accurately but also produces solutions with better diversity. Additionally, the convergence is discussed to confirm the prerequisite of any successful application of AGMOPSO. Finally, with regard to computational performance, the proposed AGMOPSO algorithm is compared with some other multiobjective particle swarm optimization (MOPSO) algorithms and two state-of-the-art multiobjective algorithms. The results demonstrate that the proposed AGMOPSO algorithm finds a better spread of solutions and converges faster to the true Pareto-optimal front.
- multiobjective particle swarm optimization
- multiobjective problem
- multiobjective gradient
Many engineering and practical applications, such as wastewater treatment processes and aerodynamic design problems, have a multiobjective nature and require several conflicting objectives to be solved simultaneously [1, 2, 3]. When handling these multiobjective optimization problems (MOPs), there is always a set of possible solutions representing the tradeoffs among the objectives, known as the Pareto-optimal set [4, 5]. Evolutionary multiobjective optimization (EMO) algorithms, a class of population-based stochastic optimization approaches, are widely used to solve MOPs because a series of Pareto-optimal solutions can be obtained in a single run [6, 7, 8, 9]. Multiobjective optimization algorithms strive to acquire a Pareto-optimal set with good diversity and convergence. The most typical EMO algorithms include the non-dominated sorting genetic algorithm (NSGA)  and NSGA-II , the strength Pareto evolutionary algorithm (SPEA)  and SPEA2 , the Pareto archived evolutionary strategy (PAES) , and the Pareto envelope-based selection algorithm (PESA)  and PESA-II .
The notable characteristic of particle swarm optimization (PSO) is the cooperation among all particles of a swarm, each attracted toward the global best (gBest) of the swarm and its own personal best (pBest), which gives PSO good global search ability [17, 18, 19]. Among the EMO algorithms, multiobjective particle swarm optimization (MOPSO) algorithms have been widely used owing to their high convergence speed and ease of implementation [20, 21, 22, 23]. MOPSO can also be applied to many difficult optimization problems, such as noisy and dynamic problems. However, apart from archive maintenance, two issues in MOPSO still remain to be addressed. The first is the update of gBest and pBest, because an absolute best solution cannot be singled out among mutually non-dominated solutions. Moreover, the selection of gBest and pBest determines the flight direction of each particle, which has an important effect on the convergence and diversity of MOPSO .
Zheng et al. introduced a novel MOPSO algorithm with a comprehensive learning strategy, which improves the diversity of the swarm and the performance of the evolving particles over some advanced MOPSO algorithms . The experimental results illustrate that the proposed approach performs better than some existing methods on a real-world fire evacuation dataset. In , a multiobjective particle swarm optimization with preference-based sort (MOPSO-PS), in which the user's preference is incorporated into the evolutionary process to determine the relative merits of non-dominated solutions, was developed to choose suitable gBest and pBest. The results indicate that the user's preference is properly reflected in the optimized solutions without any loss of overall solution quality or diversity. Moubayed et al. proposed a MOPSO incorporating dominance with decomposition (D2MOPSO), together with a novel archiving technique that balances diversity and convergence . The comparative experiments demonstrate that D2MOPSO can handle a wide range of MOPs efficiently. Other methods for updating gBest and pBest can be found in [28, 29, 30, 31]. Although much research has been done, selecting appropriate gBest and pBest while maintaining suitable convergence and diversity remains a major challenge [32, 33].
The second particular issue of MOPSO is how to retain fast convergence toward the Pareto front, well known as one of the most typical features of PSO. To meet this requirement, many different strategies have been put forward. In , Hu et al. proposed an adaptive parallel cell coordinate system (PCCS) for MOPSO. By assessing the evolutionary environment through measurements of parallel cell distance, potential and distribution entropy, PCCS selects the gBest solutions and adjusts the flight parameters to accelerate the convergence of MOPSO. The comparative results show that this self-adaptive MOPSO outperforms the other methods. Li et al. proposed a dynamic MOPSO, in which the number of swarms is adaptively adjusted throughout the search process . The dynamic MOPSO algorithm allocates an appropriate number of swarms to support the convergence and diversity criteria, and the results show that its performance is competitive with the selected algorithms on some standard benchmark problems. Daneshyari et al. introduced a cultural framework to design a flight-parameter mechanism for updating the personalized flight parameters of the mutated particles in . The results show that this mechanism efficiently explores solutions close to the true Pareto front. In the above MOPSO algorithms, the improved strategies are expected to achieve better performance; however, little work has been done to examine the convergence of these algorithms .
Motivated by the above review and analysis, in this chapter, an adaptive gradient multiobjective particle swarm optimization (AGMOPSO) algorithm, based on a multiobjective gradient (MOG) method, is put forward. This novel AGMOPSO algorithm converges faster in the evolutionary process and deals with MOPs more efficiently. The proposed AGMOPSO algorithm makes a major contribution to solving MOPs as follows: a novel MOG method is proposed for updating the archive to improve the convergence speed and the local exploitation in the evolutionary process. Unlike the existing gradient methods for single-objective optimization problems [38, 39, 40] and MOPs , much less is known about how to exploit gradient information in MOPs. One of the key features of the MOG strategy is that the gradient information of an MOP is used to obtain a Pareto set of solutions that approximates the optimal Pareto set. Owing to these advantages, the AGMOPSO algorithm can obtain a good Pareto set and reach a smaller testing error much faster, which makes the method well suited for MOPs.
2. Problem formulation
2.1. Multiobjective problems
A minimization MOP contains several conflicting objectives and is defined as:

minimize F(x) = [f1(x), f2(x), …, fm(x)]

where m is the number of objectives, x is the decision variable and fi(·) is the ith objective function. A decision vector y is said to dominate the decision vector z, denoted y ≺ z, if and only if:

fi(y) ≤ fi(z) for all i, and fj(y) < fj(z) for at least one j,

where i = 1, 2, …, m and j = 1, 2, …, m. A solution that is not dominated by any other solution is a Pareto-optimal solution, and the set of all Pareto-optimal solutions maps to the Pareto front in objective space.
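As a minimal sketch, the dominance test above can be written directly for minimization; the function name `dominates` is illustrative, not part of the chapter:

```python
import numpy as np

def dominates(fy, fz):
    """True if objective vector fy Pareto-dominates fz under
    minimization: fy is no worse in every objective and strictly
    better in at least one."""
    fy, fz = np.asarray(fy), np.asarray(fz)
    return bool(np.all(fy <= fz) and np.any(fy < fz))
```

Note that two vectors can be mutually non-dominated, which is exactly why MOPSO cannot single out one absolute best solution.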
2.2. Particle swarm optimization
PSO is a stochastic optimization algorithm in which a swarm contains a certain number of particles, and the position of each particle stands for one solution. The position of a particle is expressed by a vector:

xi(t) = [xi,1(t), xi,2(t), …, xi,D(t)]

where D is the dimensionality of the search space and i = 1, 2, …, s, with s being the swarm size. Each particle also has a velocity, which is represented as:

vi(t) = [vi,1(t), vi,2(t), …, vi,D(t)]
During the movement, the best previous position of each particle is recorded as pi(t) = [pi,1(t), pi,2(t), …, pi,D(t)], and the best position obtained by the swarm is denoted as g(t) = [g1(t), g2(t), …, gD(t)]. Based on pi(t) and g(t), the new velocity of each particle is updated by:

vi,d(t + 1) = ωvi,d(t) + c1r1(pi,d(t) − xi,d(t)) + c2r2(gd(t) − xi,d(t)) (5)

where t denotes the tth iteration of the searching process; d = 1, 2, …, D is the dimension in the searching space; ω is the inertia weight; c1 and c2 are the acceleration constants; and r1 and r2 are random values uniformly distributed in [0, 1]. The new position is then updated as:

xi,d(t + 1) = xi,d(t) + vi,d(t + 1) (6)
At the beginning of the searching process, the initial position of each particle is randomly generated. As the searching process goes on, the particle swarm may become unevenly distributed in the search space.
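The update rules of Eqs. (5) and (6) can be sketched for a whole swarm at once; the parameter values w = 0.7 and c1 = c2 = 1.5 below are illustrative defaults, not the chapter's settings:

```python
import numpy as np

rng = np.random.default_rng(0)

def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """One velocity/position update (Eqs. (5)-(6)) for a swarm of
    shape (s, D). r1 and r2 are drawn uniformly in [0, 1] per
    particle and per dimension."""
    s, D = x.shape
    r1 = rng.random((s, D))
    r2 = rng.random((s, D))
    v_new = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x_new = x + v_new
    return x_new, v_new
```

Here `gbest` may be a single (D,)-vector broadcast over the swarm, matching the single g(t) of single-objective PSO.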
3. Multiobjective gradient method
The key difference between AGMOPSO and the original MOPSO is that the MOG method is taken into account. In AGMOPSO, a population of N particles searches for a set of non-dominated solutions to be stored in an archive with a predefined maximal size.
In MOPSO, the position of each particle can represent the potential solution for the conflicting objectives. The gBest and pBest can guide the evolutionary direction of the whole particle swarm. The position xi and velocity vi of the ith particle are the D-dimensional vectors xi (0)∈ RD, vi (0)∈RD. The particle updates the velocity and position by the motion trajectory in Eqs. (5) and (6). The external archive A(0) is initialized as a null set. Meanwhile, the best previous position pi(t) is computed by:
pi(t) = xi(t), if pi(t − 1) ⊀ xi(t); otherwise pi(t) = pi(t − 1) (7)

where pi(t − 1) ⊀ xi(t) means that xi(t) is not dominated by pi(t − 1). The archive A(t) is then updated based on the previous archive A(t − 1) and the best previous positions pi(t):
where A(t) = [a1(t), a2(t), …, aK(t)]T, with aj(t) = [a1,j(t), a2,j(t), …, aD,j(t)], is the updated archive from which the solutions dominated by the best previous positions pi(t) have been removed, K is the size of archive A(t), which changes during the learning process, and aj(t − 1) ∼ pi(t) means that aj(t − 1) is not dominated by pi(t) and pi(t) is not dominated by aj(t − 1). Moreover, g(t) is found according to .
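A minimal, hedged sketch of such an archive update follows; the function names and the list-of-pairs layout are assumptions for illustration, not the chapter's implementation:

```python
def dominates(fy, fz):
    """fy Pareto-dominates fz (minimization)."""
    return all(a <= b for a, b in zip(fy, fz)) and \
           any(a < b for a, b in zip(fy, fz))

def update_archive(archive, candidates):
    """Insert each candidate (x, f) pair into the archive so that
    only mutually non-dominated members remain."""
    for cx, cf in candidates:
        # reject the candidate if any archive member dominates it
        if any(dominates(af, cf) for _, af in archive):
            continue
        # drop archive members that the candidate dominates
        archive = [(ax, af) for ax, af in archive if not dominates(cf, af)]
        archive.append((cx, cf))
    return archive
```

After every call, any two archive members are mutually non-dominated, which is the invariant the text describes.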
In AGMOPSO, to enhance the local exploitation, the archive A(t) is further updated by the MOG method, which uses gradient information to obtain a Pareto set of solutions approximating the optimal Pareto set. Without loss of generality, assuming all of the objective functions are differentiable, the directional derivative of fi(aj(t)) in a direction ūj(t) at the point aj(t) is denoted as

Dfi(aj(t); ūj(t)) = lim δ→0+ [fi(aj(t) + δūj(t)) − fi(aj(t))] / δ (9)

where δ > 0, ūj(t) = [ū1,j(t), ū2,j(t), …, ūD,j(t)], i = 1, 2, …, m and j = 1, 2, …, K. For differentiable fi, the directional derivative can be rewritten as

Dfi(aj(t); ūj(t)) = ∇fi(aj(t)) · ūj(t) (10)
then, the gradient direction of MOP can be represented as:
According to Eq. (11), the minimum direction of MOP is calculated as
In addition, the smooth objective functions fi(aj(t)) are said to be Pareto-stationary at the point aj(t) if there exist weights λi ≥ 0, i = 1, 2, …, m, with Σi λi = 1, such that Σi λi∇fi(aj(t)) = 0.
The weight vector can be set as
To find the set of Pareto-optimal solutions of MOPs, the multi-gradient descent direction is given as follows:
This multi-gradient descent direction is utilized to evaluate the full set of unit directions, and the archive A(t) is updated as follows:
where h is the step size, and aj(t) and āj(t) are the jth archive member before and after the MOG update at iteration t; the fitness values are updated at the same time.
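For two objectives, the minimum-norm point of the convex hull of the gradients, whose negative is a common descent direction in multiple-gradient descent (MGDA), has a simple closed form. The sketch below is a hedged illustration of such a MOG-style step; the helper names and the step size h are assumptions, not the chapter's exact Eq. (16):

```python
import numpy as np

def min_norm_direction(g1, g2):
    """Minimum-norm convex combination of two gradients (closed
    form for m = 2 objectives); its negative is a common descent
    direction. A (near-)zero result signals Pareto-stationarity."""
    g1, g2 = np.asarray(g1, float), np.asarray(g2, float)
    diff = g1 - g2
    denom = float(diff @ diff)
    lam = 0.5 if denom < 1e-12 else float(np.clip((g2 - g1) @ g2 / denom, 0.0, 1.0))
    return lam * g1 + (1.0 - lam) * g2

def mog_step(a, grads, h=0.01):
    """One descent step on an archive point a; grads(a) returns the
    two objective gradients at a."""
    d = min_norm_direction(*grads(a))
    return a - h * d
```

For example, with gradients e1 and e2 the minimum-norm direction is (0.5, 0.5), so the step decreases both objectives simultaneously.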
Moreover, the archive A(t) stores the non-dominated solutions of AGMOPSO, whose number would otherwise grow steadily during the search process. Therefore, to improve the diversity of the solutions, a fixed-size archive is implemented in AGMOPSO to record the good particles (non-dominated solutions). During each iteration, the new solutions are compared with the existing solutions in the archive using the dominance relationship. If a new solution is not dominated by any existing archive member, it is kept in the archive; dominated new solutions are discarded. If the capacity of the archive reaches its limit, a novel pruning strategy is applied to delete redundant non-dominated solutions and maintain a uniform distribution among the archive members.
Assume that K points are to be selected from the archive. First, the length of the line segment between the first and the last points (namely the whole Euclidean distance Dmax) is obtained. Then, the average distance of the remaining K − 2 points is set as

d = Dmax / (K − 1)

where d is the average distance of all points. This average distance is used to guide the selection of non-dominated solutions with a more uniform distribution. In addition, for three objectives, all of the solutions (except the first and the last) are projected onto the segment Dmax, so that the projected points and the equally spaced target points can be compared. Since most projected distances between adjacent points are not exactly equal to the average distance, the next point is selected as the one whose projected distance is closest to the average distance. Once the search process terminates, the solutions in the archive form the final Pareto front. Taking DTLZ2 as an example, Figure 1 shows this strategy with three objectives in detail.
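A hedged sketch of a projection-based pruning of this kind follows; the greedy selection toward equally spaced targets is an assumption about the details, not the chapter's exact procedure:

```python
import numpy as np

def prune_archive(F, K):
    """Keep the two extreme solutions, project the rest onto the
    segment joining them, and greedily keep the points whose scalar
    projections are closest to K equally spaced targets. F is an
    (n, m) array of objective vectors sorted so that F[0] and F[-1]
    are the extremes; returns the kept indices."""
    n = len(F)
    if n <= K:
        return list(range(n))
    seg = F[-1] - F[0]
    L2 = float(seg @ seg)                # squared D_max from the text
    t = (F - F[0]) @ seg / L2            # scalar projection in [0, 1]
    targets = np.linspace(0.0, 1.0, K)
    keep = []
    for tau in targets:
        j = int(np.argmin(np.abs(t - tau)))
        if j not in keep:                # may keep fewer than K on ties
            keep.append(j)
    return sorted(keep)
```

On five evenly spaced points, asking for three keeps the two endpoints and the middle point, which matches the equidistant-partition intent.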
Local search is a heuristic method for improving PSO performance: it repeatedly tries to improve the current solution by replacing it with a neighboring solution. In the proposed MOG method, the set of unit directions is described by the normalized combination of the unit directions that map to the intersection points, as in Eq. (12). Each single run of the algorithm can then yield a set of Pareto solutions. Experiments demonstrate that these improvements make AGMOPSO effective.
In MOPSO, it is desirable that an algorithm maintains a good spread among the non-dominated solutions as well as convergence to the Pareto-optimal set. In the AGMOPSO algorithm, a density estimator is designed to evaluate how crowded the region surrounding each solution is: the overall Euclidean distances between solutions are calculated, and then the average distance of the solutions along each objective is computed. This method can obtain a good spread of solutions in many situations and improves the searching ability. The pseudocode of AGMOPSO is presented in Table 1.
Initializing the flight parameters, population size, the particle positions x(0) and velocities v(0)
Calculating the fitness values
Getting the non-dominated solutions % Eq. (8)
Storing the non-dominated solutions in archive A(t)
Updating the archive using the MOG method % Eq. (16)
If (the number of archive solutions exceeds the capacity)
Pruning the archive
End If
Selecting the gBest from the archive A(t)
Calculating the flight parameters
Updating the velocity vi(t) and position xi(t) % Eqs. (5)–(6)
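The steps above can be tied together in a simplified, hedged skeleton; the MOG archive refinement is omitted, the density-based gBest selection is replaced by a random archive draw, the pruning is a crude random thinning, and all parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def dominates(fy, fz):
    return bool(np.all(fy <= fz) and np.any(fy < fz))

def agmopso_sketch(f, lb, ub, n=20, T=50, cap=25, w=0.5, c1=1.5, c2=1.5):
    """Skeleton of the AGMOPSO main loop for a vector objective f
    over box bounds [lb, ub]; returns the archive of (x, f) pairs."""
    D = len(lb)
    x = rng.uniform(lb, ub, (n, D))
    v = np.zeros((n, D))
    pbest = x.copy()
    pbest_f = np.array([f(xi) for xi in x])
    archive = []                            # list of (position, objectives)
    for _ in range(T):
        fx = np.array([f(xi) for xi in x])
        # pBest update: replace when the old pBest does not dominate x
        for i in range(n):
            if not dominates(pbest_f[i], fx[i]):
                pbest[i], pbest_f[i] = x[i].copy(), fx[i]
        # archive update: keep only mutually non-dominated solutions
        for i in range(n):
            if any(dominates(af, fx[i]) for _, af in archive):
                continue
            archive = [(ax, af) for ax, af in archive if not dominates(fx[i], af)]
            archive.append((x[i].copy(), fx[i].copy()))
        if len(archive) > cap:              # placeholder for the pruning step
            archive = [archive[j] for j in rng.choice(len(archive), cap, replace=False)]
        gbest = archive[rng.integers(len(archive))][0]
        r1, r2 = rng.random((n, D)), rng.random((n, D))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lb, ub)
    return archive
```

Running it on Schaffer's bi-objective problem f(x) = (x², (x − 2)²) yields a mutually non-dominated archive approximating the front on x ∈ [0, 2].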
4. Simulation results and analysis
In this section, three ZDT and two DTLZ benchmark functions are employed to test the proposed AGMOPSO algorithm. The proposed AGMOPSO is compared with four state-of-the-art multiobjective algorithms: adaptive gradient MOPSO (AMOPSO) , crowded-distance MOPSO (cdMOPSO) , pdMOPSO  and NSGA-II .
4.1. Performance metrics
To demonstrate the performance of the proposed AGMOPSO algorithm, two different quantitative performance metrics are employed in the experimental study.
1. Inverted generational distance (IGD):
IGD(F*, F) = (1/|F*|) Σx∈F* mindis(x, F)

where F* is a set of points uniformly distributed along the true Pareto front, F is the obtained set of non-dominated solutions, and mindis(x, F) is the minimum Euclidean distance between the solution x and the solutions in F. A smaller value of IGD(F*, F) indicates better convergence and diversity with respect to the Pareto-optimal front.
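Under the stated definition, IGD can be computed as follows (assuming F* holds reference points sampled from the true front):

```python
import numpy as np

def igd(F_true, F_approx):
    """Inverted generational distance: mean, over reference points
    x in F_true, of the minimum Euclidean distance from x to the
    obtained set F_approx; smaller is better."""
    F_true = np.asarray(F_true, float)
    F_approx = np.asarray(F_approx, float)
    # pairwise distances: shape (|F_true|, |F_approx|)
    d = np.linalg.norm(F_true[:, None, :] - F_approx[None, :, :], axis=2)
    return float(d.min(axis=1).mean())
```

Because every reference point contributes its nearest obtained solution, IGD penalizes both poor convergence and gaps in coverage.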
2. Spacing (SP):
SP = sqrt( (1/(K − 1)) Σi (d̄ − di)² )

where di is the minimum Euclidean distance between the ith solution and the other solutions, K is the number of non-dominated solutions, and d̄ is the average of all the distances di. A smaller SP indicates a more uniform distribution.
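The SP metric can be computed as a sketch under the definition above:

```python
import numpy as np

def spacing(F):
    """Spacing metric: spread of each solution's nearest-neighbour
    distance around the mean; SP = 0 for a perfectly even spread."""
    F = np.asarray(F, float)
    K = len(F)
    dist = np.linalg.norm(F[:, None, :] - F[None, :, :], axis=2)
    np.fill_diagonal(dist, np.inf)   # exclude self-distance
    d = dist.min(axis=1)             # d_i: distance to nearest neighbour
    dbar = d.mean()
    return float(np.sqrt(((dbar - d) ** 2).sum() / (K - 1)))
```

For three equally spaced collinear points the nearest-neighbour distances are all equal, so SP is exactly zero.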
4.2. Parameter settings
All the algorithms share three common parameters: the population size N, the maximum number of non-dominated solutions K, and the maximum number of iterations T. Here, N = 100, K = 100 and T = 3000.
4.3. Experimental results
The experimental performance comparisons on the ZDT and DTLZ functions are shown in Figures 2–6. As seen from Figures 2–6, the non-dominated solutions obtained by the proposed AGMOPSO algorithm approach the Pareto front well and maintain greater diversity than those of the compared algorithms. The results in Figures 2–4 show that the proposed AGMOPSO algorithm is superior to the cdMOPSO algorithm in diversity performance and approaches the Pareto front closely. In addition, the results in Figures 5 and 6 show that the proposed AGMOPSO algorithm obtains better performance on the three-objective benchmark problems, with accurate convergence and preferable diversity.
To show the experimental performance in detail, the best, worst, mean and standard deviations of IGD and SP on the two-objective ZDT and the three-objective DTLZ problems are listed in Tables 2 and 3, respectively.
Moreover, the experimental results in Tables 2 and 3 include the details of the compared evolutionary algorithms. To illustrate the significance of the findings, the comparison results for each performance index are analyzed as follows:
Comparison of the IGD index: From Table 2, the proposed AGMOPSO algorithm is superior to the other MOPSO algorithms in terms of IGD. First, on the two-objective ZDT instances, AGMOPSO achieves better mean IGD values than the other four evolutionary algorithms on ZDT3 and ZDT4, which indicates that the MOG method plays a vital role in the algorithm. Meanwhile, compared with NSGA-II , the proposed AGMOPSO has better IGD accuracy and stability on the two-objective ZDT problems (except ZDT4). Second, on the three-objective DTLZ instances, AGMOPSO is superior to the other four algorithms in terms of the mean IGD values. These comparisons demonstrate that the proposed AGMOPSO comes closest to the true front and nearly encloses the entire front, which means it achieves the best convergence and diversity.
Comparison of the SP index: The comparison of SP among the proposed AGMOPSO algorithm and the compared algorithms is shown in Table 3. First, on the two-objective ZDT instances, AGMOPSO achieves better mean and best SP values than the other four evolutionary algorithms on ZDT3 and ZDT4. Meanwhile, compared with NSGA-II , the proposed AGMOPSO has better SP diversity performance on the two-objective ZDT problems (except ZDT6). The SP comparisons in Table 3 illustrate that the MOG method has a better effect on diversity than the other existing methods. Second, on the three-objective DTLZ instances, the proposed AGMOPSO algorithm achieves the best SP performance on DTLZ2 and DTLZ7 among the compared algorithms. In addition, confirming the effect of the MOG method, the proposed AGMOPSO obtains a set of non-dominated solutions with greater diversity and convergence than NSGA-II on these instances (except ZDT4 and ZDT6). Therefore, the proposed AGMOPSO algorithm obtains more accurate solutions with better diversity on most of the ZDT and DTLZ problems.
A novel method, named AGMOPSO, is proposed to solve MOPs; it relies on the MOG method to accelerate solution convergence and deletes redundant solutions in the archive by the equidistant partition principle. Meanwhile, the convergence analysis and convergence conditions of AGMOPSO are carefully investigated as a prerequisite for successful applications. Based on the theoretical analysis and the experimental results, the proposed AGMOPSO algorithm with the MOG local search strategy is an effective method for solving these MOPs. The comparisons across the different indexes also demonstrate that the proposed AGMOPSO algorithm is superior to the other algorithms on most of the ZDT and DTLZ problems.