
Particle Swarm Optimization

Written By

Rajesh Kumar Samala

Submitted: 05 August 2022 Reviewed: 17 August 2022 Published: 08 February 2023

DOI: 10.5772/intechopen.107156

From the Edited Volume

Swarm Intelligence - Recent Advances and Current Applications

Edited by Marco Antonio Aceves-Fernández


Abstract

Optimization is the procedure of obtaining the best values of certain parameters in a given network so that every design requirement is satisfied at the smallest affordable cost. Traditional optimization approaches have several constraints: they deliver a single solution per run, they converge to local optima, and they struggle in unknown search spaces. To overcome these constraints, many research organizations have established various metaheuristics that search for optimized solutions to otherwise unsolved issues. The main intent of explaining the Particle Swarm Optimization Algorithm (PSOA) is to explain the basics of stochastic optimization. The motivation behind PSOA is to develop a strong metaheuristic optimization method inspired by natural swarm behavior, such as the schooling of fish and the flocking of birds. PSOA is a simplified simulation of a social network, and its original intent was to graphically simulate the smooth but unpredictable movements of a bird or fish flock. Every bird's field of observation is restricted to some area, but having many birds allows the swarm's fitness function to survey a much larger surface. Mathematically, every particle in PSOA is associated with a fitness value, a velocity, and a position, and the algorithm maintains a memory of each particle's best position, the global best position, and the global best fitness value.

Keywords

  • swarm optimization
  • position and velocity

1. Introduction

Optimization has a great impact on nature and on human affairs. Its characteristic purpose is to obtain the most superior solution (the global minimum or maximum), and an optimization structure should maintain generality across problems. The study of optimization design therefore combines the human activities of analyzing, understanding, solving, and evaluating the mathematical issues of a system. Practical optimization problems include real-world constraints, and these constraints must be satisfied to obtain a feasible global solution; designing an optimization method for real-world practice requires handling many such constraints.

The main objective of optimization is to obtain the best result with the smallest effort. For example, in the economic operation of an electrical power system, the aim is to obtain maximum efficiency at minimum operating cost. The same principle applies to many other fields in the real world.


2. Mathematical formulation

Generally, an optimization design involves the following steps:

  1. Input data

  2. Initialization

  3. Constraints

  4. Evaluation

  5. Updating

  6. Storing Result

An optimization problem can be expressed in standard form as follows:

Objective function: f(X) = cost, to be maximized or minimized (E1)

Subjected to

Inequality constraints (for a minimizing objective function), p in number: a_z(X) ≤ 0, where z = 1, 2, 3, …, p (E2)

Inequality constraints (for a maximizing objective function), p in number: a_z(X) ≥ 0, where z = 1, 2, 3, …, p (E3)

Equality constraints, q in number: b_z(X) = 0, where z = 1, 2, 3, …, q (E4)

Input or design variables defined as x_i, where i = 1, 2, 3, …, n (E5)

Input or design variable vector expressed as X = [x_1, x_2, x_3, …, x_n]^T (E6)

The design variables are also bounded as x_i^l ≤ x_i ≤ x_i^u. Here x_i^l is the lower limit and x_i^u is the upper limit.
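To make the standard form concrete, the sketch below encodes a small problem of this shape in Python; the objective, the constraints a1 and b1, and the bounds are illustrative assumptions, not taken from the chapter:

import numpy as np

# Illustrative problem in the standard form of E1-E6 (assumed, not from the chapter):
# minimize f(X) subject to a_z(X) <= 0, b_z(X) = 0, and bounds x_i^l <= x_i <= x_i^u.

def f(X):                       # objective function (E1)
    return float(np.sum(X ** 2))

def a1(X):                      # inequality constraint (E2): x1 + x2 - 10 <= 0
    return X[0] + X[1] - 10.0

def b1(X):                      # equality constraint (E4): x1 - x3 = 0
    return X[0] - X[2]

x_l = np.array([0.0, 0.0, 0.0])     # lower limits x_i^l
x_u = np.array([10.0, 10.0, 10.0])  # upper limits x_i^u

def is_feasible(X, tol=1e-6):
    # A design vector X (E6) is feasible if all constraints and bounds hold.
    return (a1(X) <= tol and abs(b1(X)) <= tol
            and bool(np.all(X >= x_l)) and bool(np.all(X <= x_u)))

print(is_feasible(np.array([2.0, 3.0, 2.0])))   # True
print(f(np.array([2.0, 3.0, 2.0])))             # 17.0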


3. Present best and global best optimization

The design (input) variable vector that optimizes (maximizes or minimizes) the objective function f(X) is represented by X* (Figure 1).

Figure 1.

Local minima and global minima [1].

Points P1, P2, P3, P4, P5, P6 are known as present best or local best solutions, and P7 is known as the global best solution for the given system.

If the objective of the given system has a present (local) best solution at the point X*, then

f(X*) ≤ f(X) for every value of X of the single variable x in the closed region a ≤ x ≤ b.

4. Various optimization techniques

In the literature, different mathematical approaches to optimization are found, such as the simplex approach, the gradient approach, dynamic programming, branch and bound, and integer programming. For all these approaches the size of solvable problems is limited, and they are most efficient at solving linear problems. As the number of variables and constraints increases, the computation time needed to obtain a solution grows exponentially, and solving complex issues with these mathematical approaches becomes more and more difficult.

In the last few years, a number of metaheuristic approaches have been developed to solve such complex issues with limited flexibility. Many bio- and nature-inspired optimization approaches have been developed, such as the Genetic Algorithm (GA) and Swarm Intelligence (SI) approaches.

GA works on the Darwinian principle of survival of the fittest solution in the population. GA uses selection, crossover, and mutation as operators to evolve the population. It has been shown that GA gives a solution close to the global optimum when the required improvements are introduced into it. Furthermore, the Differential Evolution (DE) approach has been explored to reach the global optimal solution.

SI approaches take inspiration from the social behavior of living organisms such as fishes, birds, and insects, which communicate with each other directly or indirectly using a set of patterns for individual optimization. These approaches operate on cooperation among organisms instead of competition between them, with all individuals exchanging information with one another.

Particle Swarm Optimization (PSO) is inspired by the food-searching behavior of fish schools and the flocking behavior of birds. To obtain the present best (local optimal) solution and the global best (global optimal) solution, the fishes or birds are modeled as particles.

As said previously, PSO is a swarm-based stochastic optimization method. In these swarms, food is found in a cooperative way: every particle (member) of the swarm is ready to modify its search pattern based on its own experience and on what it learns from other particles (members).

In PSO, the velocity and position of the particles are updated in response to changes in the environment so as to meet the proximity and quality requirements. Proximity means the swarm must carry out simple time and space computations; quality means the swarm has to detect a relevant environmental change and respond to it.

According to the cornfield model proposed by Heppner for bird or fish flock behavior, assume there is a food location in the plane. Initially, the birds move and search for the food randomly. Let (x0, y0) be the coordinates of the cornfield's position, and let (x, y) and (vx, vy) be an individual bird's position and velocity, respectively. The distance between the current position and the cornfield is used to evaluate the performance of the current position and speed. Each particle has its own memory and can memorize the best position it has reached.

This present best position is denoted Pbest (local best). Let 'a' be a constant for velocity adjustment and 'rand' a random number in [0, 1]. The following rules are used to change the velocity:

If Pbest x < x, then

Vx = Vx – (a * rand)

else

Vx = Vx + (a * rand)

end

If Pbest y < y, then

Vy = Vy – (a * rand)

else

Vy = Vy + (a * rand)

end

Now, if the members of the swarm can communicate with each other in some way, each individual can memorize not only its own best location but also the global best position (gbest) of the entire swarm.

If 'b' denotes the constant for this velocity adjustment, the velocity is updated with respect to gbest in the same way:

If gbest < x,

Vx = Vx - (b * rand)

else

Vx = Vx + (b * rand)

end

If gbest < y

Vy = Vy - (b * rand)

else

Vy = Vy + (b * rand)

end
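Taken together, these rules already define a working simulation. The following Python sketch implements the cornfield model as described above; the cornfield location, the constants a and b, the swarm size, and the step count are illustrative assumptions:

import random

CORN = (50.0, 50.0)       # (x0, y0): cornfield (food) location, assumed
a, b = 2.0, 2.0           # velocity-adjustment constants, values assumed
N, STEPS = 20, 200        # swarm size and step count, assumed

def dist2(x, y):          # squared distance to the cornfield (performance measure)
    return (x - CORN[0]) ** 2 + (y - CORN[1]) ** 2

birds = [{"x": random.uniform(0, 100), "y": random.uniform(0, 100),
          "vx": 0.0, "vy": 0.0} for _ in range(N)]
for bd in birds:
    bd["pbest"] = (bd["x"], bd["y"])                   # memorized best position
gbest = min((bd["pbest"] for bd in birds), key=lambda p: dist2(*p))

for _ in range(STEPS):
    for bd in birds:
        # pbest rule: Vx = Vx -/+ a * rand, toward the bird's own best position
        bd["vx"] += -a * random.random() if bd["pbest"][0] < bd["x"] else a * random.random()
        bd["vy"] += -a * random.random() if bd["pbest"][1] < bd["y"] else a * random.random()
        # gbest rule: Vx = Vx -/+ b * rand, toward the swarm's best position
        bd["vx"] += -b * random.random() if gbest[0] < bd["x"] else b * random.random()
        bd["vy"] += -b * random.random() if gbest[1] < bd["y"] else b * random.random()
        bd["x"] += bd["vx"]
        bd["y"] += bd["vy"]
        if dist2(bd["x"], bd["y"]) < dist2(*bd["pbest"]):   # update memory
            bd["pbest"] = (bd["x"], bd["y"])
        if dist2(*bd["pbest"]) < dist2(*gbest):
            gbest = bd["pbest"]

print("best position found:", gbest)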

After some trial and error, the velocity and position updates were finally fixed as:

Velocity updating:

Vx = Vx + 2 × rand × (pbest_x − x) + 2 × rand × (gbest_x − x) (E7)

Position updating:

x = x + Vx (E8)

Here, every individual is considered a particle; these particles have no mass or volume, but they do have position and velocity. Hence this optimization is known as the 'Particle Swarm Optimization Algorithm'.

This PSO algorithm can be explained using the following flowchart (Figure 2).

Figure 2.

The Flowchart of the PSOA [2].

From the flowchart, assume that 'N' is the size of the swarm. The position vector of each individual particle in the D-dimensional vector space is

X_i = (X_i1, X_i2, X_i3, …, X_id, …, X_iD) (E9)

Velocity vector,

V_i = (V_i1, V_i2, V_i3, …, V_id, …, V_iD) (E10)

The present best (local best) position of an individual particle is

P_i = (P_i1, P_i2, P_i3, …, P_id, …, P_iD) (E11)

The optimal (global best) position found by the swarm is

P_g = (P_g1, P_g2, P_g3, …, P_gd, …, P_gD) (E12)

Now, without loss of generality, taking minimization as the objective, the update equation for an individual particle's optimal position is

P_id(t + 1) = x_id(t + 1), if f(P_i(t)) > f(x_i(t + 1));

P_id(t + 1) = P_id(t), otherwise.

The velocity and position update formulas are given as

V_id(t + 1) = V_id(t) + C1 × rand × (P_id(t) − x_id(t)) + C2 × rand × (P_gd(t) − x_id(t)) (E13)

x_id(t + 1) = x_id(t) + V_id(t + 1) (E14)

This original PSO was good enough for optimization, but it was later modified to improve its effectiveness by adding an inertia weight. With the introduction of the inertia weight ω, the velocity update equation becomes

V_id(t + 1) = ω × V_id(t) + C1 × rand × (P_id(t) − x_id(t)) + C2 × rand × (P_gd(t) − x_id(t)) (E15)

Then, taking the convergence rate into account, the constriction factor χ was introduced into PSO, and the velocity update equation became

V_id(t + 1) = χ × V_id(t) + C1 × rand × (P_id(t) − x_id(t)) + C2 × rand × (P_gd(t) − x_id(t)) (E16)

Here, the distance from the particle's current position to its own best position is called the 'cognitive' component, i.e., the particle's own thinking. Hence C1 is called the cognitive acceleration factor (or cognitive learning factor).

Next, the distance from the particle's current position to the global best position is known as the 'social' component. It indicates coordination and sharing of information between particles: good particles guide the movement of the swarm through this social cognition. Hence C2 is the social acceleration factor (or social learning factor).
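The three terms of the velocity update (E15) map directly onto these factors. Below is a minimal Python sketch of the update for one particle; the parameter values w = 0.729 and C1 = C2 = 1.49 are common defaults from the PSO literature, not values prescribed in this chapter:

import numpy as np

def update_velocity(v, x, p_best, g_best, w=0.729, c1=1.49, c2=1.49,
                    rng=np.random.default_rng()):
    # One application of E15: inertia + cognitive + social terms.
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    inertia = w * v                        # momentum from the previous velocity
    cognitive = c1 * r1 * (p_best - x)     # pull toward the particle's own best
    social = c2 * r2 * (g_best - x)        # pull toward the swarm's best
    return inertia + cognitive + social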

Both the theory and the practical application of the PSO algorithm have made great progress, and PSO has been used in various domains by researchers who understand its principle of operation and application.

The PSO algorithm does not require the optimized function to be continuous or differentiable, it has a fast rate of convergence, and it is very easy to understand and to implement in a program.

Now coming to the disadvantages: on functions with multiple local extrema, PSO may fall into a local minimum and fail to produce the optimal result because of premature convergence of the particles. PSO may also produce poor results for lack of good search techniques, since its calculation procedure does not use the available information sufficiently; in every iteration PSO uses only local optimum information, which may not produce correct results. PSO provides the possibility of a global search but cannot guarantee convergence to the global best. It is therefore more suitable for the class of high-dimensional optimization issues where a perfectly accurate outcome is not required, because PSO is a metaheuristic optimization algorithm: it never gives a principled explanation of why it is efficient, nor does it specify its range of application.


5. Various types of research about PSO

  1. Theoretical analysis of the PSO algorithm and understanding of its working mechanism.

  2. Changing its structure to obtain better performance.

  3. The influence of different parameters on PSO.

  4. The influence of different topologies on PSO.

  5. Parallel study of the PSO algorithm.

  6. Discrete study of the PSO algorithm.

  7. Multi-objective optimization study of the PSO algorithm.

  8. Use of the PSO algorithm in various fields of engineering.

5.1 PSO for multi objective optimization

Single-objective optimization is often not adequate in practice, because its effective outcome does not capture real problems accurately. This shortcoming is overcome by multi-objective (MO) optimization, which has become a major area of research. In a multi-objective optimization problem, all the target functions would ideally be optimized independently, finally determining the best value for every target. But the objectives conflict with each other, so finding a solution that is optimal for every objective is, unfortunately, highly impossible; hence only a Pareto-optimal solution can be determined.

In traditional PSO, particles are independent agents that search the problem space based on their own experience and that of their companions. In the update formula given earlier, the cognitive component is the former and the social component is the latter. The selection of the gbest and pbest (social and cognitive) guides is the key issue of Multi-Objective PSO (MOPSO). Choosing the cognitive guide is the same in traditional PSO and in MOPSO, except that in MOPSO the guide must be found based on Pareto dominance. Choosing the social guide involves two steps.

Step 1: Creation of a candidate pool used for guide selection.

In traditional PSO, this guide is chosen from the particle's own pbest or the pbest of its neighbors. In MOPSO, to save more Pareto-optimal solutions, the normal technique is to use an external pool (archive).

Step 2: Guide selection. The chosen gbest must satisfy the standards below.

The chosen guide should be capable of providing effective guidance to the particles, in order to improve the speed of convergence.

The selected guide should also give a balanced search along the Pareto frontier, in order to maintain population diversity.

To select the social guide, two methods are specified (a sketch of the second follows this list):

  1. Quantity standard: the social guide is selected by following various deterministic procedures, with no random selection involved.

  2. Roulette mode of selection: the guide is selected randomly according to various standards, in order to maintain population diversity.
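As an illustration of the roulette mode, the sketch below implements roulette-wheel selection over an archive of candidate guides in Python; scoring each candidate by a crowding-distance-like value is an assumption chosen here for diversity, not a requirement from the chapter:

import random

def roulette_select(archive, scores):
    # Pick one guide from the archive with probability proportional to its score.
    total = sum(scores)
    r = random.uniform(0.0, total)
    acc = 0.0
    for guide, s in zip(archive, scores):
        acc += s
        if acc >= r:
            return guide
    return archive[-1]   # numerical fallback

# Usage: favor guides in sparse regions of the Pareto front,
# e.g. scores taken as crowding distances of the archive members (assumed).
archive = [(1.0, 4.0), (2.0, 2.5), (3.5, 1.0)]
scores = [2.0, 0.7, 1.8]            # illustrative crowding distances
gbest_guide = roulette_select(archive, scores)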

Pseudo code for PSO algorithm:

Initialize population
for t = 1 : maximum generation
  for i = 1 : population size
    if f (xi (t)) < f (pi (t)) then pi (t) = xi (t)
    if f (pi (t)) < f (pg (t)) then pg (t) = pi (t)
    for d = 1 : dimension
      vi,d (t + 1) = w * vi,d (t) + C1 * r1 * (pi,d (t) - xi,d (t)) + C2 * r2 * (pg,d (t) - xi,d (t))
      xi,d (t + 1) = xi,d (t) + vi,d (t + 1)
      if vi,d (t + 1) > vmax then vi,d (t + 1) = vmax
      else if vi,d (t + 1) < vmin then vi,d (t + 1) = vmin
      end
      if xi,d (t + 1) > xmax then xi,d (t + 1) = xmax
      else if xi,d (t + 1) < xmin then xi,d (t + 1) = xmin
      end
    end
  end
end
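For readers who prefer runnable code, the following is a direct Python translation of the pseudocode above, sketched for minimizing an assumed test function (the sphere function) with assumed parameter values:

import numpy as np

def pso(f, dim, n=30, iters=100, w=0.729, c1=1.49, c2=1.49,
        xmin=-10.0, xmax=10.0, vmax=1.0, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(xmin, xmax, (n, dim))        # positions
    v = np.zeros((n, dim))                       # velocities
    p = x.copy()                                 # personal best positions
    fp = np.apply_along_axis(f, 1, p)            # personal best fitness
    g = p[np.argmin(fp)].copy()                  # global best position
    for _ in range(iters):
        for i in range(n):
            fx = f(x[i])
            if fx < fp[i]:                       # update pbest
                fp[i], p[i] = fx, x[i].copy()
            if fp[i] < f(g):                     # update gbest
                g = p[i].copy()
        r1 = rng.random((n, dim))
        r2 = rng.random((n, dim))
        v = w * v + c1 * r1 * (p - x) + c2 * r2 * (g - x)
        v = np.clip(v, -vmax, vmax)              # velocity clamping
        x = np.clip(x + v, xmin, xmax)           # position clamping
    return g, f(g)

best_x, best_f = pso(lambda z: np.sum(z ** 2), dim=3)
print(best_x, best_f)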

For each particle

  Initialize particle

END

Do

For each particle

   Calculate fitness value

   If the fitness value is better than the best fitness value (pbest) in history, set the current value as the new pbest

End

Choose the particle with the best fitness value of all the particles as the gbest

For each particle

Calculate particle velocity

Update particle position

End

While the maximum number of iterations or the minimum error criterion is not attained (Table 1).

Velocity Clamping (VC)
 Function: Controls the global exploration of the particle by reducing the size of the velocity step, so that the particles remain in the search area; it cannot change the search direction of the particle.
 Advantages: Reducing the size of the velocity step controls the movement of the particle.
 Disadvantages: If all velocities become equal to the limit, the particle will continue to search within a hypercube and will probably remain near the optimum but will not converge to the local area.

Inertia Weight
 Function: Controls the momentum of the particle by weighing the contribution of the previous velocity.
 Advantages: A smaller inertia weight at the end of the search fosters the convergence ability.
 Disadvantages: Achieving optimal convergence is strongly influenced by the inertia weight.

Constriction Coefficient
 Function: Ensures the stable convergence of the PSO algorithm.
 Advantages: Similar to the inertia weight.
 Disadvantages: When the algorithm converges, the fixed values of the parameters might cause unnecessary fluctuation of the particles.

Synchronous and Asynchronous Updates
 Function: Optimization in parallel processing.
 Advantages: Improved convergence rate.
 Disadvantages: Higher throughput demands: more sophisticated finite element formulations and higher accuracy (mesh densities).

Table 1.

The basic variants of PSO [3].

The PSO algorithm can be hybridized with several metaheuristic algorithms to balance exploration and exploitation. Some algorithms have better efficiency in exploration but are poor in exploitation; others take many iterations to reach convergence and show poor performance. In such cases, PSO is introduced to enhance the performance using

V_i(K + 1) = ω × V_i(K) + ρ1 × C1 × (P_i^best − X_i(K)) + ρ2 × C2 × (G − X_i(K)) (E17)

X_i(K + 1) = X_i(K) + V_i(K + 1) (E18)

The proposed hybrid algorithm using PSO is written as follows (a code skeleton follows the list and the flowchart):

  1. Define the population size (P), the initial values, the acceleration constants C1 and C2, Pc, the number of variables, and the maximum iteration number Maxiter.

  2. Set t = 0 as the counter initialization.

  3. for i = 1 : P do

  4. Generate an initial random population Xi(t).

  5. Evaluate the fitness function of every search agent (solution), f(Xi).

  6. end for

  7. repeat

  8. Apply the standard original Particle Swarm Optimization (PSO) algorithm.

  9. Apply the selection operator of the other algorithm that is being hybridized with PSO (e.g., Genetic Algorithm (GA), Ant-Lion Optimization Algorithm (ALOA), etc.).

  10. Partition the population X(t) into partno sub-partitions, where each sub-partition Xt has size v × η.

  11. for i = 1 : partno do

  12. Apply the arithmetical crossover, as shown in Procedure 1, on each sub-partition Xt.

  13. end for

  14. Apply the algorithm that is hybridized with PSO on the whole population X(t).

  15. Update the solutions in the population X(t).

  16. Set t = t + 1 to increase the iteration count.

  17. until the iteration count reaches Maxiter and the termination criteria are satisfied.

  18. Print the best solution (Figure 3).

Figure 3.

The flowchart of the hybridization with PSOA [2].
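The Python skeleton below sketches one way steps 1-18 can be organized, using an arithmetic crossover as the operator borrowed from GA; the partitioning scheme, the crossover blend factor, and all parameter values are illustrative assumptions rather than the chapter's prescribed procedure:

import numpy as np

rng = np.random.default_rng(1)

def arithmetic_crossover(A, alpha=0.3):
    # Blend consecutive pairs of rows: child = alpha*a + (1-alpha)*b (assumed operator).
    B = A.copy()
    for i in range(0, len(A) - 1, 2):
        B[i] = alpha * A[i] + (1 - alpha) * A[i + 1]
        B[i + 1] = alpha * A[i + 1] + (1 - alpha) * A[i]
    return B

def hybrid_pso(f, dim, pop=20, partno=4, iters=50):
    X = rng.uniform(-10, 10, (pop, dim))
    V = np.zeros((pop, dim))
    P = X.copy()                                   # personal bests
    G = P[np.argmin([f(x) for x in P])].copy()     # global best
    for _ in range(iters):
        r1, r2 = rng.random(X.shape), rng.random(X.shape)
        V = 0.7 * V + 1.5 * r1 * (P - X) + 1.5 * r2 * (G - X)   # PSO step (step 8)
        X = X + V
        for part in np.array_split(np.arange(pop), partno):     # steps 10-13
            X[part] = arithmetic_crossover(X[part])
        for i in range(pop):                                    # step 15
            if f(X[i]) < f(P[i]):
                P[i] = X[i].copy()
        G = P[np.argmin([f(p) for p in P])].copy()
    return G

print(hybrid_pso(lambda z: np.sum(z ** 2), dim=3))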

PSOA is applied in various optimization areas, for example, energy-storage optimization, image processing, economic operation of power systems, optimal location identification, analysis of slope stability, foundation and pile engineering, soil and rock mechanics, and underground and tunneling space design. PSOA can also simulate particle movement and has been applied to visual effects and special effects in some films.

Along with conventional optimization problems, Swarm Intelligence (SI) is used in the acquisition of material for libraries, communications, classification of medical datasets, dynamic control, planning of heating systems, tracking of moving objects, and prediction.

PSO Algorithm Visual Explanation:

Let us suppose a group of birds are flying randomly in an area in search of their food (Figure 4).

Figure 4.

Random movement of birds [4].

No bird knows exactly where the food is, but in each iteration the birds know how far away it is (Figure 5).

Figure 5.

Birds moving toward the food [4].

The birds do not know the best position, but if any member finds a desirable path, the rest of the members will follow quickly (Figure 6).

Figure 6.

Finding best path [4].

Finally, the aim is to follow the bird that is nearest to the food.

5.2 PSO algorithm demo with one example

Objective function used for minimization:

Fitness = 10(X1 − 1)^2 + 20(X2 − 2)^2 + 30(X3 − 3)^2

STEP – 1: PARAMETERS INITIALIZATION (Table 2):

  X1  X2  X3
   8   9   1
   9   6   1
   3   5  10
  10   2  10
  10   5   8

Table 2.

Output [5].

   Number of variables: m = 3.

   Population size: n = 5.

   Inertia weight: Wmax = 0.9, Wmin = 0.4.

   Acceleration Factor: C1 = 2, C2 = 2.

   Maximum Iteration: Maxiter = 50.

% Population Initialization

   LB = [0 0 0];

   UB = [10 10 10];

for i = 1: n

    for j = 1:m

    X0 (i, j) = round (LB (j) + rand () * (UB (j) - LB (j)));

    end

end

  • Initialize the velocity (vi) of each particle as 10% of its position (Table 3):

  • v = 0.1 * X0 (i, j);

              X1   X2   X3
1st Particle   8    9    1
2nd Particle   9    6    1
3rd Particle   3    5   10
4th Particle  10    2   10
5th Particle  10    5    8

Table 3.

Initial population [5].

Initial velocity for First Particle:

v(X1) = 0.1 * 8 = 0.8

v(X2) = 0.1 * 9 = 0.9

v(X3) = 0.1 * 1 = 0.1

Initial velocity for Second Particle:

v(X1) = 0.1 * 9 = 0.9

v(X2) = 0.1 * 6 = 0.6

v(X3) = 0.1 * 1 = 0.1

Initial velocity for Third Particle:

v(X1) = 0.1 * 3 = 0.3

v(X2) = 0.1 * 5 = 0.5

v(X3) = 0.1 * 10 = 1.0

Initial velocity for Fourth Particle:

v(X1) = 0.1 * 10 = 1.0

v(X2) = 0.1 * 2 = 0.2

v(X3) = 0.1 * 10 = 1.0

Initial velocity for Fifth Particle:

v(X1) = 0.1 * 10 = 1.0

v(X2) = 0.1 * 5 = 0.5

v(X3) = 0.1 * 8 = 0.8

The resulting initial velocities (vi) of all particles are collected in Table 4:

              X1    X2    X3
1st Particle  0.8   0.9   0.1
2nd Particle  0.9   0.6   0.1
3rd Particle  0.3   0.5   1.0
4th Particle  1.0   0.2   1.0
5th Particle  1.0   0.5   0.8

Table 4.

Initial velocity [5].

Current Position = Previous Position + Velocity

Current Position for First Particle:

X1 = 8 + 0.8 = 8.8

X2 = 9 + 0.9 = 9.9

X3 = 1 + 0.1 = 1.1

Current Position for Second Particle:

X1 = 9 + 0.9 = 9.9

X2 = 6 + 0.6 = 6.6

X3 = 1 + 0.1 = 1.1

Current Position for Third Particle:

X1 = 3 + 0.3 = 3.3

X2 = 5 + 0.5 = 5.5

X3 = 10 + 1 = 11

Current Position for Fourth Particle:

X1 = 10 + 1 = 11

X2 = 2 + 0.2 = 2.2

X3 = 10 + 1 = 11

Current Position for Fifth Particle:

X1 = 10 + 1 = 11

X2 = 5 + 0.5 = 5.5

X3 = 8 + 0.8 = 8.8

STEP – 2: FITNESS EVALUATION f(xi(t)) (Table 5)

              X1    X2    X3
1st Particle  8.8   9.9   1.1
2nd Particle  9.9   6.6   1.1
3rd Particle  3.3   5.5   11
4th Particle  11    2.2   11
5th Particle  11    5.5   8.8

Table 5.

Current position [5].

Fitness Evaluation for Every Particle:

The objective function taken for minimization:

F(x) = 10(X1 − 1)^2 + 20(X2 − 2)^2 + 30(X3 − 3)^2

F(x1(0)) = 10(8.8 − 1)^2 + 20(9.9 − 2)^2 + 30(1.1 − 3)^2 = 1.9649 × 10^3

F(x2(0)) = 10(9.9 − 1)^2 + 20(6.6 − 2)^2 + 30(1.1 − 3)^2 = 1.3236 × 10^3

F(x3(0)) = 10(3.3 − 1)^2 + 20(5.5 − 2)^2 + 30(11 − 3)^2 = 2.2179 × 10^3

F(x4(0)) = 10(11 − 1)^2 + 20(2.2 − 2)^2 + 30(11 − 3)^2 = 2.9208 × 10^3

F(x5(0)) = 10(11 − 1)^2 + 20(5.5 − 2)^2 + 30(8.8 − 3)^2 = 2.2542 × 10^3

(All fitness values in this example are expressed in units of 10^3.)
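These hand calculations can be verified with a few lines of Python; the 10^3 scaling reflects how the values are reported in Tables 6 through 10:

import numpy as np

def fitness(X):
    return 10*(X[0]-1)**2 + 20*(X[1]-2)**2 + 30*(X[2]-3)**2

positions = np.array([[8.8, 9.9, 1.1],
                      [9.9, 6.6, 1.1],
                      [3.3, 5.5, 11.0],
                      [11.0, 2.2, 11.0],
                      [11.0, 5.5, 8.8]])
for i, X in enumerate(positions, 1):
    print(f"particle {i}: F = {fitness(X)/1e3:.4f} x 10^3")
# prints 1.9649, 1.3236, 2.2179, 2.9208, 2.2542 -> gBest = 1.3236 x 10^3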

gBest = 1.3236 × 10^3 (Table 6)

FITNESS VALUE (t = 0), × 10^3

1.9649

1.3236

2.2179

2.9208

2.2542

Table 6.

Choose the particle with the best fitness value as gBest [5].

STEP – 3: CALCULATION OF POSITION AND VELOCITY FOR EVERY PARTICLE

  • Particle position update: xi(t + 1) = xi(t) + vi(t + 1)

  • Particle velocity update:

    vi(t + 1) = w × vi(t) + C1 × r1 × (pBest_i(t) − xi(t)) + C2 × r2 × (gBest(t) − xi(t))

Velocity update at iteration t = 0 for the first particle:

v1(0 + 1) = w × v1(0) + C1 × r1 × (pBest_1(0) − x1(0)) + C2 × r2 × (gBest(0) − x1(0))

v1(1) = 0.9 × (0.8) + 2 × rand × (8.8 − 8.8) + 2 × rand × (9.9 − 8.8)

v1(1) = 1.0667

Calculating all the values:

v1(1) = 1.0667 for x1

v1(1) = −4.4719 for x2

v1(1) = 0.0900 for x3

Particle Velocity Update: (Table 7)

              X1        X2       X3
1st Particle  1.0667    −4.4719  0.0900
2nd Particle  0.8100    0.5400   0.0900
3rd Particle  7.4888    2.5728   −18.3177
4th Particle  −0.1678   1.4286   1.4286
5th Particle  −1.2109   0.5286   −13.6635

Table 7.

Updated velocity for every particle [5].

Position update for the first particle:

x1(0 + 1) = x1(0) + v1(1)

x1(1) = 8.8 + 1.0667 = 9.8667 for x1

x1(1) = 9.9 − 4.4719 = 5.4281 for x2

x1(1) = 1.1 + 0.900 = 2 for x3

STEP – 4: FITNESS EVALUATION f(xi(t)) (Table 8)

              X1       X2      X3
1st Particle  9.8667   5.4281  2
2nd Particle  10.71    7.14    1.19
3rd Particle  10.83    8.07    −7.317
4th Particle  10.83    3.628   12.42
5th Particle  9.78     6.028   −4.8635

Table 8.

Updated positions for every particle [5].

Current best (gBest) calculation.

Calculate the updated fitness values for the minimization:

F(x) = 10(X1 − 1)^2 + 20(X2 − 2)^2 + 30(X3 − 3)^2

First particle: F(x1(1)) = 10(9.8667 − 1)^2 + 20(5.4281 − 2)^2 + 30(2 − 3)^2 = 1.0512 × 10^3

Second particle: F(x2(1)) = 10(10.71 − 1)^2 + 20(7.14 − 2)^2 + 30(1.19 − 3)^2 = 1.5695 × 10^3

Third particle: F(x3(1)) = 10(10.83 − 1)^2 + 20(8.07 − 2)^2 + 30(−7.317 − 3)^2 = 4.8866 × 10^3

Fourth particle: F(x4(1)) = 10(10.83 − 1)^2 + 20(3.628 − 2)^2 + 30(12.42 − 3)^2 = 3.6716 × 10^3

Fifth particle: F(x5(1)) = 10(9.78 − 1)^2 + 20(6.028 − 2)^2 + 30(−4.8635 − 3)^2 = 2.9504 × 10^3

Choose the current best from the updated fitness values (Table 9):

Updated Fitness Value, × 10^3

1.0512

1.5695

4.8866

3.6716

2.9504

Table 9.

Updated fitness value [5].

if f(xi(1)) < f(gBest), then gBest = xi(1).

First particle: 1.0512 < 1.3236 is true, so gBest is updated and f(gBest) = 1.0512 × 10^3.

Second particle: 1.5695 < 1.3236 is false, so gBest is not updated.

Third particle: 4.8866 < 1.3236 is false.

Fourth particle: 3.6716 < 1.3236 is false.

Fifth particle: 2.9504 < 1.3236 is false.

gBest = 1.0512 × 10^3

if f(xi(1)) < f(pBest_i), then pBest_i = xi(1).

First particle: 1.0512 < 1.9649 is true, so pBest_1 is updated to x1(1).

Second particle: 1.5695 < 1.3236 is false, so pBest_2 is unchanged.

Third particle: 4.8866 < 2.2179 is false.

Fourth particle: 3.6716 < 2.9208 is false.

Fifth particle: 2.9504 < 2.2542 is false.

STEP – 5: UPDATE ITERATION (Table 10)

New pBest Value, × 10^3

1.0512

1.3236

2.2179

2.9208

2.2542

Table 10.

Updated pBest values [5].

Update t = t + 1 = 1

t = 1

STEP – 6:

gBest = 1.0512

Repeat this until the stopping criteria are met (Table 11).

              X1       X2      X3
1st Particle  9.8667   5.4281  2

Table 11.

Output gBest and xi(t) [5].

5.3 PSO algorithm demo with one more example

Find the maximum of the following function using the PSO algorithm:

f(x) = −x^2 + 5x + 20

Use 9 particles with the initial positions

x1 = −9.6, x2 = −6, x3 = −2.6, x4 = −1.1, x5 = 0.6, x6 = 2.3, x7 = 2.8, x8 = 8.3, x9 = 10

Step 1: Evaluate the objective function:

f1(0) = −120.16, f2(0) = −46, f3(0) = 0.24, f4(0) = 13.29, f5(0) = 22.64, f6(0) = 26.21, f7(0) = 26.16, f8(0) = −7.39, f9(0) = −30

Let C1 = C2 = 1 and set the initial velocities of the particles to zero:

v1(0) = v2(0) = v3(0) = v4(0) = v5(0) = v6(0) = v7(0) = v8(0) = v9(0) = 0

Step 2: Set the iteration number to t = 0 + 1 = 1 and go to step 3.

Step 3: Find Pbest for every particle using

Pbest,i(t + 1) = Pbest,i(t), if f(xi(t + 1)) ≤ f(Pbest,i(t)); Pbest,i(t + 1) = xi(t + 1), if f(xi(t + 1)) > f(Pbest,i(t))

So,

Pbest,1(1) = −9.6, Pbest,2(1) = −6, Pbest,3(1) = −2.6, Pbest,4(1) = −1.1, Pbest,5(1) = 0.6, Pbest,6(1) = 2.3, Pbest,7(1) = 2.8, Pbest,8(1) = 8.3, Pbest,9(1) = 10

Step 4: Gbest is the Pbest position with the highest fitness, so Gbest = 2.3.

Step 5: Update the velocities of the particles, taking the random numbers r1 = 0.213 and r2 = 0.876, with C1 = C2 = 1 and w = 1.

vi(t + 1) = w × vi(t) + C1 × r1(t) × (Pbest,i(t) − xi(t)) + C2 × r2(t) × (Gbest(t) − xi(t)); i = 1, 2, 3, …, 9.

v1(1) = 0 + 0.213 × (−9.6 + 9.6) + 0.876 × (2.3 + 9.6) = 10.4244

v2(1) = 7.2708, v3(1) = 4.2924, v4(1) = 2.9784, v5(1) = 1.4892, v6(1) = 0, v7(1) = −0.4380, v8(1) = −5.256, v9(1) = −6.7452

Step 6: Update the values of the positions as well:

xi(t + 1) = xi(t) + vi(t + 1)

x1(1) = 0.8244, x2(1) = 1.2708, x3(1) = 1.6924, x4(1) = 1.8784, x5(1) = 2.0892, x6(1) = 2.3, x7(1) = 2.362, x8(1) = 3.044, x9(1) = 3.2548

Step 7: Find the objective function values at the new positions:

f1(1) = 23.4424, f2(1) = 24.739, f3(1) = 25.5978, f4(1) = 25.8636, f5(1) = 26.0812, f6(1) = 26.21, f7(1) = 26.231, f8(1) = 25.9541, f9(1) = 25.6803

Step 8: Stopping criterion.

If the termination rule is not satisfied, go to step 2; otherwise, stop the iteration and note the result.
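The first iteration of this example can be reproduced with the short Python sketch below, which fixes the random numbers at r1 = 0.213 and r2 = 0.876 as in the worked example; in a real run these would be redrawn at every iteration:

f = lambda x: -x**2 + 5*x + 20             # function to maximize

x = [-9.6, -6, -2.6, -1.1, 0.6, 2.3, 2.8, 8.3, 10]
v = [0.0] * 9
pbest = x[:]                               # initial personal bests
gbest = max(pbest, key=f)                  # -> 2.3

r1, r2, c1, c2, w = 0.213, 0.876, 1.0, 1.0, 1.0
v = [w*vi + c1*r1*(pb - xi) + c2*r2*(gbest - xi)
     for vi, pb, xi in zip(v, pbest, x)]
x = [xi + vi for xi, vi in zip(x, v)]
pbest = [xi if f(xi) > f(pb) else pb for xi, pb in zip(x, pbest)]
gbest = max(pbest, key=f)

print([round(vi, 4) for vi in v])          # 10.4244, 7.2708, ...
print([round(xi, 4) for xi in x])          # 0.8244, 1.2708, ...
print(round(gbest, 4), round(f(gbest), 4)) # approaches x = 2.5, f = 26.25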

PSO algorithm demo with a nature-inspired hybrid algorithm:

The optimal integration of Distributed Generation (DG) is investigated by determining the reduced power loss, the enhanced voltage at every bus, the reduced total operating cost, and the enhanced voltage stability of the test network.

In this section, the candidate buses for DG placement and the capacities of the DGs are estimated with the help of the Ant-Lion Optimization Algorithm (ALOA) methodology.

Then, the elitism phase of the ALOA methodology, which updates the DG locations, is improved by introducing the Particle Swarm Optimization Algorithm (PSOA) into ALOA.

The ALOA imitates the antlion's hunting mechanism. The antlion's life cycle has two main stages, namely the larva stage and the adult stage. The antlion uses its larva stage to find food and the adult stage for reproduction; the inspiration for ALOA is the larva stage. The antlion digs a cone-shaped pit in the sand by moving in a circular way and throwing the sand out of the pit with its heavy jaw. After preparing the trap, the antlion waits for food. The antlion's level of hunger and the shape of the moon decide the trap size. Any prey that comes toward the surface of the trap falls into it very easily, and the antlion then catches the prey when it is detected.

The matrices M_ant-lion and M_ant store the random positions of the antlions and ants, respectively,

M_ant-lion =
| AL_1,1  AL_1,2  …  AL_1,d |
| AL_2,1  AL_2,2  …  AL_2,d |
|   ⋮       ⋮          ⋮    |
| AL_n,1  AL_n,2  …  AL_n,d |

M_ant =
| A_1,1  A_1,2  …  A_1,d |
| A_2,1  A_2,2  …  A_2,d |
|   ⋮      ⋮         ⋮   |
| A_n,1  A_n,2  …  A_n,d |
(E19)

The antlions' and ants' fitness values are stored in the M_fit-AL and M_fit-A matrices,

M_fit-AL =
| f(AL_1,1, AL_1,2, …, AL_1,d) |
| f(AL_2,1, AL_2,2, …, AL_2,d) |
|              ⋮               |
| f(AL_n,1, AL_n,2, …, AL_n,d) |

M_fit-A =
| f(A_1,1, A_1,2, …, A_1,d) |
| f(A_2,1, A_2,2, …, A_2,d) |
|             ⋮             |
| f(A_n,1, A_n,2, …, A_n,d) |
(E20)

5.3.1 Ant’s random movement

When the ants move randomly in the search space, the antlions are ready to hunt them. The random walk of an ant searching for food is given as,

ant position X(t) = [0, cumsum(2 r(t1) − 1), cumsum(2 r(t2) − 1), …, cumsum(2 r(T) − 1)] (E21)

r(t) = 1 if rand > 0.5; r(t) = 0 if rand ≤ 0.5 (E22)

where cumsum is the cumulative sum, T is the maximum iteration, and rand is a uniform random number in [0, 1].

The random movement of the ants must be restricted within the search space. Hence, the ant's position is normalized (min-max normalization) and updated as,

new ant position = (old ant position − a_m)(d_m^t − c_m^t) / (b_m − a_m) + c_m^t (E23)

where a_m and b_m represent the minimum and maximum of the ant's random walk in the mth variable, and c_m^t and d_m^t represent the minimum and maximum of the mth variable at the tth iteration.
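The following is a minimal Python sketch of this random walk (E21, E22) and its min-max normalization into the search bounds (E23); the walk length and the bounds c and d are illustrative assumptions:

import numpy as np

rng = np.random.default_rng(0)

def random_walk(T):
    # E21-E22: cumulative sum of +/-1 steps (2*r(t) - 1), prefixed with 0.
    steps = np.where(rng.random(T) > 0.5, 1, -1)
    return np.concatenate(([0.0], np.cumsum(steps)))

def normalize(walk, c, d):
    # E23: min-max normalize the walk into the interval [c, d].
    a, b = walk.min(), walk.max()
    return (walk - a) * (d - c) / (b - a) + c

walk = random_walk(T=100)
positions = normalize(walk, c=-5.0, d=5.0)   # ant positions within [-5, 5]
print(positions[:5])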

5.3.2 Trapping of ants

The trapping of the ants in the antlion's pit is modeled by shifting the walk bounds toward the chosen antlion n:

c_m^t = Antlion_n^t + c^t, d_m^t = Antlion_n^t + d^t (E24)

Construction of trap

The best antlion is chosen by using the roulette wheel method.

Ants sliding toward the antlion

The sliding of the ants into the pit is represented by shrinking the walk bounds adaptively,

c^t = c^t / I, d^t = d^t / I, where I = 10^w (t / S) and S is the maximum iteration (E25)

w = 2 if t > 0.1 S; w = 3 if t > 0.5 S; w = 4 if t > 0.75 S; w = 5 if t > 0.9 S; w = 6 if t > 0.95 S (E26)

5.3.3 Catching the prey and reconstruction process of the pit

If the ant's objective function value becomes better than the antlion's, the antlion moves to the ant's position to improve its chance of catching new prey. The antlion's new position is given as,

Antlion new position = Ant position, if f(ant) > f(antlion) (E27)

5.3.4 Elitism

The best solution obtained at each stage is stored; this is called elitism. Each ant is assumed to walk around an antlion selected by the roulette wheel and around the elite simultaneously, which is represented by,

elite ant position Ant_i^t = (R_A^t + R_E^t) / 2 (E28)

where R_A^t is the random walk around the roulette-wheel-selected antlion and R_E^t is the random walk around the elite.

5.3.5 The particle swarm optimization algorithm (PSOA)

This optimization algorithm is a stochastic population-based method for solving optimization issues. PSO is a nature-inspired metaheuristic that works with a swarm of particles. In the search space, every particle is considered a solution with two characteristics, namely its own velocity and position. The velocity represents the direction and the distance used to optimize the position in the next iteration; the position defines the present value of the solution. For every particle, its own present best, named pbest, and the overall global solution, named Gbest, are estimated.

5.3.6 Mathematical formulation for PSOA

PSO is a stochastic population-based metaheuristic for solving continuous optimization problems. The main idea of the metaheuristic came from observing the behavior of natural organisms seeking food. PSO works with a swarm of particles; each particle is a solution to the problem in the decision space and has two characteristics: its own position and velocity. The position represents the current values of the solution; the velocity defines the direction and the distance used to optimize the position at the next iteration. For each particle i, its own past best position p_i^best and the entire swarm's best overall position G are remembered. In basic PSO, the velocity and position of each particle are updated by the following equations,

v_i(k + 1) = w × v_i(k) + ρ1 × c1 × (p_i^best − X_i(k)) + ρ2 × c2 × (G − X_i(k)) (E29)

X_i(k + 1) = X_i(k) + v_i(k + 1) (E30)

where i is the particle index, k is the iteration number, v_i(k) is the velocity and X_i(k) is the position of particle i at iteration k, p_i^best is the best position found by particle i (personal best), G is the best position found by the swarm (global best, the best of the personal bests), w is an inertia coefficient, ρ1 and ρ2 are random numbers in the [0, 1] interval, and c1, c2 are positive constants representing the factors of the particle's attraction toward its own best position and toward the swarm's best position (Figure 7).

Figure 7.

Flow chart of proposed PSOA-ALOA hybrid technique [6].

5.3.7 Hybridization of particle swarm optimization algorithm (PSOA) based ant-lion optimization algorithm (ALOA)

In this part of the hybridization, the PSOA is associated with ALOA to enhance the ALOA performance by updating the ALOA elitism stage.

Initially, the load flow analysis is carried out with the help of the Backward/Forward (BW/FW) sweep method for the base case, using the bus data and the load data of the test system without adding any extra load. Then, with the intent of reducing the power losses to the maximum possible extent, the PSOA-based ALOA is combined into a hybrid methodology: the candidate buses and the capacities of the DGs are estimated with the ALOA method, and then the locations of the antlions are updated with the PSOA methodology. This is evaluated with the help of the flowchart given for the proposed hybrid methodology.

5.3.8 Specific contribution

Initially, the standalone ALOA was used to identify the DG locations and capacities with the intent of reducing the total power losses and the total operating cost and enhancing the voltage stability index and the voltage profile at every bus. All the values were tabulated under different power factor conditions on various test networks with different types of DG units.

Then, the proposed hybrid algorithm was used to identify the DG locations and capacities with the same objectives, and the results were tabulated under the same power factor conditions, test networks, and DG types.

In ALOA hybridized with PSOA, the elitism phase of the ALOA for the DG positions is updated by using the following mathematical equations,

new velocity_j(kk + 1) = constant × old velocity_j(kk) + inertia constant_1 × positive constant_1 × (p_j^best position(kk) − old position_j(kk)) + inertia constant_2 × positive constant_2 × (G^best position(kk) − old position_j(kk)) (E31)

new position_j(kk + 1) = old position_j(kk) + new velocity_j(kk + 1) (E32)
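As a sketch of how E31 and E32 can be realized, the Python fragment below gives the elite antlion a PSO-style velocity update; the variable names, parameter values, and the DG encoding in the usage lines are illustrative assumptions, not the chapter's exact procedure:

import numpy as np

rng = np.random.default_rng(2)

def psoa_elitism_update(elite_pos, elite_vel, p_best, g_best,
                        w=0.729, c1=1.49, c2=1.49):
    # E31-E32: PSO-style velocity and position update for the elite antlion.
    r1, r2 = rng.random(elite_pos.shape), rng.random(elite_pos.shape)
    new_vel = (w * elite_vel
               + c1 * r1 * (p_best - elite_pos)     # cognitive pull
               + c2 * r2 * (g_best - elite_pos))    # social pull
    return elite_pos + new_vel, new_vel

# Usage: the elite antlion encodes candidate DG buses and a size (assumed encoding).
pos = np.array([6.0, 14.0, 0.5])
vel = np.zeros(3)
new_pos, new_vel = psoa_elitism_update(pos, vel,
                                       p_best=np.array([6.0, 15.0, 0.6]),
                                       g_best=np.array([7.0, 14.0, 0.55]))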

This is because the standalone ALOA requires many iterations to reach the optimal solution and shows poor performance; PSOA is used to improve it. This property leads to the ALOA-PSOA hybridization, in which the optimal integration of DG units is identified with the intent of reducing the total power losses and the total operating cost and enhancing the voltage stability index and the voltage profile at every bus. All the values are tabulated under different power factor conditions on various test networks with different types of DG units.

The PSOA still has some issues that ought to be resolved. Therefore, future work on the PSOA will probably concentrate on the following:

  1. Finding a particular PSO algorithm that can be expected to provide good performance.

  2. Combining the PSO algorithm with other optimization methods to improve its accuracy.

  3. Using the algorithm to solve non-convex optimization problems.

References

  1. Kennedy J, Eberhart RC. Particle swarm optimization. In: Proceedings of the International Conference on Neural Networks; Institute of Electrical and Electronics Engineers. Vol. 4. 1995. pp. 1942-1948. DOI: 10.1109/ICNN.1995.488968
  2. Samala RK, Kotapuri MR. Distributed generation allocation in distribution system using particle swarm optimization based ant-lion optimization. International Journal of Control and Automation. 2020;13(1)
  3. Eberhart RC, Shi Y. Comparing inertia weights and constriction factors in particle swarm optimization. In: Proceedings of the 2000 Congress on Evolutionary Computation (CEC'00). Vol. 1. 2000. pp. 84-88. DOI: 10.1109/CEC.2000.870279
  4. Shi Y, Eberhart R. A modified particle swarm optimizer. In: Proceedings of the IEEE International Conference on Evolutionary Computation. 1998. pp. 69-73. DOI: 10.1109/ICEC.1998.699146
  5. Yang X-S. Chapter 8 - Particle swarm optimization. In: Nature-Inspired Optimization Algorithms. 2021. pp. 111-121. DOI: 10.1016/B978-0-12-821986-7.00015-9
  6. Samala RK. Analysis of Artificial Intelligence Based Hybridization for Optimal Allocation of Distributed Generations and Economic Power Dispatch in Radial and Mesh Configuration Systems. Iterative International Publishers. ISBN: 978-93-92591-70-9
