Open access peer-reviewed chapter

Particle Swarm Optimization Algorithm with a Bio-Inspired Aging Model

By Eduardo Rangel-Carrillo, Esteban A. Hernandez-Vargas, Nancy Arana-Daniel, Carlos Lopez-Franco and Alma Y. Alanis

Submitted: June 7th 2017. Reviewed: October 19th 2017. Published: December 20th 2017.

DOI: 10.5772/intechopen.71791


A Particle Swarm Optimization with a Bio-inspired Aging Model (BAM-PSO) algorithm is proposed to alleviate the premature convergence problem of other PSO algorithms. Each particle within the swarm is subjected to aging based on the age-related changes observed in immune system cells. The proposed algorithm is tested with several popular and well-established benchmark functions, and its performance is compared to other evolutionary algorithms in both low and high dimensional scenarios. Simulation results reveal that, at the cost of computational time, the proposed algorithm has the potential to solve the premature convergence problem that affects PSO-based algorithms, showing good results for both low and high dimensional problems. This work suggests that aging mechanisms have further implications in computational intelligence.


  • particle swarm optimization
  • bio-inspired aging model
  • evolutionary optimization algorithms

1. Introduction

Bio-inspired optimization algorithms are based on precise observation of natural systems [1, 2, 3]. A relevant characteristic of these algorithms is that the underlying biological processes have been tested, validated and proven by means of evolution. The mechanisms of self-adaptation, self-organization and self-learning in nature-inspired optimization approaches provide means to address challenging problems that cannot be solved by traditional methods [4].

Thus, bio-inspired algorithms become particularly important for tackling complex optimization problems [6, 7, 8, 9, 10]. The outstanding performance of bio-inspired optimization algorithms is attributed to their structures, which are closely related to one or more features observed in nature [10]. Accuracy and repeatability are the prime objectives of every optimization algorithm; therefore, faithfully modeling biological mechanisms can lead to more accurate and efficient heuristic algorithms [13, 16].

Problem-solving algorithms inspired by one or more biological features have been developed after observing the behavior of humans, animals, and cells [10, 11, 12, 13]. For instance, Genetic Algorithms (GA) [4] defined the basis for evolutionary computing using the early works of Darwin [14] and Mendel [15]; the Ant Colony System (ACS) [6, 7] builds on the traveling behavior model of self-organized Argentine ants published by Goss [5], solving traveling salesman-type problems efficiently; and the Particle Swarm Optimization (PSO) [8], inspired by the feeding behavior of bird flocks, has become a very popular optimizer nowadays [17].

Particle-based optimizers, like those described in [8] or those presented in [18, 19, 20, 21, 22, 28], are very popular because instead of working with one candidate solution, they maintain a set of individual candidate solutions (particles), which are explored, exploited and improved. A relevant evolution-related mechanism that could play a central role in optimization algorithms is aging [23, 25]. Aging is a natural characteristic whose inclusion in a particle-based optimizer could provide a means of individual control over each particle without greatly increasing the complexity of the algorithm [26].

To the authors’ best knowledge, previous PSO algorithms did not include a mechanism to control the existence of individual particles within the swarm by evaluating each particle’s performance. In PSO, because of the very nature of the algorithm, an effect called premature convergence appears when most (or all) of the particles within a swarm compromise their ability to explore and stay close to a local solution. The Particle Swarm Optimization with an Aging Leader and Challengers (ALC-PSO) [24] was the first approach to include an aging process to alleviate this unwanted effect. However, that mechanism was only leadership-oriented and not swarm-wide; even more important, its aging dynamics were linear and bounded to static predefined values. In [29], ALC-PSO is used to design a high-speed symmetric switching CMOS inverter.

Our PSO variant, the Particle Swarm Optimization with a Bio-inspired Aging Model (BAM-PSO), is based on a mathematical model that describes the telomere shortening observed in immune system cells. This model introduces a form of aging effect over all the particles of the swarm, providing a means to control the existence of each particle within the swarm and thus avoiding the premature convergence effect. Therefore, the PSO variant with an aging model has the potential to outperform current optimization algorithms and further implications in computational intelligence. Optimization under uncertainty and over complex functions is an area of special scientific interest, since many real problems and applications include some form of uncertainty, and collective-intelligence algorithms are known to perform well in this type of scenario [11]. BAM-PSO has already been implemented in several optimization applications: in time-series forecasting [30] it served as the training algorithm for an artificial neural network, and in [31] it was used within a Geometric Algebra framework to compute the rigid movement in images and improve the accuracy of Structure from Motion (SfM) algorithms, a family of computer vision algorithms that extract structure when movement is detected, or movement when structures are detected, in 2D images. In this work, BAM-PSO is tested with several popular and well-established benchmark functions, and its performance is compared to well-known evolutionary algorithms in both low and high dimensional scenarios.


2. Aging mechanisms to alleviate premature convergence

Aging is the process of becoming older, which consists of the accumulation of changes over time. This process affects all living systems: humans, cells, unicellular organisms, fruit flies and mammals such as rodents [12, 32, 42]. Since the particles of the PSO algorithm can be treated as a living system, aging could represent a relevant mechanism to alleviate the premature convergence problem in heuristic algorithms. Nowadays, we have a better understanding of the lifespan of human cells, which is determined by homeostatic properties of the immune system. Homeostasis refers to the regulation of the lymphocyte pool in an organism; it is assumed that the number of cells is determined by the capacity of the peripheral immune system.

In the immune system, it is observed that the cell death rate accelerates if the immune cells exceed the allocated free space [33]. For instance, in the course of a viral infection, immune system cells can undergo approximately 15–20 divisions. The total proliferative capacity of a human T lymphocyte is about 40–45 divisions and depends on the telomere length [41]. Telomeres are the end parts of the chromosomes, which become shorter in every cell division, as illustrated in Figure 1. The cell reaches its unresponsive state when the telomere length falls to about one half of its initial value.

Figure 1.

Telomere division process in human T cells.

Telomere dynamics can be described by a mathematical model based on experimental observations. In this work, we consider the mathematical model proposed by [33] to represent the telomere dynamics. This model considers the following equation:


where T represents the remaining telomere divisions per cell, α defines the telomere consumption rate per iteration, p represents the length of telomere repeats in naive cells produced at age t (with initial length p0 = 8.3 × 10^3), and N is the number of cells, as defined in [33].

Eq. (1) is a non-ascending differential equation that defines the derivative of the telomere divisions per cell as a function of the consumption rate of the cell α, the telomere capacity of the cell (p) and the number of cells (N). Eq. (1) describes the dynamics of the average telomere length T in the pool of naive cells. The rate of this process depends on p and T, where p is the telomere length in the cells and p0 defines the telomere length at the initial age. These dynamics describe the self-sustaining regulation of the total concentration of T cells and how the telomere is affected by the T cell concentration over the iterations [33].

The scheme in Eq. (1) can provide PSO with the same self-regulation capabilities during the swarm’s concentration around a local minimum (known as premature convergence) if we treat the swarm as lymphocytes and the particle concentration around a local minimum as the T cell concentration. Senescence of a particle is then translated into a new randomly generated particle that replaces the unresponsive one. To achieve this, a given particle has a limited number of iterations to exist within the swarm, similar to the telomere length p0, and senescence of the particle occurs when its capacity for search-space exploitation approaches 0 and the swarm’s concentration around a local minimum has exceeded a given limit (a premature convergence indicator), similar to T cells in the human immune system [35].

Based on this aging mechanism, it is possible to include senescence to a given particle within the swarm, and the lifespan of each particle will be adjusted according to the error produced by its candidate solution and the premature convergence indicator of the swarm.
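As a toy illustration of the senescence step described above (the bounds and names here are hypothetical, not from the chapter), replacing an exhausted particle amounts to drawing a fresh random position inside the search-space bounds:

```python
import random

def replace_senescent(bounds):
    """Generate a fresh random particle inside the search-space bounds,
    taking the place of an unresponsive (senescent) one."""
    return [random.uniform(lo, hi) for lo, hi in bounds]

# hypothetical 2-D search space
bounds = [(-5.0, 5.0), (-5.0, 5.0)]
new_particle = replace_senescent(bounds)
```

The replacement deliberately carries no memory of the old particle, so the swarm regains exploration capability exactly where exploitation had stalled.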


3. Particle swarm optimization with a bio-inspired aging model (BAM-PSO)

The ALC-PSO variant [24] suggests an interesting approach using aging factors to define when to remove non-useful characteristics of the algorithm. However, this variant proposes a simple aging model with the following characteristics:

  • The lifespan exists only for the leader and is controlled by linear means according to the lifespan controller shown in [24].

  • There is no lifespan controller for the rest of the particles of the swarm, meaning that the particles continue offering candidate solutions even if premature convergence has occurred.

  • There is no communication between particles within the swarm about premature convergence and no evaluation is performed about particle concentration around local minima.

Consequently, there are several points that can be improved in ALC-PSO. For instance, including aging mechanisms for the rest of the particles within the swarm may help exploration without affecting convergence. Moreover, to alleviate premature convergence in PSO, a means of measuring premature convergence in real time is needed, allowing the swarm to discard non-useful particles and explore new candidate solutions without losing the convergence inertia toward the global minimum.

Based on previous observations, we propose a variant of the PSO named Particle Swarm Optimization with a Bio-inspired Aging Model (BAM-PSO). Our proposed algorithm considers the aging leader and challengers in the same fashion as ALC-PSO, but it applies senescence to each particle within the swarm by using the mathematical model that describes aging dynamics in Eq. (1).

For BAM-PSO to implement senescence efficiently, it is necessary to implement a mechanism that allows the algorithm to interpret when the swarm has reached a local minimum; this can be achieved by means of premature convergence measurement.

In the aging model represented by Eq. (1), the number of cells can be interpreted as a measure of the number of particles around the same one-dimensional location. In this sense, the standard deviation among the swarm in each particle dimension can be computed as follows:


where DR is the dimension of the problem; kmin represents the minimum deviation over all dimensions; kj is the premature convergence measure around the j-th element of dimension D. Note that xij is a particle within the swarm and x̄j represents the mean value of all j-th elements of the swarm. This scheme provides a premature convergence measurement mechanism around each element of the problem dimension.
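A minimal sketch of this measurement follows; since Eq. (2) is given as an image in the original, the exact normalization is an assumption here (the population standard deviation is used):

```python
import math

def convergence_measure(swarm):
    """Per-dimension spread of particle positions.
    swarm: list of particles, each a list of D coordinates.
    Returns (k, k_min): k[j] is the population standard deviation of the
    swarm along dimension j; k_min is the smallest spread over all
    dimensions (a small k_min signals clustering around one point)."""
    S, D = len(swarm), len(swarm[0])
    k = []
    for j in range(D):
        mean_j = sum(p[j] for p in swarm) / S
        k.append(math.sqrt(sum((p[j] - mean_j) ** 2 for p in swarm) / S))
    return k, min(k)

# tightly clustered swarm -> small k_min (premature-convergence indicator)
swarm = [[1.0, 5.0], [1.1, 5.2], [0.9, 4.8]]
k, k_min = convergence_measure(swarm)
```

In use, k_min would be compared against a threshold to decide whether the swarm has collapsed around a local minimum.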

3.1. Lifespan controller

The BAM-PSO algorithm considers Eq. (2) to evaluate the swarm’s efficiency; to control the lifespan of each particle, the aging mechanism proposed by [33] is adapted to the algorithm, satisfying the following criteria:


where Lij is the lifespan of the i-th particle with the j-th element of dimension D, and Lmax represents the maximum lifespan of any particle within the swarm.

The error improvement of the particle with respect to iteration t is calculated by:


The error ei(t) of the i-th particle is computed within the swarm at iteration t.

This scheme completes the bio-inspired, population-broad aging mechanism and will allow us to propose the final algorithm.
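Since the explicit forms of Eqs. (3) and (4) appear as images in the original, the following is only a hypothetical sketch of how a per-particle lifespan controller driven by error improvement and the convergence indicator could look; the threshold eps and the ±1 step sizes are assumptions:

```python
def update_lifespan(lifespan, e_prev, e_curr, k_min, eps, L_max):
    """Hypothetical per-particle lifespan rule: a particle gains lifespan
    (up to L_max) while it keeps improving, and loses lifespan when it
    stops improving while the swarm has clustered (k_min below eps)."""
    if e_prev - e_curr > 0:          # error improvement (cf. Eq. (4))
        return min(lifespan + 1, L_max)
    if k_min < eps:                  # premature-convergence indicator
        return lifespan - 1
    return lifespan
```

An improving particle is rewarded, while a stalled particle in a clustered swarm is pushed toward senescence; a stalled particle in a still-dispersed swarm is left alone.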

The steps involved in the BAM-PSO algorithm are as follows:

Step 1: Initialization. The initial positions of all particles are generated randomly within the n-dimensional search space, with velocities initialized to 0. The best particle among the swarm is selected as the leader. The age of the leader and of all particles within the swarm is initialized to 0.

Step 2: Velocity and position updating. Every particle follows the velocity and position update rules presented in [8]:




where i is the i-th particle of the swarm S and j is the j-th element of the problem dimension D. Also, t represents the iteration counter; R1 and R2 are random, normalized and uniformly distributed values; c1 and c2 represent the social and cognitive parameters; xij(t) is the position of particle ij at iteration t; xij(t+1) is the position of particle ij at iteration t+1; vij(t) is the velocity of particle ij at iteration t; pij(t) represents the local best position of particle ij at iteration t; and pgj(t) represents the global best position of the entire swarm at iteration t.
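The update rules above can be sketched as follows; the inertia weight w and the coefficients c1, c2 are illustrative values, not the chapter’s settings:

```python
import random

def pso_step(x, v, p_best, g_best, w=0.7, c1=1.5, c2=1.5):
    """One velocity/position update per Eqs. (5)-(6) for a single
    particle: x, v, p_best (local best) and g_best (global best) are
    lists of D coordinates."""
    x_new, v_new = [], []
    for j in range(len(x)):
        r1, r2 = random.random(), random.random()
        vj = (w * v[j]
              + c1 * r1 * (p_best[j] - x[j])    # pull toward local best
              + c2 * r2 * (g_best[j] - x[j]))   # pull toward global best
        v_new.append(vj)
        x_new.append(x[j] + vj)
    return x_new, v_new

# a particle at rest at the origin is pulled toward the bests at (1, 1)
x2, v2 = pso_step([0.0, 0.0], [0.0, 0.0], p_best=[1.0, 1.0], g_best=[1.0, 1.0])
```

With zero initial velocity and both bests at the same point, each velocity component lands in [0, c1 + c2] and the new position equals the new velocity.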

Step 3: Evaluate the leader or generate new challengers for leadership, according to the lifespan controller defined in [24]:


with leadership term θ = 1, 2, …, Θ,

where f represents the objective function value for the best candidate solution; θ is the remaining leadership term; Θ represents the maximum leadership term; δgBest defines the improvement factor of the entire swarm; δlBest represents the improvement factor of an individual particle; leader denotes the particle within the swarm that is the acting leader (not necessarily pgj(t)) and which all particles follow according to Eqs. (5) and (6); finally, δLeader represents the leader’s individual improvement factor.

Eqs. (7), (8) and (9) indicate the leading performance of the leader. The lifespan controller uses these performance evaluations to adjust the leadership term of the leader according to the following decision tree:

if δgBest < 0: θ = θ + 2 (up to Θ), else:

if δlBest < 0: θ = θ + 1 (up to Θ), else:

if δLeader < 0: θ = θ (no increase), else:


When the leadership term of the leader reaches θ = 0, the leader is considered exhausted and is replaced by newly generated challengers, as described in [24].
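The decision tree above can be sketched as a small controller in the spirit of ALC-PSO [24]; note that the final branch (no improvement at all) is not visible in the extracted text, so shortening the term by 1 is an assumption here:

```python
def adjust_leadership_term(theta, d_gbest, d_lbest, d_leader, theta_max):
    """Leadership-term controller per the decision tree; negative deltas
    mean the corresponding error decreased (an improvement)."""
    if d_gbest < 0:                    # leader improved the global best
        return min(theta + 2, theta_max)
    if d_lbest < 0:                    # leader improved a personal best
        return min(theta + 1, theta_max)
    if d_leader < 0:                   # leader improved only itself
        return theta                   # no increase
    return theta - 1                   # assumed: leadership is exhausted
```

When the returned term hits 0, the leader is replaced by a challenger.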

Step 4: Adjust the lifespan of all particles within the swarm according to Eqs. (2)–(4) and replace every particle whose lifespan is depleted with a randomly generated one.

Step 5: Terminal condition check. If the number of iterations is larger than the predefined maximum, or the error has reached the minimum expected value, the algorithm terminates. Otherwise, go to Step 2 for a new iteration.
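The five steps can be tied together in a greatly simplified sketch: plain PSO updates plus a swarm-wide aging rule. The leader/challenger machinery of [24] and the exact aging equations are omitted, and every parameter value here is illustrative, not the chapter’s:

```python
import random

def bam_pso_sketch(f, bounds, swarm_size=20, iters=300, eps=1e-3, L_max=30):
    """Simplified BAM-PSO-style loop: Steps 1-5 with swarm-wide aging."""
    D = len(bounds)
    rand_pos = lambda: [random.uniform(lo, hi) for lo, hi in bounds]
    X = [rand_pos() for _ in range(swarm_size)]            # Step 1
    V = [[0.0] * D for _ in range(swarm_size)]
    P = [x[:] for x in X]                                  # personal bests
    Pf = [f(x) for x in X]
    g_i = min(range(swarm_size), key=lambda i: Pf[i])
    g, gf = P[g_i][:], Pf[g_i]                             # global best
    life = [L_max] * swarm_size
    prev_err = Pf[:]
    for _ in range(iters):
        # premature-convergence cue: smallest per-dimension spread (cf. Eq. (2))
        means = [sum(x[j] for x in X) / swarm_size for j in range(D)]
        spread = min(
            (sum((x[j] - means[j]) ** 2 for x in X) / swarm_size) ** 0.5
            for j in range(D))
        for i in range(swarm_size):                        # Step 2
            for j in range(D):
                r1, r2 = random.random(), random.random()
                V[i][j] = (0.7 * V[i][j] + 1.5 * r1 * (P[i][j] - X[i][j])
                           + 1.5 * r2 * (g[j] - X[i][j]))
                X[i][j] += V[i][j]
            e = f(X[i])
            if e < Pf[i]:
                P[i], Pf[i] = X[i][:], e
            if e < gf:
                g, gf = X[i][:], e
            if e >= prev_err[i] and spread < eps:          # Step 4: aging
                life[i] -= 1
            prev_err[i] = e
            if life[i] <= 0:                               # senescence
                X[i], V[i], life[i] = rand_pos(), [0.0] * D, L_max
        if gf < 1e-12:                                     # Step 5
            break
    return g, gf

random.seed(1)
sphere = lambda x: sum(v * v for v in x)
best, best_f = bam_pso_sketch(sphere, [(-5.0, 5.0)] * 2)
```

Since the global best is never discarded, senescence only redistributes the exploring particles; convergence inertia toward the best-known solution is preserved, as the text requires.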

Figure 2 shows the flow chart for BAM-PSO algorithm.

Figure 2.

BAM-PSO flow diagram.


4. Results

The proposed BAM-PSO algorithm is compared with five different biologically inspired algorithms: PSO with inertia vector and boundaries [27], ant colony system (ACS) [6], differential evolution (DE) [31], simplified swarm optimization (SSO) [20] and particle swarm optimization with aging leader and challengers (ALC-PSO) [24]. These algorithms were selected for several reasons: PSO is the base algorithm for BAM-PSO, so it is natural to compare performance with the original optimizer; SSO and ALC-PSO are well-known PSO variants that claim, in some way, to alleviate the premature convergence problem, and ALC-PSO in particular is closely related to BAM-PSO. Finally, while ACS and DE are not closely related to BAM-PSO, they are swarm-based and evolution-based optimization algorithms, respectively, and were thus considered good candidates for performance comparison.

To test the optimization performance of these algorithms, well-established benchmark functions were selected in low and high dimensionality [33]. These functions help evaluate an algorithm’s performance over a broad class of problems because they possess multiple local minima, complex non-linear structure, or bowl-shaped/plate-shaped structure [36, 37]; some even have steep ridges and drops. From the literature, a list of 18 functions was considered relevant enough to test BAM-PSO’s performance. The selected benchmark functions are shown in Table 1.

For comparison purposes, all algorithms were configured in a similar vein where possible, e.g. the ACS algorithm [6] uses ant-type vectors, which can be considered particles in a swarm like those found in PSO [8, 21], ALC-PSO [24] and the proposed BAM-PSO algorithm. Nevertheless, the behavior and settings are very different, since the behavior of ant-type vectors is determined by a mathematical model that simulates the pheromone attraction between biological ants. The DE algorithm [34] does not have a swarm-based mathematical model for the dynamics of particles; instead, the mathematical model used to simulate evolution is based on vectors and mutation factors. Finally, the SSO algorithm [20] does not use linear equations to update the particles’ information; instead, a probability function decides the next particle position based on previously defined settings.

4.1. Evaluating the algorithms in low dimensional settings

The swarm size S for every algorithm is set to 20, the dimension D of every function is set to 2, and the total number of iterations is set to 10,000 for each objective function. Table 2 shows the performance of the selected algorithms in low dimension; the results are the best solution offered by each algorithm after the terminal condition was reached. Both BAM-PSO and ALC-PSO show improved performance compared to the other algorithms, meaning that BAM-PSO provides good results in low dimensional problems for all the benchmark functions, outperforming most of the other tested algorithms. Note that results marked in bold are the best solution obtained for each case.

f7 — Sum of squares: Σ_{i=1}^{n} i·xi²
f8 — Sum of exponential squares: Σ_{i=1}^{n} (Σ_{j=1}^{n} xj^i − bi)²
f10 — Shifted Rastrigin: A·n + Σ_{i=1}^{n} [xi² − A·cos(2π·xi)]
f11 — Schwefel 1.2: Σ_{i=1}^{n} (Σ_{j=1}^{i} xj)²
f15 — Perm D, 0, β function: Σ_{i=1}^{n} (Σ_{j=1}^{n} (j^i + β)·((xj/j)^i − 1))²

Table 1.

Benchmark functions used in algorithm performance comparison for BAM-PSO.
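To make the table concrete, here are runnable versions of three of the listed benchmarks in their standard textbook forms; the chapter’s exact shifts and parameters may differ:

```python
import math

def sum_of_squares(x):
    """f7: sum over i of i * x_i^2 (1-based index weighting)."""
    return sum((i + 1) * xi * xi for i, xi in enumerate(x))

def rastrigin(x, A=10.0):
    """Rastrigin family (f10 uses a shifted variant):
    A*n + sum(x_i^2 - A*cos(2*pi*x_i))."""
    return A * len(x) + sum(xi * xi - A * math.cos(2 * math.pi * xi)
                            for xi in x)

def schwefel_1_2(x):
    """f11, Schwefel 1.2: sum over i of the squared partial sums of x."""
    total, partial = 0.0, 0.0
    for xi in x:
        partial += xi
        total += partial * partial
    return total

# all three are minimized (value 0) at the origin
values = [sum_of_squares([0.0, 0.0]), rastrigin([0.0, 0.0]),
          schwefel_1_2([0.0, 0.0])]
```

Rastrigin’s many cosine-induced local minima make it a classic probe for premature convergence, which is why such functions dominate benchmark suites like this one.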

f1 | 0.000000E+00 | 0.000000E+00 | 7.800000E-03 | 4.250000E-02 | 4.880000E-09 | 0.000000E+00
f2 | 8.880000E-16 | 8.880000E-16 | 2.120000E+00 | 4.300000E-02 | 4.160000E-02 | 8.880000E-16
f3 | 0.000000E+00 | 0.000000E+00 | 2.800000E-01 | 5.460000E-06 | 5.460000E-08 | 3.550000E-43
f4 | 0.000000E+00 | 0.000000E+00 | 2.610000E+00 | 4.300000E-03 | 7.300000E-01 | 0.000000E+00

Table 2.

Optimization results comparison for D = 2.

4.2. Evaluating the algorithms in high dimensional settings

Our second simulation scenario consists of evaluating the performance of BAM-PSO on high dimensional problems. In this case, all 18 benchmark functions from Table 1 were considered and the function dimension D was set to 30.

Based on the previous results, ALC-PSO, SSO, and PSO algorithm were selected to compare results with the BAM-PSO because of their shared origin. However, ACS was also included due to its swarm nature.

At first glance, the results shown in Table 3 suggest that BAM-PSO provides the best performance of all compared algorithms on high dimensional problems for several benchmark functions. Note that results marked in bold are the best solution obtained for each case.

f1 | 3.420000E-01 | 4.810000E-02 | 1.664995E-02 | 1.017972E+00 | 1.009513E+00
f2 | 1.710000E+00 | 2.950000E+00 | 6.179332E-01 | 1.936833E+01 | 1.575243E+01
f3 | 9.720000E-04 | 1.060000E+00 | 1.054778E-05 | 3.628362E+02 | 2.766939E+02
f4 | 3.830000E+00 | 1.420000E+03 | 2.003442E+02 | 1.421872E+06 | 4.577661E+05
f5 | 2.701825E-07 | 1.943493E-01 | 1.314056E+01 | 6.189616E+02 | 5.320685E+05
f6 | 1.000012E+00 | 1.347625E+04 | 5.670018E+03 | 2.350724E+06 | 3.796814E+06
f7 | 1.838826E-25 | 1.846162E-01 | 8.309639E-01 | 5.267949E+03 | 7.131057E+03
f9 | 1.591141E+00 | 4.989256E-03 | 1.084663E+03 | 6.528900E+02 | 4.572974E+02
f10 | 4.265668E-03 | 2.575325E-01 | 6.748519E+01 | 2.362575E+01 | 6.768979E+01
f11 | 1.000007E+00 | 0.000000E+00 | 8.449218E+120 | 3.61590E+115 | 1.11140E+121
f12 | 1.00008E+119 | 9.432255E+119 | 2.858556E+125 | 1.96354E+124 | 3.06278E+125
f13 | 1.000053E+00 | 9.837181E+00 | 5.033260E+01 | 3.015314E+03 | 3.832691E+03
f14 | 1.005478E+00 | 2.327770E+03 | 9.314781E+03 | 8.149385E+03 | 1.062061E+04
f15 | 2.039775E-23 | 0.000000E+00 | 8.520128E-06 | 4.815499E-04 | 9.508960E+02
f16 | 9.177340E+00 | 1.276359E+01 | 1.683612E+02 | 1.086427E+02 | 4.458898E+01
f17 | 1.000155E+00 | 6.796335E+00 | 1.001832E+00 | 1.026017E+02 | 1.676198E+02
f18 | 9.112266E-01 | 2.056725E+02 | 4.960351E+00 | 7.179904E+05 | 8.280520E+05

Table 3.

Optimization results comparison for D = 30.

4.3. Non-parametrical statistical analysis of BAM-PSO performance results

In order to conclude whether or not BAM-PSO outperforms the other selected algorithms, more accurate means of comparison than simple observation of benchmark results are required; for this reason, some of the most popular non-parametric statistical tests were employed. This type of analysis is widely accepted as a metric for performance comparison between algorithms in a pair-wise configuration [43]. To this end, using the statistical procedures defined in [38, 39, 40], the sign test and the Wilcoxon test were selected.

Table 4 shows that BAM-PSO outperforms the other algorithms with an accepted level of significance under this procedure. However, the sign test is a simple first-line procedure; to gather more evidence about the results, we rely on a more robust and sensitive procedure, the Wilcoxon test.
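The sign test itself is easy to reproduce. Below is a sketch of the two-sided exact version applied to the win/loss counts of Table 4, under the usual convention that ties are discarded:

```python
from math import comb

def sign_test_p(wins, losses):
    """Two-sided exact sign test: probability, under the null hypothesis
    that each pairwise comparison is a fair coin flip, of a win/loss
    split at least as extreme as the one observed."""
    n = wins + losses
    k = max(wins, losses)
    tail = sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n
    return min(1.0, 2 * tail)

# e.g. 14 wins vs 4 losses over 18 benchmark functions (cf. Table 4)
p = sign_test_p(14, 4)   # significant at the 0.05 level
```

A 14-4 split already clears P < 0.05, and the 18-0 splits are significant at far stricter levels, consistent with the “Yes” row of Table 4.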

BAM-PSO | vs ALC-PSO | vs PSO | vs SSO | vs ACS
Positive results | 14 | 15 | 18 | 18
Negative results | 4 | 3 | 0 | 0
Significant difference? (P < 0.05) | Yes | Yes | Yes | Yes

Table 4.

Non-parametrical sign test for benchmark results at D = 30.

The Wilcoxon test results in Table 5 show that the BAM-PSO algorithm surpasses the results of PSO, SSO and ACS with strong statistical significance (P < 0.001). The procedure does not find enough evidence to conclude that BAM-PSO outperforms ALC-PSO at that level; however, the results do show that BAM-PSO outperforms ALC-PSO with good statistical significance (P < 0.01). Considering the usual criterion in the literature that if the resulting P-value is small enough (P < 0.05) the median of the differences between the paired observations is statistically significantly different from 0 [44], we conclude that BAM-PSO performs better than all the other selected algorithms, including ALC-PSO, over a broad set of benchmark functions with statistical relevance.

BAM-PSO | (R+) positive ranks obtained | (R−) negative ranks obtained | (Rm) maximum negative ranks | Accepted significance? (P < 0.05)
vs ALC-PSO | 147.0 | 24.0 | 27 for (P < 0.01) | Yes
vs PSO | 156.0 | 15.0 | 17 for (P < 0.001) | Yes
vs SSO | 171.0 | 0.0 | 17 for (P < 0.001) | Yes
vs ACS | 171.0 | 0.0 | 17 for (P < 0.001) | Yes

Table 5.

Non-parametrical Wilcoxon test for benchmark results at D = 30.

The performance of BAM-PSO can be explained by its senescence mechanism: after particles fall into a local minimum, they offer less improvement; the senescence mechanism then acts on the swarm, replacing exhausted particles with random ones throughout the search space. This favors exploration after premature convergence without completely eliminating exploitation of the search space near the local minimum, which in the end provides better optimization results than other PSO variants.


5. Conclusions

In this chapter, we introduced a PSO variant called Particle Swarm Optimization with a Bio-inspired Aging Model (BAM-PSO), which was compared with five other popular bio-inspired optimizers. The test was performed using popular benchmark functions in low and high dimensional configurations.

We observed that the BAM-PSO algorithm has the potential to solve the premature convergence problem of PSO showing good results for both low and high dimensional problems with statistical relevance according to several non-parametric analyses. Furthermore, according to results shown in Section 4, BAM-PSO performs better than the selected PSO variants.

As shown in the results section, BAM-PSO outperforms all the other compared swarm-based algorithms with a confidence level of at least P < 0.01. However, the cost of this improved accuracy is computational complexity, due to the introduction of Eq. (2) and the lifespan control for all particles, which in turn translates into computing time: in the conducted experiments of Section 4, the increase was approximately at least 9 times the computation time required by the original PSO and 1.5 times that required by ALC-PSO. This increase is not fixed, however, as it depends on how early premature convergence occurs and how many particles are replaced after senescence.

Finally, these experimental results provide support for the important role of aging mechanisms in the selection process of bio-inspired optimization algorithms, since the population-wide aging mechanism implemented in BAM-PSO allows the algorithm to provide better results than other popular optimizers that do not implement aging.



The authors thank the support of CONACYT Mexico, through Projects CB256769 and CB258068 (Project supported by Fondo Sectorial de Investigación para la Educación).

© 2017 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Eduardo Rangel-Carrillo, Esteban A. Hernandez-Vargas, Nancy Arana-Daniel, Carlos Lopez-Franco and Alma Y. Alanis (December 20th 2017). Particle Swarm Optimization Algorithm with a Bio-Inspired Aging Model. In: Pakize Erdoğmuş (Ed.), Particle Swarm Optimization with Applications. IntechOpen. DOI: 10.5772/intechopen.71791.
