
## Abstract

This chapter introduces the particle swarm optimization (PSO) algorithm and gives an overview of it. To formally present the mathematical formulation of the PSO algorithm, the classical version is used, that is, the inertial version, while PSO variants are summarized. Besides that, hybrid methods, which combine heuristic and deterministic optimization methods, are presented as well. Before the presentation of these algorithms, the reader is introduced to the main challenges of working with the PSO algorithm. Two case studies of diverse nature, one using PSO in its classical version and another using a hybrid version, are provided, showing how handy and versatile it is to work with PSO. The first case is the optimization of a mechanical structure in a nuclear fuel bundle, and the second is the optimization of the cost function of a cogeneration system using PSO in a hybrid optimization. Finally, a conclusion is presented.

### Keywords

- PSO algorithm
- hybrid methods
- nuclear fuel
- cogeneration system

## 1. Introduction

Maximizing gains or minimizing losses has always been a concern in engineering. In diverse fields of knowledge, the complexity of optimization problems increases as science and technology develop. Examples of engineering problems that may require an optimization approach arise in energy conversion and distribution, in mechanical design, in logistics, and in the reload of nuclear reactors.

To maximize or minimize a function in order to find the optimum, there are several approaches one could take. Despite the wide range of optimization algorithms available, no single one is considered the best for every case. An optimization method that is suitable for one problem may not be suitable for another; it depends on several features, for example, whether the function is differentiable and whether it is convex or concave. To solve a problem, one must understand different optimization methods in order to select the algorithm that best fits the features of the problem.

The particle swarm optimization (PSO) algorithm, proposed by Kennedy and Eberhart [1], is a metaheuristic algorithm based on the concept of swarm intelligence and capable of solving complex mathematical problems existing in engineering [2]. It is worth noting that PSO has some advantages over other optimization algorithms, since it has fewer parameters to adjust, and the ones that must be set are widely discussed in the literature [3].

## 2. Particle swarm optimization: an overview

In the early 1990s, several studies regarding the social behavior of animal groups were developed. These studies showed that some animals belonging to a certain group, such as birds and fish, are able to share information among the group, and such capability confers on these animals a great survival advantage [4]. Inspired by these works, Kennedy and Eberhart proposed in 1995 the PSO algorithm [1], a metaheuristic algorithm that is appropriate for optimizing nonlinear continuous functions. The authors derived the algorithm from the concept of swarm intelligence, often seen in animal groups, such as flocks and shoals.

To explain how swarm behavior inspired the formulation of an optimization algorithm for solving complex mathematical problems, consider the behavior of a flock. A swarm of birds flying over a place must find a point to land, and defining which point the whole swarm should land on is a complex problem, since it depends on several issues, such as maximizing the availability of food and minimizing the risk of predators. In this context, one can understand the movement of the birds as a choreography: the birds move synchronously for a period until the best place to land is defined and the whole flock lands at once.

In the given example, the movement of the flock only happens as described because all the swarm members are able to share information among themselves; otherwise, each animal would most likely land at a different point and at a different time. The studies on the social behavior of animals from the early 1990s mentioned above pointed out that, while searching for a good point to land, every bird of the swarm knows the best point found so far by any of the swarm's members. By means of that, each member of the swarm balances its individual experience and the swarm's knowledge, known as social knowledge. One may notice that the criterion to assess whether a point is good or not in this case is the set of survival conditions found at a possible landing point, such as those mentioned earlier.

The problem of finding the best point to land, as described, is an optimization problem. The flock must identify the best point, for example, in terms of latitude and longitude, in order to maximize the survival conditions of its members. To do so, each bird flies around, searching and assessing different points according to several survival criteria at the same time, while knowing where the best point found so far by the whole swarm is located.

Kennedy and Eberhart, inspired by the social behavior of birds, which grants them great survival advantages when solving the problem of finding a safe point to land, proposed an algorithm called PSO that mimics this behavior. The inertial version, also known as the classical version, of the algorithm was proposed in 1995 [1]. Since then, other versions have been proposed as variations of the classical formulation, such as the linear-decreasing inertia weight [5], the constriction factor weight [6], and the dynamic inertia and maximum velocity reduction, also in Ref. [6], besides hybrid models [7] and even quantum-inspired optimization techniques that can be applied to PSO [8]. This chapter presents only the inertial model of PSO, since it is the basis of the later formulations: to understand the derivations of PSO, one should first understand its classical version.

The goal of an optimization problem is to determine the variable, represented by the vector $X = (x_1, x_2, x_3, \dots, x_n)^T$, that minimizes or maximizes an objective function $f(X)$.

Considering a swarm with $P$ particles, there is a position vector $X_i^t = (x_{i1}, x_{i2}, \dots, x_{in})^T$ and a velocity vector $V_i^t = (v_{i1}, v_{i2}, \dots, v_{in})^T$ at iteration $t$ for each one of the $i$ particles that compose the swarm. These vectors are updated through the dimension $j$ according to the following equations:

$$v_{ij}^{t+1} = w\,v_{ij}^{t} + c_1 r_1^{t}\left(pbest_{ij} - x_{ij}^{t}\right) + c_2 r_2^{t}\left(gbest_{j} - x_{ij}^{t}\right) \quad (1)$$

and

$$x_{ij}^{t+1} = x_{ij}^{t} + v_{ij}^{t+1} \quad (2)$$

where $i = 1, 2, \dots, P$ and $j = 1, 2, \dots, n$.

Eq. (1) denotes that there are three different contributions to a particle's movement in an iteration, so there are three terms in it, which are further discussed below. Meanwhile, Eq. (2) updates the particle's position. The parameter $w$ is the inertia weight constant, which weighs how much of the particle's previous velocity is carried into the current iteration.

The first term of the velocity update equation is the product of the parameter $w$ and the particle's previous velocity, which is the reason it carries the particle's previous motion into the current one. Hence, for example, if $w = 1$, the particle's motion is fully influenced by its previous motion, whereas values $0 \le w < 1$ reduce this influence and let the other two terms steer the particle.

The individual cognition term, the second term of Eq. (1), is calculated from the difference between the particle's own best position found so far, $pbest_{ij}$, and its current position, $x_{ij}^{t}$. This difference is weighted by the individual acceleration coefficient $c_1$ and by a random number $r_1^{t}$ drawn uniformly from [0, 1], so the term represents the particle's attraction toward its own best experience.

Finally, the third term is the social learning one. Because of it, all particles in the swarm are able to share the information of the best point achieved so far, $gbest_{j}$, regardless of which particle found it. The difference between this swarm-best position and the particle's current position is weighted by the social acceleration coefficient $c_2$ and by a random number $r_2^{t}$ drawn uniformly from [0, 1].

Lastly, Figure 1 shows the PSO algorithm flowchart. One may notice that the optimization logic in it searches for minima and that all position vectors are assessed by the objective function $f(X)$: the individual and swarm best positions are updated whenever a lower function value is found.
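To make the update rules concrete, the short script below implements Eqs. (1) and (2) together with the flowchart logic just described. It is a minimal sketch: the test function, bounds, and parameter values ($w = 0.7$, $c_1 = c_2 = 1.5$, swarm size, iteration count) are illustrative choices, not values taken from the chapter's case studies.

```python
import random

def pso(f, bounds, n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Classical (inertial) PSO minimizing f over box bounds [(lo, hi), ...]."""
    rng = random.Random(seed)
    dim = len(bounds)
    # Initialize positions uniformly inside the bounds, velocities at zero.
    x = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in x]                     # best position of each particle
    pbest_f = [f(p) for p in x]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]      # best position of the swarm

    for _ in range(n_iter):
        for i in range(n_particles):
            for j in range(dim):
                r1, r2 = rng.random(), rng.random()
                # Eq. (1): inertia + individual cognition + social learning
                v[i][j] = (w * v[i][j]
                           + c1 * r1 * (pbest[i][j] - x[i][j])
                           + c2 * r2 * (gbest[j] - x[i][j]))
                # Eq. (2): position update (clamped to the search domain)
                x[i][j] = min(max(x[i][j] + v[i][j], bounds[j][0]), bounds[j][1])
            fx = f(x[i])
            if fx < pbest_f[i]:                   # update the individual best
                pbest[i], pbest_f[i] = x[i][:], fx
                if fx < gbest_f:                  # update the swarm best
                    gbest, gbest_f = x[i][:], fx
    return gbest, gbest_f

# Example: minimize the sphere function, whose global minimum is at the origin.
best_x, best_f = pso(lambda p: sum(t * t for t in p), [(-5, 5)] * 2)
```

With these settings, the swarm typically collapses onto the global minimum of the sphere function at the origin.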

## 3. Hybrid methods: coupling PSO with deterministic methods

In general, optimization methods are divided into deterministic and heuristic. Deterministic methods establish an iterative process, usually involving the gradient of the objective function, which, after a certain number of iterations, converges to a minimum of that function. The iterative procedure of this type of method can be written as follows:

$$X^{k+1} = X^{k} + \alpha^{k} d^{k}$$

where $X^{k}$ is the current point, $d^{k}$ is the descent direction, and $\alpha^{k}$ is the step size at iteration $k$.

Heuristic methods, in contrast to deterministic methods, do not use the objective function's gradient as a descent direction. Their goal is to mimic nature in order to find the minimum or maximum of the objective function by selecting, in an elegant and organized manner, the points where such a function will be evaluated [9].

Hybrid methods combine deterministic and heuristic methods in order to take advantage of both approaches. They typically use a heuristic method to locate the most likely region of the global minimum. Once this region is determined, the algorithm switches to a deterministic method to approach the minimum point faster. The most common approach is to use the heuristic method to generate good candidates for the optimal solution and then use the best point found as the starting point of the deterministic method, which converges to a local minimum.

Numerous papers have been published over the last few years showing the efficiency and effectiveness of hybrid formulations [10, 11, 12]. There are also a growing number of publications over the last decade regarding hybrid formulations for optimization [13].

In this context, the PSO algorithm can be combined with deterministic methods, increasing the chance of finding the function's most likely global optimum. This chapter presents the three deterministic methods to which PSO was coupled: the conjugate gradient method, Newton's method, and a quasi-Newton method (BFGS). The formulation of each one is briefly presented in the following sections.

### 3.1 Conjugate gradient

The conjugate gradient method improves the convergence rate of the steepest descent method by choosing descent directions that are a linear combination of the current gradient direction and the descent directions of previous iterations. In the Fletcher-Reeves form, its equations are:

$$d^{0} = -\nabla f(X^{0})$$

$$X^{k+1} = X^{k} + \alpha^{k} d^{k}$$

$$d^{k+1} = -\nabla f(X^{k+1}) + \beta^{k+1} d^{k}, \qquad \beta^{k+1} = \frac{\lVert \nabla f(X^{k+1}) \rVert^{2}}{\lVert \nabla f(X^{k}) \rVert^{2}}$$

where $\beta^{k+1}$ is the conjugation coefficient and $\alpha^{k}$ is the step size, usually obtained by a line search.
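As a concrete illustration, the snippet below applies the conjugate gradient iteration with the Fletcher-Reeves coefficient to a quadratic function $f(X) = \tfrac{1}{2}X^T A X - b^T X$, for which the exact line-search step has a closed form. The matrix and vector are illustrative choices, not data from the chapter.

```python
def cg_quadratic(A, b, x0, n_iter=None):
    """Conjugate gradient for f(x) = 0.5 x^T A x - b^T x, A symmetric positive
    definite. The minimizer solves A x = b; with an exact line search, CG
    finishes in at most n steps."""
    n = len(b)
    n_iter = n_iter or n
    mat = lambda M, v: [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]
    dot = lambda u, v: sum(a * c for a, c in zip(u, v))

    x = x0[:]
    g = [gi - bi for gi, bi in zip(mat(A, x), b)]   # gradient: A x - b
    d = [-gi for gi in g]                           # first direction: steepest descent
    for _ in range(n_iter):
        Ad = mat(A, d)
        alpha = dot(g, g) / dot(d, Ad)              # exact line-search step
        x = [xi + alpha * di for xi, di in zip(x, d)]
        g_new = [gi + alpha * adi for gi, adi in zip(g, Ad)]
        beta = dot(g_new, g_new) / dot(g, g)        # Fletcher-Reeves coefficient
        d = [-gi + beta * di for gi, di in zip(g_new, d)]
        g = g_new
    return x

# Example: A = [[4, 1], [1, 3]], b = [1, 2]; the minimizer solves A x = b,
# giving x = (1/11, 7/11).
x = cg_quadratic([[4.0, 1.0], [1.0, 3.0]], [1.0, 2.0], [0.0, 0.0])
```

For a quadratic with an exact line search, CG reaches the minimizer in at most $n$ iterations, which the two-dimensional example reproduces.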

### 3.2 Newton’s method

While the steepest descent and conjugate gradient methods use only first-derivative information, Newton's method also uses second-derivative information to accelerate the convergence of the iterative process. The iteration used in this method is:

$$X^{k+1} = X^{k} - \left[ H(X^{k}) \right]^{-1} \nabla f(X^{k})$$

where $H(X^{k})$ is the Hessian matrix of the objective function evaluated at $X^{k}$.
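The iteration above can be sketched in a few lines; here the two-dimensional Newton system is solved with Cramer's rule, and the test function, its derivatives, and the starting point are illustrative choices.

```python
def newton_2d(grad, hess, x0, n_iter=10):
    """Newton's method X_{k+1} = X_k - H(X_k)^{-1} grad(X_k) in two dimensions."""
    x, y = x0
    for _ in range(n_iter):
        gx, gy = grad(x, y)
        (hxx, hxy), (hyx, hyy) = hess(x, y)
        det = hxx * hyy - hxy * hyx
        # Solve H * step = grad by Cramer's rule, then move against the step.
        sx = (gx * hyy - gy * hxy) / det
        sy = (hxx * gy - hyx * gx) / det
        x, y = x - sx, y - sy
    return x, y

# Example: f(x, y) = (x - 1)**4 + (y + 2)**2, minimum at (1, -2).
grad = lambda x, y: (4 * (x - 1) ** 3, 2 * (y + 2))
hess = lambda x, y: ((12 * (x - 1) ** 2, 0.0), (0.0, 2.0))
x, y = newton_2d(grad, hess, (3.0, 3.0), n_iter=40)
```

The quadratic coordinate is solved exactly in one step, while the quartic one converges geometrically, illustrating how Newton's method exploits curvature information.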

### 3.3 Quasi-Newton (BFGS)

BFGS is a quasi-Newton method. It approximates the inverse of the Hessian using the function's gradient information, so the approximation does not involve second derivatives. Thus, this method has a slower convergence rate than Newton's method, although each iteration is computationally cheaper. The iteration can be written as:

$$X^{k+1} = X^{k} - \alpha^{k} B^{k} \nabla f(X^{k})$$

where $B^{k}$ is the current approximation of the inverse Hessian, updated at each iteration by the BFGS formula from the observed changes in $X$ and in $\nabla f$.
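The hybrid strategy of this section can be sketched end to end: a compact PSO locates the most promising region of a multimodal function (a one-dimensional Rastrigin-type function here, an illustrative choice), and then a steepest-descent iteration of the form $X^{k+1} = X^{k} + \alpha^{k} d^{k}$, with an Armijo backtracking line search, refines the best point found. This is a simplified stand-in for the PSO-plus-conjugate-gradient/Newton/BFGS couplings used in the chapter, not a reproduction of them.

```python
import math
import random

def rastrigin(x):
    # 1-D Rastrigin-type function: many local minima, global minimum f(0) = 0.
    return x * x + 10.0 * (1.0 - math.cos(2.0 * math.pi * x))

def rastrigin_grad(x):
    return 2.0 * x + 20.0 * math.pi * math.sin(2.0 * math.pi * x)

def pso_stage(f, lo, hi, n=30, iters=100, w=0.7, c1=1.5, c2=1.5, seed=1):
    """Heuristic stage: a compact 1-D PSO that locates the promising region."""
    rng = random.Random(seed)
    x = [rng.uniform(lo, hi) for _ in range(n)]
    v = [0.0] * n
    pb, pbf = x[:], [f(p) for p in x]
    gi = min(range(n), key=lambda i: pbf[i])
    gb, gbf = pb[gi], pbf[gi]
    for _ in range(iters):
        for i in range(n):
            v[i] = (w * v[i] + c1 * rng.random() * (pb[i] - x[i])
                    + c2 * rng.random() * (gb - x[i]))
            x[i] = min(max(x[i] + v[i], lo), hi)
            fx = f(x[i])
            if fx < pbf[i]:
                pb[i], pbf[i] = x[i], fx
                if fx < gbf:
                    gb, gbf = x[i], fx
    return gb

def descent_stage(f, g, x, iters=100):
    """Deterministic stage: steepest descent with an Armijo backtracking search."""
    for _ in range(iters):
        d = -g(x)
        alpha = 1.0
        while f(x + alpha * d) > f(x) - 1e-4 * alpha * d * d:  # Armijo condition
            alpha *= 0.5
            if alpha < 1e-12:
                return x
        x = x + alpha * d
    return x

x0 = pso_stage(rastrigin, -5.12, 5.12)                 # coarse global search
x_star = descent_stage(rastrigin, rastrigin_grad, x0)  # local refinement
```

The heuristic stage hands over a point near the bottom of a basin, and the deterministic stage drives the gradient toward zero much faster than PSO alone would.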

## 4. Recent applications and challenges

PSO can be applied to many types of problems in the most diverse areas of science. For example, PSO has been used in healthcare to diagnose a type of leukemia through microscopic imaging [14]. In the economic sciences, PSO has been used to test constrained and unconstrained risk investment portfolios in order to achieve optimal risk portfolios [15].

In the engineering field, the applications are highly diverse. Optimization problems involving PSO can be found in the literature aiming to increase the heat transfer of systems [16] or even in algorithms to predict the heat transfer coefficient [17]. In the field of thermodynamics, one can find papers involving the optimization of thermal systems such as a diesel engine–organic Rankine cycle [18], a hybrid diesel-ORC/photovoltaic system [19], and integrated solar combined cycle power plants (ISCCs) [20].

PSO has also been used in geometric optimization problems in order to find the system configurations that best fit the design constraints. In this context, one can mention studies involving the optical-geometric optimization of solar concentrators [21] and the geometric optimization of radiative enclosures that satisfy temperature distribution and heat flow requirements [22].

With the numerous versions of the PSO algorithm mentioned in the first section, PSO is able to deal with a broad range of problems, from problems with a few goals and continuous variables to challenging multipurpose problems with many discrete and/or continuous variables. Despite this potential, the user must be aware that PSO will only achieve good results if the implemented objective function is capable of reflecting all goals at once. Deriving such a function may be a challenging task that requires a good understanding of the physical problem to be solved as well as the ability to abstract ideas into a mathematical equation. The problems presented in Section 5 of this chapter provide examples of objective functions capable of playing this role.

Another challenge when using PSO is how to handle the bounds of the search domain whenever a particle moves beyond them. Many popular strategies proposed for this purpose are reviewed and compared for the classical PSO version in [23]. PSO users should review and understand those strategies so that they can pick the one that best fits the features of their optimization problem.
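As an illustration of such strategies, the snippet below sketches two common per-coordinate treatments, often called absorbing and reflecting bounds; the names and details follow common usage and are not necessarily the exact variants compared in [23].

```python
def clamp(position, velocity, lo, hi):
    """'Absorb': project the coordinate onto the violated bound, zero its velocity."""
    if position < lo:
        return lo, 0.0
    if position > hi:
        return hi, 0.0
    return position, velocity

def reflect(position, velocity, lo, hi):
    """'Reflect': bounce the coordinate off the violated bound, reverse velocity."""
    if position < lo:
        return lo + (lo - position), -velocity
    if position > hi:
        return hi - (position - hi), -velocity
    return position, velocity

# A particle that overshoots the upper bound hi = 1.0 by 0.25:
print(clamp(1.25, 0.5, 0.0, 1.0))    # (1.0, 0.0)
print(reflect(1.25, 0.5, 0.0, 1.0))  # (0.75, -0.5)
```

Note that a single reflection can still leave the domain when the overshoot exceeds the domain width, so practical implementations repeat or combine these rules.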

## 5. Engineering problems

In this section, two engineering problems are described, one involving the fuel element of a nuclear power plant and the other involving a thermal cogeneration system. In the first problem, the traditional PSO formulation is used to find the optimal fuel element spacing. In the second problem, hybrid optimization algorithms are used to find the operating condition that minimizes the total operating cost of a cogeneration system.

### 5.1 Springs and dimples of a nuclear fuel bundle spacer grid

In [24], the authors performed the optimization of the dimple and spring geometries existing in the nuclear fuel bundle (FB) spacer grid (SG). An FB, also known as a fuel assembly, is a structured group of fuel rods (FRs); an FR, in turn, is a long, slender zirconium metal tube containing pellets of fissionable material, which provides fuel for nuclear reactors [25]. An SG is a part of the nuclear fuel bundle. Figure 4 shows a schematic view of a nuclear FB, in which it is possible to see how the FRs and the SGs are assembled together. In addition, Figure 5 gives more details on how an SG's springs and dimples grip an FR, and Figure 6 shows exactly which parts of the SG are the springs and the dimples that may be in contact with an FR. For this work, the PSO algorithm was developed in MATLAB® (MathWorks Inc.), while the mechanical calculations were performed with finite element analysis (FEA), using the ANSYS 15.0 software.

The springs and the dimples act as supports and require special features, since an FR releases a great amount of energy, caused by the nuclear reactions occurring within it. Hence, the material of an FR must face a broad range of temperatures when in operation, with variations of around 300°C. This is an important matter for the springs and the dimples, as they must not impose an excessive gripping force on the rod, allowing it some axial thermal expansion. On the other hand, the upward water flow that removes the great amount of heat released by the fission occurring within the rod creates flow-induced vibration, so the springs and dimples must also limit the lateral displacement of the fuel rods. Besides that, the SG must also support the FRs through its dimples and springs under many loading conditions, such as earthquakes, shipping, and handling. Safely supporting the fuel in a nuclear reactor is an important matter during operation, and consequences such as the release of fission products from a fuel rod or a reactor safety shutdown could follow from a poor design.

Finally, one can understand that, as the springs and the dimples of an FB must have a geometry able to comply with conflicting requirements so that the FRs remain laterally restrained, preventing them from bowing and vibrating [26], using an optimization algorithm can be useful.

Jourdan et al. [13] performed the optimization of the dimples and springs of an FB's SG using the classical version of the PSO algorithm. The authors chose some geometry variables that are important to features such as the gripping stiffness and the stress distribution in the spacer grid, which are the optimization goals in their work. Thus, the position vector is composed of those geometric variables, whose lower and upper bounds are listed in the table below.

| Variable | Lower bound (mm) | Upper bound (mm) |
|---|---|---|
|  | 50 | 70 |
|  | 10 | 15 |
|  | 5 | 30 |
|  | 5 | 10 |
|  | 1 | 5 |
|  | 1 | 5 |

*Variable boundaries for the SG optimization.*

In the PSO simulations from Ref. [24], each position vector, that is, each candidate spring and dimple geometry, was assessed through the finite element model of the spacer grid.

The goals of the optimization performed in [24] are three: first, to minimize the stress intensity (SI) within the structure; second, to create an SG geometry featuring a gripping stiffness value as close as possible to a prescribed target; and third, to comply with the conditions that allow the axial thermal expansion of the FR, as discussed earlier.

A simulation was then run considering a fixed population of particles over a set number of iterations, as defined in [24].

It should be noted that the grades assessed by the fitness function can be on an increasing scale or on a decreasing one, depending on the conception of the PSO algorithm. In [26], the authors chose to perform the search on a decreasing scale, and thus the fitness function, Eq. (15), was designed to be minimized.

The fitness function implemented assesses three different terms through two conditions. The two conditions regard the fact that the SG must allow some axial thermal expansion of the FR. To do so, a penalty parameter is applied whenever a candidate geometry violates one of these conditions, so that such geometries receive poor fitness values and are discarded along the iterations.
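Eq. (15) itself is not reproduced here, but the general idea of folding conditions into a minimized fitness can be sketched with a penalty term. All names and values below (`target_k`, `m_penalty`, the unweighted sum of terms) are hypothetical illustrations, not the actual fitness of [24].

```python
def fitness(stress_intensity, stiffness, expansion_ok, target_k=28.0, m_penalty=1e6):
    """Illustrative minimized fitness: low stress intensity, gripping stiffness
    near a target, and a large penalty when the thermal-expansion condition is
    violated. target_k and m_penalty are hypothetical values, not from [24]."""
    value = stress_intensity + abs(stiffness - target_k)
    if not expansion_ok:
        value += m_penalty          # penalize infeasible geometries
    return value

# A feasible candidate scores far better than an infeasible one:
good = fitness(200.0, 27.5, True)    # 200.5
bad = fitness(150.0, 28.0, False)    # 1000150.0
```

In practice, the terms would also be scaled or weighted so that stress (in MPa) and stiffness deviation (in N/mm) contribute comparably; no weighting is shown here.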


Figure 9 shows the fitness evolution during the optimization of the geometry of an SG's dimples and springs. This simulation resulted in an optimized geometry with an SI of 196 MPa and a gripping stiffness of 27.2 N/mm.

In [31], the authors performed an FEA and a real experiment to measure the SI and the gripping stiffness of the Chashma Nuclear Power Plant Unit 1's (CHASNUPP-1's) SG spring under the same conditions as considered in [24]. The results from [31], regarding a real SG in operation at CHASNUPP-1, which might not have been optimized, are 27.2 N/mm for the gripping stiffness and 816 MPa for the SI; meanwhile, the optimized result found in [24] has the same gripping stiffness but an SI over 75% lower than that of CHASNUPP-1's SG. Thus, comparing the most likely optimum found using the PSO algorithm with the results from a real SG [31], one can conclude that PSO played its role well in designing the component under study.

### 5.2 Cost of a cogeneration system

The second problem involves minimizing the function that represents the total operating cost of a cogeneration system called CGAM. It is named after its creators (C. Frangopoulos, G. Tsatsaronis, A. Valero, and M. von Spakovsky), who decided to use the same system to compare the solutions of the optimization problem obtained with different methodologies [13]. Figure 10 shows the system.

The CGAM system is a cogeneration system consisting of an air compressor (AC), a combustion chamber (CC), a gas turbine (GT), an air preheater (APH), and a heat recovery steam generator (HRSG), which consists of an economizer for preheating water and an evaporator. The purpose of the cycle is the generation of 30 MW of electricity and 14 kg/s of saturated steam at a pressure of 20 bar.

The economic description of the system used in the present work is the same as the one adopted in the original work and considers the annual fuel cost and the annual cost associated with the acquisition and operation of each piece of equipment. More details can be found in [32]. The cost equations for each component are presented below:

Air compressor:

Combustion chamber:

Turbine:

Preheater:

Heat recovery steam generator:

The general expression for the investment-related cost rate ($/s) of each component is given by the following equation:

$$\dot{Z}_k = \frac{Z_k \cdot CRF \cdot \varphi}{N \cdot 3600}$$

where $Z_k$ is the purchase cost of component $k$, $CRF$ is the capital recovery factor (18.2%), $N$ is the number of annual plant-operating hours (8000 h), and $\varphi$ is a maintenance factor (1.06). In addition, the purchase cost expression of each component is given in the table below:

| Component | Purchase cost expression |
|---|---|
| Air compressor | |
| Combustion chamber | |
| Gas turbine | |
| Preheater | |
| HRSG | |

In order to perform the optimization of Eq. (22), the five decision variables adopted in the definition of the original problem are considered: the compression ratio (P_{2}/P_{1}), the isentropic efficiency of the air compressor (η_{CA}), the isentropic efficiency of the gas turbine (η_{GT}), the air temperature at the preheater exit (T_{3}), and the combustion gases temperature at the turbine inlet (T_{4}). The three hybrid formulations tested are summarized below.

| | Heuristic | Deterministic |
|---|---|---|
| Hybrid 1 | Particle swarm | Conjugate gradient |
| Hybrid 2 | Particle swarm | Quasi-Newton |
| Hybrid 3 | Particle swarm | Newton |

To solve the thermodynamic equations of the problem, the professional process simulator IPSEpro® version 6.0 was adopted. IPSEpro® is a process simulator used to model and simulate different thermal systems through their thermodynamic equations. The program was developed by SimTech and has a user-friendly interface, as well as a library with a wide variety of components, allowing the user to model and simulate conventional plants, cogeneration systems, cooling cycles, combined cycles, and more. The optimization routines were written in MATLAB® (MathWorks Inc.), and the algorithm was integrated with IPSEpro® in order to solve the thermodynamic problem and perform the optimization.

To perform the optimization, the limits for the problem variables were established, as indicated in Table 4 [33].

| Limits |
|---|
| 7 ≤ P_{2}/P_{1} |
| 0.7 ≤ η_{CA} |
| 0.7 ≤ η_{GT} |
| 700 K ≤ T_{3} |
| 1100 K ≤ T_{4} |

Table 5 presents the results found for the variables in each method and the value of the objective function. Figures 11–13 present the graphs of the evolution of the cost function in relation to the function call for the performed optimizations.

| | Hybrid 1 | Hybrid 2 | Hybrid 3 |
|---|---|---|---|
| P_{2}/P_{1} | 9.46 | 9.04 | 8.29 |
| η_{CA} | 0.83 | 0.83 | 0.85 |
| T_{3} | 600.43 | 612.53 | 606.47 |
| η_{GT} | 0.88 | 0.88 | 0.88 |
| T_{4} | 1210.95 | 1212.67 | 1214.65 |
| Cost function ($/s) | 0.33948 | 0.33953 | 0.33949 |

In order to evaluate the algorithms' efficiency, a comparison was made between the results obtained in the present work and those obtained in [32, 33]. It is worth mentioning that the thermodynamic formulation used in [32] is slightly different from the one built in the simulator; therefore, some differences in the final value of the objective function were already expected. In [33], the CGAM system was also built in IPSEpro® and the optimization was performed in MATLAB® using the following optimization methods: differential evolution (DE), particle swarm optimization (PSO), simulated annealing (SA), genetic algorithm (GA), and direct pattern search (DPS). A comparison between the results is presented in Figure 14.

It is possible to verify that the hybrid methods used in this work show excellent performance, and the values found are compatible with those of the other references. These results support the use of hybrid formulations to optimize the objective function of the problem.

## 6. Conclusions

In the present work, the basic fundamentals of the PSO method were presented. The advantages and disadvantages of the method were discussed, and interpretations of its algorithm were provided. Hybrid methods, which combine deterministic and heuristic methods in order to extract the advantages of each one, were also discussed.

As discussed earlier, it is not possible to guarantee that the result obtained by an optimization method such as PSO is the global maximum or minimum, so some authors call such results the most likely global optimum. Thus, some strategies can be employed to verify the validity of the optimal results obtained. One strategy is to compare them with the results obtained by other optimization algorithms, as done in the present work. In the absence of available optimization results, due either to computational limitations or to a lack of published results on the subject, it is possible to compare against information from real physical models, that is, models that were not obtained through optimization algorithms but through good engineering practice and judgment gained through technical experience.

In addition, it was possible to apply the PSO algorithm to different engineering problems. The first involves the spacer grid of a nuclear fuel element and the second involves the optimization of the cost function of a cogeneration system. In both problems, satisfactory results were obtained, demonstrating the efficiency of the PSO method.