Open access peer-reviewed chapter

Tuning Artificial Neural Network Controller Using Particle Swarm Optimization Technique for Nonlinear System

Written By

Sabrine Slama, Ayachi Errachdi and Mohamed Benrejeb

Submitted: December 20th, 2020 Reviewed: February 4th, 2021 Published: July 14th, 2021

DOI: 10.5772/intechopen.96424



This chapter proposes an optimization technique for an Artificial Neural Network (ANN) controller of single-input single-output time-varying discrete nonlinear systems. A bio-inspired technique, Particle Swarm Optimization (PSO), is applied to the ANN training in order to avoid becoming trapped in local extrema. A PSO-based neural network controller is then integrated with the designed system to control a nonlinear system. Simulation results for an example nonlinear system demonstrate the effectiveness of the proposed PSO approach in terms of reduced oscillations compared with the classical neural network optimization method. MATLAB was used as the simulation tool.


  • neural networks
  • particle swarm optimization
  • indirect control
  • nonlinear system

1. Introduction

We are interested, in this chapter, in the adaptive control of a class of single-input single-output (SISO) nonlinear systems using neural networks. This system control is a very general approach to adaptive control, since one can, in principle, combine any parameter estimation scheme with any control strategy. Its architecture is based on two neural network blocks corresponding to the system controller and to the model identifying the dynamic behavior of the system [1, 2].

The use of artificial neural network (ANN) for identification, diagnosis, modeling and control has generated a lot of interest for quite some time now, because they have proved to be excellent function approximators, mapping any function to an arbitrary degree of accuracy, coupled with their ability for generalization, self-organization and self-learning [3].

Many neural network architectures are used. Among them, the most common and popular is the multilayered perceptron, implemented with the standard backpropagation algorithm. If the initial set of weights is not selected properly, this algorithm, which employs a gradient descent search, is seriously prone to getting trapped in local optima. Moreover, the computation may proceed slowly and may even overflow or oscillate between optima. These limitations encouraged researchers to look for more powerful optimization techniques that can reach the optimal solution more reliably, guarantee the convergence of the control system and increase the learning speed [3].

Many optimization techniques for neural networks are in wide use. Among them is particle swarm optimization (PSO), the subject of this chapter, studied in papers such as [3], which originates from behavioral simulations of fish schooling and bird flocking. The conceptual model was, at some point in the evolution of the algorithm, an optimizer; subsequently, a number of parameters extraneous to optimization were eliminated, leading to the basic PSO algorithm [3].

This technique is used in many neural network applications such as identification, control and modeling. For instance, in [4], the authors used PSO-based neural network optimization for the prediction of diameter errors in a boring machine. In their work, they established an improvement in the quality of the optimized neural networks and of the error compensation with the PSO algorithm, achieving better machining precision in fewer iterations.

The PSO algorithm is proposed here to obtain the optimal parameters of the ANN. This algorithm is widely used because it converges reliably and does not require many iterations, making it relatively quick. PSO is a population-based approach that uses the swarm intelligence generated by cooperation and competition between the particles in a swarm. It has been applied successfully to a wide variety of search and optimization problems.

For example, in [5], the authors compared the performance of the PSO technique with other evolutionary algorithms for both continuous and discrete optimization problems in terms of processing time, convergence speed and quality of the results. In addition, in [6], the authors proposed a PSO learning algorithm that self-generates radial basis function networks (RBFN) to deal with three nonlinear problems; the proposed PSO achieves high accuracy within a short training time while determining an RBFN with a small number of radial basis functions. Then, in [7], a PSO algorithm was developed to find the optimum process parameters that satisfy given limits on tool wear and surface roughness while maximizing productivity. Also, in [8], the authors described an evolutionary algorithm, based on the PSO technique, for evolving ANNs: both the architecture and the weights were adaptively adjusted according to the quality of the network until the best architecture or a terminating criterion was reached. Moreover, in [9] the performance of the basic PSO algorithm was compared with constriction PSO on test functions of different dimensions, and the use of constriction PSO with mutation was found to provide significant improvement in certain cases.

Further, in [10], an improved PSO algorithm for neural network training was presented, employing a population diversity method to avoid premature convergence. Furthermore, in [11], the authors used the PSO technique to optimize grinding process parameters such as wheel and workpiece speed, depth and lead of dressing, etc., subject to suitable constraints, with the objective of minimizing production cost and obtaining the finest possible surface finish. Comparing the PSO results with genetic algorithms and quadratic programming techniques, the PSO algorithm gives the global optimum solution for minimum manufacturing cost [12]. Equally, in [13], the authors applied an ANN-PSO approach to select optimum process parameters for minimizing burr size in drilling. Besides that, the PSO algorithm was applied to a multi-objective optimization problem in tile manufacturing [14] and to machinery fault detection [15]. Finally, in [16], the authors used PSO to tune radial basis function networks for modeling of the MIG welding process.

PSO is also widely used in control systems. For instance, in [17], the parameters of a PID controller are tuned by an ANN whose weights are optimized using the PSO method to avoid local minima/maxima in the search procedure. In [18], the authors proposed a design of decentralized load-frequency controllers for interconnected power systems with ac-dc parallel links using a PSO algorithm; the experimental results illustrated that their method has a rapid dynamic response. In [19], the PSO algorithm is implemented to optimize the five parameters of a PIλDδ controller via El-Khazali's approach, minimizing several error functions while satisfying step response specifications such as time-domain and frequency-domain constraints on overshoot, rise time and settling time. In [20], a comparative analysis is carried out in which two PSO algorithms, PSO with linearly decreasing inertia weight (LDW-PSO) and PSO with the constriction factor approach (CFA-PSO), are independently tested for different PID structures.

In this chapter, the performance of the PSO-optimized neural network is compared with that of standard back-propagation for the adaptive indirect control of a nonlinear time-varying discrete system.

The present chapter is organized as follows. After this introduction, Section 2 presents the problem statement. Section 3 reviews neural network optimization methods. In Section 4, tuning of the neural network controller using the classical approach is presented, while Section 5 details tuning of the neural network controller using the PSO approach. An example of a nonlinear system is studied in Section 6 to illustrate the efficiency of the proposed method. Section 7 concludes the chapter.


2. Problem statement

The indirect adaptive control used in this chapter is composed of two blocks: a neural network model and a control system. The proposed control system is a neural network controller. In the simulation, it is assumed that the neural network controller's parameters depend on the model's parameters, as shown in Figure 1.

Figure 1.

The architecture of indirect neural control.

In this architecture of indirect control, r_k is the desired value, u_k is the control law from the controller, y_k is the output of the nonlinear system, yr_k is the output of the neural network model, e_k is the identification error, êc_k is the estimated tracking error, ec_k is the tracking error and k is the discrete time.

The aim of this chapter is to find a control law u_k for the nonlinear system, given by Eq. (1), based on tuning the neural network controller's parameters, so that the system output y_k tracks, where possible, the desired value r_k.

y_{k+1} = f(y_k, …, y_{k−n_y}, u_k, …, u_{k−n_u})   (1)

where f is the nonlinear function mapping specified by the model, and n_y and n_u are the numbers of past output and input samples, respectively, required for prediction.

In this structure, the neural network controller and the neural network model must be updated at the same time. However, it is difficult to have a neural identifier learn the system fast enough to adapt to time-varying parameters, so the neural controller can become ineffective under variations of the system parameters. To solve this problem, this chapter proposes a fast learning algorithm based on a particle swarm optimization approach.


3. Neural network optimization methods

3.1 Gradient back-propagation algorithm

An Artificial Neural Network (ANN) with randomly initialized weights usually shows poor results. The most interesting characteristic of a neural network is therefore its ability to learn, in other words, to adjust the weights of its connections according to the training data so that, after the training phase, the ability to generalize is obtained. This formulation turns the problem of learning into a problem of optimization.

In general, optimizing a system’s parameters for a given task requires defining a metric that captures the inadequacy of the system for that task. This measure is called the cost function. Using an optimization algorithm, it is about finding the optimal parameters of the neural model that minimizes the cost.

For this kind of problem, there are two important classes of cost-minimization search algorithms. Classical or gradient algorithms, in which the central concept is the direction of descent, are based on the derivatives of the cost function and of any constraints, and take advantage of the specific information provided by the derivatives of different orders of these functions.

The alternative to these approaches is the use of meta-heuristics or heuristics such as genetic, stochastic, or evolutionary algorithms. Despite the notable advantage of not assuming regularity and their ability to locate the global minimum, these algorithms are strongly penalized by the relatively low convergence speeds and long computation times.

In the case of algorithms using the information provided by the derivatives of the functions defining the problem, each iteration comprises two main phases: the search for a direction of descent d_k and the determination of a descent step η_k, giving the update

w_{k+1} = w_k + η_k d_k
The difference between these algorithms is manifested in the way these two steps are performed.

There are also three classes for such algorithms according to the strategy used to calculate the direction of descent:

  1. gradient algorithms, in which

d_k = −∇E(w_k)
  2. algorithms based on Newton's method, in which the direction of descent is the solution of the system

H(w_k) d_k = −∇E(w_k)
  3. algorithms of the quasi-Newton type, in which an approximation H_k of the Hessian matrix evaluated at the iterates is built, the direction being then, as for Newton's method, the solution of the linear system

H_k d_k = −∇E(w_k)
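As a concrete illustration of how these descent directions differ, the sketch below (an illustrative example, not part of the chapter) compares a fixed-step gradient iteration with a single Newton step on a small quadratic cost whose gradient and Hessian are known exactly:

```python
import numpy as np

# Quadratic cost E(w) = 0.5 * w^T A w - b^T w, with known gradient and Hessian A.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])

def grad(w):
    return A @ w - b              # gradient of E at w

w_star = np.linalg.solve(A, b)    # exact minimizer, for reference

# 1. Gradient direction d_k = -grad(w_k), fixed step eta = 0.2.
w = np.zeros(2)
for _ in range(200):
    w = w + 0.2 * (-grad(w))

# 2. Newton direction: solve H d_k = -grad(w_k); one step suffices on a quadratic.
wn = np.zeros(2)
wn = wn + np.linalg.solve(A, -grad(wn))

print(np.allclose(w, w_star, atol=1e-3), np.allclose(wn, w_star))
```

The gradient iteration needs many steps; the Newton step lands on the minimizer immediately because the cost is exactly quadratic.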
The gradient back-propagation algorithm is the most widely used for weight adaptation; its goal is to find the combination of connection weights that minimizes the error function E defined by:

E = (1/2) Σ_k (y_k − yr_k)²
y_k and yr_k being, respectively, the desired output and the actual output of neuron k for a given input vector.

This procedure is based on an extension of the Delta rule which involves a gradient descent and which consists in propagating an observation of the input of the neural network through the neural layer, to obtain the output values.

Compared with the desired outputs, the resulting errors allow the weights of the output neurons to be adjusted. Without a hidden layer, knowledge of these errors allows a direct calculation of the gradient and makes the adjustment of the weights of these single neurons easy, as shown by the Delta rule. For a network with hidden layers, however, the desired outputs of the hidden neurons are unknown, so the errors of these neurons cannot be known directly, and this process cannot be used as-is for the weight adjustment of hidden neurons. The intuition that solves this difficulty, and that gave rise to back-propagation, is the following: the activity of a neuron is linked to the neurons of the preceding layer. Thus, the error of an output neuron is due to the hidden neurons of the previous layer in proportion to their influence, that is, according to their activation and the weights that connect the hidden neurons to the output neuron. We therefore seek to obtain the contributions of the L hidden neurons that produced the error of the output neuron k.

The back-propagation procedure consists in propagating the error gradient (error produced during the propagation of an input vector) in the network. In this phase, the propagation of an output neuron’s error starts from the output layer to the hidden neurons.

It is therefore sufficient to retrace the original activation path backwards, starting from the errors of the output neurons, to obtain the error of all the neurons in the network. Once the corresponding error for each neuron is known, the weight adaptation relationships can be obtained.
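The back-propagation of the output error to the hidden layer described above can be sketched as follows; this is a generic one-hidden-layer sigmoid network trained on a single input–target pair (an illustrative toy, not the chapter's specific model):

```python
import numpy as np

def s(a):                          # sigmoid activation
    return 1.0 / (1.0 + np.exp(-a))

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 2))        # input -> hidden weights (3 hidden neurons)
W1 = rng.normal(size=(1, 3))       # hidden -> output weights
x = np.array([0.5, -0.2])          # input vector
r = np.array([0.7])                # desired output
eta = 0.5                          # learning rate

for _ in range(2000):
    h = s(W @ x)                   # forward pass: hidden activations
    y = s(W1 @ h)                  # network output
    e = y - r                      # output error
    # Back-propagate: output delta first, then hidden deltas obtained
    # through the weights W1, in proportion to each neuron's influence.
    delta_out = e * y * (1 - y)
    delta_hid = (W1.T @ delta_out) * h * (1 - h)
    W1 -= eta * np.outer(delta_out, h)
    W -= eta * np.outer(delta_hid, x)

print(abs(y[0] - 0.7) < 1e-3)
```

The key line is `delta_hid`: each hidden neuron receives a share of the output error weighted by its connection to the output neuron, exactly the proportional attribution described above.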

3.2 Second order optimization method

Another class of methods, more sophisticated than the previous one, is based on second-order algorithms derived from Newton's method, which adapts the weights according to the following relation:

w_{k+1} = w_k − H⁻¹ ∇E(w_k)
where the element H_ij of the Hessian matrix H is a second partial derivative of the cost function with respect to the weights. The elements of this matrix are defined by

H_ij = ∂²E / (∂w_i ∂w_j)
Like gradient-only methods, second-order methods determine the gradient by the back-propagation algorithm and generally approximate the Hessian matrix or its inverse, since the cost of its computation may quickly become prohibitive.

This type of method localizes, in a single iteration, the minimum of a quadratic empirical error criterion and requires several iterations when this criterion is not ideally quadratic.

In practice, the convergence of the corresponding algorithm towards an optimal solution is rapid so that a good number of error hyper-surfaces present a quadratic curvature in the immediate vicinity of the minima. This method nevertheless remains subject, when the error hyper-surfaces are complex, to convergence towards non-minimal solution points for which the gradient of the empirical error criterion is canceled out at the inflection points or at the saddle points. In addition, there are the possibilities of divergence of the algorithm when the Hessian matrix is not positive definite.

The evaluation and memorization of the inverse Hessian matrix, on which the second-order methods are based, is, however, a major handicap in the context of learning large networks.

However, the main drawback lies in the calculation of the second derivatives of E, which is most often expensive and very difficult to carry out. A number of algorithms propose to get around this difficulty by using approximations of the Hessian matrix.

This approximation, at the basis of the Gauss-Newton and Levenberg–Marquardt algorithms, is widely used in the identification of rheological parameters. The method is especially suited to problems of small dimension, since the computation of the Hessian matrix is then easy. If the problem presents a large number of variables, it is generally advised to couple it with the conjugate gradient method or a quasi-Newton method, or to switch automatically to the conjugate gradient method when the relative improvement in the objective function becomes too small.

The Levenberg–Marquardt method, another second-order method very close to the Newton method described previously, offers an interesting alternative by adjusting the weights as follows:

w_{k+1} = w_k − (H + μI)⁻¹ ∇E(w_k)
where μ is the Levenberg–Marquardt parameter and I is the identity matrix.

This method, making a compromise between the gradient direction and Newton's method, has the particularity of adapting to the shape of the error surface. Indeed, for low values of μ, the Levenberg–Marquardt method approaches Newton's method, and for large values of μ, the algorithm simply follows the gradient, the parameter μ being automatically updated based on the convergence of each iteration.

Stabilization is possible thanks to a reiterative process: if an iteration diverges, it can be restarted with an increased parameter μ until a convergent iteration is obtained. However, the phenomenon of strong divergence when approaching the optimum, inherent in Newton's method, is not suppressed here; at most, the divergence can be reduced.

Despite the interesting properties of this method, calculating the inverse of H + μI makes its use tricky for large neural networks. As with Newton's method, it is therefore advisable to switch automatically to the conjugate gradient method when this divergence phenomenon appears. Second-order methods greatly reduce the number of iterations but increase the computation time per iteration.
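A minimal sketch of the Levenberg–Marquardt adjustment, using the common Gauss-Newton approximation H ≈ JᵀJ of the Hessian together with the μ-update strategy described above; the residual model being fitted is an illustrative assumption:

```python
import numpy as np

# Fit y = w0 * exp(w1 * t) to synthetic data (true parameters 2.0 and 0.5).
t_data = np.array([0.0, 1.0, 2.0, 3.0])
y_data = 2.0 * np.exp(0.5 * t_data)

def residuals(w):
    return w[0] * np.exp(w[1] * t_data) - y_data

def jacobian(w):
    return np.column_stack([np.exp(w[1] * t_data),
                            w[0] * t_data * np.exp(w[1] * t_data)])

w = np.array([1.0, 0.0])
mu = 1.0
for _ in range(100):
    r, J = residuals(w), jacobian(w)
    H = J.T @ J                                       # Gauss-Newton Hessian
    step = np.linalg.solve(H + mu * np.eye(2), -J.T @ r)
    if np.sum(residuals(w + step) ** 2) < np.sum(r ** 2):
        w, mu = w + step, mu * 0.5    # accept: behave more like Newton
    else:
        mu *= 2.0                     # reject: fall back toward the gradient

print(np.allclose(w, [2.0, 0.5], atol=1e-3))
```

Halving μ on success and doubling it on failure is exactly the compromise described above: small μ approaches Newton's method, large μ approaches a gradient step.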

3.3 Heuristic optimization methods

The advantage of heuristic optimization methods is the minimization of non-differentiable cost functions, even for a large number of parameters (1 < n < 10⁵).

Among the effective methods is the Particle Swarm Optimization algorithm, introduced by Kennedy and Eberhart and improved by Clerc, an agent-based optimization technique essentially inspired by the social behavior of flocks of birds and schools of fish.

In addition, genetic algorithms and evolutionary algorithms constitute stochastic optimization techniques inspired by Darwin's theory of evolution, now widely used in numerical optimization when the functions to be optimized are complex, irregular or poorly known, as well as in combinatorial optimization.

These heuristic methods differ from the methods presented previously (Levenberg–Marquardt, Newton, conjugate gradient, …) in three main aspects:

  1. they do not require the gradient calculation,

  2. they study a population as a whole while deterministic methods treat an individual who will evolve towards the optimum,

  3. they involve random operations.

Experience has also shown that if the components as well as the evolution parameters are carefully tuned, it is possible to obtain extremely efficient and fast algorithms. However, this adjustment step can be very delicate and constitutes a drawback of the implementation of these methods.


4. Tuning neural network controller using classical approach

The architecture shown in Figure 1 contains two neural blocks. The weights of the neural model are adjusted using the identification error e_k, whereas the weights of the neural controller are trained using the tracking error ec_k.

The multi-layer perceptron is used for both the neural model and the neural controller. Each block consists of three layers. The sigmoid activation function s is used for all neurons.

Concerning the neural network model, the jth output of the hidden layer is described as follows

h_j = s(Σ_{i=1}^{n_1} w_ji x_i),  j = 1, …, n_2
where n_1 is the number of nodes of the input layer, w_ji is a hidden weight, x_i is a component of the input vector of the neural model, x = [u_k, u_{k−1}, u_{k−2}]^T, u_k is the control input to the system and n_2 is the number of nodes of the hidden layer given in expression (3).

The output of the neural network model is given by the following equation

yr_k = λ Σ_{j=1}^{n_2} w_1j h_j
where w_1j is the weight from the hidden layer to the output layer and λ is a scaling coefficient. In compact form, the output can be written

yr_k = λ W_1^T s(W x)

where W = [w_ji] is the hidden-weight matrix and W_1 = [w_1j] is the output-weight vector.
The incremental change of the hidden weights Δw_ji, i = 1, …, n_1 and j = 1, …, n_2, is

ΔW = η e_k λ S′(Wx) W_1 x^T
where η is the learning rate, 0 ≤ η ≤ 1, S′(Wx) = diag(s′(h_j)), j = 1, …, n_2, and s′(h_j) is the derivative of s(h_j), defined as follows:

s′(h_j) = s(h_j)(1 − s(h_j))
e_k is the identification error, which is given by

e_k = y_k − yr_k
and the cost function is given by the following equation

E = (1/2) Σ_{k=1}^{N} e_k²
where N is the number of observations.

The incremental change of the hidden weights Δw_ji is used in the following update equation

w_ji(k+1) = w_ji(k) + Δw_ji
The output weights, in turn, are updated by the following equation

w_1j(k+1) = w_1j(k) + Δw_1j
where Δw_1j is

Δw_1j = η e_k λ h_j
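The model update equations above can be sketched as a single on-line identification step; the network shapes, learning rate, input and target below are illustrative assumptions, not the chapter's values:

```python
import numpy as np

# One on-line identification step, assuming the forms described above:
# hidden output h_j = s(sum_i w_ji * x_i), model output yr = lam * sum_j w1_j * h_j,
# identification error e = y - yr, and gradient-descent weight increments.
def s(a):
    return 1.0 / (1.0 + np.exp(-a))

def identify_step(W, w1, x, y, eta=0.1, lam=1.0):
    h = s(W @ x)                      # hidden-layer outputs
    yr = lam * (w1 @ h)               # model output yr_k
    e = y - yr                        # identification error e_k
    w1 = w1 + eta * e * lam * h       # output-weight update (Delta w1_j)
    W = W + eta * e * lam * np.outer(w1 * h * (1 - h), x)  # hidden-weight update
    return W, w1, yr

rng = np.random.default_rng(1)
W, w1 = rng.normal(size=(8, 3)), rng.normal(size=8)
# Toy check: drive the model toward a constant target from a fixed input.
for _ in range(500):
    W, w1, yr = identify_step(W, w1, np.array([0.4, 0.1, -0.3]), 1.5)
print(abs(yr - 1.5) < 1e-2)
```

Repeated steps shrink the identification error geometrically, which is the behavior the classical tuning scheme relies on.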
Concerning the neural network controller, the jth output of the hidden layer is

hc_j = s(Σ_{i=1}^{n_3} v_ji x_1i),  j = 1, …, n_4
where n_3 is the number of nodes of the input layer, v_ji is a hidden weight and x_1i is a component of the input vector of the neural network controller, x_1 = [r_k, r_{k−1}, r_{k−2}]^T, r_k being the desired value.

The output of the neural network controller is given by the following equation

u_k = λ_c Σ_{j=1}^{n_4} v_1j hc_j
where n_4 is the number of nodes of the hidden layer, λ_c is a scaling coefficient and v_1j is an output weight.

In compact form, the output of the neural network controller is

u_k = λ_c V_1^T s(V x_1)

where V = [v_ji] is the hidden-weight matrix and V_1 = [v_1j] is the output-weight vector.
The hidden synaptic weights are updated by

v_ji(k+1) = v_ji(k) + Δv_ji
where Δv_ji is given by

Δv_ji = −η_c ∂E_c/∂v_ji
where η_c is the learning rate, 0 ≤ η_c ≤ 1, and the cost function is defined as follows

E_c = (1/2) Σ_{k=1}^{N} ec_k²
where N is the number of observations and ec_k is the tracking error, which is given by the following equation

ec_k = r_k − y_k
where r_k is the desired output. Δv_ji then becomes

ΔV = η_c ec_k (∂y_k/∂u_k) λ_c S′(V x_1) V_1 x_1^T
with S′(V x_1) = diag(s′(hc_j)), j = 1, …, n_4.

The output synaptic weights of the neural network controller are updated as

v_1j(k+1) = v_1j(k) + Δv_1j
where Δv_1j is given by

Δv_1j = −η_c ∂E_c/∂v_1j = η_c ec_k ∂y_k/∂v_1j
and Eq. (31) becomes

Δv_1j = η_c ec_k (∂y_k/∂u_k) λ_c hc_j
However, in Eq. (33) the derivative of the system output y_k with respect to u_k is not directly available; for this reason we use yr_k instead of y_k, under the condition that the neural model matches the system behavior, which gives

Δv_1j = η_c ec_k (∂yr_k/∂u_k) λ_c hc_j
where, approximately,

∂yr_k/∂u_k = λ Σ_{j=1}^{n_2} w_1j s′(h_j) w_j1
so the obtained incremental change Δv_1j is rewritten as

Δv_1j = η_c ec_k λ (Σ_{j′=1}^{n_2} w_1j′ s′(h_j′) w_j′1) λ_c hc_j
In this section, we used a fixed learning rate η_k (respectively η_ck) and the derivative of the sigmoid function s′(h_j) = s(h_j)(1 − s(h_j)).

This approach has two drawbacks. First, finding a suitable fixed learning rate η_k (respectively η_ck) requires several trials, which hampers on-line operation. Second, with this derivative of the sigmoid function, a large error cannot be propagated to the weights of the output layer, and the learning speed becomes very slow. In order to increase the learning speed, new approaches are proposed in the next section.


5. Tuning neural network controller using particle swarm optimization

An alternative technique is proposed in this section to optimize the neural network controller by implementing the Particle Swarm Optimization algorithm. This algorithm mimics the behavior of animals searching for food and avoiding danger, coordinating with each other to find the best position to settle. PSO is directed by the movement of the best individual in the population, known as the social component, and by each particle's own experience, known as the cognitive component. The algorithm moves the set of solutions to find the best solution among them.

5.1 Mathematical formulation

In this study, the Particle Swarm Optimization Feedforward Neural Network (PSO NN) is applied to a multi-layered perceptron where the position of each particle, in a swarm, represents the set of synaptic weights of the neural network for the current iteration. The dimensionality of each particle is the number of synaptic weights.

Let us consider a search space of dimension D. A particle i of the swarm is modeled by a position vector

x_i = (x_i1, x_i2, …, x_iD)
and a velocity vector denoted

v_i = (v_i1, v_i2, …, v_iD)
There is no concept of backpropagation in PSO NN: the feedforward neural network produces the learning error, the objective function of each particle, from the set of synaptic weights and biases, i.e., the particle positions. Each particle moves in the weight space trying to minimize the learning error and keeps in memory the best position through which it has passed, denoted

pbest_i = (pbest_i1, pbest_i2, …, pbest_iD)
whereas the best position reached by the swarm is denoted

gbest = (gbest_1, gbest_2, …, gbest_D)
Changing the position means updating the synaptic weights of the neural network controller to generate the proper control law by reducing tracking error.

In each iteration k, the particles update their positions by calculating the new velocity and moving to the new position. At iteration k+1, the velocity vector is calculated as follows:

v_ij(k+1) = w v_ij(k) + c_1 r_1 (pbest_ij(k) − x_ij(k)) + c_2 r_2 (gbest_ij(k) − x_ij(k))   (41)
where:

w is a variable parameter controlling the change of the particle at the next iteration,

w v_ij is the inertia (physical change) component,

c_1 r_1 (pbest_ij(k) − x_ij(k)) is the cognitive change component,

c_2 r_2 (gbest_ij(k) − x_ij(k)) is the social change component,

c_1 and c_2 are, respectively, the cognitive and social confidence coefficients, representing the degree of attraction towards the best position of a particle and that of its informants,

r_1 and r_2 are two random numbers drawn uniformly in the interval [0, 1], providing the particles' exploration of the search space.

The smallest learning error of each particle, Pbest_i, and the smallest learning error found in the whole learning process, Gbest_i, are used to adjust the positions towards the best solution, i.e., the targeted tracking error.

The position of particle i at iteration k+1 is then defined as follows:

x_ij(k+1) = x_ij(k) + v_ij(k+1)   (42)
Once the change in positions has taken place, an update affects both the Pbest_i and Gbest_i vectors. At iteration k+1, these two vectors are updated according to the following two formulations:

pbest_i(k+1) = x_i(k+1) if f(x_i(k+1)) < f(pbest_i(k)), otherwise pbest_i(k)   (43)

gbest(k+1) = arg min_{pbest_i(k+1)} f(pbest_i(k+1))   (44)
The algorithm runs until one of the following three stopping criteria, or all of them at the same time, is verified:

  • the defined maximum number of iterations has been reached,

  • the variation in particle speed is close to zero,

  • the value of the objective is satisfactory, with respect to the following relation:

|E(gbest(k)) − E(gbest(k−N))| ≤ ε
The parameter ε represents a chosen tolerance, most often of the order of 10⁻⁵, and N is a number of iterations, chosen of the order of 10.
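The velocity and position updates, the best-position bookkeeping and a tolerance-based stop described above can be sketched as follows, with a simple sphere cost standing in for the learning error; the swarm size, inertia w and coefficients c_1, c_2 are illustrative choices, not the chapter's values:

```python
import numpy as np

def cost(x):                      # sphere cost standing in for the learning error
    return np.sum(x ** 2)

rng = np.random.default_rng(0)
n_particles, dim = 20, 5
x = rng.uniform(-5, 5, (n_particles, dim))   # particle positions (weight sets)
v = np.zeros((n_particles, dim))             # particle velocities
pbest = x.copy()                             # best position of each particle
pbest_val = np.array([cost(p) for p in x])
gbest = pbest[np.argmin(pbest_val)].copy()   # best position of the swarm

w, c1, c2 = 0.7, 1.5, 1.5
for k in range(200):
    r1 = rng.uniform(size=(n_particles, dim))
    r2 = rng.uniform(size=(n_particles, dim))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)  # Eq. (41)
    x = x + v                                                  # Eq. (42)
    for i in range(n_particles):
        f = cost(x[i])
        if f < pbest_val[i]:                 # Eq. (43): personal best update
            pbest[i], pbest_val[i] = x[i].copy(), f
    gbest = pbest[np.argmin(pbest_val)].copy()   # Eq. (44): global best
    if cost(gbest) < 1e-8:                   # tolerance-based stop criterion
        break

print(cost(gbest))
```

Note that no gradient is ever computed: the swarm needs only cost evaluations, which is the first of the three aspects distinguishing heuristic methods listed earlier.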

5.2 The proposed algorithm of particle swarm optimization

In this section, a summary of the proposed algorithm of the PSO neural network controller is presented.

  • Random initialization of the positions and velocities of the N particles in the swarm,

  • For k = 1..N Do,

  • Repeat,

  • For all particles i Do,

  • Calculation of the control law u_i(k) from the controller input vector x_c(k),

  • Calculation of the outputs of the system y_i(k+1),

  • Evaluation of the positions of the particles in the search space,

  • If the current position of particle i produces the best objective function in its history Then,

  • Pbest_i ← ec_i,

  • If the objective function of particle i is the best overall objective function Then,

  • Gbest_i ← ec_i,

  • End If,

  • End If,

  • End For,

  • Moving of particles according to Eqs. (41) and (42),

  • Evaluation of particle positions,

  • Update Pbest_i and Gbest_i according to Eqs. (43) and (44),

  • Until reaching the stop criterion,

  • End of PSO.

In this section, we have proposed the PSO optimization steps for an indirect neural network adaptive controller. The corresponding algorithm will be applied to discrete SISO nonlinear systems.
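The steps above can be sketched end-to-end: PSO searches the controller's weight space so that the closed loop tracks the desired value. The plant, the controller structure and the PSO coefficients below are toy assumptions for illustration, not the chapter's system:

```python
import numpy as np

def s(a):                              # tanh activation for the toy controller
    return np.tanh(a)

def controller(weights, xc):
    W = weights[:6].reshape(2, 3)      # input -> hidden weights
    w1 = weights[6:]                   # hidden -> output weights
    return float(w1 @ s(W @ xc))

def tracking_cost(weights, r=1.0, steps=30):
    y = u = 0.0
    J = 0.0
    for _ in range(steps):
        u = controller(weights, np.array([r, y, u]))  # control law u_k
        y = 0.6 * y + 0.4 * u                         # toy plant (assumption)
        J += (r - y) ** 2                             # accumulated tracking error
    return J / steps

rng = np.random.default_rng(0)
n, dim = 15, 8                         # swarm size, number of controller weights
x = rng.normal(size=(n, dim))
v = np.zeros((n, dim))
pbest, pv = x.copy(), np.array([tracking_cost(p) for p in x])
g = pbest[np.argmin(pv)].copy()
for _ in range(100):
    r1, r2 = rng.uniform(size=(n, dim)), rng.uniform(size=(n, dim))
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
    x = x + v
    for i in range(n):
        f = tracking_cost(x[i])
        if f < pv[i]:                  # keep each particle's best weight set
            pbest[i], pv[i] = x[i].copy(), f
    g = pbest[np.argmin(pv)].copy()    # best controller weights so far

print(tracking_cost(g))
```

Each particle is one candidate set of controller weights, and its objective is the closed-loop tracking error, mirroring the "calculate control law, calculate system output, evaluate position" loop in the algorithm above.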


6. Results and discussion

In this section, a time-varying nonlinear discrete system is used, described by the input–output model in the following equation [21].


where y_k and u_k are, respectively, the output and the input of the time-varying nonlinear system at instant k; a_0(k), a_1(k) and a_2(k) are given by


The trajectories of a_1(k) and a_2(k) are given in Figure 2.

Figure 2.

The trajectories of the time-varying parameters a_1(k) and a_2(k).
Figure 3.

The NN control system output and the desired values.

In this section, in order to examine the effectiveness of the proposed neural network controller and of the PSO neural network controller, different performance criteria are used. The mean squared tracking error (MSE_ec) and the mean absolute tracking error (MAE_ec) are, respectively, given by

MSE_ec = (1/N) Σ_{k=1}^{N} (r_k − y_k)²

MAE_ec = (1/N) Σ_{k=1}^{N} |r_k − y_k|
where y_k is the time-varying system output, r_k is the desired value and the number of observations used, N, is 100.
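The two criteria above can be computed directly; the reference and output sequences below are illustrative data, not the chapter's simulation results:

```python
import numpy as np

def mse(r, y):
    # mean squared tracking error over N observations
    return np.mean((r - y) ** 2)

def mae(r, y):
    # mean absolute tracking error over N observations
    return np.mean(np.abs(r - y))

r = np.ones(100)                                  # desired value r_k
y = np.ones(100) + 0.1 * np.sin(np.arange(100))   # example system output y_k
print(mse(r, y), mae(r, y))
```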

In this simulation, the desired value r_k is given as follows


6.1 Simulation system using classical NN controller

In this section, we examine the effectiveness of the classical neural network controller in the adaptive indirect control system. In the offline phase, a reduced number of observations, M = 3, is used to find the initialization of the neural network parameters (w_1j, w_ji, v_1j, v_ji).

In the online phase, at instant k+1, we use the input vector of the neural network controller x_1 = [xr_k, xr_{k−1}, xr_{k−2}, xr_{k−3}, xr_{k−4}]^T. The results of the simulation are given in Figures 3–5.

In this case, both the neural network model and the neural network controller consist of a single input, one hidden layer with 8 nodes, and a single output node. The scaling coefficients used are λ = λ_c = 1 and ε_1 = ε_2 = 10⁻².

A multilayered perceptron architecture with three layers is used: one input layer, one hidden layer and one output layer. The results of the simulation are given in the following figures.

6.2 Simulation system using PSO NN controller

The PSO parameter values are, respectively, the number of variables (m = 50), the population size (pop = 10), the maximum inertia weight (0.9), the minimum inertia weight (0.4), the first acceleration factor (c1 = 2) and the second acceleration factor (c2 = 2).

A multilayered perceptron architecture with three layers is used: one input layer, one hidden layer and one output layer. The results of the simulation are given in the following figures.

Figure 6 presents the control system output and the desired values. In this case, the neural network parameters of the controller are optimized by the PSO technique. A concordance between the desired values and the control system output is noticed, despite the time-varying parameters.

Figure 4.

The control law.

Figures 7 and 8 present, respectively, the control law and the control error. These figures reveal that the PSO NN controller has smaller errors than the classical NN controller.

Figure 5.

The control error.

Table 1 presents the influence of the PSO technique in the control error.

              NN controller    PSO NN controller
time (s)      323.829          100.926

Table 1.

The influence of the PSO technique on the control error.

From Table 1 we observe that, compared with the neural network controller, the PSO neural network controller achieves the smallest performance criteria for the control error ec_k. These results are shown in Figures 6–8.

Figure 6.

The PSO NN control system output and the desired values.

Figure 7.

The control law.

Figure 8.

The control error.

6.3 Effect of disturbances

Additive noise v_k is injected into the output of the time-varying nonlinear system in order to test the effectiveness of the proposed optimization technique for the neural network controller. To measure the correspondence between the system output and the desired value, a signal-to-noise ratio (SNR) is taken from the following equation:

SNR = Σ_k (v_k − v̄)² / Σ_k (y_k − ȳ)²

where v_k is a measurement noise bounded symmetrically by δ, v_k ∈ [−δ, δ], and ȳ and v̄ are the output average value and the noise average value, respectively. In this chapter, the SNR is taken as 5%.
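Assuming the SNR is the variance ratio of the injected noise to the output (the form of the relation above is an assumption), a noise sequence can be scaled to a prescribed ratio such as 5% as follows; the output signal used here is an illustrative stand-in:

```python
import numpy as np

def snr(v, y):
    # variance ratio of noise v_k to output y_k (assumed SNR definition)
    return np.sum((v - v.mean()) ** 2) / np.sum((y - y.mean()) ** 2)

def scale_noise(v, y, target=0.05):
    # rescale v so that snr(v, y) equals the target ratio exactly
    return v * np.sqrt(target / snr(v, y))

rng = np.random.default_rng(0)
y = np.sin(0.1 * np.arange(100))          # stand-in system output
v = scale_noise(rng.uniform(-1, 1, 100), y)
print(snr(v, y))
```

Because the SNR is quadratic in the noise amplitude, multiplying v by sqrt(target/SNR) reaches the target ratio in one step.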

Using the desired value r(k), the sensitivity of the proposed neural network controller is examined in Table 2.

            NN controller    NN PSO controller
time (s)    44.456972        337.728385

Table 2.

The influence of the PSO optimization on the control error.

From this table, we observe that, using PSO to optimize the neural network controller parameters, we obtain the smallest performance criteria on the control error.

According to the obtained simulation results, the lowest MSE, MAE and maximum of the control error e_c are obtained using the combination of the neural network controller and the PSO technique, despite the disturbance added to the system output and the time-varying parameters.
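The three performance criteria compared throughout this section can be computed from a control-error sequence as below; the sample error values are made up for illustration.

```python
def criteria(e):
    """MSE, MAE and maximum absolute value of a control-error sequence e_c(k)."""
    n = len(e)
    mse = sum(ek * ek for ek in e) / n        # mean squared error
    mae = sum(abs(ek) for ek in e) / n        # mean absolute error
    mx = max(abs(ek) for ek in e)             # worst-case error
    return mse, mae, mx

mse, mae, mx = criteria([0.5, -0.25, 0.1, -0.05])
# -> mse = 0.08125, mae = 0.225, mx = 0.5
```

Smaller values of all three criteria for the PSO-tuned controller are what Tables 1 and 2 summarize.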


7. Conclusion

In this chapter, a comparative study between the classical neural network controller and the PSO-tuned neural network controller is proposed and applied with success to indirect adaptive control. The lowest MSE, MAE, minimum and maximum of the control error e_c are obtained with the PSO-tuned controller, showing that the PSO method performs best. The proposed algorithm is successfully applied to a single-input single-output system, with and without disturbances, and proves its robustness in rejecting disturbances and in accelerating the learning phase of the neural model and the neural controller.


  1. Slama S., Errachdi A. and Benrejeb M., Adaptive PID controller based on neural networks for MIMO nonlinear systems, Journal of Theoretical and Applied Information Technology, 97, no. 2, pp. 361–371, 2019
  2. Errachdi A. and Benrejeb M., Performance comparison of neural network training approaches in indirect adaptive control, International Journal of Control, Automation and Systems, 16, no. 3, pp. 1448–1458, 2018
  3. Saurabh G., Karali P. and Surjya K.P., Particle swarm optimization of a neural network model in a machining process, Sadhana, 39, Part 3, pp. 533–548, June 2014
  4. Zhou J., Duan Z., Li Y., Deng J. and Yu D., PSO-based neural network optimization and its utilization in a boring machine, J. Material Process Technol., 178, pp. 19–23, 2006
  5. Elbeltagi E., Hegazy T. and Grierson D., Comparison among five evolutionary-based optimization algorithms, Advanced Eng. Informatics, 19, pp. 43–53, 2005
  6. Feng H.M., Self-generation RBFNs using evolutional PSO learning, Neurocomputing, 70, pp. 241–251, 2006
  7. Karpat Y. and Ozel T., Hard turning optimization using neural network modelling and swarm intelligence, Transactions of NAMRI/SME, 33, pp. 179–186, 2005
  8. Zhang R., Tao J., Lu R. and Jin Q., Decoupled ARX and RBF neural network modeling using PCA and GA optimization for nonlinear distributed parameter systems, IEEE Transactions on Neural Networks and Learning Systems, 29, no. 2, pp. 457–469, 2018
  9. Stacey A., Jancic M. and Grundy I., Particle swarm optimization with mutation, Proceedings of IEEE, pp. 1425–1430, 2003
  10. Zhao F., Ren Z., Yu D. and Yang Y., Application of an improved particle swarm optimization algorithm for neural network training, Proceedings of IEEE International Conference on Neural Networks and Brain, Beijing, China, pp. 1693–1698, 2005
  11. Asokan P., Baskar N., Babu K., Prabhaharan G. and Saravanan R., Optimization of surface grinding operations using particle swarm optimization technique, J. Manufacturing Sci. Eng., 127, pp. 885–892, 2005
  12. Haq A.N., Sivakumar K., Saravanan R. and Karthikeyan K., Particle swarm optimization (PSO) algorithm for optimal machining allocation of clutch assembly, Int. J. Advance Manufacturing Technol., 27, pp. 865–869, 2006
  13. Gaitonde V.N. and Karnik S.R., Minimizing burr size in drilling using artificial neural network (ANN)-particle swarm optimization (PSO) approach, J. Intelligent Manufacturing, 23, pp. 1783–1793, 2012
  14. Navalertporn T. and Afzulpurkar N.V., Optimization of tile manufacturing process using particle swarm optimization, Swarm and Evolutionary Computation, 1, pp. 97–109, 2011
  15. Samanta B. and Nataraj C., Use of particle swarm optimization for machinery fault detection, Eng. Appl. Artificial Intelligence, 22, pp. 308–316, 2009
  16. Malviya R. and Pratihar D.K., Tuning of neural networks using particle swarm optimization to model MIG welding process, Swarm and Evolutionary Computation, 1, pp. 223–235, 2011
  17. Garro B.A. and Vazquez R.A., Designing neural networks using particle swarm optimization, Computational Intelligence and Neuroscience, Vol. 2015
  18. Selvakumaran S., Parthasarathy S., Karthigaivel R. and Rajasekaran V., Optimal decentralized load frequency control in a parallel AC-DC interconnected power system through HVDC link using PSO algorithm, Energy Procedia, 14, pp. 1849–1854, 2012
  19. Shaher M., Reyad E. and Iqbal M.B., Tuning PID and PI^λD^δ controllers using particle swarm optimization algorithm via El-Khazali's approach, Proceedings of the 45th International Conference on Application of Mathematics in Engineering and Economics (AMEE'19), AIP Conf. Proc. 2172, 2019
  20. Stimac G., Braut S. and Zigulic R., Comparative analysis of PSO algorithms for PID controller tuning, Chinese Journal of Mechanical Engineering, 27, no. 5, 2014
  21. Narendra K.S. and Parthasarathy K., Identification and control of dynamical systems using neural networks, IEEE Trans. on Neural Networks, 1, no. 1, pp. 4–27, 1990
