Fast Nonlinear Model Predictive Control using Second Order Volterra Models Based Multi-agent Approach

Written By

Bennasr Hichem and M'Sahli Faouzi

Submitted: 09 October 2010 Published: 05 July 2011

DOI: 10.5772/16347

From the Edited Volume

Advanced Model Predictive Control

Edited by Tao Zheng


1. Introduction

Model predictive control (MPC) refers to a class of computer control algorithms that use a process model to predict the future response of a plant. Over the past twenty years, great progress has been made in industrial MPC, and today it has become the most widely implemented process control technology. One of the main reasons for its adoption in industry is that it can take physical and operational constraints into account. In classical MPC, the control action at each time step is obtained by solving an online optimization problem. Whenever possible, MPC algorithms based on linear models should be used because of their low computational complexity [Maciejowski J, 2002]. Since many technological processes are nonlinear, different nonlinear MPC (NMPC) techniques have been developed [Qin, S. J. et al., 2003]. The structure of the nonlinear model and the way it is used on-line affect the accuracy, the computational burden and the reliability of nonlinear MPC. Several attempts to reduce the computational complexity have been published over the last thirty years. The simplest way to reduce the on-line computation is to transform the NMPC problem into an LMPC one: the nonlinear system is transformed into a linear system using a feedback-linearizing law, the input constraints are mapped into constraints on the manipulated input of the transformed system, and the resulting constrained linear system is controlled using LMPC [Kurtz M. J. et al., 1997]. An interesting strategy is presented in [Arahal M. R. et al., 1998], where a linear model is used to predict the future process behavior and a nonlinear model is used to compute the effect of the past input moves. The most straightforward technique used with fuzzy models [Fischer M. et al., 1998] is based on a linearization method. The accuracy of the linear model can be improved by relinearizing the model equations several times over a sampling period or by linearizing the model along the computed trajectory [Mollov S. et al., 2004]. Another approach has been used by a number of researchers, such as in [Brooms A. et al., 2000], where the NMPC problem is reduced to an LMPC problem at each time step using successive linearization. The structure of certain nonlinear empirical models allows the NMPC optimization problem to be solved more efficiently than is possible with other forms; such an approach is followed in [Abonyi, J. et al., 2000]. An algorithm for controller reconfiguration for nonlinear systems, based on a combination of a multiple-model estimator and a generalized predictive controller, is presented in [Kanev, S. et al., 2000]: a set of models is constructed, each corresponding to a different operating condition of the system, and an interacting multiple-model estimator is used to reconstruct the state of the nonlinear system. For unconstrained control based on linear process models and a quadratic cost function, the control sequence can be calculated analytically. When linear constraints are taken into account, the solution can be found using quadratic programming techniques. With the introduction of a nonlinear model into the MPC scheme, a nonlinear programming (NLP) problem has to be solved at each sampling time to compute the future manipulated variables; this on-line optimization is generally non-convex, which makes implementation difficult for real-time control.
During the past decade, significant theoretical results as well as advances in implementation strategies of NMPC have been obtained, and NMPC has been successfully applied in practice to relatively slow plants, mainly in the process industry. However, the application of such techniques to fast nonlinear systems remains a widely open problem due to the computational burden associated with solving an open-loop optimal control problem. Most of the research has focused on computations carried out by a single agent. In [Negenborn R. et al., 2004], a survey describes how a distributed multi-agent MPC setting can reduce the computations of a single MPC agent. Moreover, researchers have investigated feedback-linearization model predictive control (FLC-MPC) schemes for their ability to handle constraints on input and output [Soest Van W. R. et al., 2005]. These approaches reduce the on-line computation by transforming the NMPC problem into an LMPC one, and quadratic programming can be used to handle constraints. When sampling times become very short, however, the computation times for the QP solution can no longer be neglected [Joachim H. et al., 2006]. In [Didier G., 2006], a distributed model predictive control is considered, and the proposed strategy allows a dramatic reduction of the computational requirement for solving large-scale nonlinear MPC problems thanks to computational parallelism. Recent advancements in MPC also allow for a faster online solution by shifting some of the computational burden off-line. Many optimization algorithms for NMPC have been investigated lately; however, an analytical solution to the NMPC problem is usually impossible to find. One possible way to address the computational complexity is to decentralize the optimization tasks. Attention has therefore been focused on the multi-agent model predictive control approach [H. Ben Nasr et al., 2008a,b,c,d,e]. In multi-agent model predictive control there are multiple agents, each of which uses a model of its sub-system to determine which action to take. A decentralized agent architecture and a decentralized model decomposition are then chosen, in which the agents do not interact with one another. A multi-agent-based methodology has been investigated for the implementation of a given predictive control law for nonlinear systems. This procedure relies on the decomposition of the overall system into subsystems, with multiple agents each using a model of its sub-system to determine which action to take.

In this book chapter, a new NMPC scheme based on MAMPC (multi-agent model predictive control) is implemented to reduce the computational effort. The performance of the proposed controllers is evaluated by applying them to single-input single-output (SISO) control of a nonlinear system. In general, the optimization problem is nonconvex, which leads to many difficulties in the implementation of MPC, related to feasibility, optimality, computation and stability. In order to avoid solving a nonconvex optimization problem, the MAMPC optimization procedure, a method for convex NMPC, is also developed in this chapter. Theoretical analysis and simulation results demonstrate better performance of the MAMPC over a conventional NMPC based on sequential quadratic programming (SQP), both in tracking set-point changes and in stabilizing the operation in the presence of input disturbances. Our main objective has been to illustrate the potential advantage of multi-agent-based nonlinear predictive control when applied to nonlinear systems. The suggested approach was to identify a new control algorithm that is, in essence, a bridge between linear and nonlinear control; this resulted in the development of the MAMPC approach. Through simulation-based comparisons, it is shown that the MAMPC algorithm is capable of delivering significantly improved control performance in comparison with a conventional NMPC, so that the difficulty of minimizing the performance function for nonlinear predictive control, usually handled by an NLP solved at each sampling time and generally non-convex, is avoided. We describe an algorithm that finds a solution to a non-convex program and demonstrate that global nonlinear requirements can effectively be met by considering smaller regimes. The simulation example shows that the multi-agent approach compares favorably with a numerical optimization routine. Moreover, the MAMPC reduces the online computational burden and hence has the potential to be applied to systems with faster time constants.


2. Statement of the problem

2.1. Process model

A broad class of physical systems can be represented using the Volterra model. In particular, it has been shown that a truncated Volterra model can represent any time-invariant nonlinear system with fading memory. This model is thus particularly attractive for nonlinear system modeling and identification purposes. One of the main advantages of the Volterra model is its linearity in the parameters, i.e. the kernel coefficients. This property allows the extension of some results established for linear model identification to this model. In this work, we consider the control of a class of single-input single-output nonlinear systems described by the following nonlinear discrete-time parametric second-order Volterra model (Haber et al., 1999a,b):

$$y(k) = y_0 + \sum_{i=1}^{n_y} a_i\, y(k-i) + \sum_{i=1}^{n_u} b_i\, u(k-i) + \sum_{i=1}^{n_u} \sum_{j=1}^{i} b_{ij}\, u(k-i)\, u(k-j) + \varepsilon(k) \tag{1}$$

where $y_0$ is a bias term, $y(k)$ is the output, $u(k)$ is the input, $a_i$, $b_i$ and $b_{ij}$ are the parameters of the parametric Volterra model, $n_u$ and $n_y$ are the number of lags on the input and the output, respectively, and $\varepsilon(k)$ is the model error. The model contains all terms up to second order. One advantage of using the parametric Volterra model is that the one-step-ahead prediction problem can be formulated as a linear regression, which simplifies the identification of the parameters from input-output data. Therefore, the model given by Equation (1) can be written as:

$$y(k) = \theta^T \phi(k) + \varepsilon(k) \tag{2}$$

With:

$$\theta^T = \left[\, y_0,\ a_1,\ a_2,\ \ldots,\ a_{n_y},\ b_1,\ b_2,\ \ldots,\ b_{n_u},\ b_{1,1},\ \ldots,\ b_{n_u,n_u} \,\right] \tag{3}$$
$$\phi^T(k) = \left[\, 1,\ y(k-1),\ \ldots,\ y(k-n_y),\ u(k-1),\ \ldots,\ u^2(k-1),\ \ldots,\ u^2(k-n_u) \,\right] \tag{4}$$

where $\phi(k)$ and $\theta$ are the regressor and parameter vectors, respectively. The model of Equation (2) is linear in the parameters, and its parameters may be identified from input-output data. Moreover, from an identification point of view, parametric Volterra models are superior to Volterra series models in the sense that the number of parameters needed to approximate a process is generally much smaller with parametric Volterra models. This is because Volterra series models only include previous inputs, while model (1) includes previous outputs as well as previous inputs.
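To make the identification step concrete, the following minimal sketch estimates $\theta$ of Equations (2)-(4) by ordinary least squares from recorded input/output sequences. The lag orders, the synthetic data and the helper names (volterra_regressor, identify_volterra) are illustrative assumptions, not part of the original text.

```python
import numpy as np

def volterra_regressor(y, u, k, ny, nu):
    """Regressor phi(k) of Eq. (4): bias, past outputs, past inputs and the
    upper-triangular input products u(k-i) u(k-j) with j <= i."""
    past_y = [y[k - i] for i in range(1, ny + 1)]
    past_u = [u[k - i] for i in range(1, nu + 1)]
    quad = [u[k - i] * u[k - j] for i in range(1, nu + 1) for j in range(1, i + 1)]
    return np.array([1.0] + past_y + past_u + quad)

def identify_volterra(y, u, ny=2, nu=2):
    """Ordinary least-squares estimate of theta in y(k) = theta^T phi(k) + eps(k)."""
    k0 = max(ny, nu)
    Phi = np.vstack([volterra_regressor(y, u, k, ny, nu) for k in range(k0, len(y))])
    theta, *_ = np.linalg.lstsq(Phi, y[k0:], rcond=None)
    return theta

# usage with synthetic data (model coefficients are arbitrary, for illustration only)
rng = np.random.default_rng(0)
u = rng.uniform(0.0, 1.0, 500)
y = np.zeros(500)
for k in range(2, 500):
    y[k] = 0.1 + 0.5 * y[k - 1] + 0.2 * u[k - 1] + 0.05 * u[k - 1] * u[k - 2]
theta_hat = identify_volterra(y, u)
```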

2.2. Optimization criteria

The purpose of the control strategy is to compute future control moves which will minimize some performance function based on the desired output trajectory over a prediction horizon, subject to constraints on input and output signals [D.W. Clarke et al.,1987]. The most common objective cost function, also used here, is:

$$J(N_1, N_2, N_u, \delta) = \sum_{j=N_1}^{N_2} \big( w(k+j) - \hat{y}(k+j|k) \big)^2 + \sum_{j=1}^{N_u} \big( \lambda_j\, \Delta u(k+j-1) \big)^2 \tag{5}$$

Subject to

$$\begin{aligned}
\Delta u_{low} \le \Delta u(k+j-1) \le \Delta u_{high}, &\qquad 1 \le j \le N_u \\
u_{low}(k) \le u(k+j-1) \le u_{high}(k), &\qquad 1 \le j \le N_u
\end{aligned} \tag{6}$$

where $N_1$ is the minimum prediction horizon, $N_2$ is the maximum prediction horizon, $\hat{y}(k+j|k)$ is an optimal $j$-step-ahead prediction of the system output based on data up to time $k$, $w(k+j)$ is the sequence of future set points, $N_u \le N_2$ is the control horizon, and $(\lambda_j)_{j=1}^{N_u} = (\lambda_1, \ldots, \lambda_{N_u})$ are control-weighting factors, usually assumed equal to each other, used to penalize the control increments. $\Delta u(k+j-1)$, $j \in [1, N_u]$, is the sequence of future control increments computed by the optimization problem at time $k$, with $\Delta u(k+j-1) = 0$ for $j > N_u$. As for the constraints, $u_{low}$, $u_{high}$, $\Delta u_{low}$ and $\Delta u_{high}$ are respectively the lower limit, upper limit, lower rate limit and upper rate limit of the control input. Using the quadratic prediction equation of the model, the cost function becomes a fourth-degree polynomial in the control increments. The objective function never exceeds fourth order, regardless of the value of the prediction horizon (Haber, 1999a, 1999b).

2.3. Nonlinear Predictive Control

Despite the wide exposure and intensive research effort attracted over the past few decades, nonlinear model predictive control (NMPC) is still perceived as an academic concept rather than a practicable control technique. Nevertheless, nonlinear model predictive control is gaining popularity in the industrial community. The formulations for these controllers vary widely, and almost the only common principle is to retain the nonlinearities of the process model [Matthew et al., 2002]. In nonlinear control, a receding-horizon approach is typically used, which can be summarized in the following steps:

  1. At time k, solve, on-line, an open-loop optimal control problem over some future interval, taking into account the current and future constraints.

  2. Apply the first step in the optimal control sequence.

  3. Apply the receding strategy: at the next sampling instant the horizon is displaced towards the future and the optimization is repeated with updated measurements, again applying only the first control signal of the newly computed sequence (see the sketch after this list).
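The loop below is a schematic sketch of this receding-horizon procedure; the plant object and the solve_open_loop_problem routine are placeholders standing in for the optimization machinery developed in the remainder of this section.

```python
def receding_horizon_control(plant, solve_open_loop_problem, w, n_steps):
    """Schematic receding-horizon loop: at every step the open-loop problem
    is solved, only the first move is applied, then the horizon shifts."""
    u_prev = 0.0
    applied = []
    for k in range(n_steps):
        # 1. solve the constrained open-loop problem over the future horizon
        du_sequence = solve_open_loop_problem(plant.state(), u_prev, w, k)
        # 2. apply only the first control increment of the optimal sequence
        u_k = u_prev + du_sequence[0]
        plant.apply(u_k)
        applied.append(u_k)
        # 3. shift the horizon and repeat at k+1 with updated measurements
        u_prev = u_k
    return applied
```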

The process to be controlled is assumed to be represented by a single-variable second-order parametric Volterra model. The model given by (1) can then be expressed as:

$$A(q^{-1})\, y(k) = y_0 + B_1(q^{-1})\, u(k) + B_2(q_1^{-1}, q_2^{-1})\, u^2(k) + \frac{\varepsilon(k)}{\Delta(q^{-1})} \tag{7}$$

where $A(q^{-1})$ and $B_1(q^{-1})$ are two polynomials in the backward shift operator $q^{-1}$ given by:

$$\begin{aligned}
A(q^{-1}) &= 1 + a_1 q^{-1} + \cdots + a_{n_a} q^{-n_a} \\
B_1(q^{-1}) &= 1 + b_{11} q^{-1} + \cdots + b_{1 n_b} q^{-n_b}
\end{aligned} \tag{8}$$

$B_2(q_1^{-1}, q_2^{-1})$ represents the quadratic term of the Volterra model; this quantity is defined by:

$$B_2(q_1^{-1}, q_2^{-1})\, u^2(k) = \sum_{n=0}^{n_b} \sum_{m=n}^{n_b} b_{2nm}\, u(k-n)\, u(k-m) \tag{9}$$

The incremental predictive form of the parametric Volterra model can be expressed as a function of the current and future control increments :

$$\hat{y}(k+j) = v_0^j + v_1^j(q^{-1})\, \Delta u(k+j) + v_2^j(q_1^{-1}, q_2^{-1})\, \Delta u^2(k+j) \tag{10}$$

With

$$\begin{aligned}
v_0^j &= y_0 + G_j\, y(k) + \sum_{i=j+1}^{n_b+j-1} \Big[ \delta_{1i} + \sum_{m=i}^{n_b+j-1} \delta_{2im}\, \Delta u^{*}(k+j-m) \Big]\, \Delta u^{*}(k+j-i) \\
v_{1i}^j &= v_{1i} + \sum_{m=j+1}^{n_{b1}+j-1} \delta_{2im}\, \Delta u^{*}(k+j-m), \qquad i = 1, 2, \ldots, j \\
v_{2im}^j &= \delta_{2im}, \qquad i = 1, 2, \ldots, j \quad \text{and} \quad m = 1, 2, \ldots, j
\end{aligned} \tag{11}$$

The effect of the choice of the parameters and coefficients of the predictive control law is not investigated here; for more detail see (Haber et al., 1999a). Replacing the predicted output by its expression, the cost function (5) can be written as follows:

$$J = \big( v_0 - w + v_1 \tilde{u} + v_2 \tilde{u}^2 \big)^T \big( v_0 - w + v_1 \tilde{u} + v_2 \tilde{u}^2 \big) + \lambda\, \tilde{u}^T \tilde{u} \tag{12}$$

With constraints, the cost function can be minimized numerically by a one-dimensional search algorithm (a dynamic programming algorithm). Without constraints, the solution leads to a one-dimensional third-degree equation [F. J. Doyle et al., 1995].
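For a control horizon of one, the cost (12) reduces to a scalar fourth-degree polynomial in the control increment, so the unconstrained minimizer can be obtained from the real roots of its cubic derivative. The sketch below illustrates this with numpy; the coefficient values in the usage line are arbitrary.

```python
import numpy as np

def unconstrained_volterra_gpc_move(v0, v1, v2, w, lam):
    """Minimize J(du) = (v0 - w + v1*du + v2*du**2)**2 + lam*du**2 for Nu = 1."""
    e0 = v0 - w
    # J(du) = v2^2 du^4 + 2 v1 v2 du^3 + (v1^2 + 2 e0 v2 + lam) du^2 + 2 e0 v1 du + e0^2
    # dJ/ddu = 4 v2^2 du^3 + 6 v1 v2 du^2 + 2 (v1^2 + 2 e0 v2 + lam) du + 2 e0 v1
    dJ = np.array([4 * v2**2,
                   6 * v1 * v2,
                   2 * (v1**2 + 2 * e0 * v2 + lam),
                   2 * e0 * v1])
    roots = np.roots(dJ)
    real_roots = roots[np.abs(roots.imag) < 1e-9].real
    cost = lambda du: (e0 + v1 * du + v2 * du**2) ** 2 + lam * du**2
    return min(real_roots, key=cost)

# example: track w = 1 with illustrative model coefficients
du_opt = unconstrained_volterra_gpc_move(v0=0.2, v1=0.8, v2=0.1, w=1.0, lam=0.01)
```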


3. Multi-agent Model Predictive Control

3.1. Control and design

The main idea of the proposed model predictive control concept is to transform the nonlinear optimization procedure used in the standard formulation into sub-problems through which the global task can be solved. The objective of this approach is to regulate the nonlinear system output to the expected values while satisfying the above constraints. This is done as follows. The global system is first decomposed into sub-systems that are independent of one another; for each sub-system an MPC unit is designed, constituting controller-agent $i$. Based on an analytical solution, which corresponds to the solution of the local receding-horizon sub-problems, a switching logic unit tries to find the best sequence of actions to send to the nonlinear system so as to follow the desired trajectory. Sequences of actions that bring the global system to the desired trajectory are constructed while avoiding any constraint violation. The multi-agent controller aims to synchronize the output of the true system with the reference trajectory at every decision step $k$: at every decision step, the right action is the one that makes the agent most successful. The parallel controller structure is based on the fact that a neural network can be used to learn from the feedback error of the controlled nonlinear system. A neural network controller is therefore also included, with the objective of handling the results of the actions on the global system and monitoring the closed-loop system. Figure 1 shows the architecture of the multi-agent controller. In the multi-agent context, the agents are the controllers and the nonlinear system is the environment.

Figure 1.

Architecture of Multi-agent Controller

The basic structure of the proposed control strategy is shown in Figure 2. The control problem to be solved is decomposed into supposedly independent subproblems, and each subproblem is solved by designing a controller-agent. A controller-agent is realized by some control algorithm that is operational only under particular operating conditions of the plant being controlled. The controller-agent's action consists of the analytical optimal control sequence elaborated for each sub-system, obtained after learning the control trajectory to follow and by minimizing a local cost function. The individual solutions, or controller-agents, are then combined into one overall solution; this implies addressing the global problem by selecting an appropriate coordination mechanism. The conceptual design consists of the following three stages:

Structuring: The control problem to be solved is decomposed into supposedly independent subproblems. The global system is first decomposed into sub-systems independent of one another.

Solving individual subproblems: Each subproblem is solved by designing a controller-agent. An MPC unit is designed for each sub-system, constituting the controller-agent. A supervisor based on a performance measure $J_k$ is used. By means of the output error $\varepsilon_k$ obtained for each agent's action, the supervisor then decides what action should be applied to the plant during each sampling interval $k$. The performance measure is given by:

$$J_k = \left( \varepsilon_k - \varepsilon_{k-1} \right) e^{-\lambda}, \qquad \lambda > 0 \tag{13}$$

where $\varepsilon_k$ is the error for agent $i$, defined by:

$$\varepsilon_k = \mathrm{setpoint} - y_a \tag{14}$$

And ya is the plant output after agent’s action.

Combining individual solutions: The individual solutions, or controller-agents, are combined into one overall solution. The parallel controller structure is based on the fact that a neural network can be used to learn from the feedback error of the controlled nonlinear system, to handle the results of the actions on the global system and to monitor the closed-loop system. A sketch of the supervisory selection step is given below.
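The sketch below illustrates this supervisory selection step under the reconstruction of Equations (13)-(14) given above; the agent and plant interfaces (propose_action, predict_output) and the rule of keeping the action with the smallest |J_k| are assumptions made here for illustration only.

```python
import math

def supervise(agents, plant_model, setpoint, prev_errors, lam=1.0):
    """Evaluate each agent's proposed action with the measure J_k of Eq. (13)
    and return the action judged best; prev_errors holds each agent's eps_{k-1}."""
    best_action, best_index, best_score = None, None, None
    for i, agent in enumerate(agents):
        u_i = agent.propose_action()              # local analytic MPC move (Eq. 23)
        y_a = plant_model.predict_output(u_i)     # predicted plant output after the action
        eps_k = setpoint - y_a                    # Eq. (14)
        J_k = (eps_k - prev_errors[i]) * math.exp(-lam)   # Eq. (13), as reconstructed
        prev_errors[i] = eps_k
        if best_score is None or abs(J_k) < abs(best_score):   # assumed selection rule
            best_action, best_index, best_score = u_i, i, J_k
    return best_action, best_index
```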

Figure 2.

Architecture of Multi-agent Controller

3.2. Control problem decomposition

The extension of MPC to nonlinear process models is one of the most interesting research topics. Such algorithms generally lead to computationally intensive nonlinear techniques that make application almost impossible. In order to avoid this problem, the proposed algorithm uses linear models extracted from the nonlinear model. A decentralized model and decentralized goals are then considered. A decentralized problem model consists of multiple smaller, independent subsystems, in which each subsystem of the overall nonlinear system has its own independent goals and is represented by a discrete model of the form:

$$\begin{cases}
x_l(k+1) = A_l\, x_l(k) + B_l\, u_l(k) \\
y_l(k) = C_l\, x_l(k)
\end{cases} \tag{15}$$

where $x_l \in \mathbb{R}^{n_x}$ is the local state, $y_l \in \mathbb{R}^{n_y}$ is the measured output of each subsystem, and $u_l \in \mathbb{R}^{n_u}$ is the local control input. The overall nonlinear system can therefore be seen as a collection of smaller subsystems that are completely independent from one another, which is referred to as a decentralized model. The control variable that each agent sends to the nonlinear system consists of the agent's optimal input, obtained by minimizing a local standard MPC cost function:

$$J_l = \sum_{j=N_1}^{N_2} \big\| y_l(k+j) - \mathrm{Setpoint}(k+j) \big\|^2_{Q_l} + \sum_{j=1}^{N_u} \big\| \Delta u_l(k+j|k) \big\|^2_{R_l} \tag{16}$$

where $Q_l$ and $R_l$ are suitable weighting matrices.

One of the advantages of the state-space representation is that it simplifies the prediction; the prediction for this model is given by:

$$y_l(k+i|k) = C_l \Big( A_l^{\,i}\, x_l(k|k) + \sum_{j=1}^{i} A_l^{\,j-1} B_l\, u_l(k+i-j|k) \Big) \tag{17}$$

For suitable local matrices $\Psi_l$, $\Gamma_l$, $\Theta_l$ and $\Lambda_l$, we can rewrite the local predicted model output over future time instants as:

$$Y_l(k) = \Psi_l\, x_l(k) + \Gamma_l\, u_l(k-1) + \Theta_l\, \Delta u_l(k) \tag{18}$$

Where

$$\Psi_l = \begin{bmatrix} C_l A_l \\ \vdots \\ C_l A_l^{N_u} \\ C_l A_l^{N_u+1} \\ \vdots \\ C_l A_l^{N_2} \end{bmatrix}, \qquad
\Gamma_l = \begin{bmatrix} C_l B_l \\ \vdots \\ C_l \sum_{j=0}^{N_u-1} A_l^{\,j} B_l \\ \vdots \\ C_l \sum_{j=0}^{N_2-1} A_l^{\,j} B_l \end{bmatrix} \tag{19}$$
$$\Theta_l = \begin{bmatrix}
C_l B_l & \cdots & 0 \\
C_l (A_l B_l + B_l) & \cdots & 0 \\
\vdots & \ddots & \vdots \\
C_l \sum_{j=0}^{N_u-1} A_l^{\,j} B_l & \cdots & C_l B_l \\
\vdots & & \vdots \\
C_l \sum_{j=0}^{N_2-1} A_l^{\,j} B_l & \cdots & C_l \sum_{j=0}^{N_2-N_u} A_l^{\,j} B_l
\end{bmatrix} \tag{20}$$

The cost function (16) can be rewritten as:

$$J_l = \varepsilon_l(k)^T Q_l\, \varepsilon_l(k) - \Delta u_l(k)^T G_l + \Delta u_l(k)^T H_l\, \Delta u_l(k) \tag{21}$$

Where:

$$\begin{aligned}
\varepsilon_l(k) &= \mathrm{Setpoint}(k) - \Psi_l\, x_l(k) - \Gamma_l\, u_l(k-1) - \Lambda_l \\
G_l &= 2\, \Theta_l^T Q_l\, \varepsilon_l(k) \\
H_l &= \Theta_l^T Q_l\, \Theta_l + R_l
\end{aligned} \tag{22}$$

Therefore the control law that minimizes the local cost function (16) is given by:

$$\Delta u_l(k) = \tfrac{1}{2}\, H_l^{-1} G_l \tag{23}$$

In order to take the constraints on the manipulated variables into account, a transformation is applied to each action. The control action based on (23) is transformed into a new action with the following transformation [R. Fletcher, 1997]:

$$\begin{cases}
u_l(k) = f_{moy} + f_{amp}\, \tanh\!\left( \dfrac{u_l(k) - f_{moy}}{f_{amp}} \right) \\[4pt]
f_{moy} = \dfrac{f_{max} + f_{min}}{2}, \qquad f_{amp} = \dfrac{f_{max} - f_{min}}{2} \\[4pt]
f_{max} = \min\big( u_l^{max},\ u_l(k-1) + \Delta u_l^{max} \big) \\
f_{min} = \max\big( u_l^{min},\ u_l(k-1) + \Delta u_l^{min} \big)
\end{cases} \tag{24}$$

The optimal control law (23) for each agent does not guarantee the global optimum. Accordingly, the nonlinear system requires coordination among the control agents' actions. The required coordination is performed by a logic switch added to a neural-network-based supervisory loop, which computes the global control subject to the constraints. A sketch of the per-agent computation is given below.
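The following sketch assembles the per-agent computation of Equations (17)-(24) for one local linear model: it builds the prediction matrices, evaluates the analytic increment of Equation (23), and applies the tanh saturation of Equation (24). The horizons, weights and the output matrix C are illustrative values, and the contribution of $\Lambda_l$ is omitted for simplicity.

```python
import numpy as np

def prediction_matrices(A, B, C, N2, Nu):
    """Psi, Gamma, Theta of Eqs. (18)-(20) for x+ = A x + B u, y = C x."""
    # free response w.r.t. the state: rows C A, C A^2, ..., C A^N2
    Psi = np.vstack([C @ np.linalg.matrix_power(A, i) for i in range(1, N2 + 1)])
    # step-response terms S_i = C * sum_{j=0}^{i-1} A^j B, i = 1..N2
    S = [C @ sum(np.linalg.matrix_power(A, j) @ B for j in range(i)) for i in range(1, N2 + 1)]
    Gamma = np.vstack(S)                      # free response w.r.t. u(k-1)
    Theta = np.zeros((N2, Nu))                # forced response w.r.t. the increments
    for i in range(N2):
        for j in range(min(i + 1, Nu)):
            Theta[i, j] = S[i - j].item()
    return Psi, Gamma, Theta

def agent_move(A, B, C, x, u_prev, setpoint, N2=5, Nu=1, Q=1.0, R=4.0,
               u_min=0.0, u_max=1.0, du_min=-0.2, du_max=0.1):
    """One analytic local MPC move (Eq. 23) followed by the tanh saturation (Eq. 24)."""
    Psi, Gamma, Theta = prediction_matrices(A, B, C, N2, Nu)
    Ql, Rl = Q * np.eye(N2), R * np.eye(Nu)
    eps = setpoint * np.ones((N2, 1)) - Psi @ x - Gamma * u_prev   # Eq. (22), Lambda_l omitted
    G = 2.0 * Theta.T @ Ql @ eps
    H = Theta.T @ Ql @ Theta + Rl
    du = 0.5 * np.linalg.solve(H, G)                               # Eq. (23)
    u = u_prev + float(du[0, 0])
    # Eq. (24): smooth saturation respecting amplitude and rate limits
    f_max = min(u_max, u_prev + du_max)
    f_min = max(u_min, u_prev + du_min)
    f_moy, f_amp = 0.5 * (f_max + f_min), 0.5 * (f_max - f_min)
    if f_amp <= 0.0:
        return f_moy
    return f_moy + f_amp * np.tanh((u - f_moy) / f_amp)

# illustrative use with an arbitrary second-order local model
A_l = np.array([[0.55, 1.0], [0.21, 0.0]])
B_l = np.array([[0.05], [0.14]])
C_l = np.array([[1.0, 0.0]])                  # output matrix chosen for illustration
u_next = agent_move(A_l, B_l, C_l, x=np.zeros((2, 1)), u_prev=0.0, setpoint=0.5)
```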

3.3. The supervisor loop

A neural network with an appropriate control architecture is used to modify the result of the switched input $u_i$ of each agent's action through stable online NN weights, which can guarantee the tracking performance of the overall closed-loop nonlinear system. Moreover, the neural network should reduce the deleterious effect of the constraints attached to the different actions [Wenzhi, G. et al., 2006]. In this work the neural network is a single-input single-output feed-forward network. The neural network tries to optimize the control action $\Delta u$:

$$\Delta u = f_{NN}\!\left( \sum_{i=1}^{n_u} b_i\, u(k-i) + \sum_{i=1}^{n_u} \sum_{j=1}^{i} b_{ij}\, u(k-i)\, u(k-j) \right) \tag{25}$$

The Levenberg-Marquardt method was chosen for the optimization because of its fast convergence and robustness; the main incentive for this choice is its reliably fast convergence towards a minimum. A sketch of the forward pass of such a network is given below.
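The minimal sketch below shows the forward pass of such a network, using the configuration reported later in the simulation study (three tangent-sigmoid hidden nodes and one linear output node); the weight values are placeholders that would in practice be obtained by Levenberg-Marquardt training.

```python
import numpy as np

def nn_correction(volterra_input, W1, b1, W2, b2):
    """Feedforward network of Eq. (25): tanh hidden layer, linear output.
    volterra_input is the scalar quadratic-model term fed to the network."""
    h = np.tanh(W1 * volterra_input + b1)   # hidden layer, e.g. 3 tangent-sigmoid nodes
    return float(W2 @ h + b2)               # linear output node: the correction delta_u

# illustrative weights (in practice obtained by Levenberg-Marquardt training)
W1, b1 = np.array([0.5, -1.2, 0.8]), np.array([0.1, 0.0, -0.3])
W2, b2 = np.array([0.4, 0.2, -0.1]), 0.05
delta_u = nn_correction(0.3, W1, b1, W2, b2)
```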


4. Simulation results

The example chosen to validate the theory exposed above is taken from [B. Laroche et al., 2000]. A continuous state-space representation of this example is as follows:

$$\begin{cases}
\dot{x}_1 = x_3 - x_2\, u \\
\dot{x}_2 = -x_2 + u \\
\dot{x}_3 = x_2 - x_1 + 2 x_2 (u - x_2)
\end{cases} \tag{26}$$

The system model is implemented in the Matlab-Simulink environment in order to obtain the input/output data for the identification phase; Matlab® discretizes these equations with the 4th-order Runge-Kutta method (a sketch of this data-generation step is given after Equation (27)). The vector characterizing the Volterra model linking the output $x_3$ to the input $u$ is given by:

$$A = \begin{bmatrix} 1 & 1.9897 & 0.9997 \end{bmatrix}^T, \qquad
B_1 = \begin{bmatrix} 0.0318 & 0.0096 \end{bmatrix}, \qquad
B_2 = \begin{pmatrix} 0.0396 & 0.0656 & 0 \\ 0 & 0.0388 & 0 \\ 0 & 0 & 0 \end{pmatrix} \tag{27}$$
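As an illustration of this data-generation step, the benchmark dynamics can be integrated with a classical fourth-order Runge-Kutta scheme, as sketched below; the right-hand side follows the reconstruction of Equation (26) given above (the signs should be checked against the original reference), and the sampling period and excitation signal are arbitrary choices.

```python
import numpy as np

def f(x, u):
    """Benchmark dynamics, as reconstructed in Eq. (26); signs are uncertain."""
    x1, x2, x3 = x
    return np.array([x3 - x2 * u,
                     -x2 + u,
                     x2 - x1 + 2 * x2 * (u - x2)])

def rk4_step(x, u, h):
    """One classical 4th-order Runge-Kutta step with zero-order-hold input."""
    k1 = f(x, u)
    k2 = f(x + 0.5 * h * k1, u)
    k3 = f(x + 0.5 * h * k2, u)
    k4 = f(x + h * k3, u)
    return x + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

# generate input/output data for identification (sampling period chosen arbitrarily)
h, n = 0.05, 1000
rng = np.random.default_rng(1)
u_seq = rng.uniform(0.0, 1.0, n)
x = np.zeros(3)
y_seq = np.zeros(n)
for k in range(n):
    x = rk4_step(x, u_seq[k], h)
    y_seq[k] = x[2]          # the measured output is x3
```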

Moreover, the Chiu procedure is used to divide the nonlinear system into independent subsystems [Chiu S. L., 1994]. The modeling of the dynamic system led to the localization of two centers with respective values $c_1 = 0.0483$ and $c_2 = 0.5480$. The classification parameters adopted for the algorithm are: $r_a = 0.6$; $r_b = 1.25\, r_a$; $\varepsilon_1 = 0.5$; $\varepsilon_2 = 0.1$. The identification and modeling procedure was applied to the whole set of input/output measurements from the global system, leading to the following subsystem models:

$$A_1 = \begin{pmatrix} 0.552 & 1 \\ 0.2155 & 0 \end{pmatrix}, \quad B_1 = \begin{bmatrix} 0.0496 \\ 0.1419 \end{bmatrix}, \qquad
A_2 = \begin{pmatrix} 0.7962 & 1 \\ 0.0481 & 0 \end{pmatrix}, \quad B_2 = \begin{bmatrix} 0.0239 \\ 0.0088 \end{bmatrix} \tag{28}$$

The modeling result is reported in Figure 3. These results show that the Chiu classification algorithm yields a good quality of local approximation of the system.

Figure 3.

Validation of the obtained model

4.1. Set point tracking

The proposed concept described in Section 3 is used to control the nonlinear system. The tuning parameters of the multi-agent controller consist of the parameter values of each agent, given by: $N_1 = 1$; $N_2 = 5$; $N_u = 1$; $R_1 = R_2 = 4$. For simplicity, but without loss of generality, the prediction and control horizons are assumed to be the same for each agent. The tuning parameters for the NMPC are: $N_1 = 1$; $N_2 = 5$; $N_u = 1$; $\delta = 0.001$. The control increment limits $\Delta u_{min}$ and $\Delta u_{max}$ are taken equal to −0.2 and 0.1, respectively, and the control is limited between 0 and 1. In this application example, the neural network is a feedforward network consisting of three hidden-layer nodes with tangent-sigmoid transfer functions and one output-layer node with a linear transfer function. In this section, we present a comparative study between the proposed method and the NMPC procedure. The results shown in Fig. 4 and Fig. 5 are obtained in the constrained case.

Figure 4.

Evolution of the set point, the output and the control (NMPC): constrained case

Figure 5.

Evolution of the set point, the output and the control (MAMPC): constrained case

It is clear from these figures that the new control strategy leads to satisfactory results with respect to set-point changes. Indeed, the tracking error is reduced and the control action is smooth. NMPC also gives consistently good performance over the range examined. The two controllers behave remarkably similarly, which indicates that the MAMPC controller is close to optimal for this control problem. Moreover, the new controller meets all the required performance specifications within the given input constraints, and the results show a significant improvement in system performance compared with the results obtained when only the nonlinear programming approach is used. The multi-agent controller compares favorably with a numerical optimization routine, as shown in Figure 6: the final control law applied to the nonlinear system obeys the specified constraints, and with the proposed concept the constrained input and its rate of change cannot violate the specified range.

4.2. Effect of load disturbances and noise

In order to test the effect of load disturbances, a constant equal to 0.02 was added to the system output from iteration 100 to iteration 125 and from iteration 200 to iteration 225. In the case of noise, a pseudo-random noise of maximum amplitude 0.025 was added to the process output. Figs. 6 and 7 present the evolution of the set point and the outputs obtained, respectively, in the presence of load disturbances and noise. Fig. 6 shows the evolution of the set point and the output signals obtained with both the NMPC and the MAMPC control strategies. It is clear from this figure that the presence of load disturbances, from iteration 70 to iteration 90 and from iteration 120 to iteration 140, degrades the tracking with the NMPC controller; thus, load disturbances affect the NMPC control more than the MAMPC strategy. Fig. 7 shows the evolution of the set point and the outputs obtained with the NMPC and MAMPC strategies. According to the obtained results, we notice that the MAMPC controller is capable of delivering a less fluctuating output than that obtained with the NMPC approach.

Figure 6.

Evolution of the set point and the outputs with NMPC and MAMPC control in the case of load disturbances.

Figure 7.

Evolution of the set point and the outputs with NMPC and MAMPC control in the presence of noise.

4.3. Convex optimization approach

In order to avoid solving a nonconvex optimization problem, the MAMPC optimization procedure, a method for convex NMPC, was also developed in this chapter. The performance of the proposed controllers is evaluated on the same process, and attention is focused on the multi-agent model predictive control approach as a possible way to resolve non-convex optimization tasks. Figure 8 shows the behavior under a new constraint in which the control is limited between 0 and 0.5. The nonlinear programming (NLP) algorithm cannot find a solution to the optimization problem. Because of the use of a nonlinear model, the NMPC calculation usually involves a non-convex nonlinear program, for which the numerical solution is very challenging. Therefore, finding a global optimum can be a difficult and computationally very demanding task, if possible at all; in other words, non-convexity makes the solution of the NLP uncertain. The proposed approach describes an algorithm that finds a solution to such a non-convex program.

Figure 8.

Evolution of the set point and the outputs with NMPC and MAMPC control: restricted applicability of the NMPC

4.4. Computational time study

The computational load constitutes a bottleneck in predictive control schemes (Leonidas et al., 2005). The computational performance obtained with the proposed concept is compared with that of nonlinear programming. In Figure 9 the time required to compute the control input at each time step $k$ is plotted for the two approaches. Table 1 reports the mean and maximum implementation times required for the control law in the two cases. In Figure 10, the CPU time required to compute the control input at each time step $k$ is plotted for the two approaches. It is easy to see from Figures 9 and 10 and Table 1 that the NMPC controller is very CPU-time consuming, while the computation of the optimization in the new design procedure is simpler and faster and delivers a good response and control performance, because it uses a simple analytical solution to minimize the performance objective. On average, the NMPC method was more than an order of magnitude slower than the novel approach, and the maximum operating time of the MAMPC procedure is roughly twenty times smaller.

Figure 9.

Computational time requirement

Figure 10.

CPU time comparison

            Mean            Max
NMPC        0.0224          0.7190
MAMPC       9.6875e-04      0.032

Table 1.

Comparison of operating time

4.5. Controller performance comparison

Through simulation-based comparisons, it is shown that the MAMPC control system is capable of delivering significantly improved control performance in comparison with a conventional NMPC, so that the difficulty of minimizing the performance function for nonlinear predictive control, usually handled by an NLP solved at each sampling time and generally non-convex, is avoided. Moreover, the nonlinear controls based on the MAMPC approach provide excellent performance in terms of disturbance rejection, noise suppression and set-point tracking. The NMPC controller is also good at disturbance rejection and noise suppression, but its set-point tracking is less successful. In order to compare the novel concept with the NMPC controller, the performance was measured by the following indices, in the unconstrained and constrained cases, given by [Abonyi J., 2003]:

$$\mathrm{SSE} = \sum_{k=1}^{N} \big( \mathrm{Setpoint} - y(k) \big)^2, \qquad
\mathrm{SSU} = \sum_{k=1}^{N} u^2(k), \qquad
\mathrm{SS\Delta U} = \sum_{k=1}^{N} \Delta u^2(k) \tag{29}$$

where SSE denotes the sum of squared errors, SSU the sum of squares of the control signal, SSΔU the sum of squares of the control signal increments, and $N$ is the number of samples. The values, summarized in Table 2, show that the MAMPC achieves better control performance than the NMPC controller. Moreover, the MAMPC produces the best tracking performance and the smallest energy consumption. A sketch of how these indices are computed from the logged signals is given after Table 2.

Constrained case
            SSE         SSU (×10⁻¹¹)    SSΔU (×10⁻¹³)
NMPC        0.0135      5.5415          16754
MAMPC       0.0042      1.2464          3.8622

Table 2.

Control performance comparison
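A minimal sketch of how these indices can be computed from logged trajectories follows; it uses the definitions of Equation (29) as interpreted above (SSU taken as the sum of squared control values).

```python
import numpy as np

def performance_indices(setpoint, y, u):
    """SSE, SSU and SS(Delta U) of Eq. (29) from logged set point, output and input."""
    du = np.diff(u, prepend=u[0])                  # Delta u(k) = u(k) - u(k-1)
    sse = float(np.sum((setpoint - y) ** 2))       # tracking error energy
    ssu = float(np.sum(np.asarray(u) ** 2))        # control signal energy
    ssdu = float(np.sum(du ** 2))                  # control increment energy
    return sse, ssu, ssdu
```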


5. Conclusions

One of the main drawbacks of NMPC schemes is the enormous computational effort these controllers require. Linear MPC methods, on the other hand, can be implemented by solving just quadratic programming (QP) or linear programming (LP) problems. The main focus of this chapter has been to develop a new control algorithm that is, in essence, a bridge between linear and nonlinear control. This resulted in the development of the MAMPC (multi-agent model predictive control) approach. The new NMPC scheme based on MAMPC is implemented to reduce the computational effort. The control performance of the MAMPC algorithm is evaluated by comparison with a general NMPC. All the results show that the MAMPC approach is a promising algorithm, delivering significantly improved control. The performance of the proposed controllers is evaluated by applying them to single-input single-output control of a nonlinear system. Theoretical analysis and simulation results demonstrate better performance of the MAMPC over a conventional NMPC based on sequential quadratic programming, both in tracking set-point changes and in stabilizing the operation in the presence of input disturbances.

References

  1. Maciejowski, J. (2002). Predictive Control with Constraints, Prentice Hall, London.
  2. Qin, S. J. & Badgwell, T. A. (2003). A survey of industrial model predictive control technology, Control Engineering Practice, 11, 733-764.
  3. Kurtz, M. J. & Henson, M. A. (1997). Input-output linearizing control of constrained nonlinear processes, Journal of Process Control, 7(1), 3-17.
  4. Arahal, M. R., Berenguel, M. & Camacho, E. F. (1998). Neural identification applied to predictive control of a solar plant, Control Engineering Practice, 6, 333-344.
  5. Fischer, M. & Nelles, O. (1998). Predictive control based on local linear fuzzy models, International Journal of Systems Science, 29, 679.
  6. Mollov, S., Babuska, R., Abonyi, J. & Verbruggen, H. B. (2004). Effective optimization for fuzzy model predictive control, IEEE Transactions on Fuzzy Systems, 12(5), 661-675.
  7. Brooms, A. & Kouvaritakis, B. (2000). Successive constrained optimization and interpolation in non-linear model based predictive control, International Journal of Control, 68(3), 599-623.
  8. Abonyi, J., Babuska, R., Ayala Botto, M., Szeifert, F. & Nagy, L. (2000). Identification and control of nonlinear systems using fuzzy Hammerstein models, Industrial & Engineering Chemistry Research, 39, 4302-4314.
  9. Kanev, S. & Verhaegen, M. (2000). Controller reconfiguration for non-linear systems, Control Engineering Practice, 8(11), 1223-1235.
  10. Negenborn, R. R., De Schutter, B. & Hellendoorn, J. (2004). Multi-agent model predictive control: A survey, Technical Report 04-010, Delft Center for Systems and Control.
  11. Van Soest, W. R., Chu, Q. P. & Mulder, J. (2005). Combined feedback linearization and constrained model predictive control for entry flight, Journal of Guidance, Control, and Dynamics, 29(2), 427-434.
  12. Joachim, H., Bock, H. G. & Diehl, M. (2006). Online active set strategy for fast parametric quadratic programming in MPC applications, Proceedings of the IFAC Workshop on Nonlinear Model Predictive Control for Fast Systems (NMPC-06), France, pp. 13-22.
  13. Didier, G. (2006). Distributed model predictive control via decomposition coordination techniques and the use of an augmented Lagrangian, IFAC Workshop on NMPC for Fast Systems, France, pp. 111-116.
  14. Ben Nasr, H. & M'Sahli, F. (2008a). Computational time requirement comparison between two approaches in MBPC, International Journal of Soft Computing, 3(2), 147-154.
  15. Ben Nasr, H. & M'Sahli, F. (2008b). Une supervision d'action pour la commande prédictive multi-agent, 1st International Workshop on Systems Engineering Design & Applications, SENDA 2008.
  16. Ben Nasr, H. & M'Sahli, F. (2008c). Une nouvelle stratégie de commande prédictive des systèmes non linéaires à dynamique rapide, Conférence Internationale Francophone d'Automatique (CIFA 2008), Bucarest, Roumanie, 3-5 septembre.
  17. Ben Nasr, H. & M'Sahli, F. (2008d). Multi-agent approach to TS-fuzzy modeling and predictive control of constrained nonlinear system, International Journal of Intelligent Computing and Cybernetics, 3, 398-424.
  18. Ben Nasr, H. & M'Sahli, F. (2008e). Multi-agent predictive control approach based on fuzzy supervisory loop for fast dynamic systems, The Fifth International Multi-Conference on Systems, Signals and Devices, SSD 2008.
  19. Clarke, D. W., Mohtadi, C. & Tuffs, P. S. (1987). Generalized predictive control. Part I: The basic algorithm. Part II: Extensions and interpretations, Automatica, 23, 137-160.
  20. Haber, R., Bars, R. & Engel, L. (1999a). Three extended horizon adaptive nonlinear predictive control schemes based on the parametric Volterra model, Proceedings of the European Control Conference (ECC), Karlsruhe, Germany.
  21. Haber, R., Bars, R. & Engel, L. (1999b). Sub-optimal nonlinear predictive and adaptive control based on the parametric Volterra model, Applied Mathematics and Computer Science.
  22. Matthew, J., Rawlings, T. B. & Rawlings, J. B. (2002). Texas-Wisconsin Modeling and Control Consortium, Technical Report 2002-04.
  23. Doyle, F. J., Ogunnaike, B. A. & Pearson, R. K. (1995). Nonlinear model-based control using second-order Volterra models, Automatica, 31(5), 697-714.
  24. Laroche, B., Martin, P. & Rouchon, P. (2000). Motion planning for the heat equation, International Journal of Robust and Nonlinear Control, 10, 629-643.
  25. Fletcher, R. (1987). Practical Methods of Optimization, John Wiley and Sons.
  26. Wenzhi, G. & Rastko, R. (2006). Neural network control of a class of nonlinear systems with actuator saturation, IEEE Transactions on Neural Networks, 17(1), 147-156.
  27. Chiu, S. L. (1994). Fuzzy model identification based on cluster estimation, Journal of Intelligent and Fuzzy Systems, 2, 267-278.
  28. Leonidas, G. B. & Mayuresh, V. K. (2005). Real-time implementation of model predictive control, Proceedings of the American Control Conference, Portland, OR, pp. 4166-4171.
  29. Abonyi, J. (2003). Fuzzy Model Identification for Control, Birkhäuser, Boston.
