
Discrete-Time Model Predictive Control

Written By

Li Dai, Yuanqing Xia, Mengyin Fu and Magdi S. Mahmoud

Submitted: 07 March 2012 Published: 05 December 2012

DOI: 10.5772/51122

From the Edited Volume

Advances in Discrete Time Systems

Edited by Magdi S. Mahmoud


1. Introduction

More than 25 years after model predictive control (MPC) or receding horizon control (RHC) appeared in industry as an effective tool to deal with multivariable constrained control problems, a theoretical basis for this technique has started to emerge, see [1-3] for reviews of results in this area.

The focus of this chapter is on MPC of constrained dynamic systems, both linear and nonlinear, with emphasis on the ability of MPC to handle constraints, which is what makes it so attractive to industry. We first give an overview of the origin of MPC and introduce the definitions, characteristics, mathematical formulation and properties underlying MPC. Furthermore, MPC methods for linear and nonlinear systems are developed under the assumption that the plant under control is described by a discrete-time model. Although a continuous-time representation would be more natural, since the plant model is usually derived from first principles equations, it leads to a more difficult development of the MPC control law, because it in principle calls for the solution of a functional optimization problem: the performance index to be minimized is defined in a continuous-time setting, and the overall optimization procedure is assumed to be repeated after any vanishingly small sampling time, which often turns out to be a computationally intractable task. By contrast, MPC algorithms based on a discrete-time system representation are computationally simpler. The system to be controlled, which is usually described or approximated by an ordinary differential equation, is typically modeled by a difference equation in the MPC literature, since the control is normally piecewise constant. Hence, from now on we concentrate on results related to discrete-time systems.

By and large, the main disadvantage of MPC is that it cannot explicitly deal with plant model uncertainties. To confront such problems, several robust model predictive control (RMPC) techniques have been developed in recent decades. We review the RMPC methods that are most widely employed and discuss their advantages and disadvantages. The basic idea of each method and some of its applications are stated as well.

Most MPC strategies consider hard constraints, and a number of RMPC techniques exist to handle uncertainty. However, model and measurement uncertainties are often stochastic, and RMPC can therefore be conservative, since it ignores information on the probabilistic distribution of the uncertainty. It is possible to adopt a stochastic uncertainty description (instead of a set-based description) and develop a stochastic MPC (SMPC) algorithm. Some of the recent advances in this area are reviewed.

In recent years, there has been much interest in networked control systems (NCSs), that is, control systems closed via possibly shared communication links with delay/bandwidth constraints. The main advantages of NCSs are low cost, simple installation and maintenance, and potentially high reliability. However, the use of the network leads to intermittent losses or delays of the communicated information. These losses tend to deteriorate the performance and may even destabilize the system. The MPC framework is particularly appropriate for controlling systems subject to data losses because the actuator can profit from the predicted evolution of the system. In section 7, results from our recent research are summarized. We propose a new networked control scheme which can overcome the effects caused by the network delay.

At the beginning of research on NCSs, attention was mainly paid to a single plant controlled through a network. Recently, fruitful research results on multi-plant and, especially, multi-agent networked control systems have been obtained. MPC lends itself as a natural control framework for the design of coordinated, distributed control systems because it can account for the actions of other actuators while computing the control action of a given set of actuators in real time. In section 8, a number of distributed control architectures for interconnected systems are reviewed. Attention is focused on design approaches based on model predictive control.


2. Model Predictive Control

Model predictive control is a form of control in which the current control action is obtained by solving, at each sampling instant, a finite horizon open-loop optimal control problem, using the current state of the plant as the initial state. The optimization yields an optimal control sequence, and the first control in this sequence is applied to the plant. This is the main difference from conventional control, which uses a pre-computed control law.

2.1. Characteristics of MPC

MPC is by now a mature technology. It is fair to mention that it is the standard approach for implementing constrained, multivariable control in the process industries today. Furthermore, MPC can handle control problems where off-line computation of a control law is difficult or impossible.

Specifically, an important characteristic of this type of control is its ability to cope with hard constraints on controls and states. Nearly every application imposes constraints: actuators are naturally limited in the force (or equivalent) they can apply, safety considerations limit states such as temperature, pressure and velocity, and efficiency often dictates steady-state operation close to the boundary of the set of permissible states. In this regard, MPC is one of the few methods with real applied utility, and this fact makes it an important tool for the control engineer, particularly in the process industries, where the plants being controlled are sufficiently slow to permit its implementation.

In addition, another important characteristic of MPC is its ability to handle control problems where off-line computation of a control law is difficult or impossible. Examples where MPC may be advantageously employed include unconstrained nonlinear plants, for which on-line computation of a control law usually requires the plant dynamics to possess a special structure, and time-varying plants.

A fairly complete discussion of several design techniques based on MPC and their relative merits and demerits can be found in the review article by [4].

2.2. Essence of MPC

As mentioned in the excellent review article [2], MPC is not a new method of control design. Rather, it essentially solves a standard optimal control problem, except that the horizon is required to be finite, in contrast to the infinite horizon usually employed in $H_2$ and $H_\infty$ linear optimal control. Where it differs from other controllers is that it solves the optimal control problem on-line for the current state of the plant, rather than providing the optimal control for all states by determining a feedback policy off-line.

The on-line solution amounts to solving an open-loop optimal control problem in which the initial state is the current state of the system being controlled; this is just a mathematical programming problem. Determining the feedback solution, by contrast, requires solving the Hamilton-Jacobi-Bellman (dynamic programming) differential or difference equation, a vastly more difficult task (except in those cases, such as $H_2$ and $H_\infty$ linear optimal control, where the value function can be finitely parameterized). From this point of view, MPC differs from other control methods merely in its implementation. The requirement that the open-loop optimal control problem be solvable in a reasonable time (compared with the plant dynamics) necessitates, however, the use of a finite horizon, and this raises interesting problems.


3. Linear Model Predictive Control

MPC has become an attractive feedback strategy, especially for linear processes. By now, linear MPC theory is quite mature. The issues of feasibility of the on-line optimization, stability and performance are largely understood for systems described by linear models.

3.1. Mathematical formulation

The idea of MPC is not limited to a particular system description, but the computation and implementation depend on the model representation. Depending on the context, one may readily switch between state space, transfer matrix and convolution type models [4]. In the research literature, however, MPC is nowadays almost always formulated in state space. We will assume the system to be described by a linear discrete-time state-space model.

$x(k+1) = Ax(k) + Bu(k), \quad x(0) = x_0$  (1)

where $x(k) \in \mathbb{R}^n$ is the state vector at time $k$, and $u(k) \in \mathbb{R}^r$ is the vector of manipulated variables to be determined by the controller. The control and state sequences must satisfy $u(k) \in U$, $x(k) \in X$. Usually, $U$ is a convex, compact subset of $\mathbb{R}^r$ and $X$ a convex, closed subset of $\mathbb{R}^n$, each set containing the origin in its interior.

The control objective is usually to steer the state to the origin or to an equilibrium state $x_r$ for which the output $y_r = h(x_r) = r$, where $r$ is the constant reference. A suitable change of coordinates reduces the second problem to the first, which we therefore consider in the sequel. Assume that a full measurement of the state $x(k)$ is available at the current time $k$. Then, for the event $(x, k)$ (i.e. for state $x$ at time $k$), a receding horizon implementation is typically formulated by introducing the following open-loop optimization problem:

$\min_{u(\cdot)} \; J_{(p,m)}(x(k))$  (2)

subject to

$u(k) \in U, \quad x(k) \in X$,  (3)

($p \geq m$), where $p$ denotes the length of the prediction horizon (or output horizon) and $m$ denotes the length of the control horizon (or input horizon). (When $p = \infty$, we refer to this as the infinite horizon problem; similarly, when $p$ is finite, we refer to it as a finite horizon problem.) For the problem to be meaningful we assume that the origin $(x = 0, u = 0)$ is in the interior of the feasible region.

Several choices of the objective function $J_{(p,m)}(x(k))$ in the optimization (2) have been reported in [4-7] and compared in [8]. In this chapter, we consider the following quadratic objective:

$J_{(p,m)}(x(k)) = x(k+p|k)^T P\, x(k+p|k) + \sum_{i=1}^{p-1} x(k+i|k)^T Q\, x(k+i|k) + \sum_{i=0}^{m-1} u(k+i|k)^T R\, u(k+i|k)$  (4)

where $u(\cdot) := [u(k)^T, \ldots, u(k+m-1|k)^T]^T$ is the sequence of manipulated variables to be optimized; $x(k+i|k)$, $i = 1, 2, \ldots, p$, denote the state predictions generated by the nominal model (1) on the basis of the state information at time $k$ under the action of the control sequence $u(k), u(k+1|k), \ldots, u(k+i-1|k)$; and $P \geq 0$, $Q$ and $R$ are strictly positive definite symmetric weighting matrices. Let $u^*_{(p,m)}(i|k)$, $i = k, \ldots, k+m-1$, be the minimizing control sequence for $J_{(p,m)}(x(k))$ subject to the system dynamics (1) and the constraint (3), and let $J^*_{(p,m)}$ be the optimal value function.

A receding horizon policy proceeds by implementing only the first control $u^*_{(p,m)}(k|k)$ to obtain $x(k+1) = Ax(k) + Bu^*_{(p,m)}(k|k)$. The rest of the control sequence $u^*_{(p,m)}(i|k)$, $i = k+1, \ldots, k+m-1$, is discarded, and $x(k+1)$ is used as the new initial condition when the optimization problem (2) is updated. This process is repeated, each time using only the first control action, shifting the cost ahead one time step and re-solving. This is why MPC is also sometimes referred to as receding horizon control (RHC) or moving horizon control (MHC). The purpose of taking new measurements at each time step is to compensate for unmeasured disturbances and model inaccuracy, both of which cause the system output to differ from the one predicted by the model. Fig. 1 presents a conceptual picture of MPC.

Figure 1.

Principle of model predictive control.
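To make the receding-horizon mechanism concrete, the following is a minimal Python sketch of the loop just described, using the cvxpy modeling package; the model matrices, horizons, weights and input bound are illustrative assumptions rather than data from the chapter.

```python
import numpy as np
import cvxpy as cp

# Illustrative double-integrator-like model (1); all data here are assumptions.
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.005],
              [0.1]])
n, r = B.shape
p = m = 10                      # prediction horizon = control horizon
Q = np.eye(n)                   # state weight
R = 0.1 * np.eye(r)             # input weight
P = np.eye(n)                   # terminal weight
u_max = 1.0                     # U = {u : |u| <= u_max}; X = R^n for simplicity

def solve_open_loop(x0):
    """Solve the finite-horizon problem (2)-(4) for the current state x0."""
    x = cp.Variable((n, p + 1))
    u = cp.Variable((r, m))
    cost = cp.quad_form(x[:, p], P)
    cons = [x[:, 0] == x0]
    for i in range(p):
        ui = u[:, min(i, m - 1)]            # input held constant beyond m - 1
        cost += cp.quad_form(x[:, i], Q) + cp.quad_form(ui, R)
        cons += [x[:, i + 1] == A @ x[:, i] + B @ ui,
                 cp.norm(ui, 'inf') <= u_max]
    cp.Problem(cp.Minimize(cost), cons).solve()
    return u.value

# Receding-horizon loop: apply only the first move, then measure and re-solve.
x = np.array([1.0, 0.0])
for k in range(30):
    u_seq = solve_open_loop(x)
    x = A @ x + B @ u_seq[:, 0]             # plant update with the first move
```

Only `u_seq[:, 0]` ever reaches the plant; the remaining moves are recomputed at the next sampling instant, exactly as in the receding-horizon policy above.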

Three practical questions are immediate [1]:

  1. When is the problem formulated above feasible, so that the algorithm yields a control action which can be implemented?

  2. When does the sequence of computed control actions lead to a system which is closed-loop stable?

  3. What closed-loop performance results from repeated solution of the specified open-loop optimal control problem?

These questions will be explained in the following sections.

3.2. Feasibility

The constraints stipulated in (3) may render the optimization problem infeasible. It may happen, for example because of a disturbance, that the optimization problem posed above becomes infeasible at a particular time step. It may also happen that the algorithm, which minimizes an open-loop objective, inadvertently drives the closed-loop system outside the feasible region.

3.3. Closed loop stability

In either the infinite or the finite horizon constrained case it is not clear under what conditions the closed loop system is stable. Much research on linear MPC has focused on this problem. Two approaches have been proposed to guarantee stability: one based on the original problem (1), (2), and (3) and the other where a contraction constraint is added [9, 10]. With the contraction constraint the norm of the state is forced to decrease with time and stability follows trivially independent of the various parameters in the objective function. Without the contraction constraint the stability problem is more complicated.

General proofs of stability for constrained MPC based on the monotonicity property of the value function have been proposed by [11] and [12]. The most comprehensive and also most compact analysis has been presented by [13] and [14] whose arguments we will sketch here.

To simplify the exposition we assume $p = m = N$, so that $J_{(p,m)} = J_N$ as defined in (2). The key idea is to use the optimal finite horizon cost $J_N^*$, the value function, as a Lyapunov function. One wishes to show that

$J_N^*(x(k)) - J_N^*(x(k+1)) > 0, \quad \text{for } x \neq 0$  (5)

Rewriting $J_N^*(x(k)) - J_N^*(x(k+1))$ gives

$J_N^*(x(k)) - J_N^*(x(k+1)) = \left[ x^T(k) Q x(k) + u^{*T}(x(k)) R u^*(x(k)) \right] + \left[ J_{N-1}^*(x(k+1)) - J_N^*(x(k+1)) \right]$  (6)

If it can be shown that the right-hand side of (6) is positive, then stability is proven. Since $Q > 0$ and $R > 0$, the first term $[x^T(k) Q x(k) + u^{*T}(x(k)) R u^*(x(k))]$ is positive. In general, however, it cannot be asserted that the second term $[J_{N-1}^*(x(k+1)) - J_N^*(x(k+1))]$ is nonnegative.

Several approaches have been presented to ensure that the right-hand side of (6) is positive; see [1]. The various constraints introduced to guarantee stability (end constraint for all states, end constraint for unstable modes, terminal region, etc.) may lead to feasibility problems. For instance, the terminal equality constraint may become infeasible unless a sufficiently large horizon is used. The sketch below illustrates the decrease condition numerically for one such device.
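As a numerical illustration of the argument around (5)-(6), the following sketch (assumed system data; a zero terminal equality constraint is used as one of the stabilizing devices just mentioned) solves the finite-horizon problem at $x(k)$ and at the nominal successor state, and checks that the value function decreases.

```python
import numpy as np
import cvxpy as cp

A = np.array([[1.0, 0.1], [0.0, 1.0]])      # assumed model data
B = np.array([[0.005], [0.1]])
N = 15
Q, R = np.eye(2), 0.1 * np.eye(1)

def value_function(x0):
    """J_N^*(x0) with a zero terminal equality constraint."""
    x = cp.Variable((2, N + 1))
    u = cp.Variable((1, N))
    cost = sum(cp.quad_form(x[:, i], Q) + cp.quad_form(u[:, i], R)
               for i in range(N))
    cons = [x[:, 0] == x0, x[:, N] == 0]
    cons += [x[:, i + 1] == A @ x[:, i] + B @ u[:, i] for i in range(N)]
    prob = cp.Problem(cp.Minimize(cost), cons)
    prob.solve()
    return prob.value, u[:, 0].value

xk = np.array([1.0, 0.0])
Jk, u0 = value_function(xk)
xk1 = A @ xk + B @ u0                        # nominal successor state
Jk1, _ = value_function(xk1)
assert Jk - Jk1 > 0                          # decrease condition (5) holds
```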

3.4. Open-loop performance objective versus closed loop performance

In receding horizon control only the first of the computed control moves is implemented, and the remaining ones are discarded. Therefore the sequence of actually implemented control moves may differ significantly from the sequence of control moves calculated at a particular time step. Consequently, the finite horizon objective which is minimized may have only a tenuous connection with the value of the objective function obtained when the control moves are actually implemented.


4. Nonlinear model predictive control

Linear MPC has, in our opinion, reached a stage of maturity that warrants the active interest of researchers in nonlinear control. While linear model predictive control has been popular since the 1970s, the 1990s witnessed steadily increasing attention from control theorists as well as control practitioners in the area of nonlinear model predictive control (NMPC).

The practical interest is driven by the fact that many systems are inherently nonlinear and today's processes need to be operated under tighter performance specifications. At the same time, more and more constraints, for example from environmental and safety considerations, need to be satisfied. In these cases, linear models are often inadequate to describe the process dynamics and nonlinear models have to be used. This motivates the use of nonlinear model predictive control.

The system to be controlled is described, or approximated by a discrete-time model

$x(k+1) = f(x(k), u(k)), \quad y(k) = h(x(k))$,  (7)

where f(·) is implicitly defined by the originating differential equation that has an equilibrium point at the origin (i.e. f ( 0 , 0 ) = 0 ). The control and state sequences must satisfy (3).

4.1. Difficulties of NMPC

The same receding horizon idea which we discussed in section 3 is also the principle underlying nonlinear MPC, with the exception that the model describing the process dynamics is nonlinear. Contrary to the linear case, however, feasibility and the possible mismatch between the open-loop performance objective and the actual closed-loop performance are largely unresolved research issues in nonlinear MPC. An additional difficulty is that the optimization problems to be solved on-line are generally nonlinear programs without any redeeming features, which implies that convergence to a global optimum cannot be assured; for the quadratic programs arising in the linear case this is guaranteed. As most proofs of stability for constrained MPC are based on the monotonicity property of the value function, global optimality is usually not required, as long as the cost attained at the minimizer decreases (which is usually the case, especially when the optimization algorithm is initialized from the previous shifted optimal sequence, as in the sketch below). However, although stability is not affected by a local minimum, performance clearly deteriorates.
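The following sketch shows what the on-line nonlinear program looks like in practice, using a local solver warm-started from the previous shifted optimal sequence as discussed above; the model, horizon, weights and solver choice are all illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def f(x, u):
    """Assumed nonlinear discrete-time model (7) with f(0, 0) = 0."""
    return np.array([x[0] + 0.1 * x[1],
                     x[1] + 0.1 * (u - np.sin(x[0]))])

N, u_max = 10, 1.0

def cost(u_seq, x0):
    """Finite-horizon cost: quadratic stage cost plus a terminal penalty."""
    x, J = np.asarray(x0, dtype=float), 0.0
    for u in u_seq:
        J += x @ x + 0.1 * u * u
        x = f(x, u)
    return J + 10.0 * (x @ x)

x = np.array([1.0, 0.0])
u_warm = np.zeros(N)
for k in range(30):
    # Nonlinear program solved by a local method; only a local optimum
    # is guaranteed, which is why the warm start matters for the proofs.
    res = minimize(cost, u_warm, args=(x,), bounds=[(-u_max, u_max)] * N)
    x = f(x, res.x[0])                       # apply the first move only
    u_warm = np.append(res.x[1:], 0.0)       # shifted sequence as next guess
```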

The next section focuses on system theoretical aspects of NMPC. Especially the question on closed-loop stability is considered.

4.2. Closed-loop stability

One of the key questions in NMPC is certainly whether a finite horizon NMPC strategy leads to stability of the closed loop or not. Here only the key ideas are reviewed and no detailed proofs are given. Furthermore, we will not cover all existing NMPC approaches; instead we refer the reader to the overview papers [2, 15, 16]. For all the following sections it is assumed that the prediction horizon equals the control horizon, that is, $p = m$.

4.2.1. Infinite horizon NMPC

As pointed out, the key problem with a finite prediction and control horizon stems from the fact that the predicted open-loop and the resulting closed-loop behaviors are in general different. The most intuitive way to achieve stability is the use of an infinite horizon cost [17, 18], that is, $p$ in (4) is set to $\infty$. As mentioned in [19], in the nominal case feasibility at one sampling instant also implies feasibility and optimality at the next sampling instant. This follows from Bellman's Principle of Optimality [20]: the input and state trajectories computed as the solution of the NMPC optimization problem (2), (3) and (7) at a specific instant in time are in fact equal to the closed-loop trajectories of the nonlinear system, i.e. the remaining parts of the trajectories after one sampling instant are the optimal solution at the next sampling instant. This fact also implies closed-loop stability. For the infinite-horizon constrained case, [21] considered T-S fuzzy systems with PDC and non-PDC laws. New sufficient conditions were proposed in terms of LMIs, and the corresponding PDC and non-PDC state-feedback controllers were designed, which guarantee that the resulting closed-loop fuzzy system is asymptotically stable. In addition, the feedback controllers meet the specifications for fuzzy systems with input or output constraints.

4.2.2. Finite horizon NMPC

In this section, different possibilities to achieve closed-loop stability for NMPC with a finite horizon length are reviewed. Just as outlined for the linear case, the proofs employ the value function as a Lyapunov function, and in general a global optimum must be found at each time step to guarantee stability. As mentioned above, when the horizon is infinite, feasibility at a particular time step implies feasibility at all future time steps. Unfortunately, contrary to the linear case, the infinite horizon problem cannot be solved numerically.

Most approaches modify the NMPC setup such that stability of the closed loop can be guaranteed independently of the plant and performance specifications. This is usually achieved by adding suitable equality or inequality constraints and suitable additional penalty terms to the objective functional. These additional constraints are usually not motivated by physical restrictions or desired performance requirements; their sole purpose is to enforce stability of the closed loop. Therefore, they are usually termed stability constraints.

Terminal equality constraint. The simplest possibility to enforce stability with a finite prediction horizon is to add a so-called zero terminal equality constraint at the end of the prediction horizon [17, 23, 28], i.e. to add the equality constraint $x(k+p|k) = 0$ to the optimization problem (2), (3) and (7). This leads to stability of the closed loop if the optimal control problem possesses a solution at $k$, since feasibility at one time instant also leads to feasibility at the following time instants and to a decrease in the value function.

The first proposal for this form of model predictive control for time-varying, constrained, nonlinear, discrete-time systems was made by [17]. This paper is particularly important because it provides a definitive stability analysis of this version of discrete-time receding horizon control (under mild conditions of controllability and observability) and shows that the value function $J_N^*$ associated with the finite horizon optimal control problem approaches that of the infinite horizon problem as the horizon approaches infinity. This paper remains a key reference on the stabilizing properties of model predictive control and subsumes much of the later literature on discrete-time MPC that uses a terminal equality constraint.

In fact, the main advantages of this version are its straightforward application and conceptual simplicity. On the other hand, one disadvantage of a zero terminal constraint is that the system must be brought to the origin in finite time. This leads in general to feasibility problems for short prediction/control horizon lengths, i.e. to a small region of attraction. From a computational point of view, the optimization problem with terminal constraint can be solved in principle, but equality constraints are computationally very expensive and can only be met asymptotically [24]. In addition, one cannot guarantee convergence to a feasible solution even when a feasible solution exists, a discomforting fact. Furthermore, specifying a terminal constraint which is not met in actual operation is always somewhat artificial and may lead to aggressive behavior. Finally, to reduce the complexity of the optimization problem it is desirable to keep the control horizon small or, more generally, to characterize the control input sequence with a small number of parameters. However, a small number of degrees of freedom may lead to quite a gap between the open-loop performance objective and the actual closed-loop performance.

Terminal constraint set and terminal cost function. Many schemes have been proposed [24, 26-28, 30, 31, 36, 37] to try to avoid the use of a zero terminal constraint. Most of them either use the so-called terminal region constraint

$x(k+p|k) \in X_f \subseteq X$  (8)

and/or a terminal penalty term $E(x(k+p|k))$ which is added to the cost functional. Note that the terminal penalty term is not a performance specification that can be chosen freely. Rather, $E$ and the terminal region $X_f$ in (8) are determined off-line such that stability is enforced.

  • Terminal constraint set. In this version of model predictive control, $X_f$ is a subset of $X$ containing a neighborhood of the origin. The purpose of the model predictive controller is to steer the state to $X_f$ in finite time; inside $X_f$, a local stabilizing controller $k_f(\cdot)$ is employed. This form of model predictive control is therefore sometimes referred to as dual mode. Fixed horizon versions for constrained, nonlinear, discrete-time systems are proposed in [32] and [33].

  • Terminal cost function. One of the earliest proposals for modifying (2), (3) and (7) to ensure closed-loop stability was the addition of a terminal cost. In this version of model predictive control, the terminal cost $E(\cdot)$ is nontrivial and there is no terminal constraint, so that $X_f = \mathbb{R}^n$. The proposal [34] was made in the context of predictive control of unconstrained linear systems. Can this technique for achieving stability (by adding only a terminal cost) be successfully employed for constrained and/or nonlinear systems? From the literature the answer may appear affirmative. However, in this literature there is an implicit requirement that $x(k+p|k) \in X_f$ be satisfied for every initial state in a given compact set, and this is automatically satisfied if $N$ is chosen sufficiently large. The constraint $x(k+p|k) \in X_f$ then need not be included explicitly in the optimal control problem actually solved on-line. Whether this type of model predictive control is regarded as having only a terminal cost or having both a terminal cost and a terminal constraint is a matter of definition. We prefer to consider it as belonging to the latter category, as the constraint is necessary even though it is automatically satisfied when $N$ is sufficiently large.

Terminal cost and constraint set. Most recent model predictive controllers belong to this category. There are a variety of good reasons for incorporating both a terminal cost and a terminal constraint set in the optimal control problem. Ideally, the terminal cost $E(\cdot)$ should be the infinite horizon value function $J^*(\cdot)$; if this were the case, then $J_N^*(\cdot) = J^*(\cdot)$, on-line optimization would be unnecessary, and the known advantages of an infinite horizon, such as stability and robustness, would automatically accrue. Nonlinearity and/or constraints render this impossible, but it is possible to choose $E(\cdot)$ so that it is exactly or approximately equal to $J^*(\cdot)$ in a suitable neighborhood of the origin. Choosing $X_f$ to be an appropriate subset of this neighborhood yields many advantages and motivates the choice of $E(\cdot)$ and $X_f$ in most examples of this form of model predictive control.

For the case when the system is nonlinear but there are no state or control constraints, [35] uses a stabilizing local control law $k_f(\cdot)$, a terminal cost function $E(\cdot)$ that is a (local) Lyapunov function for the stabilized system, and a terminal constraint set $X_f$ that is a level set of $E(\cdot)$ and is positively invariant for the system $x(k+1) = f(x(k), k_f(x(k)))$. The terminal constraint is omitted from the optimization problem solved on-line, but it is nevertheless shown that this constraint is automatically satisfied for all initial states in a level set of $J_N^*(\cdot)$. The resultant closed-loop system is asymptotically (or exponentially) stable with a region of attraction equal to this level set of $J_N^*(\cdot)$.

When the system is both nonlinear and constrained, $E(\cdot)$ and $X_f$ combine features from the example immediately above. In [36], $k_f(\cdot)$ is chosen to stabilize the linearized system $x(k+1) = Ax(k) + Bu(k)$, where $A := (\partial f/\partial x)(0,0)$ and $B := (\partial f/\partial u)(0,0)$. The author of [36] then employs a non-quadratic terminal cost $E(\cdot)$ and a terminal constraint set $X_f$ that is positively invariant for the nonlinear system $x(k+1) = f(x(k), k_f(x(k)))$ and satisfies $X_f \subseteq X$ and $k_f(X_f) \subseteq U$.
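A sketch of how such terminal ingredients can be computed off-line for a discrete-time nonlinear model: linearize at the origin, choose $k_f$ as an LQR gain for the linearization, take the Riccati solution as a quadratic stand-in for the terminal cost $E(\cdot)$ (the cost in [36] is non-quadratic), and take $X_f$ as a small level set. The model, weights and level below are illustrative assumptions, and the invariance of $X_f$ would still have to be verified.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

def f(x, u):
    """Assumed nonlinear model with an equilibrium at the origin."""
    return np.array([x[0] + 0.1 * x[1],
                     x[1] + 0.1 * (u[0] - np.sin(x[0]))])

# Jacobians A := (df/dx)(0,0), B := (df/du)(0,0) by finite differences
eps = 1e-6
x0, u0 = np.zeros(2), np.zeros(1)
A = np.column_stack([(f(x0 + eps * e, u0) - f(x0, u0)) / eps for e in np.eye(2)])
B = np.column_stack([(f(x0, u0 + eps * e) - f(x0, u0)) / eps for e in np.eye(1)])

Q, R = np.eye(2), np.eye(1)
P = solve_discrete_are(A, B, Q, R)                  # candidate terminal weight
K = -np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)  # k_f(x) = K x
alpha = 0.1   # X_f = {x : x' P x <= alpha}: to be chosen small enough that
              # the constraints hold on X_f and X_f is invariant under k_f
```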

Variable horizon/Hybrid model predictive control. These techniques were proposed by [37] and developed by [38] to deal with both the global optimality and the feasibility problems which plague nonlinear MPC with a terminal constraint. Variable horizon MPC also employs a terminal constraint, but the time horizon at the end of which this constraint must be satisfied is itself an optimization variable. It is assumed that inside the terminal region another controller is employed which is known to asymptotically stabilize the system. Variable horizons have also been employed in contractive model predictive control (see the next section). With these modifications a global optimum is no longer needed, and feasibility at a particular time step implies feasibility at all future time steps. The terminal constraint is somewhat less artificial here because it may be met in actual operation. However, a variable horizon is inconvenient to handle on-line, an exact end constraint is difficult to satisfy, and the exact determination of the terminal region is all but impossible except perhaps for low-order systems. To show that this region is invariant and that the system is asymptotically stable in it, a global optimization problem usually needs to be solved.

Contractive model predictive control. The idea of contractive MPC was mentioned by [39]; the complete algorithm and stability proof were developed by [40]. In this approach a constraint is added to the usual formulation which forces the actual, and not only the predicted, state to contract at discrete intervals in the future. From this requirement a Lyapunov function can be constructed easily and stability can be established. Stability is independent of the objective function and of the convergence of the optimization algorithm, as long as a solution is found which satisfies the contraction constraint. Feasibility at future time steps is not necessarily guaranteed unless further assumptions are made. Because the contraction parameter implies a specific speed of convergence, its choice comes naturally to operating personnel.

Model predictive control with linearization. All the methods discussed so far require a nonlinear program to be solved on-line at each time step. The effort varies somewhat because some methods require only that a feasible (and not necessarily optimal) solution be found, or that only an improvement be achieved from time step to time step. Nevertheless, the effort is usually formidable compared to the linear case, and stopping with a feasible rather than optimal solution can have unpredictable consequences for performance. The computational effort can be greatly reduced when the system is first linearized in some manner and the techniques developed for linear systems are then employed on-line. Several such approaches have been proposed.

Linearization theory may, in some applications, be employed to transform the original nonlinear system, using state and feedback control transformations, into a linear system; model predictive control may then be applied to the transformed system [41, 42]. The authors of [42] first apply feedback linearization and then use MPC in a cascade arrangement for the resulting linear system. The optimal control problem is not, however, transformed into a convex problem, because the transformed control and state constraint sets and the transformed cost are no longer necessarily convex. [43, 44] employ a linear transformation ($x(k+1) = Ax + Bu$ is replaced by $x(k+1) = (A+BK)x + Bv$, where $v := u - Kx$ is the re-parameterized control) to improve the conditioning of the optimal control problem solved on-line.
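A small sketch of the re-parameterization in [43, 44] under assumed data: with $v = u - Kx$ as the decision variable the prediction matrix becomes $A + BK$, so choosing a stabilizing $K$ (here an LQR gain, an assumption on our part) makes the open-loop predictions stable and the on-line problem better conditioned.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

A = np.array([[1.2, 0.1],
              [0.0, 1.0]])                  # assumed open-loop unstable model
B = np.array([[0.0],
              [0.1]])

P = solve_discrete_are(A, B, np.eye(2), np.eye(1))
K = -np.linalg.solve(np.eye(1) + B.T @ P @ B, B.T @ P @ A)  # u = K x + v
A_cl = A + B @ K                            # prediction matrix after substitution

print(np.abs(np.linalg.eigvals(A)).max())     # > 1: unstable predictions in u
print(np.abs(np.linalg.eigvals(A_cl)).max())  # < 1: stable predictions in v
```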

Conclusions. MPC for linear constrained systems has been shown to provide an excellent control solution both theoretically and practically. The incorporation of nonlinear models poses a much more challenging problem, mainly because of computational and control-theoretical difficulties, but also holds much promise for practical applications. In this section an overview of the stability analysis of NMPC was given. As outlined, some of the challenges occurring in NMPC are already solvable. Nevertheless, in the nonlinear area a variety of issues remain which are technically complex but have potentially significant practical implications for stability and performance.


5. Robust model predictive control

MPC is a class of model-based control theories that use linear or nonlinear process models to forecast system behavior. The success of MPC depends on the accuracy of the open-loop predictions, which in turn depends on the accuracy of the process models; the predicted trajectory may differ from the actual plant behavior [45]. Needless to say, control systems that provide optimal performance for a particular model may perform very poorly when implemented on a physical system that is not exactly described by the model (see e.g. [46]).

When we say that a control system is robust we mean that stability is maintained and that the performance specifications are met for a specified range of model variations (uncertainty range). To be meaningful, any statement about robustness of a particular control algorithm must make reference to a specific uncertainty range as well as specific stability and performance criteria.

Predictive controllers that explicitly consider process and model uncertainties when determining the optimal control policies are called robust predictive controllers. The main concept of such controllers is similar to the idea of $H_\infty$ controllers and consists in minimizing the worst-case disturbance effect on the process behavior [47]. Several formulations of robust predictive control laws began to appear in the literature in the 1990s, addressing both model uncertainties and disturbances.

Although a rich theory has been developed for the robust control of linear systems, very little is known about the robust control of linear systems with constraints; most studies on robustness consider unconstrained systems. From Lyapunov theory we know that if a Lyapunov function for the nominal closed-loop system maintains its descent property when the disturbance (uncertainty) is sufficiently small, then stability is maintained in the presence of uncertainty. However, when constraints on states and controls are present, it is necessary to ensure, in addition, that disturbances do not cause transgression of the constraints. This adds an extra level of complexity.

In this section, we review two robust model predictive control (RMPC) methods and discuss their advantages and disadvantages. The basic idea of each method and some of its applications are stated.

5.1. Min-Max RMPC methods

In the mainstream robust control literature, "robust performance" is measured by determining the worst performance over the specified uncertainty range. In direct extension of this definition it is natural to set up a new RMPC objective where the control action is selected to minimize the worst value the objective function can attain as a function of the uncertain model parameters. This describes the first attempt toward an RMPC algorithm, which was proposed by [48]. They showed that for FIR models the optimization problem which must be solved on-line at each time step is a linear program of moderate size with uncertain coefficients and an $\infty$-norm objective function. Unfortunately, it is now well known that robust stability is not guaranteed with this algorithm [46].

The Campo algorithm [48] fails to address the fact that only the first element of the optimal input trajectory is implemented and the whole min-max optimization is repeated at the next time step with a feedback update. In the subsequent optimization, the worst-case parameter values may change because of this feedback update. For a system with uncertainties, the open-loop optimal solution differs from the feedback optimal solution, thereby violating the basic premise behind MPC. This is why robust stability cannot be assured with the Campo algorithm.

The authors of [49] proposed RMPC formulations which explicitly take into account uncertainties in the prediction model

$f(x_k, u_k, w_k, v_k) = A(w_k) x_k + B(w_k) u_k + E v_k$  (9)

where $A(w) = A_0 + \sum_{i=1}^{q} A_i w_i$, $B(w) = B_0 + \sum_{i=1}^{q} B_i w_i$, and

$w_k \in W \subset \mathbb{R}^{n_w}, \quad v_k \in V \subset \mathbb{R}^{n_v}$.

Here $v_k$ and $w_k$ are modeled as unknown but bounded exogenous disturbances and parametric uncertainties, and $W$ and $V$ are polytopes. An RMPC strategy often used is to solve a min-max problem that minimizes the worst-case performance while enforcing input and state constraints for all possible disturbances. The following min-max control problem is referred to as the open-loop constrained robust optimal control problem (OL-CROC):

$\min_{u_0, \ldots, u_{N-1}} \; \max_{\substack{v_0, \ldots, v_{N-1} \in V \\ w_0, \ldots, w_{N-1} \in W}} \; \sum_{k=0}^{N-1} l(x_k, u_k) + F(x_N)$  (10)

subject to the dynamics (9) and the constraints (3) being satisfied for all $v_k \in V$, $w_k \in W$.
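The following sketch illustrates an epigraph reformulation of (10) in cvxpy for a polytopic family, under the simplifying assumption (ours, for brevity) that the uncertainty is constant over the horizon, so only the vertices need to be enumerated; the exact OL-CROC would range over all vertex sequences, whose number grows exponentially with $N$. All data are assumptions.

```python
import numpy as np
import cvxpy as cp

A_vertices = [np.array([[1.0, 0.1], [0.0, 0.9]]),
              np.array([[1.0, 0.1], [0.0, 1.1]])]   # assumed vertex models
B = np.array([[0.0], [0.1]])
N, u_max = 8, 1.0
x0 = np.array([1.0, 0.0])

u = cp.Variable((1, N))
gamma = cp.Variable()                # epigraph variable: worst-case cost bound
cons = [cp.norm(u, 'inf') <= u_max]
for Av in A_vertices:
    x = cp.Constant(x0)
    cost = 0
    for i in range(N):               # stage costs l(x_k, u_k)
        cost += cp.sum_squares(x) + 0.1 * cp.sum_squares(u[:, i])
        x = Av @ x + B @ u[:, i]
    cost += cp.sum_squares(x)        # terminal cost F(x_N)
    cons.append(cost <= gamma)       # every realization bounded by gamma
prob = cp.Problem(cp.Minimize(gamma), cons)
prob.solve()
```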

Other papers in the literature aim at explicitly or implicitly approximating the problem above by simplifying the objective and uncertainty description, making the on-line effort more manageable while still guaranteeing at least robust stability. For example, the authors of [50] use an $\infty$-norm open-loop objective function and assume FIR models with uncertain coefficients. A similar but more general technique has also been proposed for state-space systems with a bounded input matrix [51].

The authors of [52] have defined a dynamic programming problem (thus accounting for feedback) to determine the control sequence minimizing the worst case cost. They show that with the horizon set to infinity this procedure guarantees robust stability. However, the approach suffers from the curse of dimensionality and the optimization problem at each stage of the dynamic program is non-convex. Thus, in its generality the method is unsuitable for on-line (or even off-line) use except for low order systems with simple uncertainty descriptions.

These formulations may be conservative for certain problems, leading to sluggish behavior, for three reasons. First, arbitrarily time-varying uncertain parameters are usually not a good description of the model uncertainty encountered in practice, where the parameters may be either constant or slowly varying but unknown. Second, the computationally simple open-loop formulations neglect the effect of feedback. Third, the worst-case error minimization itself may be a conservative formulation for most problems.

The authors of [50, 53, 54] propose to optimize nominal rather than robust performance and to achieve robust stability by enforcing a robust contraction constraint, i.e. requiring the worst-case prediction of the state to contract. With this formulation robust global asymptotic stability can be guaranteed for a set of linear time-invariant stable systems. The optimization problem can be cast as a quadratic program of moderate size for a broad class of uncertainty descriptions.

To account for the effect of feedback, the authors of [55] propose to calculate at each time step not a sequence of control moves but a state feedback gain matrix which is determined to minimize an upper bound on robust performance. For fairly general uncertainty descriptions, the optimization problem can be expressed as a set of linear matrix inequalities for which efficient solution techniques exist.

5.2. LMI-based RMPC methods

In the above method, a cost function is minimized considering the worst case over all plants described by the uncertainty. Barriers to RMPC algorithms include the computational cost and the applicability, which depend on the speed and size of the plant on which the control will act. In this section, we present one such MPC-based technique for the control of plants with uncertainties. This technique is motivated by developments in the theory and application (to control) of optimization involving linear matrix inequalities (LMIs) [56].

In this regard, the authors of [55] used an LMI formulation to solve the optimization problem. The basic idea of LMIs is to cast a control problem as a semi-definite program (SDP), that is, an optimization problem with a linear objective and positive semi-definite constraints on symmetric matrices that depend affinely on the decision variables.

There are two reasons why LMI optimization is relevant to MPC. First, LMI-based optimization problems can be solved in polynomial time, which means that they have low computational complexity. From a practical standpoint, there are effective and powerful algorithms for the solution of these problems, that is, algorithms that rapidly compute the global optimum with non-heuristic stopping criteria; the computational effort is comparable to that required for the evaluation of an analytical solution to a similar problem. Thus LMI optimization is well suited for on-line implementation, which is essential for MPC. Second, it is possible to recast much of existing robust control theory in the framework of LMIs [55].

The implication is that we can devise an MPC scheme where, at each time instant, an LMI optimization problem (as opposed to conventional linear or quadratic programs) is solved that incorporates input/output constraints and a description of the plant uncertainty. What’s more, it can guarantee certain robustness properties.
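As a concrete taste of such LMI computations, the sketch below solves, with cvxpy, a quadratic-stabilization feasibility problem for a polytopic family: find $Q \succ 0$ and $Y$, with $K = YQ^{-1}$, such that $x^T Q^{-1} x$ decreases at every vertex. This is a simplified relative of the scheme in [55], not its full algorithm, and the vertex data are assumptions.

```python
import numpy as np
import cvxpy as cp

A1 = np.array([[1.0, 0.2], [0.0, 1.0]])   # assumed polytope vertices
A2 = np.array([[1.0, 0.2], [0.1, 1.0]])
B = np.array([[0.0], [0.2]])
n, m = 2, 1

Qv = cp.Variable((n, n), symmetric=True)  # Q = P^{-1}
Y = cp.Variable((m, n))                   # Y = K Q
cons = [Qv >> 1e-6 * np.eye(n)]           # Q positive definite (with margin)
for Ai in (A1, A2):
    ACL = Ai @ Qv + B @ Y                 # (A_i + B K) Q in the new variables
    # Schur complement of (A_i + BK) Q (A_i + BK)' <= Q: a linear matrix inequality
    cons.append(cp.bmat([[Qv, ACL.T], [ACL, Qv]]) >> 0)

cp.Problem(cp.Minimize(0), cons).solve(solver=cp.SCS)
K = Y.value @ np.linalg.inv(Qv.value)     # robustly stabilizing feedback gain
```

The change of variables $Q = P^{-1}$, $Y = KQ$ is what turns the nonconvex Lyapunov decrease condition into an LMI that an SDP solver can handle at each time instant.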

5.3. Our works

In recent decades, many research results on the design of RMPC have appeared; see for example [55-62] and the references therein. The main drawback of the above-mentioned MPC methods is that a single Lyapunov matrix is used to guarantee the desired closed-loop multi-objective specifications. This matrix must work for all matrices in the uncertain domain to ensure that the hard constraints on inputs and outputs are satisfied, a condition that is generally conservative when applied to time-invariant systems. Furthermore, the hard constraints on the outputs of closed-loop systems cannot be transformed into a linear matrix inequality (LMI) form using the methods proposed in [57, 58, 60].

We present a multi-model paradigm for robust control. Underlying this paradigm is a linear time-varying (LTV) system.

$x(k+1) = A(k)x(k) + B(k)u(k), \quad y(k) = C(k)x(k), \quad \begin{bmatrix} A(k) & B(k) \\ C(k) & 0 \end{bmatrix} \in \Omega$  (11)

where $u(k) \in \mathbb{R}^{n_u}$ is the control input, $x(k) \in \mathbb{R}^{n_x}$ is the state of the plant, $y(k) \in \mathbb{R}^{n_y}$ is the plant output, and $\Omega$ is some pre-specified set.

For polytopic systems, the set Ω is the polytope

$\Omega = \mathrm{Co}\left\{ \begin{bmatrix} A_1 & B_1 \\ C_1 & 0 \end{bmatrix}, \begin{bmatrix} A_2 & B_2 \\ C_2 & 0 \end{bmatrix}, \ldots, \begin{bmatrix} A_L & B_L \\ C_L & 0 \end{bmatrix} \right\}$,  (12)

where $\mathrm{Co}$ denotes the convex hull. In other words, if $\begin{bmatrix} A(k) & B(k) \\ C(k) & 0 \end{bmatrix} \in \Omega$, then, for some nonnegative $\xi_1(k), \xi_2(k), \ldots, \xi_L(k)$ summing to one, we have

$\begin{bmatrix} A(k) & B(k) \\ C(k) & 0 \end{bmatrix} = \sum_{i=1}^{L} \xi_i(k) \begin{bmatrix} A_i & B_i \\ C_i & 0 \end{bmatrix}$,  (13)

where $L = 1$ corresponds to the nominal LTI system. The system (11) is subject to the input and output constraints

$|u_h(k+i|k)| \leq u_{h,\max}, \quad i \geq 0, \quad h = 1, 2, \ldots, n_u,$
$|y_h(k+i|k)| \leq y_{h,\max}, \quad i \geq 1, \quad h = 1, 2, \ldots, n_y.$

In 2001, the authors of [63] first put forward the idea of using a parameter-dependent Lyapunov function to solve the problem of robust constrained MPC for linear continuous-time uncertain systems; this idea was later applied to linear discrete-time uncertain systems in [64, 65].

Inspired by the above-mentioned work, we addressed the problem of robust constrained MPC based on parameter-dependent Lyapunov functions for systems with polytopic-type uncertainties in [66]. The results are based on a new extended LMI characterization of the quadratic objective, with hard constraints on inputs and outputs. The sufficient LMI conditions do not involve products of the Lyapunov matrices and the system dynamic matrices. The state feedback control guarantees that the closed-loop system is robustly stable and that the hard constraints on inputs and outputs are satisfied. The approach provides a way to reduce the conservativeness of existing conditions by decoupling the control parameterization from the Lyapunov matrix. An example is provided to illustrate the effectiveness of the techniques developed in [66]. Since the method proposed in [55] is a special case of our results, any problem that is solvable by the approach in [55] is also feasible with our method; the converse does not hold, as the optimization may have no solution by the result in [55] while having one by ours.

Example (Input and Output Constraints) Consider the linear discrete-time parameter uncertain system (11) with

$A_1 = \begin{bmatrix} 0.90 & 0.80 \\ 0.35 & 0.45 \end{bmatrix}, \quad A_2 = \begin{bmatrix} 0.90 & 0.85 \\ 0.40 & 0.85 \end{bmatrix}, \quad A_3 = \begin{bmatrix} 0.96 & 0.13 \\ 0.28 & 0.90 \end{bmatrix},$
$B_1 = \begin{bmatrix} 1 \\ 1 \end{bmatrix}, \quad B_2 = \begin{bmatrix} 1 \\ 0.8 \end{bmatrix}, \quad B_3 = \begin{bmatrix} 1 \\ 0.86 \end{bmatrix},$
$C_1 = \begin{bmatrix} 1 & 0.3 \end{bmatrix}, \quad C_2 = \begin{bmatrix} 0.8 & 0.2 \end{bmatrix}, \quad C_3 = \begin{bmatrix} 1.2 & 0.4 \end{bmatrix}.$  (14)

It is shown that the optimization is infeasible with the method proposed in [55] even without the constraints. However, taking output constraints with $y_{1,\max} = 2$ and input constraints with $u_{1,\max} = 0.8$, and with the uncertain parameters assumed to be $\xi_1(k) = 0.5\cos^2(k)$, $\xi_2(k) = 0.6\sin^2(k)$, $\xi_3(k) = 0.5\cos^2(k) + 0.4\sin^2(k)$, the problem is feasible using the method proposed in [66]. The simulation results are given in Figure 2 and Figure 3.
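For reference, the example data (14) and the scheduling signals above can be reproduced as follows; the snippet (our illustration only) builds the realization (13) and checks that the weights indeed form a convex combination.

```python
import numpy as np

A = [np.array([[0.90, 0.80], [0.35, 0.45]]),
     np.array([[0.90, 0.85], [0.40, 0.85]]),
     np.array([[0.96, 0.13], [0.28, 0.90]])]
B = [np.array([[1.0], [1.0]]),
     np.array([[1.0], [0.8]]),
     np.array([[1.0], [0.86]])]
C = [np.array([[1.0, 0.3]]),
     np.array([[0.8, 0.2]]),
     np.array([[1.2, 0.4]])]

def xi(k):
    """Scheduling weights from the example; they sum to one for every k."""
    c2, s2 = np.cos(k) ** 2, np.sin(k) ** 2
    return np.array([0.5 * c2, 0.6 * s2, 0.5 * c2 + 0.4 * s2])

for k in range(5):
    w = xi(k)
    assert abs(w.sum() - 1.0) < 1e-12          # convex combination as in (13)
    Ak = sum(wi * Ai for wi, Ai in zip(w, A))  # realization A(k)
    Bk = sum(wi * Bi for wi, Bi in zip(w, B))
    Ck = sum(wi * Ci for wi, Ci in zip(w, C))
```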

5.4. Conclusion

An overview of some RMPC methods has been presented. The methods studied are based on min-max optimization and on LMIs; the basic idea and applications of each were stated, together with their advantages and disadvantages.

Advertisement

6. Recent developments in stochastic MPC

Despite the extensive literature that exists on predictive control and robustness to uncertainty, both multiplicative (e.g. parametric) and additive (e.g. exogenous), very little attention has been paid to the case of stochastic uncertainty. Although robust predictive control can handle constrained systems that are subject to stochastic uncertainty, it propagates the effects of uncertainty over a prediction horizon, which can be computationally expensive and conservative. Yet this situation arises naturally in many control applications. The aim of this section is to review some of the recent advances in stochastic model predictive control (SMPC).

The basic SMPC problem is defined in Subsection 6.1 and a review of earlier work is given in Subsection 6.2.

Figure 2.

States ( x 1 , x 2 ) (method in [66]).

Figure 3.

Output y and input u (method in [66]).

6.1. Basic SMPC problem

Consider the system described by the model

$x(k+1) = Ax(k) + Bu(k) + w(k)$  (15)

where $x \in \mathbb{R}^n$ is the state and $u \in \mathbb{R}^m$ is the input. The disturbances $w(k)$ are assumed to be independent and identically distributed (i.i.d.), with zero mean, known distribution, and

$-\alpha \leq w_k \leq \alpha$  (16)

where $\alpha > 0$ and the inequalities apply element-wise. The system (15) is subject to the probabilistic constraints

$\Pr(e_j^T \phi_k \leq h_j) \geq p_j, \quad j = 1, \ldots, \rho, \qquad \phi_k = G x_{k+1} + F u_k$  (17)

where $G \in \mathbb{R}^{\rho \times n}$, $F \in \mathbb{R}^{\rho \times m}$ and $e_j^T$ denotes the $j$th row of the identity matrix. This formulation covers the cases of state-only, input-only and mixed state/input constraints, which can be probabilistic (soft) or deterministic (hard), since $p_j = 1$ can be chosen for some or all $j$. Each $j$ in (17) can be invoked separately, so in this section $\phi_k$ is taken to be scalar:

$\phi_k = g^T x_{k+1} + f^T u_k$  (18)
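For a scalar constraint of the form (17)-(18), the probabilistic information can be converted off-line into a deterministic tightening: with $x_{k+1} = Ax_k + Bu_k + w_k$, the condition $\Pr(g^T(Ax_k + Bu_k) + f^Tu_k + g^Tw_k \leq h) \geq p$ holds exactly when $g^T(Ax_k + Bu_k) + f^Tu_k \leq h - q_p$, where $q_p$ is the $p$-quantile of $g^Tw$. The sketch below estimates $q_p$ by sampling an assumed disturbance distribution consistent with the bound (16); all data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
alpha = np.array([0.2, 0.2])      # bound (16): -alpha <= w <= alpha (assumed)
g = np.array([1.0, 0.5])          # constraint data (assumed)
h, p = 1.0, 0.9

# p-quantile of g' w under an assumed uniform distribution on the support
w_samples = rng.uniform(-alpha, alpha, size=(100_000, 2))
q_p = np.quantile(w_samples @ g, p)

h_tight = h - q_p  # use g'(A x + B u) + f' u <= h_tight in the on-line problem
```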

The problem is to devise a receding horizon MPC strategy that minimizes the cost

$J = \sum_{k=0}^{\infty} \mathbb{E}\left( x^T(k) Q x(k) + u^T(k) R u(k) \right)$  (19)

(where $\mathbb{E}$ denotes expectation) and guarantees that the closed-loop system is stable, with its state converging to a neighborhood of the origin, subject to the constraint (17).

As is common in the literature on probabilistic robustness (e.g. [67]), all stochastic uncertainties are assumed to have bounded support. Not only is this necessary for asserting feasibility and stability, but it matches the real world more closely than the mathematically convenient Gaussian assumption which permits w to become arbitrarily large (albeit with small probability), since noise and disturbances derived from physical processes are finite.

6.2. Earlier work

Stochastic model predictive control (SMPC) is emerging as a research area of both practical and theoretical interest.

MPC has proved successful because it attains approximate optimality in the presence of constraints. In addition, RMPC can maintain a satisfactory level of performance and guarantee constraint satisfaction when the system is subject to bounded uncertainty [2]. However, such an approach does not cater for the case in which model and measurement uncertainties are stochastic in nature, subject to some statistical regularity, nor can it handle random uncertainty whose distribution does not have finite support (e.g. normal distributions). Therefore RMPC can be conservative, since it ignores information on the probabilistic distribution of the uncertainty.

It is possible to adopt a stochastic uncertainty description (instead of a set-based description) and develop an MPC algorithm that minimizes the expected value of a cost function. In general, the same difficulties that plagued the set-based approach are encountered here. One notable exception is that, when the stochastic parameters are independent sequences, the true closed-loop optimal control problem can be solved analytically using dynamic programming [68]. In many cases, the expected error may be a more meaningful performance measure than the worst-case error.

SMPC also derives its relevance from the fact that most real-life applications are subject to stochastic uncertainty and have to obey constraints. However, not all constraints are hard (i.e. inviolable), and it may be possible to improve performance by tolerating violations of constraints, provided that the frequency of violations remains within allowable limits; such constraints are called soft constraints (see e.g. [69, 70] or [71, 73]).

These concerns are addressed by stochastic MPC. Early work [74] considered additive disturbances and ignored the presence of constraints. Later contributions [68, 75-78] took constraints into account, but suffered from either excessive computation or a high degree of conservativeness, or did not consider issues of closed loop stability/feasibility.

An approach that arose in the context of sustainable development [70, 79] overcame some of these difficulties by using stochastic moving average models and equality stability constraints. This was extended to state space models with stochastic output maps and to inequality constraints involving terminal invariant sets [81]. The restriction of model uncertainty to the output map was removed in [81], but the need to propagate the effects of uncertainty over the prediction horizon prevented the statement of results in respect of feasibility. [82] overcomes these issues through an augmented autonomous prediction formulation and provides a method of handling probabilistic constraints and ensuring closed-loop stability through an extension of the concept of invariance, namely invariance with probability p.

Recent work [83, 84] proposed SMPC algorithms that use probabilistic information on additive disturbances in order to minimize the expected value of a predicted cost subject to hard and soft (probabilistic) constraints. Stochastic tubes were used to provide a recursive guarantee of feasibility and thus ensure closed-loop stability and constraint satisfaction. Moreover, the authors of [84] proposed conditions that, for the parameterization of predictions employed, are necessary and sufficient for recursive feasibility, thereby incurring no additional conservatism. The approach was based on state feedback, which assumes that the states are measurable. In practice this is often not the case, and it is then necessary to estimate the state via an observer. The introduction of state estimation into RMPC is well understood and uses lifting to describe the combined system and observer dynamics. In [85], these ideas are extended to include probabilistic information on measurement noise and the unknown initial plant state, extending the approach of [84].


Applications

In the next two sections, we will show that many important practical and theoretical problems can be formulated in the MPC framework. Pursuing them will assure MPC of its stature as a vibrant research area, where theory is seen to support practice more directly than in most other areas of control research.


7. Networked control systems

Traditionally, the different components (i.e., sensor, controller, and actuator) in a control system are connected via wired, point-to-point links, and the control laws are designed and operate based on local continuously-sampled process output measurements.

In recent years, there has been growing interest in the design of controllers based on networked systems in several areas such as traffic, communication, aviation and spaceflight [86]. A networked control system (NCS) is defined as a feedback control system in which the control loops are closed through a real-time network [96, 97], which distinguishes it from traditional control systems. For an overview, readers can refer to [97], which systematically addresses several key issues (band-limited channels, sampling and delay, packet loss, system architecture) that make NCSs distinct from other control systems.

7.1. Characteristics of NCSs

Advantages. Communication networks make the transmission of data much easier and provide a higher degree of freedom in the configuration of control systems. Network-based communication allows for easy modification of the control strategy by rerouting signals and for redundant systems that can be activated automatically when component failures occur. In particular, NCSs allow remote monitoring and adjustment of plants over the Internet, enabling the control system to retrieve data and react to plant fluctuations from anywhere around the world at any time; see, for example, [98-101] and references therein.

Disadvantages. Although the network makes it convenient to control large distributed systems, new issues arise in the design of an NCS. Augmenting existing control networks with real-time wired or wireless sensor and actuator networks challenges many of the assumptions made in the development of traditional process control methods, which deal with dynamical systems linked through ideal channels with flawless, continuous communication. In the context of networked control systems, key issues that need to be carefully handled at the control system design level include data losses due to field interference, time-delays due to network traffic, and the potentially heterogeneous nature of the additional measurements (for example, continuous, asynchronous and delayed) [102]. These issues can deteriorate the performance and may even destabilize the system.

Hence, the main question is how to design NCSs to handle data losses and time-varying delays and to utilize heterogeneous measurements, so as to maintain closed-loop stability while improving closed-loop performance.

7.2. Results on NCSs

To solve these problems, various methods have been developed, e.g., augmented deterministic discrete-time models, queuing, optimal stochastic control, perturbation methods, sampling-time scheduling, robust control, fuzzy logic modulation, event-based control, end-user control adaptation, data-packet-dropout analysis, and hybrid-systems stability analysis. However, these methods impose strict assumptions on the NCS, e.g., that the network time delay is less than the sampling period [109, 110]. The work of [111] presents an approach for the stability analysis of NCSs that decouples the scheduling protocol from the properties of the network-free nominal closed-loop system. The design of robust H∞ controllers for uncertain NCSs subject to both network-induced delay and data dropout was considered in [112], where the network-induced time delay may be larger than one sampling period, but no compensation for the time delay and data dropout is provided.

A common approach is to insert the network behavior between the nodes of a conventional control loop designed without taking the network into account. More specifically, in [114] it was proposed to first design the controller using established techniques, treating the network as transparent, and then to analyze the effect of the network on closed-loop stability and performance. This approach was further developed in [115] using a small-gain analysis.

In the last few years, however, several research papers have studied control over IEEE 802.11 and Bluetooth wireless networks; see, for example, [116-119] and the references therein. In the design and analysis of networked control systems, the most frequently studied problem considers control over a network having constant or time-varying delays. This behavior is typical of communication over the Internet, but it does not necessarily represent the behavior of dedicated wireless networks, in which the sensor, controller, and actuator nodes communicate directly with one another but may experience data losses. An appropriate framework for modeling lost data is that of asynchronous systems [120-122], in which the process is considered to operate in open loop whenever data are lost.

The most destabilizing packet losses occur during bursts of poor network performance, in which large groups of packets are lost nearly consecutively. A more detailed description of bursty network behavior, using a two-state Markov chain, was considered in [123]. Modeling the network with a Markov chain leads to describing the overall closed-loop system as a stochastic hybrid system [120]. Stability results have been presented for particular classes of stochastic hybrid systems (e.g., [124, 125]). However, these results do not directly address the problem of augmenting dedicated, wired control systems with networked actuator and sensor devices to improve closed-loop performance.
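As an illustration of such a bursty loss model, the following sketch simulates a two-state ("good"/"bad") Markov chain packet-loss channel; the transition and loss probabilities are illustrative assumptions, not values from [123].

import random

P_BAD = {"good": 0.05, "bad": 0.70}   # probability that the NEXT state is "bad"
LOSS = {"good": 0.01, "bad": 0.60}    # per-state packet-loss probability

def simulate(n, state="good"):
    # Count losses over n transmissions while the channel state
    # evolves according to the two-state Markov chain.
    lost = 0
    for _ in range(n):
        lost += random.random() < LOSS[state]
        state = "bad" if random.random() < P_BAD[state] else "good"
    return lost / n

print(simulate(100000))  # long-run loss rate, roughly 0.09 for these numbers

Even though the average loss rate is modest, losses cluster while the chain sits in the "bad" state, which is precisely the bursty behavior that is most destabilizing for the closed loop.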

With respect to other results on networked control, in [126] stability and disturbance attenuation for a class of linear networked control systems subject to data losses, modeled as a discrete-time switched linear system with arbitrary switching, were studied. In [127] (see also [128-130]), optimal control of linear time-invariant systems over unreliable communication links under different communication protocols (with and without acknowledgment of successful communication) was investigated, and sufficient conditions for the existence of stabilizing control laws were derived.

Although the study of control over networks has attracted considerable attention within control theory, most of the available results deal with linear systems (e.g., [100, 131]).

7.3. Our works

The MPC framework is particularly appropriate for controlling systems subject to data losses, because the actuator can profit from the predicted evolution of the system. In this section, results from our own work are summarized.

Several methodologies have been reported in the open literature to handle the problems mentioned above in networked systems. Among these works, two basic control strategies are applied when packet dropout occurs: the zero-input scheme, in which the actuator input is set to zero when the control packet is lost, and the hold-input scheme, in which the previous control input is applied again. The two strategies are compared directly in [132]; a minimal illustration of both is sketched below.
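The following sketch (hypothetical function and values, for exposition only) makes the two baseline actuator strategies explicit.

def actuator_input(received, u_new, u_prev, scheme="hold"):
    # Input actually applied at this sampling instant.
    if received:
        return u_new                                # packet arrived: use the fresh control
    return 0.0 if scheme == "zero" else u_prev      # packet lost: zero or hold

# Example: the current packet is lost and the previous input was 0.8
print(actuator_input(False, 0.5, u_prev=0.8, scheme="zero"))  # 0.0
print(actuator_input(False, 0.5, u_prev=0.8, scheme="hold"))  # 0.8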

The work of [133] presents a novel control technique combining a modified MPC with a modified Smith predictor to guarantee the stability of NCSs. The key point of that paper is that the future control sequence is used to compensate for the communication time delay in the forward channel, while the predictor compensates for the time delay in the backward channel.

Although much research work has been done on NCSs, many of the results simply treat the NCS as a system with time delay, which ignores characteristic NCS features such as random network delay and data transmission in packets [134]. To capture the randomness, a Markovian jump system can be used to model the random time delay. Moreover, most work has also ignored another important feature of NCSs: a communication network can transmit a whole packet of data at the same time, which is not possible in traditional point-to-point control systems. We have proposed a new networked control scheme, networked predictive control, which mainly consists of a control prediction generator and a network dropout/delay compensator. Control predictions based on the received data are packed and sent to the plant side through the network; the network dropout/delay compensator then chooses the latest control value from the control prediction sequences available on the plant side, which compensates for the time delay and data dropouts. The structure of the networked predictive control system (NPCS) is shown in Figure 4, and a minimal sketch of the compensator logic is given after the figure. The random network delay in the forward channel of an NCS has been studied in [135]. Other results have been obtained in [136] and [137], where the network-induced delay is not modeled as a Markov chain.

Figure 4.

The networked predictive control system.
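As a rough illustration of the mechanism in Figure 4, here is a sketch of the plant-side compensator, under the assumption that each packet carries a timestamped sequence of predicted controls [u(k|k), ..., u(k+N-1|k)]; this is an expository sketch, not the exact implementation of [135-139].

class DropoutDelayCompensator:
    # Plant-side buffer that keeps the newest prediction packet received
    # and, at each sampling instant, applies the entry predicted for it.
    def __init__(self):
        self.stamp, self.seq = -1, None

    def receive(self, stamp, seq):
        # Store the prediction packet only if it is newer than the one held.
        if stamp > self.stamp:
            self.stamp, self.seq = stamp, seq

    def control(self, t):
        # Prediction for time t from the newest available sequence
        # (valid as long as t - stamp stays within the prediction horizon).
        return self.seq[t - self.stamp]

comp = DropoutDelayCompensator()
comp.receive(0, [0.5, 0.4, 0.3, 0.2])  # packet computed at time 0 arrives
# The packet computed at time 1 is lost; at t = 2 the compensator falls
# back on the time-0 sequence and applies its prediction for step 2:
print(comp.control(2))  # 0.3

In this way, a lost or delayed packet is covered by the prediction embedded in an earlier packet, rather than by zeroing or freezing the input.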

The random network delay in both the forward and feedback channels, however, makes the control design and stability analysis much more difficult. The work in [100] proposes a predictive control scheme for NCSs with random network delay in both the feedback and forward channels, and also provides analytical stability criteria for closed-loop networked predictive control (NPC) systems. Furthermore, the scheme of [138] can overcome the effects caused by both unknown network delay and data dropout. Recently, [139] has focused mainly on random transmission data dropout in both the feedback and forward channels of NCSs; the network-induced time delay is not discussed there.

In fact, with the networked predictive control scheme presented in this section, the control performance of the closed-loop system with data dropout is very similar to that of the system without data dropout.

8. Distributed MPC

At the beginning of research on NCSs, attention was paid mostly to controlling a single plant through a network. Recently, fruitful research results on multi-plant and, especially, multi-agent networked control systems have been obtained. The aim of this section is to review and classify a number of distributed control architectures for large-scale systems, with attention focused on design approaches based on model predictive control. The controllers apply MPC policies to their local subsystems; they exchange their predictions by communication and incorporate the information from other controllers into their local MPC problems so as to coordinate with each other. For the considered architectures, the underlying rationale, the fields of application, and the merits and limitations are discussed, and the main references to the literature are reported.

8.1. Background

Technological and economic reasons motivate the development of process plants, manufacturing systems, traffic networks, and water or power networks [140] of ever increasing complexity. In addition, there is growing interest in networked control systems, where dedicated local control networks can be augmented with additional networked (wired and/or wireless) actuator/sensor devices, which have become cheap and easy to install [141, 142]. These large-scale systems are composed of many physically or geographically divided subsystems, each of which interacts with its so-called neighbouring subsystems through its states and inputs. The technical target is to achieve some global performance of the entire system (or a common goal of all subsystems). In practice, such systems are difficult to control with a centralized structure, owing to the inherent computational complexity and to robustness, reliability, and communication bandwidth limitations.

For all these reasons, many distributed control structures have been developed and applied over the recent decades.

8.2. The reasons why DMPC is adopted

About MPC. The aim of this section is to review the distributed control approaches adopted and to provide a wide list of references, focusing attention on methods based on MPC. This choice is motivated by the ever increasing popularity of MPC in the process industry; see, e.g., the survey papers [143, 144] on industrial applications of linear and nonlinear MPC. Moreover, in recent years many MPC algorithms have been developed that guarantee fundamental properties, such as the stability of the resulting closed-loop system or its robustness with respect to a wide class of external disturbances and/or model uncertainties; see, e.g., the survey paper [2]. In particular, MPC is a natural control framework for the design of coordinated, distributed control systems, because of its ability to handle input and state constraints and because it can account for the actions of other actuators when computing, in real time, the control action of a given set of actuators. MPC is therefore now recognized as a very powerful approach, with well-established theoretical foundations and a proven capability to handle the problems of large-scale systems.

Other control structures

1) Centralized Control. MPC is normally implemented in a centralized fashion: a single controller acquires the information of the global system, computes all the control inputs, and can obtain good global performance. In large-scale interconnected systems, such as power systems, water distribution systems, and traffic systems, such a centralized control scheme may not be suitable, or even possible, for several reasons: (1) there are hundreds of inputs and outputs, which requires a large computational effort in online implementation; (2) when the centralized controller fails, the entire system is out of control, and control integrity cannot be guaranteed when a control component fails; (3) in some cases, e.g., in multi-intelligent-vehicle systems, the global information is unavailable to each controller; and (4) objections to centralized control are often organizational rather than computational: all subsystems rely upon the central agent, making plantwide control difficult to coordinate and maintain. These obstacles deter the implementation of centralized control in large-scale plants.

In recent years there has been a trend toward the development of decentralized and distributed MPC, owing to the disadvantages of centralized MPC mentioned above (e.g., [145, 146]).

2) Decentralized Control. Most large-scale and networked control systems are based on a decentralized architecture: the system is divided into several subsystems, each controlled by a different agent that does not share information with the rest. Each agent implements an MPC based on a reduced model of the system and on partial state information, which in general results in an optimization problem with a lower computational burden. Figure 5 shows a decentralized control structure, where the system under control is assumed to be composed of two subsystems, S1 and S2, with state, control, and output variables (x1, u1, y1) and (x2, u2, y2), respectively, and the interactions between the inputs and outputs of different pairs are assumed to be weak. These interactions can either be direct (input coupling) or caused by the mutual effects of the internal states of the subsystems under control, as in Figure 5.

For example, in [147] an MPC algorithm was proposed under the main assumptions that the system is nonlinear and discrete-time and that no information is exchanged between local controllers. The decentralized framework has the advantages of being flexible with respect to the system structure, fault-tolerant, computationally light, and free of global information requirements [148].

Figure 5.

Decentralized control of a two-input (u1, u2), two-output (y1, y2) system.

In plants where the subsystems interact weakly, the local feedback action provided by these subsystem (decentralized) controllers may be sufficient to overcome the effect of the interactions. For such cases, a decentralized control strategy can be expected to work adequately. By contrast, it is well known that strong interactions can even prevent one from achieving stability and/or performance with decentralized control; see, for example, [149, 150], where the role played by the so-called fixed modes in the stabilization problem is highlighted.

Distributed MPC. While these paradigms (centralized and decentralized control) for process control have been successful, there is increasing interest in developing distributed model predictive control (DMPC) schemes, in which agents share information in order to improve closed-loop performance, robustness, and fault-tolerance. As a middle ground between the decentralized and centralized strategies, distributed control preserves the topology and flexibility of decentralized control yet offers a nominal closed-loop stability guarantee.

In decentralized MPC, each agent determines a sequence of open-loop controls through the solution of a constrained optimal control problem with a local objective, using a subsystem model that ignores the interactions to predict future process behavior along the control horizon. For distributed control, one natural advantage that MPC offers over other control paradigms is precisely its ability to generate a prediction of future subsystem behavior: if the likely influence of interconnected subsystems is known, each local controller can determine suitable feedback action that accounts for these external influences. Intuitively, one expects this additional information to help improve systemwide control performance. The distributed control framework is therefore usually adopted in large-scale plants [151], even though the dynamic performance of the centralized framework is better.

Figure 6.

Distributed control of a two-input (u1, u2), two-output (y1, y2) system.

In distributed control structures, like the simple example shown in Figure 6, it is assumed that some information is transmitted among the local regulators (R1 and R2 in Figure 6), so that each of them has some knowledge of the behavior of the others. When the local regulators are designed with MPC, the information transmitted typically consists of the future control or state trajectories computed locally, so that any local regulator can predict the interaction effects over the considered prediction horizon. With reference to the simple case of Figure 6, the MPC regulators R1 and R2 are designed to control the subsystems S1 and S2, respectively. If the information exchanged among the local regulators (R1 and R2) concerns the predicted evolution of the subsystem states (x1 and x2), each local regulator needs to know only the dynamics of the subsystem it directly controls (S1 or S2). A minimal sketch of this exchange pattern is given below.
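To make the exchange pattern concrete, here is a minimal sketch with two coupled scalar subsystems; the dynamics, the fixed local feedback gain (standing in for each local MPC law), the horizon, and the number of communication rounds are all illustrative assumptions.

a, b, c, K, N = 0.9, 1.0, 0.1, 0.5, 5   # dynamics x_i+ = a*x_i + b*u_i + c*x_j

def predict(x0, neighbour_pred):
    # Local prediction over the horizon, treating the neighbour's last
    # communicated state trajectory as a known external input.
    traj = [x0]
    for k in range(N):
        u = -K * traj[-1]                 # surrogate for the local MPC law
        traj.append(a * traj[-1] + b * u + c * neighbour_pred[k])
    return traj

x1, x2 = 1.0, -2.0
pred1 = pred2 = [0.0] * (N + 1)           # initial guess: zero trajectories
for _ in range(3):                        # a few communication rounds
    # each regulator re-predicts using the other's previous prediction
    pred1, pred2 = predict(x1, pred2), predict(x2, pred1)
print([round(v, 3) for v in pred1])
print([round(v, 3) for v in pred2])

With the exchange suppressed (neighbour_pred fixed at zero), the same loop reduces to the purely decentralized scheme of the previous subsection, which simply ignores the interconnection.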

In any case, it is apparent that the performance of the closed-loop system depends on the decisions that all the agents take. Hence, cooperation and communication policies become very important issues.

With respect to available results in this direction, several DMPC methods that deal with the coordination of separate MPCs have been proposed in the literature. These controllers communicate in order to obtain optimal input trajectories in a distributed manner; see [145, 146, 152] for reviews of results in this area. Several distributed MPC formulations are available in the literature [153-157].

8.3. DMPC over network information exchange

However, all of the above results are based on the assumption that the entire plant state vector is sampled continuously and that communication between subsystems is delay-free and perfect. In practice, individual subsystems exchange information over a communication network, especially a wireless one, where the data is transmitted in discrete packets that may be lost during communication. Moreover, the communication medium is a resource that is usually accessed in a mutually exclusive manner by neighbouring agents, so the throughput capacity of such networks is limited. How to improve the global performance of each subsystem under limited network communication, or with limited available information, is therefore a valuable problem.

Previous work on MPC design for systems subject to asynchronous or delayed measurements has primarily focused on centralized MPC design [158], [159], and little attention has been given to the design of DMPC. In [160], the issue of delays in the communication between distributed controllers was addressed. The authors of [161] consider the design of distributed MPC schemes for nonlinear systems in a more realistic setting, in which measurements of the state are available not continuously but asynchronously and with delays.

9. Conclusions

Recently, there has been much interest in model predictive control, which allows researchers to address problems like feasibility, stability, and performance in a rigorous manner. We first gave a review of discrete-time model predictive control of constrained dynamic systems, both linear and nonlinear. The min-max approach for handling uncertainties was illustrated, the LMI-based methods were then presented, and the advantages and disadvantages of these methods were discussed, together with the basic idea of each method and some of its applications. Despite the extensive literature on predictive control and robustness to uncertainty, comparatively little attention has been paid to the case of stochastic uncertainty; SMPC is emerging to adopt a stochastic uncertainty description (instead of a set-based description), and some of the recent advances in this area were reviewed. We showed that many important practical and theoretical problems can be formulated in the MPC framework, such as DMPC. Considerable attention has been directed to NCSs: although the network makes it convenient to control large distributed systems, there are also many control issues, such as network delay and data dropout, which cannot be addressed using conventional control theory, sampling, and transmission methods. Results from our recent research were summarized in Section 7, where we proposed a new networked control scheme that can overcome the effects caused by network delay. In the last section we reviewed a number of distributed control architectures based on model predictive control; for the considered architectures, the underlying rationale, the fields of application, and the merits and limitations were discussed.

Acknowledgments

The work of Yuanqing Xia was supported by the National Basic Research Program of China (973 Program) (2012CB720000), the National Natural Science Foundation of China (60974011), the Program for New Century Excellent Talents in University of China (NCET-08-0047), the Ph.D. Programs Foundation of the Ministry of Education of China (20091101110023, 20111101110012), the Program for Changjiang Scholars and Innovative Research Team in University, and the Beijing Municipal Natural Science Foundation (4102053, 4101001). The work of Magdi S. Mahmoud was supported by research group project no. RG1105-1 from DSR-KFUPM.

References

1. Morari M., Lee J.H. (1999). Model predictive control: past, present and future. Computers and Chemical Engineering 23: 667-682.
2. Mayne D.Q., Rawlings J.B., Rao C.V., Scokaert P.O.M. (2000). Constrained model predictive control: stability and optimality. Automatica 36(6): 789-814.
3. Bemporad A. (2006). Model predictive control design: new trends and tools. In: Proceedings of the 45th IEEE Conference on Decision and Control, San Diego, CA, USA, 13-15 December 2006.
4. Garcia C.E., Prett D.M., Morari M. (1989). Model predictive control: theory and practice - a survey. Automatica 25(3): 335-348.
5. Zafiriou E., Marchal A. (1991). Stability of SISO quadratic dynamic matrix control with hard output constraints. AIChE Journal 37: 1550-1560.
6. Muske K.R., Rawlings J.B. (1993). Model predictive control with linear models. AIChE Journal 39: 262-287.
7. Genceli H., Nikolaou M. (1993). Robust stability analysis of constrained l1-norm model predictive control. AIChE Journal 39: 1954-1965.
8. Campo P.J., Morari M. (1986). Norm formulation of model predictive control problems. In: Proceedings of the American Control Conference, Seattle, Washington, 339-343.
9. Polak E., Yang T.H. (1993). Moving horizon control of linear systems with input saturation and plant uncertainty - Part 1: robustness. International Journal of Control 58(3): 613-638.
10. Polak E., Yang T.H. (1993). Moving horizon control of linear systems with input saturation and plant uncertainty - Part 2: disturbance rejection and tracking. International Journal of Control 58(3): 639-663.
11. Keerthi S., Gilbert E. (1988). Optimal infinite-horizon feedback laws for a general class of constrained discrete-time systems: stability and moving-horizon approximations. Journal of Optimization Theory and Applications 57(2): 265-293.
12. Bemporad A., Chisci L., Mosca E. (1994). On the stabilizing property of the zero terminal state receding horizon regulation. Automatica 30(12): 2013-2015.
13. Nevistić V., Primbs J.A. (1997). Finite receding horizon linear quadratic control: a unifying theory for stability and performance analysis. Technical Report CIT-CDS 97-001, California Institute of Technology, Pasadena, CA.
14. Primbs J., Nevistić V. (1997). Constrained finite receding horizon linear quadratic control. Technical Report CIT-CDS 97-002, California Institute of Technology, Pasadena, CA.
15. Allgöwer F., Badgwell T.A., Qin J.S., Rawlings J.B., Wright S.J. (1999). Nonlinear predictive control and moving horizon estimation: introductory overview. In: Frank P.M. (ed.), Advances in Control, Highlights of ECC'99, Springer, 391-449.
16. Nicolao G.D., Magni L., Scattolini R. (2000). Stability and robustness of nonlinear receding horizon control. In: Allgöwer F., Zheng A. (eds.), Nonlinear Predictive Control, Birkhäuser, 3-23.
17. Keerthi S.S., Gilbert E.G. (1988). Optimal infinite-horizon feedback laws for a general class of constrained discrete-time systems: stability and moving-horizon approximations. Journal of Optimization Theory and Applications 57(2): 265-293.
18. Meadows E.S., Rawlings J.B. (1993). Receding horizon control with an infinite horizon. In: Proceedings of the American Control Conference, San Francisco, 2926-2930.
19. Findeisen R., Allgöwer F. (2002). An introduction to nonlinear model predictive control. In: 21st Benelux Meeting on Systems and Control, Veldhoven.
20. Bellman R. (1957). Dynamic Programming. Princeton University Press, Princeton, New Jersey.
21. Xia Y., Yang H., Shi P., Fu M. (2010). Constrained infinite-horizon model predictive control for fuzzy discrete-time systems. IEEE Transactions on Fuzzy Systems 18(2): 429-432.
22. Mayne D.Q., Michalska H. (1990). Receding horizon control of nonlinear systems. IEEE Transactions on Automatic Control 35(7): 814-824.
23. Meadows E.S., Henson M.A., Eaton J.W., Rawlings J.B. (1995). Receding horizon control and discontinuous state feedback stabilization. International Journal of Control 62(5): 1217-1229.
24. Chen H., Allgöwer F. (1998). A quasi-infinite horizon nonlinear model predictive control scheme with guaranteed stability. Automatica 34(10): 1205-1218.
25. Nicolao G.D., Magni L., Scattolini R. (1996). Stabilizing nonlinear receding horizon control via a nonquadratic terminal state penalty. In: Symposium on Control, Optimization and Supervision, CESA'96 IMACS Multiconference, Lille, 185-187.
26. Fontes F.A. (2000). A general framework to design stabilizing nonlinear model predictive controllers. Systems & Control Letters 42(2): 127-143.
27. Jadbabaie A., Yu J., Hauser J. (2001). Unconstrained receding horizon control of nonlinear systems. IEEE Transactions on Automatic Control 46(5): 776-783.
28. Mayne D.Q., Michalska H. (1990). Receding horizon control of nonlinear systems. IEEE Transactions on Automatic Control 35(7): 814-824.
29. Michalska H., Mayne D.Q. (1993). Robust receding horizon control of constrained nonlinear systems. IEEE Transactions on Automatic Control 38(11): 1623-1633.
30. Nevistić V., Morari M. (1995). Constrained control of feedback-linearizable systems. In: Proceedings of the 3rd European Control Conference ECC'95, Rome, 1726-1731.
31. Primbs J., Nevistić V., Doyle J. (1999). Nonlinear optimal control: a control Lyapunov function and receding horizon perspective. Asian Journal of Control 1(1): 14-24.
32. Chisci L., Lombardi A., Mosca E. (1996). Dual receding horizon control of constrained discrete-time systems. European Journal of Control 2: 278-285.
33. Scokaert P.O.M., Mayne D.Q., Rawlings J.B. (1999). Suboptimal model predictive control (feasibility implies stability). IEEE Transactions on Automatic Control 44(3): 648-654.
34. Bitmead R.R., Gevers M., Wertz V. (1990). Adaptive Optimal Control - The Thinking Man's GPC. Prentice-Hall, Englewood Cliffs, NJ.
35. Parisini T., Zoppoli R. (1995). A receding horizon regulator for nonlinear systems and a neural approximation. Automatica 31(10): 1443-1451.
36. Nicolao G.D., Magni L., Scattolini R. (1996). Stabilizing nonlinear receding horizon control via a nonquadratic penalty. In: Proceedings of the IMACS Multiconference CESA, Lille, 1: 185.
37. Michalska H., Mayne D.Q. (1993). Robust receding horizon control of constrained nonlinear systems. IEEE Transactions on Automatic Control 38(11): 1623-1633.
38. Michalska H. (1997). A new formulation of receding horizon control without a terminal constraint on the state. European Journal of Control 3(1): 2-14.
39. Yang T.H., Polak E. (1993). Moving horizon control of nonlinear systems with input saturation, disturbances and plant uncertainty. International Journal of Control 58(4): 875-903.
40. de Oliveira S.L., Morari M. (2000). Contractive model predictive control for constrained nonlinear systems. IEEE Transactions on Automatic Control 45(6): 1053-1071.
41. de Oliveira S.L., Nevistić V., Morari M. (1995). Control of nonlinear systems subject to input constraints. In: IFAC Symposium on Nonlinear Control System Design, Tahoe City, CA, 15-20.
42. Kurtz M.J., Henson M.A. (1997). Input-output linearizing control of constrained nonlinear processes. Journal of Process Control 7(1): 3-17.
43. Keerthi S.S. (1986). Optimal feedback control of discrete-time systems with state-control constraints and general cost functions. PhD thesis, University of Michigan.
44. Rossiter J.A., Kouvaritakis B., Rice M.J. (1998). A numerically robust state-space approach to stable-predictive control strategies. Automatica 34(1): 65-74.
45. Berber R. (1995). Methods of Model Based Process Control. NATO ASI Series E: Applied Sciences 293, Kluwer Academic, Dordrecht, The Netherlands.
46. Zheng Z.Q., Morari M. (1993). Robust stability of constrained model predictive control. In: Proceedings of the American Control Conference, San Francisco, CA, 1: 379-383.
47. Zeman J., Ilkiv B. (2003). Robust min-max model predictive control of linear systems with constraints. IEEE Transactions on Automatic Control, 930-935.
48. Campo P.J., Morari M. (1987). Robust model predictive control. In: Proceedings of the American Control Conference, 2: 1021-1026.
49. Witsenhausen H.S. (1968). A min-max control problem for sampled linear systems. IEEE Transactions on Automatic Control 13(1): 5-21.
50. Zheng A., Morari M. (1994). Robust control of linear time-varying systems with constraints. In: Proceedings of the American Control Conference, Baltimore, MD, 2416-2420.
51. Lee J.H., Cooley B.L. (1997). Stable minimax control for state-space systems with bounded input matrix. In: Proceedings of the American Control Conference, Albuquerque, NM, 5: 2945.
52. Lee J.H., Yu Z. (1997). Worst-case formulations of model predictive control for systems with bounded parameters. Automatica 33(5): 763-781.
53. Zheng Z.Q., Morari M. (1995). Control of linear unstable systems with constraints. In: Proceedings of the American Control Conference, Seattle, WA, 3704-3708.
54. Zheng Z.Q., Morari M. (1995). Stability of model predictive control with mixed constraints. IEEE Transactions on Automatic Control 40(10): 1818-1823.
55. Kothare M.V., Balakrishnan V., Morari M. (1996). Robust constrained model predictive control using linear matrix inequalities. Automatica 32(10): 1361-1379.
56. Boyd S., El Ghaoui L., Feron E., Balakrishnan V. (1994). Linear Matrix Inequalities in System and Control Theory. SIAM, Philadelphia.
57. Wan Z., Kothare M.V. (2003). An efficient off-line formulation of robust model predictive control using linear matrix inequalities. Automatica 39: 837-846.
58. Pluymers B., Suykens J.A.K., De Moor B. (2005). Min-max feedback MPC using a time-varying terminal constraint set and comments on "Efficient robust constrained model predictive control with a time-varying terminal constraint set". Systems & Control Letters 54: 1143-1148.
59. Lee Y.I., Cannon M., Kouvaritakis B. (2005). Extended invariance and its use in model predictive control. Automatica 41: 2163-2169.
60. Jeong S.C., Park P. (2005). Constrained MPC algorithm for uncertain time-varying systems with state-delay. IEEE Transactions on Automatic Control 50(2): 257-263.
61. Sato T., Inoue A. (2006). Improvement of tracking performance in self-tuning PID controller based on generalized predictive control. International Journal of Innovative Computing, Information and Control 2(3): 491-503.
62. Kemih K., Tekkouk O., Filali S. (2006). Constrained generalized predictive control with estimation by genetic algorithm for a magnetic levitation system. International Journal of Innovative Computing, Information and Control 2(3): 543-552.
63. Tuan H., Apkarian P., Nguyen T. (2001). Robust and reduced-order filtering: new LMI-based characterizations and methods. IEEE Transactions on Signal Processing 49: 2975-2984.
64. de Oliveira M.C., Bernussou J., Geromel J.C. (1999). A new discrete-time robust stability condition. Systems & Control Letters 37: 261-265.
65. Geromel J.C., de Oliveira M.C., Bernussou J. (2002). Robust filtering of discrete-time linear systems with parameter dependent Lyapunov functions. SIAM Journal on Control and Optimization 41: 700-711.
66. Xia Y., Liu G.P., Shi P., et al. (2008). Robust constrained model predictive control based on parameter-dependent Lyapunov functions. Circuits, Systems and Signal Processing 27(4): 429-446.
67. Calafiore G.C., Dabbene F., Tempo R. (2000). Randomized algorithms for probabilistic robustness with real and complex structured uncertainty. IEEE Transactions on Automatic Control 45(12): 2218-2235.
68. Lee J.H., Cooley B.L. (1998). Optimal feedback control strategies for state-space systems with stochastic parameters. IEEE Transactions on Automatic Control 43(10): 1469-1474.
69. van Hessem D.H., Bosgra O.H. (2002). A conic reformulation of model predictive control including bounded and stochastic disturbances under state and input constraints. In: Proceedings of the 41st IEEE Conference on Decision and Control, Las Vegas, NV, 4: 4643-4648.
70. Kouvaritakis B., Cannon M., Tsachouridis V. (2004). Recent developments in stochastic MPC and sustainable development. Annual Reviews in Control 28(1): 23-35.
71. Magni L., Pala D., Scattolini R. (2009). Stochastic model predictive control of constrained linear systems with additive uncertainty. In: Proceedings of the European Control Conference, Budapest, Hungary.
72. Oldewurtel F., Jones C.N., Morari M. (2009). A tractable approximation of chance constrained stochastic MPC based on affine disturbance feedback. In: Proceedings of the 48th IEEE Conference on Decision and Control, Cancun.
73. Primbs J.A. (2007). A soft constraint approach to stochastic receding horizon control. In: Proceedings of the 46th IEEE Conference on Decision and Control.
74. Åström K.J. (1970). Introduction to Stochastic Control Theory. Academic Press, New York.
75. Batina I., Stoorvogel A.A., Weiland S. (2002). Optimal control of linear, stochastic systems with state and input constraints. In: Proceedings of the 41st IEEE Conference on Decision and Control, 1564-1569.
76. Muñoz D., Bemporad A., Alamo T. (2005). Stochastic programming applied to model predictive control. In: Proceedings of CDC-ECC'05, 1361-1366.
77. Schwarm A., Nikolaou M. (1999). Chance-constrained model predictive control. AIChE Journal 45(8): 1743-1752.
78. van Hessem D.H., Bosgra O.H. (2002). A conic reformulation of model predictive control including bounded and stochastic disturbances under state and input constraints. In: Proceedings of the 41st IEEE Conference on Decision and Control, 4643-4648.
79. Kouvaritakis B., Cannon M., Couchman P. (2006). MPC as a tool for sustainable development integrated policy assessment. IEEE Transactions on Automatic Control 51(1): 145-149.
80. Couchman P., Cannon M., Kouvaritakis B. (2006). Stochastic MPC with inequality stability constraints. Automatica 42: 2169-2174.
81. Couchman P., Kouvaritakis B., Cannon M. (2006). MPC on state space models with stochastic input map. In: Proceedings of the 45th IEEE Conference on Decision and Control, 3216-3221.
82. Cannon M., Kouvaritakis B., Wu X. (2009). Model predictive control for systems with stochastic multiplicative uncertainty and probabilistic constraints. Automatica 45(1): 167-172.
83. Cannon M., Kouvaritakis B., Raković S.V., Cheng Q. (2011). Stochastic tubes in model predictive control with probabilistic constraints. IEEE Transactions on Automatic Control 56(1): 194-200.
84. Kouvaritakis B., Cannon M., Raković S.V., Cheng Q. (2010). Explicit use of probabilistic distributions in linear predictive control. Automatica 46(10): 1719-1724.
85. Cannon M., Cheng Q., Kouvaritakis B., Raković S.V. (2012). Stochastic tube MPC with state estimation. Automatica 48(3): 536-541.
86. Zhivoglyadov P.V., Middleton R.H. (2003). Networked control design for linear systems. Automatica 39: 743-750.
87. Hespanha J.P., Naghshtabrizi P., Xu Y. (2007). A survey of recent results in networked control systems. Proceedings of the IEEE 95: 138-162.
88. Wong W.S., Brockett R.W. (1999). Systems with finite communication bandwidth constraints II: stabilization with limited information feedback. IEEE Transactions on Automatic Control 44(5): 1049-1053.
89. Ye H., Walsh G., Bushnell L.G. (2001). Real-time mixed-traffic wireless networks. IEEE Transactions on Industrial Electronics 48(5): 883-890.
90. Zhang W., Branicky M.S., Phillips S.M. (2001). Stability of networked control systems. IEEE Control Systems Magazine 21(1): 84-99.
91. Walsh G.C., Beldiman O., Bushnell L.G. (2001). Asymptotic behavior of nonlinear networked control systems. IEEE Transactions on Automatic Control 46(7): 1093-1097.
92. Yook J.K., Tilbury D.M., Soparkar N.R. (2001). A design methodology for distributed control systems to optimize performance in the presence of time delays. International Journal of Control 74(1): 58-76.
93. Park H.S., Kim Y.H., Kim D.S., Kwon W.H. (2002). A scheduling method for network-based control systems. IEEE Transactions on Control Systems Technology 10(3): 318-330.
94. Lin H., Zhai G., Antsaklis P.J. (2003). Robust stability and disturbance attenuation analysis of a class of networked control systems. In: Proceedings of the 42nd IEEE Conference on Decision and Control, Maui, HI, 1182-1187.
95. Lee K.C., Lee S., Lee M.H. (2003). Remote fuzzy logic control of networked control system via Profibus-DP. IEEE Transactions on Industrial Electronics 50(4): 784-792.
96. Guerrero J.M., Matas J., de Vicuña L.G., Castilla M., Miret J. (2006). Wireless-control strategy for parallel operation of distributed-generation inverters. IEEE Transactions on Industrial Electronics 53(5): 1461-1470.
97. Hespanha J.P., Naghshtabrizi P., Xu Y. (2007). A survey of recent results in networked control systems. Proceedings of the IEEE 95: 138-162.
98. Gao H., Chen T., Lam J. (2008). A new delay system approach to network-based control. Automatica 44: 39-52.
99. Gao H., Chen T. (2007). H∞ estimation for uncertain systems with limited communication capacity. IEEE Transactions on Automatic Control 52: 2070-2084.
100. Liu G.P., Xia Y., Chen J., Rees D., Hu W. (2007). Networked predictive control of systems with random network delays in both forward and feedback channels. IEEE Transactions on Industrial Electronics 54(3): 1282-1297.
101. Hu W., Liu G.P., Rees D. (2007). Event-driven networked predictive control. IEEE Transactions on Industrial Electronics 54: 1603-1613.
102. Yang S.H., Chen X., Edwards D.W., Alty J.L. (2003). Design issues and implementation of internet-based process control. Control Engineering Practice 11: 709-720.
103. Christofides P.D., El-Farra N.H. (2005). Control of Nonlinear and Hybrid Process Systems: Designs for Uncertainty, Constraints and Time-Delays. Springer, Berlin.
104. Mhaskar P., Gani A., McFall C., Christofides P.D., Davis J.F. (2007). Fault-tolerant control of nonlinear process systems subject to sensor faults. AIChE Journal 53: 654-668.
105. Nair G.N., Evans R.J. (2000). Stabilization with data-rate-limited feedback: tightest attainable bounds. Systems & Control Letters 41: 49.
106. Tipsuwan Y., Chow M.Y. (2003). Control methodologies in networked control systems. Control Engineering Practice 11: 1099-1111.
107. Hong S.H. (1995). Scheduling algorithm of data sampling times in the integrated communication and control systems. IEEE Transactions on Control Systems Technology 3: 225-230.
108. Shin K.G. (1991). Real-time communications in a computer-controlled workcell. IEEE Transactions on Robotics and Automation 7: 105-113.
109. Nilsson J. (1998). Real-time control systems with delays. PhD thesis, Lund Institute of Technology.
110. Lian F.L., Moyne J. (2003). Modelling and optimal controller design of networked control systems with multiple delays. International Journal of Control 76(6): 591-606.
111. Nesic D., Teel A.R. (2004). Input-output stability properties of networked control systems. IEEE Transactions on Automatic Control 49(10): 1650-1667.
112. Yue D., Han Q.L., Lam J. (2005). Network-based robust H∞ control of systems with uncertainty. Automatica 41(6): 999-1007.
113. Brockett R.W., Liberzon D. (2000). Quantized feedback stabilization of linear systems. IEEE Transactions on Automatic Control 45: 1279-1289.
114. Walsh G., Ye H., Bushnell L. (2002). Stability analysis of networked control systems. IEEE Transactions on Control Systems Technology 10: 438-446.
115. Nesic D., Teel A.R. (2004). Input-output stability properties of networked control systems. IEEE Transactions on Automatic Control 49: 1650-1667.
116. Ploplys N.J., Kawka P.A., Alleyne A.G. (2004). Closed-loop control over wireless networks: developing a novel timing scheme for real-time control systems. IEEE Control Systems Magazine 24: 52-71.
117. Tabbara M. (2007). Stability of wireless and wireline networked control systems. IEEE Transactions on Automatic Control 52: 1615-1630.
118. Ye H., Walsh G. (2001). Real-time mixed-traffic wireless networks. IEEE Transactions on Industrial Electronics 48: 883-890.
119. Ye H., Walsh G., Bushnell L. (2000). Wireless local area networks in the manufacturing industry. In: Proceedings of the American Control Conference, Chicago, Illinois, 2363-2367.
120. Hassibi A., Boyd S.P., How J.P. (1999). Control of asynchronous dynamical systems with rate constraints on events. In: Proceedings of the IEEE Conference on Decision and Control, Phoenix, Arizona, 1345-1351.
121. Ritchey V.S., Franklin G.F. (1989). A stability criterion for asynchronous multirate linear systems. IEEE Transactions on Automatic Control 34: 529-535.
122. Su Y.F., Bhaya A., Kaszkurewicz E., Kozyakin V.S. (1997). Further results on stability of asynchronous discrete-time linear systems. In: Proceedings of the 36th IEEE Conference on Decision and Control, San Diego, California, 915-920.
123. Nguyen G.T., Katz R.H., Noble B., Satyanarayanan M. (1996). A trace-based approach for modeling wireless channel behavior. In: Proceedings of the Winter Simulation Conference, Coronado, California, 597-604.
124. Hespanha J.P. (2005). A model for stochastic hybrid systems with application to communication networks. Nonlinear Analysis 62: 1353-1383.
125. Mao X. (1999). Stability of stochastic differential equations with Markovian switching. Stochastic Processes and Their Applications 79: 45-67.
126. Lin H., Antsaklis P.J. (2005). Stability and persistent disturbance attenuation properties for a class of networked control systems: switched system approach. International Journal of Control 78: 1447-1458.
127. Imer O.C., Yüksel S., Basar T. (2006). Optimal control of LTI systems over unreliable communication links. Automatica 42: 1429-1439.
128. Azimi-Sadjadi B. (2003). Stability of networked control systems in the presence of packet losses. In: Proceedings of the 42nd IEEE Conference on Decision and Control, Maui, Hawaii, 676-681.
129. Elia N., Eisenbeis J.N. (2004). Limitations of linear remote control over packet drop networks. In: Proceedings of the IEEE Conference on Decision and Control, Nassau, Bahamas, 5152-5157.
130. Hadjicostis C.N., Touri R. (2002). Feedback control utilizing packet dropping network links. In: Proceedings of the 41st IEEE Conference on Decision and Control, Las Vegas, Nevada, 1205-1210.
131. Jeong S.C., Park P. (2005). Constrained MPC algorithm for uncertain time-varying systems with state-delay. IEEE Transactions on Automatic Control 50: 257-263.
132. Schenato L. (2009). To zero or to hold control inputs with lossy links? IEEE Transactions on Automatic Control 54: 1093-1099.
133. Liu G.P., Mu J., Rees D., Chai S.C. (2006). Design and stability of networked control systems with random communication time delay using the modified MPC. International Journal of Control 79: 288-297.
134. Xia Y., Fu M., Liu B., Liu G.P. (2009). Design and performance analysis of networked control systems with random delay. Journal of Systems Engineering and Electronics 20: 807-822.
135. Liu G.P., Mu J., Rees D. (2004). Networked predictive control of systems with random communication delays. Presented at the UKACC International Conference on Control, Bath, U.K., Paper ID-015.
136. Liu G.P., Xia Y., Rees D. (2005). Predictive control of networked systems with random delays. In: IFAC World Congress, Prague.
137. Liu G.P., Rees D., Chai S.C., Nie X.Y. (2005). Design and practical implementation of networked predictive control systems. Networking, Sensing and Control 38: 17-21.
138. Xia Y., Liu G.P., Fu M., Rees D. (2009). Predictive control of networked systems with random delay and data dropout. IET Control Theory and Applications 3(11): 1476-1486.
139. Xia Y., Fu M., Liu G.P. (2011). Analysis and Synthesis of Networked Control Systems. Springer.
140. Negenborn R.R., De Schutter B., Hellendoorn H. (2006). Multi-agent model predictive control of transportation networks. In: Proceedings of the 2006 IEEE International Conference on Networking, Sensing and Control (ICNSC 2006), Ft. Lauderdale, FL, 296-301.
141. Yang T.C. (2006). Networked control systems: a brief survey. IEE Proceedings - Control Theory and Applications 152: 403.
142. Neumann P. (2007). Communication in industrial automation - what is going on? Control Engineering Practice 15: 1332-1347.
143. Qin S.J., Badgwell T.A. (2000). An overview of nonlinear model predictive control applications. In: Allgöwer F., Zheng A. (eds.), Nonlinear Model Predictive Control, Birkhäuser, Berlin, 369-392.
144. Qin S.J., Badgwell T.A. (2003). A survey of industrial model predictive control technology. Control Engineering Practice 11(7): 733-764.
145. Rawlings J.B., Stewart B.T. (2008). Coordinating multiple optimization-based controllers: new opportunities and challenges. Journal of Process Control 18: 839-845.
146. Scattolini R. (2009). Architectures for distributed and hierarchical model predictive control: a review. Journal of Process Control 19: 723-731.
147. Magni L., Scattolini R. (2006). Stabilizing decentralized model predictive control of nonlinear systems. Automatica 42: 1231-1236.
148. Vaccarini M., Longhi S., Katebi M.R. (2009). Unconstrained networked decentralized model predictive control. Journal of Process Control 19(2): 328-339.
149. Wang S.H., Davison E. (1973). On the stabilization of decentralized control systems. IEEE Transactions on Automatic Control 18: 473-478.
150. Davison E.J., Chang T.N. (1990). Decentralized stabilization and pole assignment for general proper systems. IEEE Transactions on Automatic Control 35: 652-664.
151. Du X., Xi Y., Li S. (2001). Distributed model predictive control for large-scale systems. In: Proceedings of the American Control Conference, 4: 3142-3143.
152. Camponogara E., Jia D., Krogh B.H., Talukdar S. (2002). Distributed model predictive control. IEEE Control Systems Magazine 22: 44-52.
153. Dunbar W.B. (2007). Distributed receding horizon control of dynamically coupled nonlinear systems. IEEE Transactions on Automatic Control 52(7): 1249-1263.
154. Dunbar W.B., Murray R.M. (2006). Distributed receding horizon control for multi-vehicle formation stabilization. Automatica 42(4): 549-558.
155. Li S., Zhang Y., Zhu Q. (2005). Nash-optimization enhanced distributed model predictive control applied to the Shell benchmark problem. Information Sciences 170(2-4): 329-349.
156. Richards A., How J.P. (2007). Robust distributed model predictive control. International Journal of Control 80(9): 1517-1531.
157. Venkat A.N., Rawlings J.B., Wright S.J. (2007). Distributed model predictive control of large-scale systems. In: Assessment and Future Directions of Nonlinear Model Predictive Control, Springer, Berlin Heidelberg, 591-605.
158. Liu J., Muñoz de la Peña D., Christofides P.D., Davis J.F. (2009). Lyapunov-based model predictive control of nonlinear systems subject to time-varying measurement delays. International Journal of Adaptive Control and Signal Processing 23: 788.
159. Muñoz de la Peña D., Christofides P.D. (2008). Lyapunov-based model predictive control of nonlinear systems subject to data losses. IEEE Transactions on Automatic Control 53: 2076-2089.
160. Franco E., Magni L., Parisini T., Polycarpou M.M., Raimondo D.M. (2008). Cooperative constrained control of distributed agents with nonlinear dynamics and delayed information exchange: a stabilizing receding-horizon approach. IEEE Transactions on Automatic Control 53: 324-338.
161. Liu J., Muñoz de la Peña D., Christofides P.D. (2010). Distributed model predictive control of nonlinear systems subject to asynchronous and delayed measurements. Automatica 46: 52-61.
