
Conjugate Gradient Approach for Discrete Time Optimal Control Problems with Model-Reality Differences

By Sie Long Kek, Sy Yi Sim, Wah June Leong and Kok Lay Teo

Submitted: May 31st, 2019 | Reviewed: September 13th, 2019 | Published: November 22nd, 2019

DOI: 10.5772/intechopen.89711


Abstract

In this chapter, an efficient computational approach is proposed for solving a general class of discrete-time optimal control problems. In our approach, a simplified optimal control model, in which adjusted parameters are added to the model used, is solved iteratively. In this way, the differences between the real plant and the model used are calculated and, in turn, used to update the optimal solution of the model used. During the computation procedure, an equivalent optimization problem is formulated, and the conjugate gradient algorithm is applied to solve this optimization problem. On this basis, the optimal solution of the modified model-based optimal control problem is obtained repeatedly. Once convergence is achieved, the iterative solution approximates the correct optimal solution of the original optimal control problem, in spite of model-reality differences. For illustration, both linear and nonlinear examples are presented to show the performance of the proposed approach. In conclusion, the efficiency of the proposed approach is demonstrated.

Keywords

  • optimal control
  • conjugate gradient
  • adjusted parameters
  • iterative solution
  • model-reality differences

1. Introduction

Optimal control problems have existed in engineering and the natural sciences for a long time, and the applications of optimal control are well documented in the literature [1, 2, 3, 4]. With the rapid evolution of computer technology, the development of optimal control techniques has reached a mature level, from classical control to modern control, from proportional-integral-derivative (PID) control to feedback control, and from adaptive control to intelligent control [5, 6, 7, 8]. Studies in the optimal control field are still progressing and attract the interest of not only engineers and applied mathematicians but also biologists and financial analysts, who investigate and contribute to optimal control theory.

In particular, the optimal control algorithm that integrates system optimization and parameter estimation gives a new insight to the control community. This algorithm is known as integrated system optimization and parameter estimation (ISOPE), and its dynamic version is called dynamic ISOPE (DISOPE). These algorithms were introduced by Roberts [9, 10, 11] and by Roberts and Becerra [12, 13, 14], respectively. The basic idea of DISOPE is to apply a model-based optimal control problem, which has different structures and parameters from the original optimal control problem, to obtain the correct optimal solution of the original optimal control problem, in spite of model-reality differences. Recently, this algorithm has been extended to cover both deterministic and stochastic versions, and it is known as the integrated optimal control and parameter estimation (IOCPE) algorithm [15, 16]. On the other hand, the application of optimization techniques, particularly the conjugate gradient method, for solving the optimal control problem [17, 18, 19] has also been studied, where the open-loop control strategy is concerned [3, 8].

In this chapter, the conjugate gradient approach [17, 19] is employed to solve the linear model-based optimal control problem so as to obtain the optimal solution of the original optimal control problem. In our approach, the simplified model, to which the adjusted parameters are added, is formulated first. Then, an expanded optimal control problem, which incorporates the system dynamics and the cost function of the original optimal control problem into the simplified model, is introduced. By defining the Hamiltonian function and the augmented cost function, the corresponding necessary conditions for optimality are derived. Among these necessary conditions, one set is for the modified model-based optimal control problem, one set defines the parameter estimation problem, and one set calculates the multipliers [15].

By virtue of the modified model-based optimal control problem, an equivalent optimization problem is defined, and the related gradient function is determined. With an initial control sequence, the initial gradient and the initial search direction are computed. Then, the control sequences are updated through a line search technique, where the gradient and search direction satisfy the conjugacy condition [17, 18]. During the iterations, the state and the costate are updated by the control sequence obtained from the conjugate gradient approach. When convergence is achieved within a given tolerance, the iterative solution approximates the correct optimal solution of the original optimal control problem, in spite of model-reality differences. For illustration, a linear example and a nonlinear example, namely a damped harmonic oscillator [7] and a continuous stirred-tank chemical reactor [8], are studied.

The chapter is organized as follows. In Section 2, the problem statement is described in detail, where the original optimal control problem and the simplified model are discussed. In Section 3, the methodology used is explained: the necessary conditions for optimality are derived, and the use of the conjugate gradient method for solving the equivalent optimization problem is presented. In Section 4, examples of a damped harmonic oscillator and a continuous stirred-tank chemical reactor are studied, and the results show the efficiency of the proposed algorithm. Finally, concluding remarks are made.

2. Problem statement

Consider a general class of discrete-time nonlinear optimal control problems, given by

$$\min_{u(k)} \; J_0(u) = \varphi(x(N),N) + \sum_{k=0}^{N-1} L(x(k),u(k),k)$$
$$\text{subject to} \quad x(k+1) = f(x(k),u(k),k), \qquad x(0) = x_0 \tag{1}$$

where $u(k) \in \mathbb{R}^m$, $k = 0, 1, \ldots, N-1$, and $x(k) \in \mathbb{R}^n$, $k = 0, 1, \ldots, N$, are the control sequences and the state sequences, respectively, while $f : \mathbb{R}^n \times \mathbb{R}^m \times \mathbb{R} \to \mathbb{R}^n$ represents the real plant, $L : \mathbb{R}^n \times \mathbb{R}^m \times \mathbb{R} \to \mathbb{R}$ is the cost under summation, and $\varphi : \mathbb{R}^n \times \mathbb{R} \to \mathbb{R}$ is the terminal cost. Here, $J_0$ is the scalar cost function, and $x_0$ is the known initial state vector. It is assumed that all functions in (1) are continuously differentiable with respect to their respective arguments.

This problem, which is referred to as Problem (P), is regarded as the real optimal control problem. Due to its complex and nonlinear structure, solving Problem (P) requires efficient computational techniques. For this reason, a simplified model of Problem (P) is identified and solved such that the true optimal solution of Problem (P) can be approximated. This simplified model-based optimal control problem is defined as follows:

$$\min_{u(k)} \; J_1(u) = \frac{1}{2} x(N)^{\mathrm T} S(N) x(N) + \gamma(N) + \sum_{k=0}^{N-1} \left[ \frac{1}{2}\left( x(k)^{\mathrm T} Q x(k) + u(k)^{\mathrm T} R u(k) \right) + \gamma(k) \right]$$
$$\text{subject to} \quad x(k+1) = A x(k) + B u(k) + \alpha(k), \qquad x(0) = x_0 \tag{2}$$

where $\gamma(k)$, $k = 0, 1, \ldots, N$, and $\alpha(k)$, $k = 0, 1, \ldots, N-1$, are introduced as the adjusted parameters, whereas $A$ is an $n \times n$ transition matrix and $B$ is an $n \times m$ control coefficient matrix. Besides, $S(N)$ and $Q$ are $n \times n$ positive semi-definite matrices, and $R$ is an $m \times m$ positive definite matrix. Here, $J_1$ is the scalar cost function.

Let this problem be referred to as Problem (M). It can be seen that, because of the different structures and parameters, solving Problem (M) alone would not yield the optimal solution of Problem (P) if the adjusted parameters are not taken into account. Notice that adding the adjusted parameters into Problem (M) allows us to calculate the differences between the real plant and the model used. On this basis, Problem (M) is solved iteratively to give the correct optimal solution of Problem (P), in spite of model-reality differences.
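To make this setup concrete, the following minimal sketch (in Python with NumPy) shows one way the real plant of Problem (P) and the linear model of Problem (M), together with the adjusted parameters, might be stored for the computations that follow. The plant, costs, dimensions, and matrices here are illustrative placeholders, not the examples of Section 4.

```python
import numpy as np

# --- Problem (P): the "reality" ------------------------------------------
def f_real(x, u, k):
    """Real plant dynamics x(k+1) = f(x(k), u(k), k); an illustrative placeholder."""
    return np.array([x[0] + 0.1 * x[1],
                     x[1] + 0.1 * np.sin(x[0]) + 0.1 * u[0]])

def L_real(x, u, k):
    """Stage cost L(x(k), u(k), k) of Problem (P)."""
    return 0.5 * (x @ x + u @ u)

def phi_real(xN, N):
    """Terminal cost phi(x(N), N) of Problem (P)."""
    return 0.5 * (xN @ xN)

# --- Problem (M): the simplified linear-quadratic model ------------------
n, m, N = 2, 1, 50
A = np.eye(n)                  # model transition matrix (n x n)
B = np.array([[0.0], [0.1]])   # model control coefficient matrix (n x m)
Q = np.eye(n)                  # positive semi-definite state weighting
R = np.eye(m)                  # positive definite control weighting
S_N = np.eye(n)                # terminal weighting S(N)

# Adjusted parameters: start at zero and later absorb model-reality differences
alpha = np.zeros((N, n))       # alpha(k), k = 0, ..., N-1
gamma = np.zeros(N + 1)        # gamma(k), k = 0, ..., N
```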

3. System optimization with parameter estimation

Now, an expanded optimal control problem, which combines the real plant and the cost function of Problem (P) with Problem (M) and is referred to as Problem (E), is introduced as follows:

$$\min_{u(k)} \; J_2(u) = \frac{1}{2} x(N)^{\mathrm T} S(N) x(N) + \gamma(N) + \sum_{k=0}^{N-1} \left[ \frac{1}{2}\left( x(k)^{\mathrm T} Q x(k) + u(k)^{\mathrm T} R u(k) \right) + \gamma(k) + \frac{1}{2} r_1 \left\| u(k) - v(k) \right\|^2 + \frac{1}{2} r_2 \left\| x(k) - z(k) \right\|^2 \right]$$
subject to
$$x(k+1) = A x(k) + B u(k) + \alpha(k), \qquad x(0) = x_0$$
$$\frac{1}{2} z(N)^{\mathrm T} S(N) z(N) + \gamma(N) = \varphi(z(N),N)$$
$$\frac{1}{2}\left( z(k)^{\mathrm T} Q z(k) + v(k)^{\mathrm T} R v(k) \right) + \gamma(k) = L(z(k),v(k),k)$$
$$A z(k) + B v(k) + \alpha(k) = f(z(k),v(k),k)$$
$$v(k) = u(k)$$
$$z(k) = x(k) \tag{3}$$

where $v(k) \in \mathbb{R}^m$, $k = 0, 1, \ldots, N-1$, and $z(k) \in \mathbb{R}^n$, $k = 0, 1, \ldots, N$, are introduced to separate the control and state sequences in the optimization problem from the respective signals in the parameter estimation problem, and $\| \cdot \|$ denotes the usual Euclidean norm. The terms $\frac{1}{2} r_1 \| u(k) - v(k) \|^2$ and $\frac{1}{2} r_2 \| x(k) - z(k) \|^2$, with $r_1, r_2 \in \mathbb{R}$, are introduced to improve the convexity and to facilitate the convergence of the resulting iterative algorithm. Here, we clarify that the algorithm is designed such that the constraints $v(k) = u(k)$ and $z(k) = x(k)$ are satisfied upon termination of the iterations, assuming that convergence is achieved. Moreover, the state $z(k)$ and the control $v(k)$ are used in the computation of the parameter estimation and matching scheme, while the corresponding state $x(k)$ and control $u(k)$ are reserved for optimizing the model-based optimal control problem. Therefore, system optimization and parameter estimation are thus defined and mutually integrated.

3.1 Necessary conditions for optimality

Define the Hamiltonian function for Problem (E), given by:

$$H_2(k) = \frac{1}{2}\left( x(k)^{\mathrm T} Q x(k) + u(k)^{\mathrm T} R u(k) \right) + \gamma(k) + \frac{1}{2} r_1 \left\| u(k) - v(k) \right\|^2 + \frac{1}{2} r_2 \left\| x(k) - z(k) \right\|^2 + p(k+1)^{\mathrm T} \left( A x(k) + B u(k) + \alpha(k) \right) - \lambda(k)^{\mathrm T} u(k) - \beta(k)^{\mathrm T} x(k) \tag{4}$$

where $\lambda(k) \in \mathbb{R}^m$, $k = 0, 1, \ldots, N-1$, $\beta(k) \in \mathbb{R}^n$, $k = 0, 1, \ldots, N$, and $p(k) \in \mathbb{R}^n$, $k = 0, 1, \ldots, N$, are modifiers. Using the Hamiltonian function in (4), the cost function in (3) can be written as the augmented cost function

$$J_2(u) = \frac{1}{2} x(N)^{\mathrm T} S(N) x(N) + \gamma(N) + p(0)^{\mathrm T} x(0) - p(N)^{\mathrm T} x(N) + \xi(N) \left[ \varphi(z(N),N) - \frac{1}{2} z(N)^{\mathrm T} S(N) z(N) - \gamma(N) \right] + \Gamma^{\mathrm T} \left( x(N) - z(N) \right)$$
$$\quad + \sum_{k=0}^{N-1} \Big[ H_2(k) - p(k)^{\mathrm T} x(k) + \lambda(k)^{\mathrm T} v(k) + \beta(k)^{\mathrm T} z(k) + \xi(k) \left( L(z(k),v(k),k) - \frac{1}{2}\left( z(k)^{\mathrm T} Q z(k) + v(k)^{\mathrm T} R v(k) \right) - \gamma(k) \right)$$
$$\qquad + \mu(k)^{\mathrm T} \left( f(z(k),v(k),k) - A z(k) - B v(k) - \alpha(k) \right) \Big] \tag{5}$$

where $p(k)$, $\xi(k)$, $\lambda(k)$, $\beta(k)$, $\mu(k)$, and $\Gamma$ are the appropriate multipliers to be determined later.

By applying the calculus of variations [7, 9, 11, 13, 15] to the augmented cost function in (5), the following necessary conditions for optimality are obtained:

(a) Stationary condition:

$$R u(k) + B^{\mathrm T} p(k+1) - \lambda(k) + r_1 \left( u(k) - v(k) \right) = 0 \tag{6}$$

(b) Co-state equation:

$$p(k) = Q x(k) + A^{\mathrm T} p(k+1) - \beta(k) + r_2 \left( x(k) - z(k) \right) \tag{7}$$

(c) State equation:

$$x(k+1) = A x(k) + B u(k) + \alpha(k) \tag{8}$$

(d) Boundary conditions:

$$p(N) = S(N) x(N) + \Gamma \quad \text{and} \quad x(0) = x_0 \tag{9}$$

(e) Adjusted parameter equations:

$$\varphi(z(N),N) = \frac{1}{2} z(N)^{\mathrm T} S(N) z(N) + \gamma(N) \tag{10}$$
$$L(z(k),v(k),k) = \frac{1}{2}\left( z(k)^{\mathrm T} Q z(k) + v(k)^{\mathrm T} R v(k) \right) + \gamma(k) \tag{11}$$
$$f(z(k),v(k),k) = A z(k) + B v(k) + \alpha(k) \tag{12}$$

(f) Modifier equations:

$$\Gamma = \nabla_{z(N)} \varphi - S(N) z(N) \tag{13}$$
$$\lambda(k) = \nabla_{v(k)} L - R v(k) - \left( \nabla_{v(k)} f - B \right)^{\mathrm T} \hat{p}(k+1) \tag{14}$$
$$\beta(k) = \nabla_{z(k)} L - Q z(k) - \left( \nabla_{z(k)} f - A \right)^{\mathrm T} \hat{p}(k+1) \tag{15}$$

with $\xi(k) = 1$ and $\mu(k) = \hat{p}(k+1)$.

(g) Separable variables:

$$v(k) = u(k), \qquad z(k) = x(k), \qquad \hat{p}(k) = p(k). \tag{16}$$

Notice that the necessary conditions for optimality obtained above are divided into three sets. The first set, given by (6)–(9), consists of the necessary conditions for the system optimization problem. The second set, given by (10)–(12), defines the parameter estimation problem. The third set, given by (13)–(15), provides the computation of the multipliers. In fact, the necessary conditions defined in (6)–(9) are the optimality conditions of the modified model-based optimal control problem, and the adjusted parameters, which are calculated from the necessary conditions in (10)–(12), measure the differences between the real plant and the model used.
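As a rough illustration of how the second and third sets of conditions might be evaluated in practice, the sketch below computes the adjusted parameters from (10)–(12) and the modifiers from (13)–(15), with the derivatives of $f$, $L$, and $\varphi$ approximated by forward differences. All function and variable names are assumptions of this sketch, and the modifier expressions follow (13)–(15) as reconstructed above.

```python
import numpy as np

def jacobian(fun, x, eps=1e-7):
    """Forward-difference Jacobian of fun at x (illustrative numerical helper)."""
    y0 = np.atleast_1d(fun(x))
    J = np.zeros((y0.size, x.size))
    for j in range(x.size):
        xp = x.copy()
        xp[j] += eps
        J[:, j] = (np.atleast_1d(fun(xp)) - y0) / eps
    return J

def estimate_parameters(z, v, A, B, Q, R, S_N, f_real, L_real, phi_real, N):
    """Parameter estimation step: gamma(k) and alpha(k) from (10)-(12)."""
    n = z.shape[1]
    alpha = np.zeros((N, n))
    gamma = np.zeros(N + 1)
    for k in range(N):
        alpha[k] = f_real(z[k], v[k], k) - A @ z[k] - B @ v[k]                        # (12)
        gamma[k] = L_real(z[k], v[k], k) - 0.5 * (z[k] @ Q @ z[k] + v[k] @ R @ v[k])  # (11)
    gamma[N] = phi_real(z[N], N) - 0.5 * (z[N] @ S_N @ z[N])                          # (10)
    return gamma, alpha

def compute_modifiers(z, v, p_hat, A, B, Q, R, S_N, f_real, L_real, phi_real, N):
    """Multiplier computation: Gamma, lambda(k), and beta(k) from (13)-(15)."""
    n, m = z.shape[1], v.shape[1]
    lam = np.zeros((N, m))
    beta = np.zeros((N, n))
    Gamma = jacobian(lambda zz: phi_real(zz, N), z[N]).ravel() - S_N @ z[N]           # (13)
    for k in range(N):
        dL_dv = jacobian(lambda vv: L_real(z[k], vv, k), v[k]).ravel()
        dL_dz = jacobian(lambda zz: L_real(zz, v[k], k), z[k]).ravel()
        df_dv = jacobian(lambda vv: f_real(z[k], vv, k), v[k])
        df_dz = jacobian(lambda zz: f_real(zz, v[k], k), z[k])
        lam[k] = dL_dv - R @ v[k] - (df_dv - B).T @ p_hat[k + 1]                      # (14)
        beta[k] = dL_dz - Q @ z[k] - (df_dz - A).T @ p_hat[k + 1]                     # (15)
    return Gamma, lam, beta
```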

3.2 Modified model-based optimal control problem

As a consequence, the modified model-based optimal control problem, which is referred to as Problem (MM), is defined by

$$\min_{u(k)} \; J_3(u) = \frac{1}{2} x(N)^{\mathrm T} S(N) x(N) + \Gamma^{\mathrm T} x(N) + \gamma(N) + \sum_{k=0}^{N-1} \Big[ \frac{1}{2}\left( x(k)^{\mathrm T} Q x(k) + u(k)^{\mathrm T} R u(k) \right) + \gamma(k) + \frac{1}{2} r_1 \left\| u(k) - v(k) \right\|^2 + \frac{1}{2} r_2 \left\| x(k) - z(k) \right\|^2 - \lambda(k)^{\mathrm T} u(k) - \beta(k)^{\mathrm T} x(k) \Big]$$
$$\text{subject to} \quad x(k+1) = A x(k) + B u(k) + \alpha(k), \qquad x(0) = x_0 \tag{17}$$

with the specified $\alpha(k)$, $\gamma(k)$, $\lambda(k)$, $\beta(k)$, $\Gamma$, $v(k)$, and $z(k)$, where the boundary conditions are given by $x_0$ and $p(N)$ with the specified multiplier $\Gamma$.

It is obvious that Problem (MM), which is derived from Problem (E), is a modified optimal control problem and is also known as a modified linear quadratic regulator problem. Importantly, the set of necessary conditions (6)–(9) for Problem (E) is exactly the set of necessary conditions satisfied by Problem (MM). In addition, due to the quadratic form of the objective function, the conjugate gradient method [17, 18], which is one of the numerical optimization techniques, can be applied to solve Problem (MM).
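For instance, the cost $J_3(u)$ in (17) can be evaluated for any candidate control sequence by first rolling the model dynamics (8) forward in time; a minimal sketch, with assumed variable names, is given below.

```python
import numpy as np

def simulate_model(u, x0, A, B, alpha, N):
    """Roll the model dynamics (8) forward: x(k+1) = A x(k) + B u(k) + alpha(k)."""
    x = np.zeros((N + 1, x0.size))
    x[0] = x0
    for k in range(N):
        x[k + 1] = A @ x[k] + B @ u[k] + alpha[k]
    return x

def cost_J3(u, x0, A, B, alpha, gamma, Q, R, S_N, Gamma, lam, beta, v, z, r1, r2, N):
    """Evaluate the modified model-based cost J3(u) defined in (17)."""
    x = simulate_model(u, x0, A, B, alpha, N)
    J = 0.5 * (x[N] @ S_N @ x[N]) + Gamma @ x[N] + gamma[N]
    for k in range(N):
        J += (0.5 * (x[k] @ Q @ x[k] + u[k] @ R @ u[k]) + gamma[k]
              + 0.5 * r1 * np.sum((u[k] - v[k]) ** 2)
              + 0.5 * r2 * np.sum((x[k] - z[k]) ** 2)
              - lam[k] @ u[k] - beta[k] @ x[k])
    return J
```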

3.3 Conjugate gradient algorithm

For simplicity [19], Problem (MM) is established as a nonlinear optimization problem with the initial control given by $u^{(0)} = \{ u(k)^{(0)} \}$ as follows:

$$\min_{u} \; J_3(u) \quad \text{subject to} \quad u = \left\{ u(k) \in \mathbb{R}^m, \; k = 0, 1, \ldots, N-1 \right\} \tag{18}$$

Let this problem be referred to as Problem (Q). Moreover, the Hamiltonian function defined in (4) is taken as an equivalent objective function. Hence, this Hamiltonian function allows the evaluation of the gradient function, which is given by the stationary condition in (6), where the iterative solution $u^{(i)} = \{ u(k)^{(i)} \}$ satisfies the state equation (8), solved forward in time, and the costate equation (7), solved backward in time.
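A minimal sketch of this gradient evaluation is given below: the state equation (8) is solved forward in time, the costate equation (7) backward in time with the boundary condition (9), and the gradient is then read off from the stationary condition (6). The argument names are assumptions of this sketch.

```python
import numpy as np

def gradient_J3(u, x0, A, B, alpha, Q, R, S_N, Gamma, lam, beta, v, z, r1, r2, N):
    """Gradient of J3 via the stationary condition (6)."""
    n = x0.size
    x = np.zeros((N + 1, n))
    x[0] = x0
    for k in range(N):                               # state forward in time, Eq. (8)
        x[k + 1] = A @ x[k] + B @ u[k] + alpha[k]
    p = np.zeros((N + 1, n))
    p[N] = S_N @ x[N] + Gamma                        # boundary condition, Eq. (9)
    for k in range(N - 1, -1, -1):                   # costate backward in time, Eq. (7)
        p[k] = Q @ x[k] + A.T @ p[k + 1] - beta[k] + r2 * (x[k] - z[k])
    g = np.zeros_like(u)
    for k in range(N):                               # stationary condition, Eq. (6)
        g[k] = R @ u[k] + B.T @ p[k + 1] - lam[k] + r1 * (u[k] - v[k])
    return g, x, p
```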

Define the gradient function $g : \mathbb{R}^m \to \mathbb{R}^m$ as

$$g\!\left( u^{(i)} \right) = \nabla_{u} J_3\!\left( u^{(i)} \right) \tag{19}$$

which is represented by the stationary condition in (6). For an arbitrary initial control $u^{(0)} \in \mathbb{R}^m$, the initial gradient and the initial search direction are calculated from

$$g^{(0)} = g\!\left( u^{(0)} \right) \tag{20}$$
$$d^{(0)} = -g^{(0)}. \tag{21}$$

The following line search equation is applied to update the control sequence:

$$u^{(i+1)} = u^{(i)} + a_i d^{(i)} \tag{22}$$

where $a_i$ is the step size, and its value can be determined from

$$a_i = \arg\min_{a \geq 0} \; J_3\!\left( u^{(i)} + a d^{(i)} \right). \tag{23}$$

After that, the gradient and the search direction are updated by

$$g^{(i+1)} = g\!\left( u^{(i+1)} \right) \tag{24}$$
$$d^{(i+1)} = -g^{(i+1)} + b_i d^{(i)} \tag{25}$$

with

$$b_i = \frac{ \left( g^{(i+1)} \right)^{\mathrm T} g^{(i+1)} }{ \left( g^{(i)} \right)^{\mathrm T} g^{(i)} } \tag{26}$$

where $i = 0, 1, 2, \ldots$ is the iteration number.

From the discussion above, we present the result in the following proposition:

Proposition 1. Consider Problem (Q). The control sequence $u^{(i)}$, which is updated by (22) and is represented by

$$u^{(i)} = \left[ u(0)^{\mathrm T} \; u(1)^{\mathrm T} \; \cdots \; u(N-1)^{\mathrm T} \right]^{\mathrm T},$$

is generated through a set of search direction vectors $d^{(i)}$ whose components are linearly independent. Moreover, the search directions $d^{(i)}$ satisfy the conjugacy condition.

The conjugate gradient algorithm is summarized below:

Conjugate gradient algorithm

Data: Choose an arbitrary initial control $u^{(0)}$ and the tolerance $\varepsilon$.

Step 0: Compute the initial gradient $g^{(0)}$ from (20) and the initial search direction $d^{(0)}$ from (21), respectively. Set $i = 0$.

Step 1: Solve the state equation (8) forward in time from $k = 0$ to $k = N$ with the initial condition in (9) to obtain $x(k)^{(i)}$, $k = 0, 1, \ldots, N$.

Step 2: Solve the costate equation (7) backward in time from $k = N$ to $k = 0$ with the boundary condition in (9), where $p(k)^{(i)}$ is the solution obtained.

Step 3: Calculate the value of the cost function $J_3(u^{(i)})$ from (17).

Step 4: Solve (23) to obtain the step size $a_i$.

Step 5: Calculate the control $u^{(i+1)}$ from (22).

Step 6: Evaluate the gradient $g^{(i+1)}$ and the search direction $d^{(i+1)}$ from (24) and (25), respectively, where $b_i$ is computed from (26). If $g^{(i+1)} = g^{(i)}$ within the given tolerance $\varepsilon$, stop; else set $i = i + 1$ and go to Step 1.

Remark 1:

  1. Step 0 is the preliminary step, in which the initial search direction is set based on the gradient direction for use in the conjugate gradient algorithm.

  2. Steps 1, 2, and 3 are performed to solve the system optimization problem by using the corresponding control sequence $u^{(i)}$.

  3. Steps 4, 5, and 6 are the computation steps in implementing the conjugate direction.
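One possible realization of Steps 0–6 is sketched below. A simple backtracking rule is used here as a stand-in for the exact line search in (23), and `grad_fn` and `cost_fn` are assumed wrappers around the gradient and cost evaluations described earlier (e.g., returning only the gradient vector and the scalar cost, respectively).

```python
import numpy as np

def conjugate_gradient(u0, grad_fn, cost_fn, tol=1e-6, max_iter=200):
    """Conjugate gradient algorithm (Steps 0-6) with the Fletcher-Reeves update (26).
    grad_fn(u) returns the gradient (19); cost_fn(u) returns J3(u) from (17)."""
    u = u0.copy()
    g = grad_fn(u)                                  # initial gradient, Eq. (20)
    d = -g                                          # initial search direction, Eq. (21)
    for i in range(max_iter):
        a, J_old = 1.0, cost_fn(u)                  # crude backtracking stand-in for (23)
        while cost_fn(u + a * d) > J_old and a > 1e-12:
            a *= 0.5
        u_new = u + a * d                           # control update, Eq. (22)
        g_new = grad_fn(u_new)                      # new gradient, Eq. (24)
        if np.linalg.norm(g_new - g) <= tol:        # stopping test of Step 6
            return u_new
        b = (g_new.ravel() @ g_new.ravel()) / (g.ravel() @ g.ravel())   # Eq. (26)
        d = -g_new + b * d                          # new search direction, Eq. (25)
        u, g = u_new, g_new
    return u
```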

3.4 Iterative calculation procedure

Accordingly, Problem (Q) is solved by using the conjugate gradient algorithm. Indeed, the solution procedure for system optimization with parameter estimation is obtained by combining the conjugate gradient algorithm with the parameter estimation step. A summary of the calculation procedure, which incorporates the principle of model-reality differences, is given as follows:

Iterative algorithm based on model-reality differences

Data: $A$, $B$, $Q$, $R$, $S(N)$, $N$, $x_0$, $r_1$, $r_2$, $k_v$, $k_z$, $k_p$, $f$, $L$. Note that $A$ and $B$ could be determined based on the linearization of $f$ at $x_0$ or from the linear terms of $f$.

Step 0: Compute a nominal solution. Assume that $\alpha(k) = 0$, $k = 0, 1, \ldots, N-1$, and $r_1 = r_2 = 0$. Solve Problem (M) defined by (2) to obtain $u(k)^{(0)}$, $k = 0, 1, \ldots, N-1$, and $x(k)^{(0)}$, $p(k)^{(0)}$, $k = 0, 1, \ldots, N$. Then, with $\alpha(k) = 0$, $k = 0, 1, \ldots, N-1$, and using $r_1$, $r_2$ from the data, set $i = 0$, $v(k)^{(0)} = u(k)^{(0)}$, $z(k)^{(0)} = x(k)^{(0)}$, and $\hat{p}(k)^{(0)} = p(k)^{(0)}$.

Step 1: Compute the parameters $\gamma(k)^{(i)}$, $k = 0, 1, \ldots, N$, and $\alpha(k)^{(i)}$, $k = 0, 1, \ldots, N-1$, from (10)–(12). This is called the parameter estimation step.

Step 2: Compute the modifiers $\Gamma^{(i)}$, $\lambda(k)^{(i)}$, and $\beta(k)^{(i)}$, $k = 0, 1, \ldots, N-1$, from (13)–(15). Notice that this step requires taking the derivatives of $f$ and $L$ with respect to $v(k)^{(i)}$ and $z(k)^{(i)}$.

Step 3: With $\gamma(k)^{(i)}$, $\alpha(k)^{(i)}$, $\Gamma^{(i)}$, $\lambda(k)^{(i)}$, $\beta(k)^{(i)}$, $v(k)^{(i)}$, and $z(k)^{(i)}$, solve Problem (Q) by using the conjugate gradient algorithm. This is called the system optimization step.

Step 4: Test the convergence and update the optimal solution of Problem (P). In order to provide a mechanism for regulating convergence, a simple relaxation method is employed:

$$v(k)^{(i+1)} = v(k)^{(i)} + k_v \left( u(k)^{(i)} - v(k)^{(i)} \right) \tag{27}$$
$$z(k)^{(i+1)} = z(k)^{(i)} + k_z \left( x(k)^{(i)} - z(k)^{(i)} \right) \tag{28}$$
$$\hat{p}(k)^{(i+1)} = \hat{p}(k)^{(i)} + k_p \left( p(k)^{(i)} - \hat{p}(k)^{(i)} \right) \tag{29}$$

where $k_v, k_z, k_p \in (0, 1]$ are scalar gains. If $v(k)^{(i+1)} = v(k)^{(i)}$, $k = 0, 1, \ldots, N-1$, and $z(k)^{(i+1)} = z(k)^{(i)}$, $k = 0, 1, \ldots, N$, within a given tolerance, stop; else set $i = i + 1$ and repeat the procedure starting from Step 1.
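A minimal sketch of this relaxation step, with an illustrative tolerance, is as follows.

```python
import numpy as np

def relax_update(v, z, p_hat, u, x, p, kv, kz, kp, tol=1e-6):
    """Simple relaxation (27)-(29) used to regulate the convergence of the outer loop."""
    v_new = v + kv * (u - v)                        # Eq. (27)
    z_new = z + kz * (x - z)                        # Eq. (28)
    p_new = p_hat + kp * (p - p_hat)                # Eq. (29)
    converged = (np.max(np.abs(v_new - v)) < tol and
                 np.max(np.abs(z_new - z)) < tol)
    return v_new, z_new, p_new, converged
```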

Remark 2:

  1. In Step 0, the nominal solution could be obtained by using the standard procedure of the linear quadratic regulator approach, where the feedback gain and the Riccati equation are calculated offline (a sketch of this computation is given after this remark).

  2. In Step 3, applying the conjugate gradient algorithm to obtain the new control sequence is effective provided that the conjugacy of the search directions is satisfied.

  3. In Step 4, the simple relaxation method in (27)–(29) is used so that the matching scheme for the parameters and the optimal solution can be established.
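As a sketch of the offline computation mentioned in Remark 2, the nominal solution of Problem (M) with $\alpha(k) = 0$ can be obtained by the standard backward Riccati recursion followed by a forward simulation; the function below is an illustrative implementation of this procedure under those assumptions.

```python
import numpy as np

def nominal_lqr(A, B, Q, R, S_N, x0, N):
    """Nominal solution of Problem (M) with alpha(k) = 0 via the discrete-time LQR:
    backward Riccati recursion (offline), then forward simulation of u(k) = -K(k) x(k)."""
    n, m = A.shape[0], B.shape[1]
    S = [None] * (N + 1)
    K = [None] * N
    S[N] = S_N
    for k in range(N - 1, -1, -1):                      # Riccati equation, backward in time
        G = R + B.T @ S[k + 1] @ B
        K[k] = np.linalg.solve(G, B.T @ S[k + 1] @ A)   # feedback gain
        S[k] = Q + A.T @ S[k + 1] @ (A - B @ K[k])
    x = np.zeros((N + 1, n))
    u = np.zeros((N, m))
    x[0] = x0
    for k in range(N):                                  # forward simulation
        u[k] = -K[k] @ x[k]
        x[k + 1] = A @ x[k] + B @ u[k]
    # the nominal costate can be recovered as p(k) = S[k] @ x[k]
    return u, x, K, S
```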

4. Illustrative examples

In this section, two examples are studied. The first example concerns optimizing and controlling a damped harmonic oscillator [7], and the second example is related to the optimal control of a continuous stirred-tank chemical reactor [8]. The mathematical models of these examples are discussed, and their optimal solutions are obtained by using the algorithm discussed in Section 3. Here, the algorithm is implemented in the Octave 5.1.0 environment.

4.1 Example 1: a damped harmonic oscillator

Consider a damped harmonic oscillator [7] given by

$$\dot{x} = \begin{bmatrix} 0 & 1 \\ -\omega^2 & -2\delta\omega \end{bmatrix} x + \begin{bmatrix} 0 \\ 1 \end{bmatrix} u \tag{30}$$

with the natural frequency $\omega = 0.8$, the damping ratio $\delta = 0.1$, and the initial state $x(0) = (10, 10)^{\mathrm T}$. Define the state $x = (x_1, x_2)^{\mathrm T}$, where $x_1$ is the displacement and $x_2$ is the velocity. For the purpose of controlling this oscillator, the following objective function

$$J_0 = \frac{1}{2} \int_{0.0}^{9.4} \left( x_1(t)^2 + x_2(t)^2 + u(t)^2 \right) dt \tag{31}$$

is minimized. This problem is a continuous-time linear optimal control problem, and the equivalent discrete-time optimal control problem, which is regarded as Problem (P), is given by:

$$\min_{u} \; J(u) = \sum_{k=0}^{10} \frac{1}{2} \Delta t \left( x_1(k)^2 + x_2(k)^2 + u(k)^2 \right)$$
$$\text{subject to} \quad x(k+1) = \begin{bmatrix} 1.00 & 0.94 \\ -0.60 & 0.85 \end{bmatrix} x(k) + \begin{bmatrix} 0.00 \\ 0.94 \end{bmatrix} u(k) \tag{32}$$

with the initial state $x(0) = (10, 10)^{\mathrm T}$, where the sampling time $\Delta t = 0.94$ s is used for the discretization.

Consider the model-based optimal control problem, which is regarded as Problem (M), given by:

$$\min_{u} \; J(u) = \sum_{k=0}^{10} \left[ \frac{1}{2} \left( x_1(k)^2 + x_2(k)^2 + u(k)^2 \right) + \gamma(k) \right] \Delta t$$
$$\text{subject to} \quad x(k+1) = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} x(k) + \begin{bmatrix} 1 \\ 0 \end{bmatrix} u(k) + \alpha(k) \tag{33}$$

with the initial state $x(0) = (10, 10)^{\mathrm T}$, where the adjusted parameters $\gamma(k)$, $k = 0, 1, \ldots, N$, and $\alpha(k)$, $k = 0, 1, \ldots, N-1$, are supplied to the model used.
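For illustration, the data of (32) and (33) might be encoded as follows; this is a sketch only, and the treatment of the terminal weighting is an assumption of this sketch.

```python
import numpy as np

dt, N = 0.94, 10

# Problem (P): discretized oscillator dynamics and stage cost from (32)
A_real = np.array([[1.00, 0.94],
                   [-0.60, 0.85]])
B_real = np.array([[0.00],
                   [0.94]])
x0 = np.array([10.0, 10.0])

def f_real(x, u, k):
    """Real plant of Example 1: the discrete-time oscillator in (32)."""
    return A_real @ x + B_real @ u

def L_real(x, u, k):
    """Stage cost of Example 1, scaled by the sampling time."""
    return 0.5 * dt * (x @ x + u @ u)

# Weighting matrices consistent with the stage cost above
Q = dt * np.eye(2)
R = dt * np.eye(1)
S_N = dt * np.eye(2)   # terminal weighting: one plausible reading of the k = N term in (32)

# Problem (M): deliberately simplified model in (33)
A_model = np.eye(2)
B_model = np.array([[1.0],
                    [0.0]])
```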

By using the proposed algorithm, the simulation result is shown in Table 1. Notice that the minimum cost for Problem (M) is 546.05 units without the adjusted parameters. Once the adjusted parameters are taken into consideration, the iterative solution approximates the true optimal solution of the original optimal control problem, in spite of model-reality differences. It is highlighted that a cost reduction of about 99% is achieved, giving the final cost of 128.50 units.

Number of iterations | Initial cost | Final cost | Elapsed time (s)
---|---|---|---
20 | 17053.11 | 128.50 | 1.38021

Table 1.

Simulation result, Example 1.

Figures 1 and 2 show the trajectories of the control and the state, respectively. With this control effort, the state reaches the steady state after 4 units of time, which indicates that the oscillator has stopped moving. Figure 3 shows the changes of the costate during the first 2 units of time. The optimal solution obtained is verified by the satisfaction of the stationary condition, as shown in Figure 4. Figures 5 and 6 show the adjusted parameters after convergence is achieved, where the model-reality differences are measured during the iterative procedure.

Figure 1.

Final control u(k), Example 1.

Figure 2.

Final state x(k), Example 1.

Figure 3.

Final costate p(k), Example 1.

Figure 4.

Stationary Hu(k), Example 1.

Figure 5.

Adjusted parameter α(k), Example 1.

Figure 6.

Adjusted parameter γ(k), Example 1.

Therefore, this damped harmonic oscillator is controlled, and the cost function is minimized as desired.

4.2 Example 2: a continuous stirred-tank chemical reactor

Consider a continuous stirred-tank chemical reactor, which is described by two state equations [8]. The flow of a coolant through a coil inserted in the reactor is used to control the first-order, irreversible exothermic reaction taking place in the reactor. Assume that $x_1(t)$ is the deviation from the steady-state temperature, $x_2(t)$ is the deviation from the steady-state concentration, and $u(t)$ is the normalized control variable that represents the effect of the coolant flow on the chemical reaction. The corresponding state equations are given by

$$\dot{x}_1(t) = -2\left( x_1(t) + 0.25 \right) + \left( x_2(t) + 0.5 \right) \exp\!\left( \frac{25 x_1(t)}{x_1(t) + 2} \right) - \left( x_1(t) + 0.25 \right) u(t) \tag{34}$$
$$\dot{x}_2(t) = 0.5 - x_2(t) - \left( x_2(t) + 0.5 \right) \exp\!\left( \frac{25 x_1(t)}{x_1(t) + 2} \right) \tag{35}$$

with the initial state $x(0) = (0.05, 0.00)^{\mathrm T}$. The cost function to be minimized is given by

$$J_0 = \int_{0.0}^{0.8} \left( x_1(t)^2 + x_2(t)^2 + 0.1\, u(t)^2 \right) dt. \tag{36}$$

Here, the desired objective is to maintain the temperature and the concentration close to their respective steady-state values without expending large amounts of the control effort.

This problem is a continuous-time nonlinear optimal control problem. For the discretization, the sampling time $\Delta t = 0.0057$ s is used to formulate the equivalent discrete-time optimal control problem, which is referred to as Problem (P), given by:

$$\min_{u} \; J(u) = \sum_{k=0}^{80} \frac{1}{2} \left( 2 x_1(k)^2 + 2 x_2(k)^2 + 0.2\, u(k)^2 \right) \Delta t$$
subject to
$$x_1(k+1) = x_1(k) - 2\left( x_1(k) + 0.25 \right)\Delta t + \left( x_2(k) + 0.5 \right)\Delta t \exp\!\left( \frac{25 x_1(k)}{x_1(k) + 2} \right) - \left( x_1(k) + 0.25 \right) u(k)\Delta t$$
$$x_2(k+1) = x_2(k) + \left( 0.5 - x_2(k) \right)\Delta t - \left( x_2(k) + 0.5 \right)\Delta t \exp\!\left( \frac{25 x_1(k)}{x_1(k) + 2} \right) \tag{37}$$

with the initial state $x(0) = (0.05, 0.00)^{\mathrm T}$.

To apply the proposed algorithm for obtaining the optimal solution of Problem (P), the following model, which is referred to as Problem (M), is introduced:

$$\min_{u} \; J(u) = \sum_{k=0}^{80} \left[ \frac{1}{2} \left( 2 x_1(k)^2 + 2 x_2(k)^2 + 0.2\, u(k)^2 \right) + \gamma(k) \right] \Delta t$$
$$\text{subject to} \quad x(k+1) = \begin{bmatrix} 1.048 & 0.010 \\ -0.062 & 0.984 \end{bmatrix} x(k) + \begin{bmatrix} -0.002 \\ 0.000 \end{bmatrix} u(k) + \alpha(k) \tag{38}$$

with the initial state $x(0) = (0.05, 0.00)^{\mathrm T}$, where the adjusted parameters $\gamma(k)$, $k = 0, 1, \ldots, N$, and $\alpha(k)$, $k = 0, 1, \ldots, N-1$, are added into the model.
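As in Example 1, the data of (37) and (38) might be encoded as follows; this is a sketch only, with the stage weights read off from (37) and the variable names assumed.

```python
import numpy as np

dt, N = 0.0057, 80
x0 = np.array([0.05, 0.00])

def f_real(x, u, k):
    """Real plant of Example 2: the discretized CSTR dynamics in (37)."""
    e = np.exp(25.0 * x[0] / (x[0] + 2.0))
    x1_next = x[0] + dt * (-2.0 * (x[0] + 0.25)
                           + (x[1] + 0.5) * e
                           - (x[0] + 0.25) * u[0])
    x2_next = x[1] + dt * (0.5 - x[1] - (x[1] + 0.5) * e)
    return np.array([x1_next, x2_next])

def L_real(x, u, k):
    """Stage cost of Example 2 from (37)."""
    return 0.5 * dt * (2.0 * x[0]**2 + 2.0 * x[1]**2 + 0.2 * u[0]**2)

# Problem (M): linear model in (38), i.e., the linearization of f about x(0)
A_model = np.array([[1.048, 0.010],
                    [-0.062, 0.984]])
B_model = np.array([[-0.002],
                    [0.000]])
Q = dt * np.diag([2.0, 2.0])
R = dt * np.array([[0.2]])
```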

Table 2 shows the simulation result obtained by using the proposed algorithm. It is noted that the minimum cost for the linear model-based optimal control problem is 5.9589 units. At the beginning of the iterative calculation procedure, the initial cost is 0.147463 units, and a cost reduction of about 90% is achieved, giving the final cost of 0.014167 units.

Number of iterationInitial costFinal costElapsed time (s)
90.1474630.0141674.60934

Table 2.

Simulation result, Example 2.

The trajectories of the final control and the final state are shown in Figures 7 and 8, respectively. It is noted that the state reaches the steady state after 40 units of time under the control effort applied. This situation indicates that the temperature and the concentration are maintained at their steady-state values; thus, the desired objective is achieved. Figure 9 shows the costate behavior, which reduces gradually to zero at the terminal time, and Figure 10 shows the stationary condition, which verifies the existence of the optimal solution. The adjusted parameters, which are shown in Figures 11 and 12, respectively, measure the differences between the model used and the real plant.

Figure 7.

Final control u(k), Example 2.

Figure 8.

Final state x(k), Example 2.

Figure 9.

Final costate p(k), Example 2.

Figure 10.

Stationary Hu(k), Example 2.

Figure 11.

Adjusted parameter α(k), Example 2.

Figure 12.

Adjusted parameter γ(k), Example 2.

Hence, the correct optimal solution of Problem (P) is successfully approximated by solving the model in Problem (M), and the efficiency of the proposed algorithm is demonstrated.

5. Concluding remarks

The approach that integrates system optimization and parameter estimation was discussed in this chapter. The use of the conjugate gradient method in solving the model-based optimal control problem was examined, and the applicability of the conjugate gradient approach in association with the principle of model-reality differences was identified. Certainly, many computational approaches could be used to solve the model-based optimal control problem; however, the algorithm proposed in this chapter gives a tractable solution procedure for handling optimal control problems with different structures and parameters, especially for obtaining the optimal solution of the nonlinear optimal control problem. In conclusion, the efficiency of the algorithm is clearly demonstrated. In future research, it is strongly suggested to investigate the application of optimization techniques in stochastic optimization and control.

Acknowledgments

The authors would like to acknowledge the Universiti Tun Hussein Onn Malaysia (UTHM) and the Ministry of Higher Education (MOHE) for the financial support for this study under the research grant FRGS VOT. 1561.

Conflict of interest

The authors declare no conflict of interest.


© 2019 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
