Open access peer-reviewed chapter

Robust Observer-Based Output Feedback Control of a Nonlinear Time-Varying System

Written By

Chieh-Chuan Feng

Submitted: 28 October 2015 Reviewed: 24 February 2016 Published: 06 July 2016

DOI: 10.5772/62697

From the Edited Volume

Robust Control - Theoretical Models and Case Studies

Edited by Moises Rivas López and Wendy Flores-Fuentes


Abstract

A class of time-varying systems can be quadratically stabilized with satisfactory performance by a modified time-invariant-based observer. The modified observer, driven by additional adaptation forces with static correction gains, is used to estimate the time-varying system states. Within the framework of quadratic stability, the closed-loop system satisfying an induced-norm-bounded performance criterion is exponentially stabilized while the states are exponentially approached by the modified observer. This paper deals with time-varying systems that can be characterized as a multiplicative combination of time-invariant and time-varying parts. The time-invariant part is used to construct the modified observer with additional driving forces, which adjust for the time-varying effects entering the modified observer through the measured outputs. The adaptation forces are derived from the minimization of the cost of the error dynamics with modified least-squares algorithms. The synthesis of the static control and observer correction gains is also demonstrated. The developed design has been tested on a mass-spring-damper system to illustrate its effectiveness.

Keywords

  • quadratic stabilization
  • time-invariant-based observer
  • error dynamics
  • least-squares algorithm
  • adaptation forces
  • time-varying parts

1. Introduction

The study of optimal control for time-varying systems involves, in general, the solution of Riccati differential equations (RDEs) and the computation of time-varying correction gains [1–4]. The system is typically computer-implemented, and the RDE and correction gains are calculated on that computer. The computations, however, induce an unavoidable time delay. Although time-delayed control has been considered, it has two disadvantages: a more complicated control mechanism and a bulkier control board. Some systems, for example hard disk drives (a typical time-varying system), can tolerate no delay or only a very limited time delay in control [5, 6] and use very small compartments. Hence, much of the literature has focused on static gain control of time-varying systems or of systems with time-varying or nonlinear uncertainties [7, 8]. This represents the simplest closed-loop control form but still encounters problems. One should be aware that static output feedback control is nonconvex; iterative linear matrix inequality approaches are exploited after it is expressed as a bilinear matrix inequality formulation (see [9–12]). As a result, it cannot be easily implemented for controlling the time-varying system, and the time delay problems remain.

It is a challenging problem to design a linear continuous time-invariant observer with constant correction gains that regulates a linear continuous time-varying plant. Although the vast majority of continuous time-varying control applications are implemented on digital computers [6, 13, 14], there are still opportunities to implement control with a Kalman observer in continuous time (i.e., in analog circuits) [Hug88]. In particular, control systems requiring fast response allow no or only small delay effects. Setting up such boards is difficult because the design algorithm is too complex to implement at the board level, too expensive so that it can only be realized in a laboratory, or the digital computation time induces an unsatisfactory delay. It should be noted that realizing the Kalman observer involves the computation of Riccati differential equations and the inversion of matrices, which are obstacles to board-level design. A survey of linear and nonlinear observer design for control systems can be found in [15–18] and the references therein. For controlling a linear time-invariant (LTI) system, the Luenberger observer [19] design with constant correction gain is straightforward and can be implemented on a circuit board with ease.

Many practical control systems implement time-invariant controllers with observers in the feedback loop, which can be easily realized not only in the laboratory but also in industrial products [20]. The ease of realization of time-invariant controllers and observers is due to their constant parameters, which can be assembled using resistors and other analog integrated elements on a circuit board. The use of observers is also essential in industrial control because, in some cases, the states are either not accessible or expensive to sense. Observers are therefore required to estimate the unmeasured states, so that not only can full-state feedback control be easily implemented, but the unmeasured states can also be monitored [21–25].

With the aforementioned disadvantages and advantages in mind, the control of time-varying systems naturally leads to the design of a time-invariant observer-based controller that stabilizes, in particular exponentially, the time-varying plant. We believe this is a challenging problem, since we found no literature tackling it. In what follows, time-varying system control is first reviewed to lay the foundation for the robust control of such systems with an optimality property.

The feedback control of linear time-varying systems has been extensively studied [1, 6, 7, 26–31]. The key observation of early works is that exponential stability of time-varying systems requires the time-dependent matrix-valued functions to be bounded and piecewise continuous while satisfying Lyapunov quadratic stability [29, 31]. In this regard, many, but not all, of these results can be translated into a robust control framework, since the time-dependent matrices are essentially bounded and can be treated as uncertainties [8, 32]. This gives an opportunity to design the control system without solving RDEs, although the price paid for avoiding the RDEs is conservatism of the control. The conservativeness has two sources: the solutions of the RDEs are avoided, and arbitrarily fast-varying parameters are admitted. It can, however, be reduced by designing parameter-dependent criteria or by introducing slack variables that loosen the coupling of the dependent variables (see, e.g., [33] and the references therein).

This paper is organized as follows. Section 2 sets up the time-varying systems to be tackled, the time-invariant observer to be built, the feedback control problems to be solved, and the system properties (assumptions) imposed on the systems. Section 3 gives the main results for solving the feedback control problems, in which LMIs characterize the quadratic stability of the closed-loop system while an L2-gain bound of the closed-loop system is preserved; in addition, a least-squares algorithm is suggested to drive the time-invariant observer so that the time-varying plant states can be estimated asymptotically. Section 4 then demonstrates the synthesis of the static control gain and the observer correction gain. Section 5 presents an illustrative application to verify the effectiveness of the overall closed-loop design, and the last section, Section 6, concludes the paper.


2. System formulation and problem statement

We consider a nonlinear time-varying system described by a set of equations

$$\begin{aligned}
\dot{x}(t) &= A(t)x(t) + B_1 u(t) + B_2 w(t)\\
z(t) &= D x(t) + D_2 w(t)\\
y(t) &= C x(t)
\end{aligned} \tag{1}$$

The first equation describes the plant with state x ∈ ℝ^n and control input u ∈ ℝ^m, subject to the exogenous input w ∈ ℝ^l, which includes disturbances (to be rejected) or references (to be tracked). The second equation defines the regulated outputs z ∈ ℝ^q, which may, for example, include tracking errors, expressed as a linear combination of the plant state x and the exogenous input w. The last equation gives the measured outputs y ∈ ℝ^p. The matrices in (1) are assumed to satisfy the following system properties:

(S1) A(t) denotes the matrix with nonlinear time-varying properties satisfying

$$A(t) = F(t)A, \tag{2}$$

where A is the n×n constant matrix extracted from A(t). The n×n matrix F(t) lumps all time-varying elements associated with the plant matrix A(t), and it is possible to find a vertex set Ψ1 defined as

$$\Psi_1 = \mathrm{Co}\{F_1, F_2, \ldots, F_i\}$$

such that F(t) ∈ Ψ1, which is equivalent to saying that F(t) lies within the convex hull Ψ1 for all time t ≥ 0.

(S2) The matrices B1, B2, D, and D2 are all constant, where B1 and B2 quantify the range spaces of the control input u and the exogenous input w, respectively, and D2 is chosen to be the zero matrix, that is, D2 = 0, for computational simplicity.

Remark 1. It is highlighted that F(t) in (S1) not only lumps all possible time-varying functions but may also include parametric uncertainties; this is seen by observing that F(t) can represent the multiplicative uncertainties shown in [8]. As an example of representing a time-varying matrix, let

$$A(t) = \begin{pmatrix} 2 & f(t) \\ 0 & 1 \end{pmatrix} = F(t)A,$$

where

$$F(t) = \begin{pmatrix} 1 & f(t) \\ 0 & 1 \end{pmatrix}, \qquad A = \begin{pmatrix} 2 & 0 \\ 0 & 1 \end{pmatrix}.$$

Note that another equally good choice is the additive type of representation, that is, A(t) = A + F1(t), where F1(t) lumps all time-varying factors. As a matter of fact, the multiplicative and additive representations are interchangeable: let F2(t) be such that F1(t) = F2(t)A; then A(t) = F(t)A, where F(t) = I + F2(t).
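As a quick numerical check of this factorization (a minimal sketch; the sample value of f(t) below is arbitrary and only for illustration), the multiplicative and additive forms can be verified to coincide:

```python
import numpy as np

# Multiplicative factorization A(t) = F(t) A from Remark 1,
# checked at an arbitrary sample value of the time-varying entry f(t).
f_t = 0.7  # hypothetical sample of f(t)

A = np.array([[2.0, 0.0],
              [0.0, 1.0]])
F_t = np.array([[1.0, f_t],
                [0.0, 1.0]])
A_t = np.array([[2.0, f_t],
                [0.0, 1.0]])

assert np.allclose(F_t @ A, A_t)          # multiplicative form A(t) = F(t) A
F2_t = F_t - np.eye(2)                    # additive form: A(t) = A + F2(t) A
assert np.allclose(A + F2_t @ A, A_t)
```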

Remark 2. A number of examples exhibit a time-varying bound for F(t): aircraft control systems, in which the weight constantly decreases due to fuel consumption; the switching operation of a power circuit for voltage and current regulation; and hard disk drives, in which the rotating disks induce time-varying dynamic phenomena [6].

The control action for (1) is to design an observer-based output feedback control system, which processes the measured outputs y(t) in order to determine the plant states and generates an appropriate control input u(t) based on the estimated plant states. The following observer dynamics is developed for system (1):

$$\begin{aligned}
\dot{\hat{x}}(t) &= A\hat{x}(t) + B_1 u(t) + B_2 w(t) + L e(t)\\
\hat{y}(t) &= C\hat{x}(t)
\end{aligned} \tag{3}$$

where

$$e(t) = \mathrm{diag}[\hat{y}(t)]\,\varsigma(t) - y(t),$$

x̂(t) is the observed state of x(t), and the gain L is to be designed for stability. It should be noted that the constant matrix A is used in (3) instead of the time-varying A(t) because it is not possible, or may be too expensive, to build the time-varying plant matrix A(t) into a time-varying observer on a real analog circuit board controlling the system. On the contrary, a time-invariant observer can be established with ease for the constant system matrices A, B1, and B2 stated in Section 2. It is also seen that the observer (3) is Luenberger-like because of the use of the observer gain L.

The time-varying vector-valued function ς(t) ∈ ℝ^p, to be determined in the sequel, is an additional degree of freedom for driving the observer (3) to estimate the plant state x(t). The function ς(t) is designed to compensate for the time-varying effects of F(t) on the system; that is, the effects of the time-varying functions are adjusted by this single function ς(t). Therefore, in addition to the input u(t), e(t) becomes an additional driving force for (3) that makes it possible for x̂(t) to track x(t). If F(t) = I and all elements of the vector ς(t) are equal to 1, then the system (1) with the observer (3) reduces to a typical textbook Luenberger observer control system [21].
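For concreteness, a minimal sketch of the observer right-hand side in (3) is given below. It is only an illustration: the function and variable names are assumptions, and the signals u, w, y, and ς are supplied by the rest of the design.

```python
import numpy as np

def observer_rhs(x_hat, u, w, y, varsigma, A, B1, B2, C, L):
    """Right-hand side of the time-invariant observer (3).

    e(t) = diag(y_hat) @ varsigma - y is the additional driving force that
    compensates for the time-varying effects of F(t)."""
    y_hat = C @ x_hat
    e = np.diag(y_hat) @ varsigma - y
    return A @ x_hat + B1 @ u + B2 @ w + L @ e
```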

To form the closed-loop system, the error dynamics can be found by manipulating (1) and (3) as follows

$$\dot{\tilde{x}}(t) = F(t)A\tilde{x}(t) + (I - F(t))A\hat{x}(t) + L e(t) \tag{4}$$

or, equivalently, in a form that takes advantage of the polytopic bound in (S1),

$$\dot{\tilde{x}}(t) = (A + LC)\tilde{x}(t) + (I - F(t))A x(t) + L\,\mathrm{diag}[\epsilon(t)]\hat{y}(t) \tag{5}$$

where x̃(t) = x̂(t) − x(t) and ϵ(t) = ς(t) − 1, in which 1 denotes the vector with all elements equal to 1.

Once the observed state x^ is available, the control input u is chosen to be a memoryless system of the form

$$u(t) = K\hat{x}(t), \tag{6}$$

where K is the static gain to be designed. The control purpose is twofold: to achieve closed-loop stability and to attenuate the influence of the exogenous input w on the penalty variable z, in the sense of rendering the L2-gain of the corresponding closed-loop system less than a prescribed number γ, in the presence of the time-varying plant. The problem of finding controllers achieving these goals can be formally stated in the following terms.

Observer-based control via measured feedback. Given a real number γ > 0 and {A(t), B1, B2, C, D, D2} satisfying system properties (S1) and (S2), find, if possible, two constant matrices K and L such that

(O1) the matrix

$$\begin{pmatrix} F(t)A + B_1 K & B_1 K \\ (I - F(t))A & A + LC \end{pmatrix} \tag{7}$$

has all eigenvalues in ℂ⁻ (the open left half-plane),

(O2) the L2-gain of the closed-loop system

$$\begin{aligned}
\dot{x}(t) &= (F(t)A + B_1K)x(t) + B_1K\tilde{x}(t) + B_2 w(t), & y(t) &= Cx(t)\\
\dot{\hat{x}}(t) &= (A + B_1K)\hat{x}(t) + B_2 w(t) + L e(t), & \hat{y}(t) &= C\hat{x}(t)\\
z(t) &= Dx(t), & \text{with } e(t) &= \mathrm{diag}[\hat{y}(t)]\,\varsigma(t) - y(t)
\end{aligned} \tag{8}$$

is strictly less than γ; equivalently, for the input u(t) = Kx̂(t), the response z(t) of (8) from the initial state (x(0), x̃(0)) = (0, 0) satisfies the following performance index:

$$\mathcal{J} = \int_0^\infty \|z(t)\|^2\,dt \le \gamma^2 \int_0^\infty \|w(t)\|^2\,dt \tag{9}$$

for some γ > 0 and every w(t) ∈ L2[0, ∞).

Remark 3. Here, we use the notion of quadratic stability with an L2-gain measure, which was introduced in [32]. This concept is a generalization of quadratic stabilization that handles an L2-gain performance constraint for time-varying system attenuation. To this end, the characterizations of robust performance based on quadratic stability are given in terms of LMIs; if the LMIs are feasible, the computations by finite-dimensional convex programming are efficient (see, for example, [32]).

Figure 1.

Overall control structure.

Remark 4. Figure 1 shows the overall feedback control structure of (8) to be designed in the sequel, where the feedback loop, namely the observer–error dynamics depicted in Figure 2, serves as a filtering process with y(t) and w(t) as inputs, producing the control input u(t) and the additional driving force e(t) of (3).

Figure 2.

Observer–error dynamics.


3. Analyses and characterizations

Two issues are addressed in this section. First, a theorem states a sufficient condition under which the problem of observer-based control of the time-varying system via measured feedback is solvable. Second, an identification process based on least-squares algorithms is derived for ς(t) to construct the feedback structure of the closed-loop system (8).

3.1. LMI characterizations

Theorem 1. Consider the time-varying system (1), observer dynamics (3), and error dynamics (4) satisfying system property (S1) and (S2). Then, (T1) implies (T2), where (T1) and (T2) are as follows.

(T1) There exist matrices P1 ≻ 0, P2 ≻ 0, K, and L and positive scalars γ and β such that

$$\begin{pmatrix} \Pi_1(P_1,K) & \star \\ K^TB_1^TP_1 + P_2(I - F(t))A & \Pi_2(P_2,L) \end{pmatrix} \prec 0, \tag{10}$$

and matrices P3 ≻ 0 and Q ≻ 0 with the adaptive scheme for ϵ(t) satisfying

$$\dot{\epsilon}(t) = -P_3\Big(Q + \tfrac{1}{2}\beta^2\,\mathrm{diag}[\hat{y}(t)]^T\mathrm{diag}[\hat{y}(t)]\Big)\epsilon(t). \tag{11}$$

The matrices, Π1 and Π2, defined in (10) are

$$\Pi_1(P_1,K) = (F(t)A + B_1K)^T P_1 + P_1(F(t)A + B_1K) + \gamma^{-2}P_1B_2B_2^TP_1 + D^TD, \tag{12}$$
$$\Pi_2(P_2,L) = (A + LC)^T P_2 + P_2(A + LC) + \beta^{-2}P_2LL^TP_2. \tag{13}$$

(T2) (O1) and (O2) hold; that is, the problem of observer-based control via measured feedback is solvable.

Proof: the implication from (T1) to (T2) is shown in the Appendix.

Remark 5. Theorem 1 shows that if the matrix inequality (10) is satisfied and ϵ(t) is computationally adjusted according to (11), then the overall closed-loop system is not merely quadratically stabilizable, but the performance index (9) is fulfilled. It is highlighted that ϵ(t) in (11) exponentially approaches zero as t → ∞ for any ŷ(t). Note that one problem remains: to compute the observed states x̂(t) in (3), the time-varying vector function ς(t) is needed in addition to the input u(t) and the exogenous signal w(t). Therefore, the following modified least-squares algorithm is derived for the recursive estimation of the time-varying vector-valued function ς(t).
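A minimal sketch of integrating the adaptation scheme (11) is shown below. It is an illustration only, under the reconstructed sign convention; P3, Q, and β come from Theorem 1, and the step size is an assumption.

```python
import numpy as np

def epsilon_rhs(eps, y_hat, P3, Q, beta):
    """Adaptation scheme (11) for eps(t) = varsigma(t) - 1 (reconstructed sign
    convention).  With P3 > 0 and Q > 0 the trajectory decays to zero for any
    measured y_hat(t)."""
    Dy = np.diag(y_hat)
    return -P3 @ (Q + 0.5 * beta**2 * Dy.T @ Dy) @ eps

def epsilon_step(eps, y_hat, P3, Q, beta, dt):
    # one forward-Euler step with an illustrative step size dt
    return eps + dt * epsilon_rhs(eps, y_hat, P3, Q, beta)
```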

3.2. Modified least-squares algorithms

Prior to stating the modified least-squares scheme for computing ς(t), the following assumption is made

$$\varsigma(\alpha) = \varsigma(\beta), \qquad \forall\, \alpha, \beta \in I_i,\quad i = 0,1,2,\ldots, \tag{14}$$

where I_i = {t | t_i ≤ t < t_i + Δt}. This is to say that ς(t) is kept constant within the small time interval Δt, which is equivalent to assuming that ς(t) is a piecewise-continuous time-varying function. The problem in this section is to determine an adaptation law for the vector-valued function ς(t) in such a way that the x̃(t) computed from the model (4) agrees as closely as possible with zero in the least-squares sense. The following least-squares algorithm is developed by summing the index of each small time interval, with the cost function defined as follows

$$\mathcal{I} = \min_\varsigma \sum_i J_i(\varsigma) = \min_\varsigma \sum_i \left\{\frac{1}{2}\int_{t_i}^{t_i+\Delta t} \dot{\tilde{x}}^T\dot{\tilde{x}}\, d\tau\right\}. \tag{15}$$

To minimize the cost function ℐ, each index J_i should be minimized as well, and the following condition is obtained for each time interval:

$$\nabla_\varsigma J_i = \int_{t_i}^{t} W^T(\tau)\big[F(\tau)A\tilde{x}(\tau) + (I - F(\tau))A\hat{x}(\tau) + W(\tau)\varsigma(\tau) - Ly(\tau)\big]\, d\tau = 0, \tag{16}$$

where t ∈ I_i and

$$W(\tau) = L\,\mathrm{diag}[\hat{y}(\tau)].$$

In view of (16), the least-squares estimate for ς(t) is given by

$$\hat{\varsigma}(t) = \Gamma(t)\int_{t_i}^{t} W^T(\tau)\big[Ly(\tau) - F(\tau)A\tilde{x}(\tau) - (I - F(\tau))A\hat{x}(\tau)\big]\, d\tau, \tag{17}$$

where Γ(t) is called covariance matrix and is defined as follows

$$\Gamma(t) = \left(\int_{t_i}^{t} W^T(\tau)W(\tau)\, d\tau\right)^{-1}.$$

To assure positive definiteness and thus invertibility, the covariance matrix will be further refined in the sequel. The covariance matrix plays an important role in the estimation of ς(t), and it is worth noting that

$$\frac{d}{dt}\big(\Gamma^{-1}(t)\big) = W^T(t)W(t). \tag{18}$$

To find a recursive formulation of the least-squares estimator, whose parameters are updated continuously on the basis of the available data, we differentiate (17) with respect to time and obtain

$$\frac{d}{dt}\hat{\varsigma}(t) = -\Gamma(t)W^T(t)W(t)\hat{\varsigma}(t) + f(t), \qquad \hat{\varsigma}(t_i) = \hat{\varsigma}_i, \tag{19}$$

where

$$f(t) = \Gamma(t)W^T(t)\big(Ly(t) - F(t)A\tilde{x}(t) - (I - F(t))A\hat{x}(t)\big), \tag{20}$$

for t ∈ I_i, i = 0, 1, 2, …. The covariance matrix Γ(t) acts in the ς̂(t) update law as a time-varying, directional adaptation gain. Note from (18), which indicates that dΓ⁻¹(t)/dt is positive semidefinite, that Γ⁻¹(t) may grow without bound; hence Γ(t) may become very small in some directions, and the adaptation in those directions becomes very slow. Therefore, to avoid slowing the adaptation and to assure the positive definiteness of the covariance matrix Γ(t) so that it remains invertible, the following covariance resetting propagation law is developed. Within each time window, we modify (18) as follows,

$$\frac{d}{dt}\big(\Gamma^{-1}(t)\big) = g\,W^T(t)W(t), \qquad \Gamma(t_i) = k_0 I, \quad t \in I_i, \tag{21}$$

and

$$\Gamma(t_r^+) = k_0 I \succ 0, \qquad t_r = \{t \mid \lambda_{\min}(\Gamma(t)) \leq k_1 < k_0,\ t \in I_i,\ i = 0,1,2,\ldots\}. \tag{22}$$

The scalar g > 0 is chosen so that the adaptation maintains a suitable rate of propagation. The covariance propagation is adjusted by (21), in which the initial condition is also reset. The condition (22) shows that the covariance matrix can also be reset within the time window if it is close to singularity; that is, the covariance matrix is reset if its minimum eigenvalue is less than or equal to k1, that is, λmin(Γ(t)) ≤ k1. The following lemma shows that the covariance matrix Γ(t) is bounded and positive definite under the covariance resetting propagation law (21) and (22).

Lemma 1. Assume that (21) and (22) hold. Then k0 I ⪰ Γ(t) ⪰ k1 I ≻ 0 and, thus, k0 ≥ ‖Γ(t)‖ ≥ k1 for t ∈ I_i, i = 0, 1, 2, ….

Proof: At a resetting, the covariance matrix is reset at t = t_r⁺, hence Γ(t_r⁺) = k0 I. Then, since dΓ⁻¹(t)/dt = gW^T(t)W(t) ⪰ 0, we have Γ(t1) ⪰ Γ(t2) ≻ 0 for all t2 ≥ t1 > t_r between covariance resettings. The propagation continues until the next resetting time t, if it exists, at which λmin(Γ(t)) ≤ k1. Hence, we conclude that k0 I ⪰ Γ(t) ⪰ k1 I ≻ 0, which says that k0 ≥ ‖Γ(t)‖ ≥ k1.
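A minimal sketch of one step of the recursive estimator (19) with the covariance resetting law (21)–(22) is given below. It is an illustration under the reconstructed sign conventions, not the author's implementation; the function and variable names are assumptions, and a simple forward-Euler discretization with step dt is used.

```python
import numpy as np

def rls_with_resetting_step(sig_hat, Gamma_inv, y, y_hat, x_hat, x_til,
                            F_t, A, L, g, k0, k1, dt):
    """One forward-Euler step of the modified least-squares estimator (19)
    with the covariance resetting law (21)-(22); a minimal sketch only."""
    n = A.shape[0]
    W = L @ np.diag(y_hat)                     # W(t) = L diag(y_hat)
    Gamma = np.linalg.inv(Gamma_inv)

    # f(t) of (20), built from the signals supplied by the closed loop
    f = Gamma @ W.T @ (L @ y - F_t @ A @ x_til - (np.eye(n) - F_t) @ A @ x_hat)

    # (19): recursive update of the estimate of varsigma(t)
    sig_hat = sig_hat + dt * (-Gamma @ W.T @ W @ sig_hat + f)

    # (21): propagate the inverse covariance within the time window
    Gamma_inv = Gamma_inv + dt * g * W.T @ W

    # (22): reset to k0*I whenever lambda_min(Gamma) falls to k1
    if np.min(np.linalg.eigvalsh(np.linalg.inv(Gamma_inv))) <= k1:
        Gamma_inv = np.eye(Gamma_inv.shape[0]) / k0

    return sig_hat, Gamma_inv
```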

Before presenting the theorem for the modified least-squares algorithm, which shows that ς̂(t) is bounded, the following lemma on the transition matrix of the solutions of (19) is essential.

Lemma 2. There exists a positive number k such that the transition matrix Φ(t, τ) of (19) is bounded, that is, ‖Φ(t, τ)‖ ≤ k < ∞, for t ∈ I_i, i = 0, 1, 2, ….

Proof: The proof is constructive. We first notice that the solution to (19) is given by

$$\hat{\varsigma}(t) = \Phi(t,t_i)\hat{\varsigma}_i + \int_{t_i}^{t}\Phi(t,\tau)f(\tau)\,d\tau,$$

where Φ(t, τ) is the transition matrix of $\dot{\hat{\varsigma}}(t) = -\Gamma(t)W^T(t)W(t)\hat{\varsigma}(t)$ or, equivalently, the unique solution of

$$\dot{\Phi}(t,\tau) = -\Gamma(t)W^T(t)W(t)\Phi(t,\tau), \qquad \Phi(\tau,\tau) = I. \tag{23}$$

A constructive method is suggested by introducing the differential equation η̇(t) = −η(t), η(t_i) = η_i, where η(t) is a vector of appropriate dimension. We may conclude that η(t), η̇(t) ∈ L2 ∩ L∞.

Let π(t) = Φ(t, t_i)η(t) and take the Lyapunov candidate V_π = π^T(t)Γ⁻¹(t)π(t), where Γ(t) satisfies Lemma 1. Then, computing V̇_π along the solutions of π̇(t) = Φ̇(t, t_i)η(t) + Φ(t, t_i)η̇(t) between the covariance resettings gives

$$\dot{V}_\pi = \pi^T(t)\big((g - 2)W^T(t)W(t) - 2\Gamma^{-1}(t)\big)\pi(t).$$

Without loss of generality, let g = 2. Then,

$$\dot{V}_\pi = -2\pi^T(t)\Gamma^{-1}(t)\pi(t) = -2V_\pi < 0. \tag{24}$$

At the point of resetting, that is, the point of discontinuity of Γ(t), we obtain

$$V_\pi(t_r^+) - V_\pi(t_r) = \pi^T(t)\big(\Gamma^{-1}(t_r^+) - \Gamma^{-1}(t_r)\big)\pi(t) \leq 0. \tag{25}$$

It follows from (24) and (25) that the Lyapunov candidate along the solution π(t) satisfies 0 ≤ V_π(t) ≤ V_π(t_i). This shows that π(t) ∈ L∞, which implies that ‖Φ(t, t_i)‖ ≤ k < ∞ for some k > 0.

Theorem 2. Assume that the problem of observer-based control via measured feedback is solvable. If the identifier structure of the least-squares algorithm (19) with the covariance resetting propagation law (21) and (22) is used, then ς̂(t) ∈ L∞ for all t ≥ 0.

Proof: To prove the claim, we need to show that ‖ς̂(t)‖∞ = sup_t ‖ς̂(t)‖ < ∞ for t ∈ I_i, i = 0, 1, 2, …. The solution to (19) is given by

$$\hat{\varsigma}(t) = \Phi(t,t_i)\hat{\varsigma}_i + \int_{t_i}^{t}\Phi(t,\tau)f(\tau)\,d\tau,$$

where Φ(t,τ) is the transition matrix shown in (23). In view of Lemma 2, we obtain

$$\|\hat{\varsigma}(t)\| = \Big\|\Phi(t,t_i)\hat{\varsigma}_i + \int_{t_i}^{t}\Phi(t,\tau)f(\tau)\,d\tau\Big\| \leq k\Big(\|\hat{\varsigma}_i\| + \int_{t_i}^{t}\|f(\tau)\|\,d\tau\Big).$$

The boundedness of the integral of ‖f(τ)‖ over [t_i, t] is easily seen by observing (20), in which x̂(t), x̃(t), and W(t) = L diag[ŷ] = L diag[Cx̂] are bounded by Theorem 1 and x̂(t), x̃(t), W(t) → 0 as t → ∞. The covariance matrix Γ(t) satisfies (21) and is therefore bounded by Lemma 1. By system property (S1), F(t) is clearly bounded for all t ≥ 0. The measured signal y = Cx, by Theorem 1, tends to 0 as t → ∞. In summary, there exists a positive finite number k3 such that

$$\int_{t_i}^{t}\|f(\tau)\|\, d\tau \leq k_3\int_{t_i}^{t} d\tau \leq k_3\,\Delta t.$$

Therefore,

$$\|\hat{\varsigma}(t)\| \leq k\big(\|\hat{\varsigma}_i\| + k_3\,\Delta t\big) < \infty, \tag{26}$$

which indicates that ς̂(t) ∈ L∞ for t ∈ I_i. As time evolves, (26) holds on each small time interval; hence, we may extend to t → ∞. This completes the proof.

Remark 6. In this section, a modified least-squares algorithm has been shown to find the estimate of ς(t), which is intentionally designed to adjust for the effects of the time-varying functions F(t) in the plant (1). Figure 3 depicts the complete structure of the observer–error dynamics shown in Figure 2, in which two filters, namely the observer dynamics and the error dynamics, together with the least-squares algorithm, constitute the feedback control. The observer dynamics produces the estimated plant state by filtering the signals u(t), w(t), and e(t). It is worth noting that the signal e(t) from the least-squares algorithm acts as an additional driving force on the observer dynamics. The error dynamics produces the error state x̃(t), which is then injected into the least-squares algorithm so that the time-varying function ς(t) is estimated.

Figure 3.

Mass-damper-spring system.


4. Control and observer gain synthesis

The synthesis of the control and observer gains appearing in Theorem 1 is addressed in this section. For simplicity of expression, the time argument of the matrix-valued function F(t) is dropped, and it is denoted by F. A useful and important lemma is stated first for clarity:

Lemma 3 (Elimination Lemma, see [32]). Given H = H^T ∈ ℝ^{n×n}, V ∈ ℝ^{n×m}, and U ∈ ℝ^{n×p} with Rank(V) < n and Rank(U) < n, there exists a matrix K such that

$$H + VKU^T + UK^TV^T \prec 0$$

if and only if

$$V_\perp^T H V_\perp \prec 0 \qquad\text{and}\qquad U_\perp^T H U_\perp \prec 0,$$

where V⊥ and U⊥ are orthogonal complements of V and U, respectively; that is, V⊥^T V = 0 and [V  V⊥] is of maximum rank (and similarly for U⊥).
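As a small numerical aside (not part of the chapter's development), an orthogonal complement of the kind used in Lemma 3 and in the proof of Lemma 4 can be computed from a null space; the helper below is a sketch with an arbitrary test matrix.

```python
import numpy as np
from scipy.linalg import null_space

def orth_complement(V):
    """Columns of the returned matrix span the orthogonal complement of
    range(V), i.e. V_perp^T V = 0 and [V  V_perp] has maximum rank."""
    return null_space(V.T)

# Generic check of the complement property on a random tall matrix.
rng = np.random.default_rng(0)
V = rng.standard_normal((4, 2))
V_perp = orth_complement(V)
assert np.allclose(V_perp.T @ V, 0.0)
```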

Lemma 4. Given a real number γ>0 and {A(t), B1, B2, C, D, D2} satisfying system properties (S1) and (S2), the following statements (Q1), (Q2), and (Q3) are equivalent.

(Q1) There exist matrices P1 ≻ 0 and P2 ≻ 0, matrices K and L, and positive scalars β and δ such that the following inequality holds:

$$\begin{pmatrix} \Pi_1(P_1,K) + \delta^2 A^T(I-F)^T(I-F)A & P_1B_1K \\ K^TB_1^TP_1 & \Pi_2(P_2,L) + \delta^{-2}P_2P_2 \end{pmatrix} \prec 0. \tag{27}$$

(Q2) There exist matrices P1 ≻ 0 and P2 ≻ 0, matrices K and L, and positive scalars β and δ such that the following inequalities hold:

$$(B_1^\perp)^T P_1^{-1}\big(\Pi_1(P_1,K) + \delta^2A^T(I-F)^T(I-F)A\big)P_1^{-1}(B_1^\perp) \prec 0, \tag{28}$$
$$\Pi_2(P_2,L) + \delta^{-2}P_2P_2 \prec 0. \tag{29}$$

(Q3) There exist matrices X ≻ 0 and P2 ≻ 0, matrices W and Υ, and positive scalars γ, δ, and β such that the following two matrix inequalities hold:

$$\begin{pmatrix} XA^TF^T + FAX + W^TB_1^T + B_1W + \gamma^{-2}B_2B_2^T & X \\ X & -\big(D^TD + \delta^2A^T(I-F)^T(I-F)A\big)^{-1} \end{pmatrix} \prec 0, \tag{30}$$
$$\begin{pmatrix} A^TP_2 + P_2A + C^T\Upsilon^T + \Upsilon C & \Upsilon & P_2 \\ \Upsilon^T & -\beta^2 I & 0 \\ P_2 & 0 & -\delta^2 I \end{pmatrix} \prec 0. \tag{31}$$

Proof: To prove (Q1) ⇔ (Q2), the inequality (27) can be fitted into Lemma 3 with

$$H = \begin{pmatrix} \Pi_1(P_1) + \delta^2A^T(I-F)^T(I-F)A & 0 \\ 0 & \Pi_2(P_2,L) + \delta^{-2}P_2P_2 \end{pmatrix},$$
$$V = \begin{pmatrix} P_1B_1 \\ 0 \end{pmatrix}, \qquad\text{and}\qquad U = \begin{pmatrix} I \\ I \end{pmatrix}.$$

Next, orthogonal complements of V and U are given by V⊥ and U⊥, respectively:

$$V_\perp = \begin{pmatrix} P_1^{-1}(B_1^\perp) & 0 \\ 0 & I \end{pmatrix}, \qquad\text{and}\qquad U_\perp = \begin{pmatrix} I \\ -I \end{pmatrix},$$

where B1⊥ is defined as the orthogonal complement of B1, such that (B1⊥)^T B1 = 0 and [B1  B1⊥] is of maximum rank. By applying Lemma 3, we obtain the following inequalities:

$$V_\perp^T H V_\perp = \begin{pmatrix} (B_1^\perp)^TP_1^{-1}\big(\Pi_1(P_1,K) + \delta^2A^T(I-F)^T(I-F)A\big)P_1^{-1}(B_1^\perp) & 0 \\ 0 & \Pi_2(P_2,L) + \delta^{-2}P_2P_2 \end{pmatrix} \prec 0, \tag{32}$$

and

$$U_\perp^T H U_\perp = \Pi_1(P_1,K) + \delta^2A^T(I-F)^T(I-F)A + \Pi_2(P_2,L) + \delta^{-2}P_2P_2 \prec 0. \tag{33}$$

It is seen that the matrix inequalities (28) and (29) hold if and only if (32) is true. Given (32), (33) is also true. Therefore, by Lemma 3, (Q1) ⇔ (Q2).

To prove (Q2) ⇔ (Q3), let X = P1⁻¹; we then find the following chain of equivalences for inequality (28):

$$\begin{aligned}
&(B_1^\perp)^TP_1^{-1}\big(\Pi_1(P_1,K) + \delta^2A^T(I-F)^T(I-F)A\big)P_1^{-1}(B_1^\perp) \prec 0\\
\Longleftrightarrow\quad &(B_1^\perp)^T\big(XA^TF^T + FAX + W^TB_1^T + B_1W + \gamma^{-2}B_2B_2^T + X\big(D^TD + \delta^2A^T(I-F)^T(I-F)A\big)X\big)(B_1^\perp) \prec 0\\
\Longleftrightarrow\quad &XA^TF^T + FAX + W^TB_1^T + B_1W + \gamma^{-2}B_2B_2^T + X\big(D^TD + \delta^2A^T(I-F)^T(I-F)A\big)X \prec 0\\
\Longleftrightarrow\quad &\begin{pmatrix} XA^TF^T + FAX + W^TB_1^T + B_1W + \gamma^{-2}B_2B_2^T & X\\ X & -\big(D^TD + \delta^2A^T(I-F)^T(I-F)A\big)^{-1}\end{pmatrix} \prec 0,
\end{aligned} \tag{34}$$

where W = KX. It is noted that the last equivalence holds by the Schur complement, for which the positive definiteness of D^TD + δ²A^T(I−F)^T(I−F)A must be ensured. As for the matrix inequality (29), let Υ = P2L; we have

$$\begin{aligned}
&\Pi_2(P_2,L) + \delta^{-2}P_2P_2 \prec 0\\
\Longleftrightarrow\quad &A^TP_2 + P_2A + C^T\Upsilon^T + \Upsilon C + \beta^{-2}\Upsilon\Upsilon^T + \delta^{-2}P_2P_2 \prec 0\\
\Longleftrightarrow\quad &\begin{pmatrix} A^TP_2 + P_2A + C^T\Upsilon^T + \Upsilon C & \Upsilon & P_2\\ \Upsilon^T & -\beta^2I & 0\\ P_2 & 0 & -\delta^2I\end{pmatrix} \prec 0.
\end{aligned} \tag{35}$$

Again, the last equivalence in (35) follows from the Schur complement, and β > 0 and δ > 0 ensure that the inequality holds.

Remark 7. It is seen that δ is the only scalar common to the matrix inequalities (34) and (35). For ease of computation, and without loss of generality, we may fix δ to a certain constant. The advantage, in addition to the ease of computation, is that the gains K and L are then determined solely by (34) and (35), respectively. From a rigorous point of view, we may not be able to say that the separation principle is completely valid in this case; loosely speaking, however, it holds with a small modification.

Lemma 5. (Q1) implies (10).

Proof: let (π1, π2) ≠ (0, 0). Then

$$\begin{aligned}
&\begin{pmatrix}\pi_1\\\pi_2\end{pmatrix}^T
\begin{pmatrix}\Pi_1(P_1,K) & P_1B_1K + A^T(I-F)^TP_2\\ K^TB_1^TP_1 + P_2(I-F)A & \Pi_2(P_2,L)\end{pmatrix}
\begin{pmatrix}\pi_1\\\pi_2\end{pmatrix}\\
&\quad= \pi_1^T\Pi_1(P_1,K)\pi_1 + \pi_2^T\Pi_2(P_2,L)\pi_2 + \pi_1^T\big(P_1B_1K + A^T(I-F)^TP_2\big)\pi_2 + \pi_2^T\big(K^TB_1^TP_1 + P_2(I-F)A\big)\pi_1\\
&\quad= \pi_1^T\Pi_1(P_1,K)\pi_1 + \pi_2^T\Pi_2(P_2,L)\pi_2 + \pi_1^TP_1B_1K\pi_2 + \pi_2^TK^TB_1^TP_1\pi_1\\
&\qquad+ \delta^2\pi_1^TA^T(I-F)^T(I-F)A\pi_1 + \delta^{-2}\pi_2^TP_2P_2\pi_2 - \big((I-F)A\pi_1 - \delta^{-2}P_2\pi_2\big)^T\delta^2\big((I-F)A\pi_1 - \delta^{-2}P_2\pi_2\big)\\
&\quad\leq \pi_1^T\Pi_1(P_1,K)\pi_1 + \pi_2^T\Pi_2(P_2,L)\pi_2 + \pi_1^TP_1B_1K\pi_2 + \pi_2^TK^TB_1^TP_1\pi_1 + \delta^2\pi_1^TA^T(I-F)^T(I-F)A\pi_1 + \delta^{-2}\pi_2^TP_2P_2\pi_2\\
&\quad= \begin{pmatrix}\pi_1\\\pi_2\end{pmatrix}^T
\begin{pmatrix}\Pi_1(P_1,K)+\delta^2A^T(I-F)^T(I-F)A & P_1B_1K\\ K^TB_1^TP_1 & \Pi_2(P_2,L)+\delta^{-2}P_2P_2\end{pmatrix}
\begin{pmatrix}\pi_1\\\pi_2\end{pmatrix}.
\end{aligned}$$

Thus, (Q1) implies (10). This completes the proof.

Theorem 3. Given a real number γ > 0 and {A(t), B1, B2, C, D, D2} satisfying system properties (S1) and (S2), (Q3) together with the scheme (11) implies (T2).

Proof: by Lemma 5, (Q1) implies the matrix inequality (10). Moreover, by Lemma 4, we have (Q1) ⇔ (Q3). Therefore, (Q3) with the scheme (11) implies (T1), and by Theorem 1 the claim is true.

Remark 8. Theorem 3 states that the problems posed in observer-based control via measured feedback, that is, (O1) and (O2), are solvable, by showing that (T2) holds.


5. Illustrative application

In this application, a simple time-varying mass-damper-spring system is controlled to demonstrate that the time-varying effects appearing in the system matrix can be transferred to a forcing term in the observer structure. Consider the system shown in Figure 3 without sensor fault, where k1 and c1 are the linear spring and damping constants, respectively, and k2(t), c2(t), and c3(t) are time-varying spring and viscous damping coefficients. The system is described by the following equations of motion:

$$\begin{aligned}
0 &= k_2(t)y(t) + c_3(t)\dot{y}(t)\\
m\ddot{y}(t) &= -k_1 y(t) - (c_1 + c_2(t))\dot{y}(t) + u(t),
\end{aligned} \tag{36}$$

where the time-varying functions are $c_2(t) = c_2\sin\omega_2 t$, $k_2(t) = \frac{k_1}{m}e^{-bt}\cos\omega_1 t$, and $c_3(t) = \frac{c_1}{m}e^{-bt}\cos\omega_1 t$, and the constants are k1 = k and c1 = c2 = c. Defining x1(t) = y(t) and x2(t) = ẏ(t), the state-space representation of (36) is

$$\begin{aligned}
\dot{x}(t) &= F(t)Ax(t) + B_1u(t)\\
y(t) &= Cx(t),
\end{aligned} \tag{37}$$

where

$$x(t) = \begin{pmatrix}x_1(t)\\x_2(t)\end{pmatrix},\quad F(t) = \begin{pmatrix}1 & e^{-bt}\cos\omega_1 t\\ -\frac{c_2}{m}\sin\omega_2 t & 1\end{pmatrix},\quad A = \begin{pmatrix}0 & 1\\ -\frac{k_1}{m} & -\frac{c_1}{m}\end{pmatrix},\quad B_1 = \begin{pmatrix}0\\ \frac{1}{m}\end{pmatrix},\quad\text{and}\quad C = \begin{pmatrix}1 & 0\end{pmatrix}.$$

Here, we consider the parameters m = 1, c = 1, k = 1, b = 0, ω1 = 1, and ω2 = 10 (rad/s). Thus, the set of vertices of the polytope Ψ1 associated with the time-varying matrix F(t) is

$$\mathrm{Co}\left\{\begin{pmatrix}1&1\\1&1\end{pmatrix},\ \begin{pmatrix}1&1\\-1&1\end{pmatrix},\ \begin{pmatrix}1&-1\\1&1\end{pmatrix},\ \begin{pmatrix}1&-1\\-1&1\end{pmatrix}\right\}.$$

By applying the linear matrix inequalities (30) and (31) of (Q3) in Lemma 4, the control and observer gains K and L can be found using the Matlab Robust Control Toolbox. It is also noted that the computation of the two matrix inequalities can be separated by fixing δ = 0.05. We thus find

$$K = \begin{pmatrix}-9.0322 & -10.2123\end{pmatrix}, \qquad L = \begin{pmatrix}-7.0950\\ -2.8926\end{pmatrix}.$$
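The signs of the gain entries above are reconstructed (they are required for A + B1K and A + LC to be Hurwitz with m = c = k = 1). The short check below, a sketch that is not part of the chapter, evaluates the eigenvalues of the frozen-time closed-loop matrix (7) at every vertex of the polytope; negative maximum real parts indicate the vertex matrices are Hurwitz, which is necessary for (O1).

```python
import numpy as np

# m = c = k = 1 as in the example, so A, B1, C are the numerical matrices of (37).
A  = np.array([[0.0, 1.0], [-1.0, -1.0]])
B1 = np.array([[0.0], [1.0]])
C  = np.array([[1.0, 0.0]])
K  = np.array([[-9.0322, -10.2123]])   # reconstructed signs
L  = np.array([[-7.0950], [-2.8926]])  # reconstructed signs
I2 = np.eye(2)

for s1 in (-1.0, 1.0):
    for s2 in (-1.0, 1.0):
        F = np.array([[1.0, s1], [s2, 1.0]])                 # vertex of Psi_1
        A_cl = np.block([[F @ A + B1 @ K, B1 @ K],
                         [(I2 - F) @ A,   A + L @ C]])        # the matrix in (7)
        print(s1, s2, np.max(np.linalg.eigvals(A_cl).real))   # negative => Hurwitz
```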

The control input is then computed by u = Kx^, where the observed state x^ is from

$$\begin{aligned}
\dot{\hat{x}}(t) &= (A + B_1K)\hat{x}(t) + Le(t)\\
\hat{y}(t) &= C\hat{x}(t)
\end{aligned} \tag{38}$$

with e(t) = diag[ŷ(t)]ς(t) − y(t), in which the time-varying vector-valued function ς(t) is estimated via the set of recursive formulations (19)–(22).

Figure 4.

(a) shows the plant state x1 (solid line) and the observer state x̂1 (dash-dot line). (b) shows the plant state x2 (solid line) and the observer state x̂2 (dash-dot line). (c) gives the control input u(t).

The implementation is coded in Matlab using the initial states x1(0) = 0.5, x2(0) = 0.6, x̂1(0) = 0.0, x̂2(0) = 0.1, ς̂(0) = 0.1, Γ(0) = k0 = 2, and k1 = 0.3. The simulation results are depicted in Figures 4 and 5. Figure 4(a) and (b) shows that the observer states x̂ cohere with the plant states x. It is therefore seen that the observer (38), driven by the time-varying term e(t), can indeed track the plant (37). The control input u(t) to the system is shown in Figure 4(c). The covariance resetting propagation Γ(t) and the estimate ς̂(t) are shown in Figure 5(a) and (b). The observer driving force e(t) and the 2-norm of the time-varying matrix function F(t) are depicted in Figure 5(c) and (d). It is clearly seen that Γ(t) and ς̂(t) are adjusted to accommodate the time-varying effects driving the observer dynamics as the closed-loop system approaches the equilibrium point. The driving force e(t) of the observer dynamics shows the same behavior. Figure 5(d) shows that the time-varying matrix F(t) is indeed varying with time.
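For readers who wish to reproduce the qualitative behavior, the following is a minimal closed-loop simulation sketch (forward Euler, w(t) = 0, illustrative step size); it reuses the rls_with_resetting_step sketch given after Lemma 1 and is not the author's Matlab code.

```python
import numpy as np

dt, T = 1e-3, 20.0
m_, c_, k_, b, w1, w2 = 1.0, 1.0, 1.0, 0.0, 1.0, 10.0
A  = np.array([[0.0, 1.0], [-k_/m_, -c_/m_]])
B1 = np.array([[0.0], [1.0/m_]])
C  = np.array([[1.0, 0.0]])
K  = np.array([[-9.0322, -10.2123]])
L  = np.array([[-7.0950], [-2.8926]])

x     = np.array([0.5, 0.6])     # plant state (initial values as in the text)
x_hat = np.array([0.0, 0.1])     # observer state
sig   = np.array([0.1])          # estimate of varsigma (p = 1 here)
G_inv = np.eye(1) / 2.0          # Gamma(0) = k0*I with k0 = 2
g, k0, k1 = 2.0, 2.0, 0.3

for step in range(int(T / dt)):
    t = step * dt
    F_t = np.array([[1.0, np.exp(-b*t) * np.cos(w1*t)],
                    [-(c_/m_) * np.sin(w2*t), 1.0]])
    y, y_hat = C @ x, C @ x_hat
    u = K @ x_hat
    e = np.diag(y_hat) @ sig - y
    x     = x     + dt * (F_t @ A @ x + B1 @ u)            # plant (37)
    x_hat = x_hat + dt * (A @ x_hat + B1 @ u + L @ e)      # observer (38)
    sig, G_inv = rls_with_resetting_step(sig, G_inv, y, y_hat, x_hat,
                                         x_hat - x, F_t, A, L, g, k0, k1, dt)
```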

Figure 5.

(a) shows the values of Γ(t). (b) demonstrates the least-squares estimate ς̂(t). (c) shows the driving force e(t) of the observer dynamics. (d) computes the 2-norm of the time-varying matrix function F(t) with b = 0.


6. Conclusion

This paper has developed a modified time-invariant observer-based control for a class of time-varying systems. The control scheme is suitable for time-varying systems that can be characterized as a multiplicative combination of time-invariant and time-varying parts. The time-invariant observer is constructed directly from the time-invariant part of the system, with additional adaptation forces that account for the time-varying effects coming from the measured output feeding into the modified observer. The derivation of the adaptation forces is based on least-squares algorithms in which the minimization of the cost of the error dynamics serves as the criterion. It is seen from the illustrative application that the closed-loop system is exponentially stable, with the system states being asymptotically approached by the modified observer. Finally, the LMI procedure has been demonstrated for the synthesis of the control and observer gains, and their implementation on a mass-spring-damper system confirms the effectiveness of the design.


7. Appendix

It is noted that in this appendix all time arguments of vector-valued and matrix-valued time functions are dropped for simplicity of expression; they can be easily distinguished from the context.

Proof: (T1) ⟹ (T2). We need to show that if conditions (10) and (11) in (T1) hold, then (O1) and (O2), which are equivalent to (T2), hold. Let the quadratic Lyapunov function be

$$V(x,\tilde{x},\epsilon) = x^TP_1x + \tilde{x}^TP_2\tilde{x} + \epsilon^TP_3^{-1}\epsilon,$$

with P1 ≻ 0, P2 ≻ 0, and P3 ≻ 0. Then the performance index (9) can be written as

$$J = \int_0^\infty \|z\|^2\,dt = \int_0^\infty\Big[z^Tz + \frac{d}{dt}V(x,\tilde{x},\epsilon)\Big]dt - V\big(x(\infty),\tilde{x}(\infty),\epsilon(\infty)\big), \tag{39}$$

for all states satisfying (1) and (3) with initial states (x(0),x˜(0))=(0,0), and ϵ(0)=0. In view of (8), the first integrand in (39) is

$$z^Tz = x^TD^TDx. \tag{40}$$

The second integrand in (39) is

$$\frac{d}{dt}V(x,\tilde{x},\epsilon) = \frac{d}{dt}\big(x^TP_1x + \tilde{x}^TP_2\tilde{x} + \epsilon^TP_3^{-1}\epsilon\big). \tag{41}$$

The right-hand side of equality (41) can be reorganized by using the closed-loop system (8), and thus, the first term is

$$\frac{d}{dt}x^TP_1x = x^T\big[(FA + B_1K)^TP_1 + P_1(FA + B_1K)\big]x + \tilde{x}^T(B_1K)^TP_1x + x^TP_1B_1K\tilde{x} + w^TB_2^TP_1x + x^TP_1B_2w. \tag{42}$$

Completing the square of (42), we have

$$\begin{aligned}\frac{d}{dt}x^TP_1x &= x^T\big[(FA + B_1K)^TP_1 + P_1(FA + B_1K)\big]x + \tilde{x}^T(B_1K)^TP_1x + x^TP_1B_1K\tilde{x}\\ &\quad + \gamma^2w^Tw + \gamma^{-2}x^TP_1B_2B_2^TP_1x - \big(w - \gamma^{-2}B_2^TP_1x\big)^T\gamma^2\big(w - \gamma^{-2}B_2^TP_1x\big).\end{aligned} \tag{43}$$

Similarly, the second term of (41) is

$$\begin{aligned}\frac{d}{dt}\tilde{x}^TP_2\tilde{x} &= \tilde{x}^T\big[(A + LC)^TP_2 + P_2(A + LC)\big]\tilde{x} + \hat{y}^T\mathrm{diag}[\epsilon]^TL^TP_2\tilde{x} + \tilde{x}^TP_2L\,\mathrm{diag}[\epsilon]\hat{y}\\ &\quad + x^TA^T(I - F)^TP_2\tilde{x} + \tilde{x}^TP_2(I - F)Ax.\end{aligned} \tag{44}$$

Applying completing the square to (44), we obtain

$$\begin{aligned}\frac{d}{dt}\tilde{x}^TP_2\tilde{x} &= \tilde{x}^T\big[(A + LC)^TP_2 + P_2(A + LC)\big]\tilde{x} + x^TA^T(I - F)^TP_2\tilde{x} + \tilde{x}^TP_2(I - F)Ax\\ &\quad + \beta^2\hat{y}^T\mathrm{diag}[\epsilon]^T\mathrm{diag}[\epsilon]\hat{y} + \beta^{-2}\tilde{x}^TP_2LL^TP_2\tilde{x}\\ &\quad - \big(\mathrm{diag}[\epsilon]\hat{y} - \beta^{-2}L^TP_2\tilde{x}\big)^T\beta^2\big(\mathrm{diag}[\epsilon]\hat{y} - \beta^{-2}L^TP_2\tilde{x}\big).\end{aligned} \tag{45}$$

Substituting (40), (43), and (45) into (39), we have

$$\begin{aligned}J &= \int_0^\infty\Bigg[\begin{pmatrix}x\\\tilde{x}\end{pmatrix}^T\begin{pmatrix}\Pi_1(P_1,K) & P_1B_1K + A^T(I-F)^TP_2\\ (P_1B_1K)^T + P_2(I-F)A & \Pi_2(P_2,L)\end{pmatrix}\begin{pmatrix}x\\\tilde{x}\end{pmatrix}\\ &\qquad - \big(w - \gamma^{-2}B_2^TP_1x\big)^T\gamma^2\big(w - \gamma^{-2}B_2^TP_1x\big) - \big(\mathrm{diag}[\epsilon]\hat{y} - \beta^{-2}L^TP_2\tilde{x}\big)^T\beta^2\big(\mathrm{diag}[\epsilon]\hat{y} - \beta^{-2}L^TP_2\tilde{x}\big)\\ &\qquad + \gamma^2w^Tw + \beta^2\hat{y}^T\mathrm{diag}[\epsilon]^T\mathrm{diag}[\epsilon]\hat{y} + \frac{d}{dt}\big(\epsilon^TP_3^{-1}\epsilon\big)\Bigg]dt - V\big(x(\infty),\tilde{x}(\infty),\epsilon(\infty)\big),\end{aligned} \tag{46}$$

where Π1(P1, K) and Π2(P2, L) are as defined in (12) and (13), respectively. Therefore, by eliminating the negative terms from (46), the following inequality is obtained:

$$J \leq \int_0^\infty\Bigg[\begin{pmatrix}x\\\tilde{x}\end{pmatrix}^T\begin{pmatrix}\Pi_1(P_1,K) & P_1B_1K + A^T(I-F)^TP_2\\ (P_1B_1K)^T + P_2(I-F)A & \Pi_2(P_2,L)\end{pmatrix}\begin{pmatrix}x\\\tilde{x}\end{pmatrix} + \gamma^2w^Tw + \beta^2\hat{y}^T\mathrm{diag}[\epsilon]^T\mathrm{diag}[\epsilon]\hat{y} + \frac{d}{dt}\big(\epsilon^TP_3^{-1}\epsilon\big)\Bigg]dt. \tag{47}$$

Given that diag[ϵ]ŷ = diag[ŷ]ϵ, if (11) of (T1) holds, then it follows that

$$\beta^2\hat{y}^T\mathrm{diag}[\epsilon]^T\mathrm{diag}[\epsilon]\hat{y} + \frac{d}{dt}\big(\epsilon^TP_3^{-1}\epsilon\big) = -\epsilon^T(2Q)\epsilon.$$

In view of (10) of (T1), we thus find that the inequality (47) is simply

$$J \leq \int_0^\infty\big[\gamma^2w^Tw - \epsilon^T(2Q)\epsilon\big]dt \leq \gamma^2\int_0^\infty w^Tw\, dt. \tag{48}$$

Therefore, the inequality (48) satisfies the performance index (9), which proves (O2).

To prove that (O1) holds, we use the inequality (10) in (T1) and get the equivalent inequality as follows,

$$\tilde{P}\tilde{A} + \tilde{A}^T\tilde{P} \prec -\tilde{P}\tilde{B}R^{-1}\tilde{B}^T\tilde{P} - \tilde{D}^T\tilde{D},$$

where

$$\tilde{P} = \begin{pmatrix}P_1 & 0\\0 & P_2\end{pmatrix},\quad \tilde{A} = \begin{pmatrix}FA + B_1K & B_1K\\(I-F)A & A + LC\end{pmatrix},\quad R = \begin{pmatrix}\gamma^2 I & 0\\0 & \beta^2 I\end{pmatrix},\quad \tilde{B} = \begin{pmatrix}B_2 & 0\\0 & L\end{pmatrix},\quad \tilde{D} = \begin{pmatrix}D & 0\\0 & 0\end{pmatrix}.$$

It is concluded, by a standard Lyapunov stability argument, that Ã, that is, (7), has all eigenvalues in ℂ⁻, which shows that (O1) holds. This completes the proof of Theorem 1.

References

  1. Amato F. Robust Control of Linear Systems Subject to Uncertain Time-Varying Parameters. Lecture Notes in Control and Information Sciences. Berlin Heidelberg: Springer-Verlag; 2006.
  2. Grewal MS, Andrews AP. Kalman Filtering: Theory and Practice Using MATLAB. New York, NY: John Wiley & Sons, Inc.; 2008.
  3. Simon D. Optimal State Estimation: Kalman, H∞, and Nonlinear Approaches. Hoboken, New Jersey: John Wiley & Sons, Inc.; 2006.
  4. Sontag ED. Mathematical Control Theory: Deterministic Finite Dimensional Systems. 2nd ed. New York: Springer; 1998.
  5. Chen BM, Lee TH, Venkatakrishnan V. Hard Disk Drive Servo Systems. Advances in Industrial Control. London: Springer-Verlag; 2002.
  6. Nie J, Conway R, Horowitz R. Optimal H∞ Control for Linear Periodically Time-Varying Systems in Hard Disk Drives. IEEE/ASME Transactions on Mechatronics. 2013;18(1):212–220.
  7. Phat VN. Global Stabilization for Linear Continuous Time-Varying Systems. Applied Mathematics and Computation. 2006;175(2):1730–1743.
  8. Zhou K, Doyle JC. Essentials of Robust Control. Upper Saddle River, New Jersey: Prentice Hall; 1998.
  9. Syrmos VL, Abdallah CT, Dorato P, Grigoriadis K. Static Output Feedback—A Survey. Automatica. 1997;33(2):125–137.
  10. Huang D, Nguang SK. Robust H∞ Static Output Feedback Control of Fuzzy Systems: An ILMI Approach. IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics. 2006;36(1):216–222.
  11. Rosinová D, Veselý V, Kučera V. A Necessary and Sufficient Condition for Static Output Feedback Stabilizability of Linear Discrete-Time Systems. Kybernetika. 2003;39(4):447–459.
  12. Leibfritz F. An LMI-Based Algorithm for Designing Suboptimal Static H2/H∞ Output Feedback Controllers. SIAM Journal on Control and Optimization. 2000;39(6):1711–1735.
  13. Zhang J, Zhang C. Robustness of Discrete Periodically Time-Varying Control under LTI Unstructured Perturbations. IEEE Transactions on Automatic Control. 2000;45(7):1370–1374.
  14. Zhang Y, Fidan B, Ioannou PA. Backstepping Control of Linear Time-Varying Systems with Known and Unknown Parameters. IEEE Transactions on Automatic Control. 2003;48(11):1908–1925.
  15. Astrom KJ, Wittenmark B. Adaptive Control. 2nd ed. Boston, MA: Addison-Wesley; 1994.
  16. Besançon G. Observer Design for Nonlinear Systems. In: Loría A, Lamnabhi-Lagarrigue F, Panteley E, editors. Lecture Notes in Control and Information Sciences, vol. 328. London: Springer; 2006.
  17. Besançon G. Nonlinear Observers and Applications. Lecture Notes in Control and Information Sciences. London: Springer; 2007.
  18. Spurgeon SK. Sliding Mode Observers: A Survey. International Journal of Systems Science. 2008;39(8):751–764.
  19. Luenberger DG. Observers for Multivariable Systems. IEEE Transactions on Automatic Control. 1966;11(2):190–197.
  20. Parr EA. Industrial Control Handbook. Jordan Hill, Oxford: Industrial Press; 1998.
  21. Franklin GF, Emami-Naeini A, Powell JD. Feedback Control of Dynamic Systems. 3rd ed. Boston, MA: Addison-Wesley; 1994.
  22. Feng CC. Fault-Tolerant Control and Adaptive Estimation Schemes for Sensors with Bounded Faults. In: IEEE International Conference on Control Applications; Singapore; 2007. p. 628–633.
  23. Feng CC. Robust Control for Systems with Bounded-Sensor Faults. Mathematical Problems in Engineering. 2012; Article ID 471585.
  24. Sepe RB, Lang JH. Real-Time Observer-Based (Adaptive) Control of a Permanent-Magnet Synchronous Motor without Mechanical Sensors. IEEE Transactions on Industry Applications. 1992;28(6):1345–1352.
  25. Swarnakar A, Marquez HJ, Chen T. A New Scheme on Robust Observer-Based Control Design for Interconnected Systems with Application to an Industrial Utility Boiler. IEEE Transactions on Control Systems Technology. 2008;16(3):539–547.
  26. Bellman RE. Stability Theory of Differential Equations. Dover Books on Intermediate and Advanced Mathematics. New York, NY: Dover Publications; 1953.
  27. Amato F, Ariola M, Cosentino C. Finite-Time Control of Linear Time-Varying Systems via Output Feedback. In: Proceedings of the American Control Conference; Salt Lake City; 2005. p. 4722–4726.
  28. Callier FM, Desoer CA. Linear System Theory. London: Springer; 2012.
  29. Desoer C. Slowly Varying System ẋ = A(t)x. IEEE Transactions on Automatic Control. 1969;14(6):780–781.
  30. Green M, Limebeer DJN. Linear Robust Control. Englewood Cliffs, New Jersey: Prentice Hall; 1995.
  31. Rosenbrock HH. The Stability of Linear Time-Dependent Control Systems. Journal of Electronics and Control. 1963;15(1):73–80.
  32. Boyd S, El Ghaoui L, Feron E, Balakrishnan V. Linear Matrix Inequalities in System and Control Theory. Studies in Applied Mathematics, vol. 15. Philadelphia, PA: SIAM; 1994.
  33. Feng CC. Integral Sliding-Based Robust Control. In: Mueller A, editor. Recent Advances in Robust Control—Novel Approaches and Design Methods. Rijeka, Croatia: InTech; 2011. p. 165–186.
