A class of time-varying systems can be quadratically stabilized with satisfactory performance by a modified time-invariant-based observer. The modified observer, driven by additional adaptation forces with static correction gains, is used to estimate the time-varying system states. Within the framework of quadratic stability, the closed-loop systems satisfying an induced-norm-bounded performance criterion are exponentially stabilized while the states are exponentially approached by the modified observer. This paper deals with time-varying systems that can be characterized as a multiplicative combination of time-invariant and time-varying parts. The time-invariant part is used to construct the modified observer with additional driving forces, which adjust for the time-varying effects coming from the measured outputs feeding into the modified observer. The adaptation forces are determined by minimizing the cost of the error dynamics with modified least-squares algorithms. The synthesis of the control and observer static correction gains is also demonstrated. The developed design has been tested on a mass-spring-damper system to illustrate its effectiveness.
- quadratic stabilization
- time-invariant-based observer
- error dynamics
- least-squares algorithm
- adaptation forces
- time-varying parts
The study of optimal control for time-varying systems involves, in general, the solutions of Riccati differential equations (RDEs) and computations of the time-varying correction gains [1–4]. The system is typically computer-implemented, and the RDE and correction gains are calculated on it. These computations, however, induce unavoidable time delay. Although time-delayed control has been considered, it leads to two disadvantages: complication of the control mechanism and bulk of the control board. Some systems, for example hard disk drives (typical time-varying systems), can tolerate only very limited or no control delay [5, 6] and occupy very small compartments. Hence, much of the literature has focused on static-gain control of time-varying systems or of systems with time-varying or nonlinear uncertainties [7, 8]. This represents the simplest closed-loop control form but still encounters problems. One should be aware that static output control is nonconvex; iterative linear matrix inequality approaches are exploited after it is expressed as a bilinear matrix inequality formulation (see [9–12]). As a result, it cannot be easily implemented in controlling the time-varying system, and the time delay problems remain.
It is a challenging problem to design a linear continuous time-invariant observer with constant correction gains that regulates linear continuous time-varying plants. Although the vast majority of continuous time-varying control applications are implemented on digital computers [6, 13, 14], there are still opportunities to implement control with a Kalman observer in continuous time (i.e., in analog circuits) [Hug88]. In particular, control systems requiring fast response demand little or no delay. The difficulties in setting up such boards are that the design algorithm is too complex for board-level implementation, too expensive to realize outside a laboratory, or subject to unsatisfactory delay induced by digital computation time. It should be noticed that realizing the Kalman observer involves the computation of Riccati differential equations and the inversion of matrices, which are obstacles to board-level design. Surveys of linear and nonlinear observer design for control systems can be found in [15–18] and references therein. For controlling a linear time-invariant (LTI) system, the Luenberger observer design with constant correction gain is straightforward and can be implemented on a circuit board with ease.
Many practical control systems implement time-invariant controllers with observers in the feedback loop, which can be easily realized not only in the laboratory but also in industrial products. The ease of realization of time-invariant controllers and observers is due to their constant parameters, which can be assembled from resistors and other analog integrated elements on circuit boards. The use of observers is also essential in industrial control because, in some cases, the states are either not accessible or expensive to sense. Observers are therefore required to estimate unmeasured states: not only can full-state feedback control then be easily implemented, but the unmeasured states can also be monitored [21–25].
Given the aforementioned disadvantages and advantages, the control of time-varying systems naturally leads to designing a time-invariant observer-based controller that stabilizes, in particular exponentially, the time-varying plant. We believe this is a challenging problem, since we found no literature tackling it. In what follows, time-varying system control is first reviewed to lay the foundation for the robust control of the system with an optimality property.
The feedback control of linear time-varying systems has been extensively studied [1, 6, 7, 26–31]. The key observation of early works is that exponential stability of time-varying systems requires the time-dependent matrix-valued functions to be bounded and piecewise continuous while satisfying Lyapunov quadratic stability [29, 31]. In this regard, many, but not all, of these results can be translated into a robust control framework, since the time-dependent matrices are essentially bounded and can be treated as uncertainties [8, 32]. This offers a control system design that avoids solving RDEs, although the price paid for avoiding the RDEs is conservatism of the control. The conservativeness has two sources: solutions of RDEs are avoided, and arbitrarily fast-varying parameters are admitted. It can, however, be reduced by designing parameter-dependent criteria or by introducing slack variables that relax the coupling of the dependent variables (see, e.g.,  and references therein).
This paper is organized as follows. Section 2 sets up the time-varying systems to be tackled, the time-invariant observer to be built, the feedback control problems to be solved, and the system properties (assumptions) imposed on the systems. Section 3 gives the main results for solving the feedback control problems, in which LMIs characterize the quadratic stability of the closed-loop system while its performance is preserved; in addition, a least-squares algorithm is suggested to drive the time-varying observer so that the time-varying plant states can be estimated asymptotically. Section 4 then demonstrates the synthesis of the static control gains and observer correction gains. To verify the effectiveness, illustrative applications test the overall design of the feedback closed-loop system. The last section, Section 5, concludes the paper.
2. System formulation and problem statement
We consider a nonlinear time-varying system described by a set of equations
The first equation describes the plant, with n-vector state x and control input , and is subject to the exogenous input , which includes disturbances (to be rejected) or references (to be tracked). The second equation defines the regulated outputs , which, for example, may include tracking errors, expressed as a linear combination of the plant state x and the exogenous input w. The last part gives the measured outputs . The matrices in (1) are assumed to have the following system properties:
(S1) A(t) denotes the matrix with nonlinear time-varying properties satisfying
where A is the constant matrix extracted from A(t). The matrix F(t) lumps all time-varying elements associated with the plant matrix A(t), and it is possible to find a vertex set defined as follows
such that , which is equivalent to saying that F(t) is within the convex set for all time .
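The convex set referred to in (S1) presumably takes the standard convex-hull (polytopic) form; as a reference sketch, with r hypothetical vertices F_1, …, F_r:

```latex
% Standard polytopic (convex-hull) representation of the kind assumed in
% (S1); the vertices F_1,\dots,F_r stand for the elements of the vertex set.
F(t) \;\in\; \mathcal{F}
\;=\; \operatorname{Co}\{F_{1},\dots,F_{r}\}
\;=\; \Bigl\{\, \sum_{i=1}^{r}\alpha_{i}F_{i} \;:\; \alpha_{i}\ge 0,\ \sum_{i=1}^{r}\alpha_{i}=1 \Bigr\},
\qquad \forall\, t \ge 0 .
```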
(S2) The matrices , , D, and are all constant, in which and quantify the range spaces of the control input u and the exogenous input w, respectively, and is chosen to be the zero matrix for computational simplicity.
Remark 1. It is highlighted that F(t) in (S1) not merely lumps all possible time-varying functions but also includes parametric uncertainties. For parametric uncertainties, this is seen by simply observing that F(t) can represent the multiplicative uncertainties shown in . For representing a time-varying matrix, an example is set as follows. Let
It should be noted that another equally good choice is the additive type of representation, that is, , where lumps all time-varying factors. As a matter of fact, the multiplicative and additive types of representation are interchangeable. Let be such that . Thus, , where
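The interchangeability of the two representations is easy to verify numerically. In this sketch, A and dA are hypothetical matrices, and A is assumed invertible so that F = A^{-1}·dA:

```python
import numpy as np

# Hypothetical numeric check that the additive and multiplicative
# representations are interchangeable: A(t) = A + dA(t) = A (I + F(t))
# with F(t) = A^{-1} dA(t), assuming A is invertible.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])          # constant (time-invariant) part
dA = np.array([[0.0, 0.0],
               [0.5, -0.2]])          # additive time-varying perturbation at some t

F = np.linalg.solve(A, dA)            # F = A^{-1} dA
A_mult = A @ (np.eye(2) + F)          # multiplicative form A (I + F)

print(np.allclose(A_mult, A + dA))    # True: both forms give the same A(t)
```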
Remark 2. A number of examples exhibit the time-varying bound for , such as aircraft control systems whose weight constantly decreases due to fuel consumption, the switching operations of a power circuit board for voltage and current regulation, and hard disk drives whose rotating disks induce time-varying dynamic phenomena .
The control action for (1) is to design an observer-based output feedback control system, which processes the measured outputs y(t) to determine the plant states and generates an appropriate control input u(t) based on the estimated plant states. The following observer dynamics is developed for system (1),
is the observed state of x(t), and the gain L is to be designed for the sake of stability. It should be noted that the constant matrix A is used in (3) instead of the time-varying A(t) because it is not possible, or may be too expensive, to build the time-varying plant matrix A(t) into a time-varying observer on a real analog circuit board that controls the system. On the contrary, we are able to establish a time-invariant observer with ease for the constant system matrices A, , and stated in Section 2. It is also seen that the observer (3) is a Luenberger-like observer because of the use of the observer gain L.
The time-varying vector-valued function to be determined in the sequel is an additional degree of freedom for driving observer (3) to estimate the plant state x(t). The function is designed to compensate the time-varying effects of F(t) on the system, that is, the effects of the time-varying functions are adjusted by this single function . Therefore, in addition to the input u(t), e(t) becomes an additional driving force for (3) so that tracking x(t) becomes possible. If and all elements of the vector are equal to 1, then the system (1) with the observer (3) is a typical textbook example of a Luenberger observer control system .
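As an illustration of the observer structure (3), a minimal forward-Euler sketch follows. The matrices, the gain L, and the time step are hypothetical placeholders; the time-varying part is switched off (F(t) = 0), and e(t) is therefore set to zero here, since its estimation is the subject of Section 3.2:

```python
import numpy as np

# Minimal forward-Euler sketch of the Luenberger-like observer (3),
#   xhat' = A xhat + B u + L (y - C xhat) + e,
# where e(t) is the extra driving force compensating the time-varying part.
# All matrices and gains below are hypothetical placeholders.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
L = np.array([[3.0], [2.0]])          # observer correction gain (assumed)

dt, steps = 1e-3, 5000
x = np.array([[1.0], [0.0]])          # true plant state (time-invariant case)
xhat = np.zeros((2, 1))               # observer state

for _ in range(steps):
    u = np.zeros((1, 1))              # no control input in this sketch
    e = np.zeros((2, 1))              # adaptation force off, since F(t) = 0 here
    y = C @ x
    x = x + dt * (A @ x + B @ u)
    xhat = xhat + dt * (A @ xhat + B @ u + L @ (y - C @ xhat) + e)

print(np.linalg.norm(x - xhat))       # estimation error has decayed
```

The error dynamics here obey (A - LC), whose eigenvalues have negative real parts for the assumed L, so the printed estimation error is tiny.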
or, equivalently, by taking the advantages of polytopic bound of (S1)
where and , in which 1 denotes the vector with all elements being equal to 1.
Once the observed state is available, the control input u is chosen to be a memoryless system of the form
where K is the static gain to be designed. The control purpose is twofold: to achieve closed-loop stability and to attenuate the influence of the exogenous input w on the penalty variable z, in the sense of rendering the gain of the corresponding closed-loop system less than a prescribed number γ, in the presence of the time-varying plant. The problem of finding controllers achieving these goals can be formally stated in the following terms.
Observer-based control via measured feedback. Given a real number and satisfying system properties (S1) and (S2), find, if possible, two constant matrices K and L such that
(O1) the matrix
has all eigenvalues in ,
(O2) the -gain of the closed-loop system
is strictly less than , or equivalently, for each input , the response z(t) of (8) from initial state is such that the following performance index is satisfied
for some and every .
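The performance index (9) presumably takes the standard finite-horizon L2-gain form; for reference, that standard inequality reads (with γ, β, and the horizon T as above):

```latex
% Standard L2-gain performance inequality of which (9) is presumably an
% instance: for some \beta \ge 0 and every horizon T \ge 0,
\int_{0}^{T} z^{\top}(t)\,z(t)\,dt
\;\le\;
\gamma^{2}\int_{0}^{T} w^{\top}(t)\,w(t)\,dt \;+\; \beta .
```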
Remark 3. Here, we use the notion of quadratic stability with an -gain measure, which was introduced in . This concept generalizes quadratic stabilization to handle the -gain performance constraint for time-varying system attenuation. To this end, the characterizations of robust performance based on quadratic stability will be given in terms of LMIs; if such LMIs can be found, the computations by finite-dimensional convex programming are efficient (see, for example, ).
Remark 4. Figure 2 shows the overall feedback control structure of (8) to be designed in the sequel, where the feedback loop, namely the observer–error dynamics, serves as a filtering process with y(t) and w(t) as inputs, producing the proper control input u(t) and the additional driving force e(t) of (3).
3. Analyses and characterizations
Two issues will be addressed in this section. First, a theorem states a sufficient condition showing that the problem of observer-based control of the time-varying system via measured feedback is solvable. Second, an identification process based on least-squares algorithms for is derived to construct the feedback structure of the closed-loop system (8).
3.1. LMI characterizations
(T1) There exist matrices , K, and L and positive scalars γ and β such that
and matrices and with adaptive scheme of ϵ(t) satisfying
The matrices, and , defined in (10) are
(T2) (O1) and (O2) hold, that is, the problem of observer-based control via contaminated measured feedback is solvable.
Proof: the implication between (T1) and (T2) is shown in the Appendix.
Remark 5. Theorem 1 shows that if the matrix inequality (10) is satisfied and is computationally adjusted according to (11), then the overall closed-loop system is not merely quadratically stabilizable, but the performance index (9) is also fulfilled. It is highlighted that in (11) approaches zero exponentially as for any . Note that a problem remains: to compute the observed states in (3), the time-varying vector function is needed in addition to the input and the exogenous signal . Therefore, the following modified least-squares algorithm is derived for recursive estimation of the time-varying vector-valued function .
3.2. Modified least-squares algorithms
Prior to stating the modified least-squares scheme for computing , the following assumption is made
where . That is, is kept constant within the small time interval , which is equivalent to assuming that is a piecewise continuous time-varying function. The problem in this section is to determine an adaptation law for the vector-valued function in such a way that computed from the model (4) agrees as closely as possible with zero in the sense of least squares. The following least-squares algorithm is developed by summing the index of each small time interval, with the cost function defined as follows
To minimize the cost function , each index should be minimized as well, and the following conditions are obtained for each time interval
In view of (16), the least-squares estimate for is given by
where is called covariance matrix and is defined as follows
To assure positive definiteness, and thus invertibility, the covariance matrix will be further refined in the sequel. The covariance matrix plays an important role in the estimation of , and it is worth noting that
To find the least-squares estimator in recursive form, in which the parameters are updated continuously on the basis of available data, we differentiate (17) with respect to time and obtain
for . The covariance matrix acts in the update law as a time-varying, directional adaptation gain. Observe from (18) that is positive semidefinite, so may grow without bound; hence may become very small in some directions, and adaptation in those directions becomes very slow. Therefore, to avoid slowing the adaptation and to assure the positive definiteness, and hence invertibility, of the covariance matrix, the following covariance resetting propagation law is developed. Within each time window, we modify (18) as follows,
The scalar is chosen such that the adaptation maintains a suitable rate of propagation. The covariance resetting propagation is adjusted by (21), in which the initial condition is also reset. The condition (22) shows that the covariance matrix can also be reset within the time window if it is close to singularity; that is, the covariance matrix is reset if its minimum eigenvalue is less than or equal to , that is, . The following lemma shows that the covariance matrix is bounded and positive definite under the covariance resetting propagation law (21) and (22).
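As a concrete sketch of the resetting idea, the following discrete-time recursive least-squares loop (a common discrete approximation of a continuous law of the kind in (21) and (22)) resets the covariance to k0·I whenever its minimum eigenvalue reaches a threshold k1. All numerical values, and the names k0, k1, theta, are illustrative assumptions rather than the paper's values:

```python
import numpy as np

# Discrete-time sketch of least-squares estimation with covariance
# resetting: P is propagated by the standard RLS update and reset to
# k0*I whenever its minimum eigenvalue falls to the threshold k1,
# keeping P positive definite and the adaptation rate alive.
rng = np.random.default_rng(0)
n, k0, k1 = 2, 10.0, 0.05
P = k0 * np.eye(n)                    # initial covariance
theta = np.zeros(n)                   # current parameter estimate
theta_true = np.array([1.5, -0.7])    # parameters to be identified

for _ in range(2000):
    phi = rng.standard_normal(n)      # regressor vector
    yk = phi @ theta_true             # noise-free "measurement"
    # RLS update (adaptation gain taken from the current covariance)
    gain = P @ phi / (1.0 + phi @ P @ phi)
    theta = theta + gain * (yk - phi @ theta)
    P = P - np.outer(gain, phi @ P)
    # covariance resetting: restore P when it is close to singularity
    if np.linalg.eigvalsh(P).min() <= k1:
        P = k0 * np.eye(n)

print(np.round(theta, 3))             # close to theta_true
```

Without the reset, P would keep shrinking and the adaptation gain would vanish in some directions; the reset keeps the minimum eigenvalue bounded away from zero, mirroring the role of (22).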
Proof: At the resettings, the covariance matrix is reset at , hence . Then, by , we have for all between covariance resettings. The computation progresses until the next resetting time t, if it exists, at which . Hence, we may conclude that , which says that .
Before presenting the theorem showing that the modified least-squares estimate is bounded, the following transition-matrix lemma for the solutions of (19) is essential.
Lemma 2. There exists a positive number k such that the transition matrix, , of (19) is bounded, that is, , for .
Proof: The proof is constructive. We first notice that the solution to (19) is given by
where is the transition matrix of or the unique solution of
A constructive method is suggested by letting a differential equation , , where is a vector of appropriate dimensions. We may conclude that .
Let and Lyapunov candidate, , where is chosen as satisfying Lemma 1. Then, computing along solutions of between the covariance resettings is as follows,
Without loss of generality, let . Then,
At the point of resetting, that is, the point of discontinuity of , we obtain
Theorem 2. Assuming that the problem of observer-based control via contaminated measured feedback is solvable. If there exist the identifier structure of least-squares algorithm (19) with covariance resetting propagation law (21) and (22), then for all .
Proof: To prove the claim is true, we need to show that for . We have the solution to (19) is given by
where is the transition matrix shown in (23). In view of Lemma 2, we obtain
The boundedness of is easily seen by observing (20), in which , , and, by Theorem 1, have bounds and as . The covariance matrix satisfies (21) and is then bounded by Lemma 1. By system property (S1), F(t) is clearly bounded for all . The measured signal , by Theorem 1, tends to 0 as . In summary, there exists a positive finite number such that
which indicates that , for . As time evolves, (26) holds for each small time interval. Hence, we may extend . This completes the proof.
Remark 6. In this section, a modified least-squares algorithm is shown to find the estimate , which is intentionally designed to account for the effects of the time-varying functions F(t) in the plant (1). Figure 3 depicts the complete structure of the observer–error dynamics shown in Figure 2, in which two filters, namely the observer dynamics and the error dynamics, together with one least-squares algorithm, construct the feedback control. The observer dynamics produces the estimated plant state by filtering the signals u(t), w(t), and e(t). It is worth noting that the signal e(t) from the least-squares algorithm serves as an additional driving force for the observer dynamics. The error dynamics produces the error state , which is then injected into the least-squares algorithm so that the time-varying function is estimated.
4. Control and observer gain synthesis
The synthesis of the control and observer gains is addressed via Theorem 1. For simplicity of expression, the time argument of the matrix-valued function F(t) will be dropped, and it will be denoted by F. A useful and important lemma is stated in advance for clarity:
Lemma 3 (Elimination Lemma, see ). Given , , and with and , there exists a matrix K such that
if and only if
where and are orthogonal complements of and , respectively; that is, and is of maximum rank.
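For reference, the elimination (projection) lemma in its standard form from the LMI literature reads as follows; the symbols Ψ, P, Q and the complements N_P, N_Q are the generic ones, not necessarily the paper's notation:

```latex
% Standard form of the elimination (projection) lemma:
\exists K:\quad \Psi + P^{\top} K Q + Q^{\top} K^{\top} P < 0
\quad\Longleftrightarrow\quad
N_P^{\top}\,\Psi\,N_P < 0 \ \text{ and } \ N_Q^{\top}\,\Psi\,N_Q < 0,
```

where N_P and N_Q denote matrices of maximum rank whose columns span the null spaces of P and Q, respectively.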
Lemma 4. Given a real number and satisfying system properties (S1) and (S2), the following statements (Q1), (Q2), and (Q3) are equivalent.
(Q1) There exist matrices , , matrices K and L, and positive scalars and such that the following inequality holds,
(Q2) There exist matrices , matrices K and L, and positive scalars and such that the following inequality holds,
(Q3) There exist matrices and , matrix W and Y, and the positive scalars , and such that the following two matrix inequalities hold,
Proof: To prove , the inequality (27) may be fitted into Lemma 3 with
Next, the orthogonal complement of and is given by and , respectively, which are
which is defined as the orthogonal complement of and is such that and is of maximum rank. By applying Lemma 3, we may have the following inequalities,
To prove , let ; we then find the following if-and-only-if condition for inequality (28),
where . It is noted that the last equivalence holds due to the Schur complement, in which the positive definiteness of must be ensured. As for the matrix inequality (29), letting , we have
Again, the last equivalence of (35) is due to the Schur complement, and and ensure that the inequality holds.
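The Schur complement step used in (34) and (35) is the standard equivalence, recalled here with generic block names Q, S, R:

```latex
% Schur complement lemma (generic block form):
\begin{bmatrix} Q & S \\ S^{\top} & R \end{bmatrix} < 0
\quad\Longleftrightarrow\quad
R < 0 \ \text{ and } \ Q - S R^{-1} S^{\top} < 0 .
```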
Remark 7. It is seen that is the only scalar common to the matrix inequalities (34) and (35). For ease of computation, and without loss of generality, we may assume that is a fixed constant. The advantage, in addition to the ease of computation, is that the gains K and L are solely determined by (34) and (35), respectively. From a rigorous point of view, we may not be able to say that the separation principle is completely valid for this case; loosely speaking, however, it applies with a small modification.
Lemma 5. (Q1) implies (10).
Proof: let . Then
Thus, (Q1) implies (10). This completes the proof.
Theorem 3. Given a real number and satisfying system properties (S1) and (S2). Then, (Q3) with scheme (11) implies (T2).
Remark 8. Theorem 3 states that the problems posed in observer-based control via contaminated measured feedback, that is, (O1) and (O2), are solvable, by proving that (T2) holds.
5. Illustrative application
In this application, a simple time-varying mass-spring-damper system is controlled to demonstrate that the time-varying effects appearing in the system matrix can be transferred to a force term in the observer structure. Consider the system shown in Figure 3 without sensor fault. and are the linear spring and damping constants, respectively. , , and are the time-varying spring and viscous damping coefficients. The system is described by the following equation of motion
where the time-varying functions are , , , and the constants are and . Defining and , the state-space representation of (36) is
Here, we consider the parameters , , , , , and . Thus, the set of vertices of polytope associated with time-varying matrix F(t) is
By applying the linear matrix inequalities (30) and (31) of (Q3) in Lemma 4, the control and observer gains K and L can be found by using the Matlab Robust Control Toolbox. It is also noted that the computation of the two matrix inequalities can be separated by fixing . We thus find
The control input is then computed by , where the observed state is from
The implementation is coded in Matlab using the initial states: , , , , , , and . The simulation results are depicted in Figures 4 and 5. Figure 4(a) and (b) shows that the observer states cohere with the plant states . It is therefore seen that the observer (38), driven by the time-varying term e(t), can actually track the plant (37). The control input u(t) to the system is shown in Figure 4(c). The covariance resetting propagation law and the estimate , that is, , are shown in Figure 5(a) and (b). The observer driving force e(t) and the 2-norm of the time-varying matrix function F(t) are depicted in Figure 5(c) and (d). It is clearly seen that and are adjusted to accommodate the time-varying effects driving the observer dynamics as the closed-loop system approaches the equilibrium point. The driving force e(t) to the observer dynamics shows the same results. Figure 5(d) depicts that the time-varying matrix F(t) is indeed varying with time.
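For readers without the Toolbox, the closed-loop behavior can be sketched in a few lines of Python. The mass, coefficients, time variation, and gains K and L below are hypothetical placeholders (the paper computes K and L from (30) and (31)), and the driving force e(t) is idealized as the exact residual (A(t) − A)x̂ rather than the least-squares estimate:

```python
import numpy as np

# Closed-loop sketch for a time-varying mass-spring-damper
#   m*q'' + c(t)*q' + k(t)*q = u,   u = K xhat,
# with a time-invariant observer driven by L(y - C xhat) plus e(t).
# All numerical values and gains are illustrative assumptions.
m, k0, c0 = 1.0, 2.0, 1.0
A = np.array([[0.0, 1.0], [-k0 / m, -c0 / m]])   # time-invariant part
B = np.array([[0.0], [1.0 / m]])
C = np.array([[1.0, 0.0]])
K = np.array([[-4.0, -3.0]])                     # state-feedback gain (assumed)
L = np.array([[4.0], [5.0]])                     # observer gain (assumed)

dt, T = 1e-3, 10.0
x = np.array([[1.0], [0.0]])                     # plant state
xhat = np.zeros((2, 1))                          # observer state
for i in range(int(T / dt)):
    t = i * dt
    kt = 0.3 * np.sin(2 * np.pi * t)             # time-varying spring part
    ct = 0.2 * np.cos(2 * np.pi * t)             # time-varying damping part
    At = np.array([[0.0, 1.0], [-(k0 + kt) / m, -(c0 + ct) / m]])
    u = K @ xhat
    y = C @ x
    # idealized driving force: the exact time-varying residual; the paper
    # instead estimates e(t) with the least-squares identifier
    e = (At - A) @ xhat
    x = x + dt * (At @ x + B @ u)
    xhat = xhat + dt * (A @ xhat + B @ u + L @ (y - C @ xhat) + e)

print(np.linalg.norm(x), np.linalg.norm(x - xhat))
```

With the assumed gains, both the plant state and the estimation error decay toward zero, matching the qualitative behavior reported in Figures 4 and 5.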
This paper has developed a modified time-invariant observer control for a class of time-varying systems. The control scheme is suitable for time-varying systems that can be characterized by a multiplicative combination of time-invariant and time-varying parts. The time-invariant observer is constructed directly from the time-invariant part of the system, with additional adaptation forces prepared to account for the time-varying effects coming from the measured output feeding into the modified observer. The derivation of the adaptation forces is based on least-squares algorithms, in which the minimization of the cost of the error dynamics is considered as the criterion. The illustrative application shows that the closed-loop systems are exponentially stable, with the system states asymptotically approached by the modified observer. Finally, the LMI process has been demonstrated for the synthesis of the control and observer gains, and their implementation on a mass-spring-damper system proves the effectiveness of the design.
It is noted that in this appendix all time arguments of both vector-valued and matrix-valued time functions will be dropped for simplicity of expression. They can be easily distinguished by their context.
with , , and . Then, the performance index (9) can be written as
The second integrand in (39) is
Completing the square of (42), we have
Similarly, the second term of (41) is
Applying completing the square to (44), we obtain
Given that , if (11) of (T1) holds, then it concludes that
To prove that (O1) holds, we use the inequality (10) in (T1) and get the equivalent inequality as follows,
It is concluded, by a standard Lyapunov stability argument, that , that is (7), has all eigenvalues in , which shows that (O1) holds. This completes the proof of Theorem 1.