
Identification of Linear, Discrete-Time Filters via Realization

Written By

Daniel N. Miller and Raymond A. de Callafon

Published: 11 July 2012

DOI: 10.5772/48293

From the Edited Volume

Linear Algebra - Theorems and Applications

Edited by Hassan Abid Yasser


1. Introduction

The realization of a discrete-time, linear, time-invariant (LTI) filter from its impulse response provides insight into the role of linear algebra in the analysis of both dynamical systems and rational functions. For an LTI filter, a sequence of output data measured over some finite period of time may be expressed as the linear combination of the past input and the input measured over that same period. For a finite-dimensional LTI filter, the mapping from past input to future output is a finite-rank linear operator, and the effect of past input, that is, the memory of the system, may be represented as a finite-dimensional vector. This vector is the state of the system.

The central idea of realization theory is to first identify the mapping from past input to future output and to then factor it into two parts: a map from the input to the state and another from the state to the output. This factorization guarantees that the resulting system representation is both causal and finite-dimensional; thus it can be physically constructed, or realized.

System identification is the science of constructing dynamic models from experimentally measured data. Realization-based identification methods construct models by estimating the mapping from past input to future output based on this measured data. The non-deterministic nature of the estimation process causes this mapping to have an arbitrarily large rank, and so a rank-reduction step is required to factor the mapping into a suitable state-space model. Both these steps must be carefully considered to guarantee unbiased estimates of dynamic systems.

The foundations of realization theory are primarily due to Kalman and first appear in the landmark paper of [1], though the problem is not defined explicitly until [2], which also coins the term “realization” as a state-space model of a linear system constructed from an experimentally measured impulse response. It was [3] that introduced the structured-matrix approach now synonymous with the term “realization theory” by re-interpreting a theorem originally due to [4] in a state-space LTI system framework.

Although Kalman's original definition of “realization” implied an identification problem, it was not until [5] proposed rank-reduction by means of the singular-value decomposition that Ho's method became feasible for use with non-deterministic data sets. The combination of Ho's method and the singular-value decomposition was finally generalized for use with experimentally measured data by Kung in [6].

With the arrival of Kung's method came the birth of what is now known as the field of subspace identification methods. These methods use structured matrices of arbitrary input and output data to estimate a state-sequence from the system. The system is then identified from the propagation of the state over time. While many subspace methods exist, the most popular are the Multivariable Output-Error State Space (MOESP) family, due to [7], and the Numerical Algorithms for Subspace State-Space System Identification (N4SID) family, due to [8]. Related to subspace methods is the Eigensystem Realization Algorithm [9], which applies Kung's algorithm to impulse-response estimates, which are typically estimated through an Observer/Kalman Filter Identification (OKID) algorithm [10].

This chapter presents the central theory behind realization-based system identification in a chronological context, beginning with Kronecker's theorem, proceeding through the work of Kalman and Kung, and presenting a generalization of the procedure to arbitrary sets of data. This journey provides an interesting perspective on the original role of linear algebra in the analysis of rational functions and highlights the similarities of the different representations of LTI filters. Realization theory is a diverse field that connects many tools of linear algebra, including structured matrices, the QR-decomposition, the singular-value decomposition, and linear least-squares problems.


2. Transfer-function representations

We begin by reviewing some properties of discrete-time linear filters, focusing on the role of infinite series expansions in analyzing the properties of rational functions. The reconstruction of a transfer function from an infinite impulse response is equivalent to the reconstruction of a rational function from its Laurent series expansion. The reconstruction problem is introduced and solved by forming structured matrices of impulse-response coefficients.

2.1. Difference equations and transfer functions

Discrete-time linear filters are most frequently encountered in the form of difference equations that relate an input signal $u_k$ to an output signal $y_k$. A simple example is an output $y_k$ determined by a weighted sum of the inputs from $u_k$ to $u_{k-m}$,

$$y_k = b_m u_k + b_{m-1} u_{k-1} + \cdots + b_0 u_{k-m}. \tag{1}$$

More commonly, the output $y_k$ also contains a weighted sum of previous outputs, such as a weighted sum of samples from $y_{k-1}$ to $y_{k-n}$,

$$y_k = b_m u_k + b_{m-1} u_{k-1} + \cdots + b_0 u_{k-m} - a_{n-1} y_{k-1} - a_{n-2} y_{k-2} - \cdots - a_0 y_{k-n}. \tag{2}$$

The impulse response of a filter is the output sequence $g_k = y_k$ generated from an input

$$u_k = \begin{cases} 1 & k = 0, \\ 0 & k \neq 0. \end{cases} \tag{3}$$

The parameters $g_k$ are the impulse-response coefficients, and they completely describe the behavior of an LTI filter through the convolution

$$y_k = \sum_{j=0}^{\infty} g_j u_{k-j}. \tag{4}$$

Filters of type (1) are called finite-impulse-response (FIR) filters, because $g_k$ is a finite-length sequence that settles to 0 once $k > m$. Filters of type (2) are called infinite-impulse-response (IIR) filters, since generally the impulse response will never completely settle to 0.

A system is stable if a bounded $u_k$ results in a bounded $y_k$. Because the output of LTI filters is a linear combination of the input and previous output, any input-output sequence can be formed from a linear superposition of other input-output sequences. Hence proving that the system has a bounded output for a single input sequence is necessary and sufficient to prove the stability of an LTI filter. The simplest input to consider is an impulse, and so a suitable definition of system stability is that the absolute sum of the impulse response is bounded,

$$\sum_{k=0}^{\infty} |g_k| < \infty. \tag{5}$$

Though the impulse response completely describes the behavior of an LTI filter, it does so with an infinite number of parameters. For this reason, discrete-time LTI filters are often written as transfer functions of a complex variable z. This enables analysis of filter stability and computation of the filter's frequency response in a finite number of calculations, and it simplifies convolution operations into basic polynomial algebra.

The transfer function is found by grouping output and input terms together and taking the Z-transform of both signals. Let $Y(z) = \sum_{k=-\infty}^{\infty} y_k z^{-k}$ be the Z-transform of $y_k$ and $U(z)$ be the Z-transform of $u_k$. From the property

$$\mathcal{Z}\{y_{k-1}\} = Y(z)z^{-1},$$

the relationship between Y(z) and U(z) may be expressed in polynomials of z as

$$a(z)Y(z) = b(z)U(z).$$

The ratio of these two polynomials is the filter's transfer function

$$G(z) = \frac{b(z)}{a(z)} = \frac{b_m z^m + b_{m-1} z^{m-1} + \cdots + b_1 z + b_0}{z^n + a_{n-1} z^{n-1} + \cdots + a_1 z + a_0}. \tag{6}$$

When $n \geq m$, $G(z)$ is proper. If the transfer function is not proper, then the difference equations will have $y_k$ dependent on future input samples such as $u_{k+1}$. Proper transfer functions are required for causality, and thus all physical systems have proper transfer-function representations. When $n > m$, the system is strictly proper. Filters with strictly proper transfer functions have no feed-through terms; the output $y_k$ does not depend on $u_k$, only on the preceding input $u_{k-1}, u_{k-2}, \ldots$. In this chapter, we assume all systems are causal and all transfer functions proper.

If $a(z)$ and $b(z)$ have no common roots, then the rational function $G(z)$ is coprime, and the order $n$ of $G(z)$ cannot be reduced. Fractional representations are not limited to single-input, single-output systems. For vector-valued input signals $u_k \in \mathbb{R}^{n_u}$ and output signals $y_k \in \mathbb{R}^{n_y}$, an LTI filter may be represented as an $n_y \times n_u$ matrix of rational functions $G_{ij}(z)$, and the system will have matrix-valued impulse-response coefficients. For simplicity, we will assume that transfer-function representations are single-input, single-output, though all results presented here generalize to the multi-input, multi-output case.

2.2. Stability of transfer function representations

Because the effect of $b(z)$ is equivalent to a finite-impulse-response filter, the only requirement for $b(z)$ to produce a stable system is that its coefficients be bounded, which we may safely assume is always the case. Thus the stability of a transfer function $G(z)$ is determined entirely by $a(z)$, or more precisely, by the roots of $a(z)$. To see this, suppose $a(z)$ is factored into its roots, which are the poles $p_i$ of $G(z)$,

$$G(z) = \frac{b(z)}{\prod_{i=1}^{n}(z - p_i)}. \tag{7}$$

To guarantee a bounded $y_k$, it is sufficient to study a single pole, which we will denote simply as $p$. Thus we wish to determine necessary and sufficient conditions for stability of the system

$$G'(z) = \frac{1}{z - p}. \tag{8}$$

Note that $p$ may be complex. Assume that $|z| > |p|$. $G'(z)$ then has the Laurent-series expansion

$$G'(z) = z^{-1} \frac{1}{1 - pz^{-1}} = z^{-1} \sum_{k=0}^{\infty} p^k z^{-k} = \sum_{k=1}^{\infty} p^{k-1} z^{-k}. \tag{9}$$

From the time-shift property of the z-transform, it is immediately clear that the sequence

$$g'_k = \begin{cases} 0 & k < 1, \\ p^{k-1} & k \geq 1, \end{cases} \tag{10}$$

is the impulse response of $G'(z)$. If we require that (10) be absolutely summable and let $|z| = 1$, the result is the original stability requirement (5), which may be written in terms of $p$ as

$$\sum_{k=1}^{\infty} \left| p^{k-1} \right| < \infty.$$

This is true if and only if $|p| < 1$, and thus $G'(z)$ is stable if and only if $|p| < 1$. Finally, from (7) we may deduce that a system is stable if and only if all the poles of $G(z)$ satisfy the property $|p_i| < 1$.
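This condition is simple to check numerically. Below is a minimal Python sketch, assuming a hypothetical denominator $a(z) = z^2 - 1.5z + 0.7$ given by its coefficients in descending powers of $z$:

```python
# Check the pole-stability condition |p_i| < 1 for an assumed a(z).
import numpy as np

a = [1.0, -1.5, 0.7]                    # a(z) = z^2 - 1.5 z + 0.7 (assumed example)
poles = np.roots(a)                     # poles of G(z) = roots of a(z)
stable = bool(np.all(np.abs(poles) < 1.0))
print(poles, stable)                    # complex pair with |p| = sqrt(0.7) < 1 -> True
```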

2.3. Construction of transfer functions from impulse responses

Transfer functions are a convenient way of representing complex system dynamics in a finite number of parameters, but the coefficients of a(z) and b(z) cannot be measured directly. The impulse response of a system can be found experimentally by either direct measurement or from other means such as taking the inverse Fourier transform of a measured frequency response [11]. It cannot, however, be represented in a finite number of parameters. Thus the conversion between transfer functions and impulse responses is an extremely useful tool.

For a single-pole system such as (8), the expansion (9) provides an obvious means of reconstructing a transfer function from a measured impulse response: given any two sequential impulse-response coefficients $g_k$ and $g_{k+1}$, the pole of $G'(z)$ may be found from

$$p = g_k^{-1} g_{k+1}. \tag{11}$$

Notice that this is true for any k, and the impulse response can be said to have a shift-invariant property in this respect.

Less clear is the case when an impulse response is generated by a system with higher-order $a(z)$ and $b(z)$. In fact, there is no guarantee that an arbitrary impulse response is the result of a linear system of difference equations at all. For an LTI filter, however, the coefficients of the impulse response exhibit a linear dependence which may be used not only to verify the linearity of the system, but to construct a transfer-function representation as well. The exact nature of this linear dependence may be found by forming a structured matrix of impulse-response coefficients and examining its behavior when the indices of the coefficients are shifted forward by a single increment, similar to the single-pole case in (11). The result is stated in the following theorem, originally due to Kronecker [4] and adapted from the English translation of [12].

Theorem 1 (Kronecker's Theorem) Suppose $G(z): \mathbb{C} \rightarrow \mathbb{C}$ is an infinite series of descending powers of $z$, starting with $z^{-1}$,

$$G(z) = g_1 z^{-1} + g_2 z^{-2} + g_3 z^{-3} + \cdots = \sum_{k=1}^{\infty} g_k z^{-k}. \tag{12}$$

Assume G(z) is analytic (the series converges) for all |z|>1. Let H be an infinitely large matrix of the form

$$H = \begin{bmatrix} g_1 & g_2 & g_3 & \cdots \\ g_2 & g_3 & g_4 & \cdots \\ g_3 & g_4 & g_5 & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{bmatrix}. \tag{13}$$

Then H has finite rank n if and only if G(z) is a strictly proper, coprime, rational function of degree n with poles inside the unit circle. That is, G(z) has an alternative representation

$$G(z) = \frac{b(z)}{a(z)} = \frac{b_m z^m + b_{m-1} z^{m-1} + \cdots + b_1 z + b_0}{z^n + a_{n-1} z^{n-1} + \cdots + a_1 z + a_0}, \tag{14}$$

in which m<n, all roots of a(z) satisfy |z|<1, a(z) and b(z) have no common roots, and we have assumed without loss of generality that a(z) is monic.

To prove Theorem 1, we first prove that, for $k > n$, $g_k$ must be linearly dependent on the previous $n$ terms of the series for $H$ to have finite rank.

Theorem 2 The infinitely large matrix $H$ is of finite rank $n$ if and only if there exists a finite sequence $\alpha_1, \alpha_2, \ldots, \alpha_n$ such that for $k \geq n$,

$$g_{k+1} = \sum_{j=1}^{n} \alpha_j g_{k-j+1}, \tag{15}$$

and n is the smallest number with this property.

Let $h_k$ be the row of $H$ beginning with $g_k$. If $H$ has rank $n$, then the first $n+1$ rows of $H$ are linearly dependent. This implies that for some $1 \leq p \leq n$, $h_{p+1}$ is a linear combination of $h_1, \ldots, h_p$, and thus there exists some sequence $\alpha_k$ such that

$$h_{p+1} = \sum_{j=1}^{p} \alpha_j h_{p-j+1}. \tag{16}$$

The structure and infinite size of $H$ imply that such a relationship must hold for all following rows of $H$, so that for $q \geq 0$

$$h_{q+p+1} = \sum_{j=1}^{p} \alpha_j h_{q+p-j+1}.$$

Hence any row $h_k$, $k > p$, can be expressed as a linear combination of the previous $p$ rows. Since $H$ has at least $n$ linearly independent rows, $p = n$, and since this applies element-wise, $\operatorname{rank}(H) = n$ implies (15).

Alternatively, (15) implies that a relationship of the form (16) exists, and hence $\operatorname{rank}(H) = p$. Since $n$ is the smallest possible $p$, this implies $\operatorname{rank}(H) = n$.

We now prove Theorem 1. Suppose $G(z)$ is a coprime rational function of the form (14) with series expansion (12), which we know exists, since $G(z)$ is analytic for $|z| > 1$. Without loss of generality, let $m = n-1$, since we may always let $b_k = 0$ for some $k$. Hence

$$\frac{b_{n-1} z^{n-1} + b_{n-2} z^{n-2} + \cdots + b_1 z + b_0}{z^n + a_{n-1} z^{n-1} + \cdots + a_1 z + a_0} = g_1 z^{-1} + g_2 z^{-2} + g_3 z^{-3} + \cdots.$$

Multiplying both sides by the denominator on the left,

$$b_{n-1} z^{n-1} + b_{n-2} z^{n-2} + \cdots + b_1 z + b_0 = g_1 z^{n-1} + (g_2 + g_1 a_{n-1}) z^{n-2} + (g_3 + g_2 a_{n-1} + g_1 a_{n-2}) z^{n-3} + \cdots,$$

and equating powers of z, we find

$$\begin{aligned}
b_{n-1} &= g_1 \\
b_{n-2} &= g_2 + g_1 a_{n-1} \\
b_{n-3} &= g_3 + g_2 a_{n-1} + g_1 a_{n-2} \\
&\;\;\vdots \\
b_1 &= g_{n-1} + g_{n-2} a_{n-1} + \cdots + g_1 a_2 \\
b_0 &= g_n + g_{n-1} a_{n-1} + \cdots + g_1 a_1 \\
0 &= g_{k+1} + g_k a_{n-1} + \cdots + g_{k-n+1} a_0 \qquad k \geq n.
\end{aligned} \tag{17}$$

From this we have, for $k \geq n$,

$$g_{k+1} = \sum_{j=1}^{n} -a_j\, g_{k-j+1},$$

which not only shows that (15) holds, but also shows that $\alpha_j = -a_j$. Hence by Theorem 2, $H$ must have finite rank.

Conversely, suppose $H$ has finite rank. Then (15) holds, and we may construct $a(z)$ from $\alpha_k$ and $b(z)$ from (17) to create a rational function. This function must be coprime, since its order $n$ is the smallest possible.

The construction in Theorem 1 is simple to extend to the case in which $G(z)$ is only proper and not strictly proper; the additional coefficient $b_n$ is simply the feed-through term in the impulse response, that is, $g_0$.

A result of Theorem 2 is that given finite-dimensional, full-rank matrices

$$H_k = \begin{bmatrix} g_k & g_{k+1} & \cdots & g_{k+n-1} \\ g_{k+1} & g_{k+2} & \cdots & g_{k+n} \\ \vdots & \vdots & & \vdots \\ g_{k+n-1} & g_{k+n} & \cdots & g_{k+2n-2} \end{bmatrix} \tag{18}$$

and

$$H_{k+1} = \begin{bmatrix} g_{k+1} & g_{k+2} & \cdots & g_{k+n} \\ g_{k+2} & g_{k+3} & \cdots & g_{k+n+1} \\ \vdots & \vdots & & \vdots \\ g_{k+n} & g_{k+n+1} & \cdots & g_{k+2n-1} \end{bmatrix}, \tag{19}$$

the coefficients of a(z) may be calculated as

$$\begin{bmatrix} 0 & 0 & \cdots & 0 & -a_0 \\ 1 & 0 & \cdots & 0 & -a_1 \\ 0 & 1 & \cdots & 0 & -a_2 \\ \vdots & & \ddots & & \vdots \\ 0 & 0 & \cdots & 1 & -a_{n-1} \end{bmatrix} = H_k^{-1} H_{k+1}. \tag{20}$$

Notice that (11) is in fact a special case of (20). Thus we need only know the first $2n+1$ impulse-response coefficients to reconstruct the transfer function $G(z)$: $2n$ to form the matrices $H_k$ and $H_{k+1}$ from (18) and (19), respectively, and possibly the initial coefficient $g_0$ in case of an $n$th-order $b(z)$.
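As an illustration, the following Python sketch reconstructs $a(z)$ and $b(z)$ from the first $2n+1$ impulse-response coefficients of an assumed example $G(z) = z/(z^2 - 1.5z + 0.7)$; the order $n$ and the noise-free coefficients are assumptions made for the demonstration:

```python
# Reconstruct a(z) from 2n exact impulse-response coefficients via (18)-(20),
# then b(z) from the triangular relations (17). Example system is assumed.
import numpy as np
from scipy.linalg import hankel, solve

n = 2
g = np.zeros(2 * n + 1)                 # g[k] holds g_k; g_0 = 0 (strictly proper)
g[1], g[2] = 1.0, 1.5                   # from G(z) = z / (z^2 - 1.5 z + 0.7)
for k in range(3, 2 * n + 1):
    g[k] = 1.5 * g[k - 1] - 0.7 * g[k - 2]   # the linear dependence (15)

Hk  = hankel(g[1:n + 1], g[n:2 * n])             # entries g_1 ... g_{2n-1}
Hk1 = hankel(g[2:n + 2], g[n + 1:2 * n + 1])     # shifted once: g_2 ... g_{2n}

companion = solve(Hk, Hk1)              # companion matrix of (20)
a = -companion[:, -1]                   # [a_0, a_1] = [0.7, -1.5]

# b(z) from (17), written out for n = 2: b_1 = g_1, b_0 = g_2 + a_1 g_1
b = np.array([g[1], g[2] + a[1] * g[1]])         # [1.0, 0.0] -> b(z) = z
print(a, b)
```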

Matrices with the structure of $H$ are useful enough to have a special name. A Hankel matrix $H$ is a matrix constructed from a sequence $\{h_k\}$ so that each element is $H_{(j,k)} = h_{j+k}$. For the Hankel matrix in (13), $h_k = g_{k-1}$. $H_k$ also has an interesting property implied by (20): its row space is invariant under shifting of the index $k$. Because it is symmetric, this is also true for its column space. Thus this matrix is often referred to as being shift-invariant.

While (20) provides a potential method of identifying a system from a measured impulse response, it is not a reliable method to use with measured impulse-response coefficients that are corrupted by noise. The exact linear dependence of the coefficients will not be identical for all $k$, and the structure of (20) will not be preserved. Inverting $H_k$ will also invert any noise on $g_k$, potentially amplifying high-frequency noise content. Finally, the system order $n$ is required to be known beforehand, which is usually not the case if only an impulse response is available. Fortunately, these difficulties may all be overcome by reinterpreting the results of Kronecker's theorem in a state-space framework. First, however, we more carefully examine the role of the Hankel matrix in the behavior of LTI filters.

2.4. Hankel and Toeplitz operators

The Hankel matrix of impulse-response coefficients (13) is more than a tool for computing the transfer-function representation of a system from its impulse response. It also defines the mapping of past input signals to future output signals. To define exactly what this means, we write the convolution (4) around sample $k=0$ in matrix form as

$$\begin{bmatrix} \vdots \\ y_{-3} \\ y_{-2} \\ y_{-1} \\ y_0 \\ y_1 \\ y_2 \\ \vdots \end{bmatrix} = \begin{bmatrix} \ddots & & & & & & \\ \cdots & g_0 & & & & & \\ \cdots & g_1 & g_0 & & & & \\ \cdots & g_2 & g_1 & g_0 & & & \\ \cdots & g_3 & g_2 & g_1 & g_0 & & \\ \cdots & g_4 & g_3 & g_2 & g_1 & g_0 & \\ \cdots & g_5 & g_4 & g_3 & g_2 & g_1 & g_0 \\ & \vdots & & & & & \ddots \end{bmatrix} \begin{bmatrix} \vdots \\ u_{-3} \\ u_{-2} \\ u_{-1} \\ u_0 \\ u_1 \\ u_2 \\ \vdots \end{bmatrix},$$

where the vectors and matrix have been partitioned into sections for $k < 0$ and $k \geq 0$. The output for $k \geq 0$ may then be split into two parts:

$$\underbrace{\begin{bmatrix} y_0 \\ y_1 \\ y_2 \\ \vdots \end{bmatrix}}_{y_f} = \underbrace{\begin{bmatrix} g_1 & g_2 & g_3 & \cdots \\ g_2 & g_3 & g_4 & \cdots \\ g_3 & g_4 & g_5 & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{bmatrix}}_{H} \underbrace{\begin{bmatrix} u_{-1} \\ u_{-2} \\ u_{-3} \\ \vdots \end{bmatrix}}_{u_p} + \underbrace{\begin{bmatrix} g_0 & & & \\ g_1 & g_0 & & \\ g_2 & g_1 & g_0 & \\ \vdots & & & \ddots \end{bmatrix}}_{T} \underbrace{\begin{bmatrix} u_0 \\ u_1 \\ u_2 \\ \vdots \end{bmatrix}}_{u_f}, \tag{21}$$

where the subscripts $p$ and $f$ denote “past” and “future,” respectively. The system Hankel matrix $H$ has returned to describe the effects of the past input $u_p$ on the future output $y_f$. Also present is the matrix $T$, which represents the convolution of future input $u_f$ with the impulse response. Matrices such as $T$ with constant diagonals are called Toeplitz matrices.

From (21), it can be seen that $H$ defines the effects of past input on future output. One interpretation of this is that $H$ represents the “memory” of the system. Because $H$ is a linear mapping from $u_p$ to $y_f$, the induced matrix 2-norm of $H$, $\|H\|_2$, can be considered a function norm, and in a sense, $\|H\|_2$ is a measure of the “gain” of the system. $\|H\|_2$ is often called the Hankel norm of a system, and it plays an important role in model reduction and in the analysis of anti-causal systems. More information on this aspect of linear systems can be found in the literature of robust control, for instance, [13].


3. State-space representations

Although transfer functions define system behavior completely with a finite number of parameters and simplify frequency-response calculations, they are cumbersome to manipulate when the input or output is multi-dimensional or when initial conditions must be considered. The other common representation of LTI filters is the state-space form

$$\begin{aligned} x_{k+1} &= A x_k + B u_k \\ y_k &= C x_k + D u_k, \end{aligned} \tag{22}$$

in which $x_k \in \mathbb{R}^n$ is the system state. The matrices $A \in \mathbb{R}^{n \times n}$, $B \in \mathbb{R}^{n \times n_u}$, $C \in \mathbb{R}^{n_y \times n}$, and $D \in \mathbb{R}^{n_y \times n_u}$ completely parameterize the system. Only $D$ uniquely defines the input-output behavior; any nonsingular matrix $T'$ may be used to change the state basis via the relationships

$$x' = T'x, \qquad A' = T'AT'^{-1}, \qquad B' = T'B, \qquad C' = CT'^{-1}.$$

The Z-transform may also be applied to the state-space equations (22) to find

$$\mathcal{Z}\{x_{k+1}\} = A\,\mathcal{Z}\{x_k\} + B\,\mathcal{Z}\{u_k\} \;\;\Rightarrow\;\; zX(z) = AX(z) + BU(z),$$

$$\mathcal{Z}\{y_k\} = C\,\mathcal{Z}\{x_k\} + D\,\mathcal{Z}\{u_k\} \;\;\Rightarrow\;\; Y(z) = CX(z) + DU(z),$$

so that

$$\frac{Y(z)}{U(z)} = G(z), \qquad G(z) = C(zI - A)^{-1}B + D, \tag{23}$$

and thus, if (22) is the state-space representation of the single-variable system (6), then $a(z)$ is the characteristic polynomial of $A$, $\det(zI - A)$.

Besides clarifying the effect of initial conditions on the output, state-space representations are inherently causal, and (23) will always result in a proper system (strictly proper if $D = 0$). For this reason, state-space representations are often called realizable descriptions; while the forward time-shift of $z$ is an inherently non-causal operation, state-space systems may always be constructed in reality.

3.1. Stability, controllability, and observability of state-space representations

The system impulse response is simple to formulate in terms of the state-space parameters by calculating the output due to a unit impulse with $x_0 = 0$:

$$g_k = \begin{cases} D & k = 0, \\ CA^{k-1}B & k > 0. \end{cases} \tag{24}$$

Notice the similarity of (24) and (10). In fact, from the eigenvalue decomposition of $A$,

$$A = V \Lambda V^{-1},$$

we find

$$\sum_{k=1}^{\infty} |g_k| = \sum_{k=1}^{\infty} \left| CA^{k-1}B \right| = \sum_{k=1}^{\infty} \left| CV \Lambda^{k-1} V^{-1} B \right|.$$

The term $\Lambda^{k-1}$ will only converge if the largest eigenvalue of $A$ lies within the unit circle, and thus the condition that all eigenvalues $\lambda_i$ of $A$ satisfy $|\lambda_i| < 1$ is a necessary and sufficient condition for stability.
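A small numerical illustration of (24) and the eigenvalue test, for an assumed companion-form realization of the earlier example:

```python
# Markov parameters g_k = C A^{k-1} B and eigenvalue-based stability check
# for an assumed example realization.
import numpy as np

A = np.array([[1.5, -0.7], [1.0, 0.0]])   # companion form of z^2 - 1.5 z + 0.7
B = np.array([[1.0], [0.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

g, Ak = [D], np.eye(2)                    # g_0 = D; Ak tracks A^{k-1}
for k in range(1, 8):
    g.append(C @ Ak @ B)                  # g_k = C A^{k-1} B, per (24)
    Ak = Ak @ A

stable = bool(np.all(np.abs(np.linalg.eigvals(A)) < 1.0))
print([round(gk.item(), 4) for gk in g], stable)   # 0, 1, 1.5, 1.55, ...; True
```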

For state-space representations, there is the possibility that a combination of $A$ and $B$ will result in a system for which $x_k$ cannot be entirely controlled by the input $u_k$. Expressing $x_k$ in a matrix form similar to (21) as

$$x_k = \mathcal{C} \begin{bmatrix} u_{k-1} \\ u_{k-2} \\ u_{k-3} \\ \vdots \end{bmatrix}, \qquad \mathcal{C} = \begin{bmatrix} B & AB & A^2B & \cdots \end{bmatrix} \tag{25}$$

demonstrates that $x_k$ may lie anywhere in $\mathbb{R}^n$ if and only if $\mathcal{C}$ has rank $n$. $\mathcal{C}$ is the controllability matrix, and the system is controllable if it has full row rank.

Similarly, the state $x_k$ may not uniquely determine the output for some combinations of $A$ and $C$. Expressing the evolution of the output as a function of the state in matrix form as

$$\begin{bmatrix} y_k \\ y_{k+1} \\ y_{k+2} \\ \vdots \end{bmatrix} = \mathcal{O} x_k, \qquad \mathcal{O} = \begin{bmatrix} C \\ CA \\ CA^2 \\ \vdots \end{bmatrix}$$

demonstrates that there is no nontrivial null space in the mapping from $x_k$ to $y_k$ if and only if $\mathcal{O}$ has rank $n$. $\mathcal{O}$ is the observability matrix, and the system is observable if it has full column rank.

Systems that are both controllable and observable are called minimal, and for minimal systems, the dimension $n$ of the state variable cannot be reduced. In the next section we show that minimal state-space system representations convert to coprime transfer functions found through (23).
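Both rank tests are easy to carry out numerically. A minimal sketch, reusing the assumed example matrices from above:

```python
# Finite controllability and observability matrices and their rank tests.
import numpy as np

def ctrb(A, B, n):
    cols = [B]                       # [B, AB, ..., A^{n-1} B]
    for _ in range(n - 1):
        cols.append(A @ cols[-1])
    return np.hstack(cols)

def obsv(A, C, n):
    rows = [C]                       # [C; CA; ...; C A^{n-1}]
    for _ in range(n - 1):
        rows.append(rows[-1] @ A)
    return np.vstack(rows)

A = np.array([[1.5, -0.7], [1.0, 0.0]])
B = np.array([[1.0], [0.0]])
C = np.array([[1.0, 0.0]])

print(np.linalg.matrix_rank(ctrb(A, B, 2)) == 2,   # controllable
      np.linalg.matrix_rank(obsv(A, C, 2)) == 2)   # observable -> minimal
```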

3.2. Construction of state-space representations from impulse responses

The fact that the denominator of $G(z)$ is the characteristic polynomial of $A$ not only allows for the calculation of a transfer function from a state-space representation, but provides an alternative version of Kronecker's theorem for state-space systems, known as the Ho-Kalman algorithm [3]. From the Cayley-Hamilton theorem, if $a(z)$ is the characteristic polynomial of $A$, then $a(A) = 0$, and

$$CA^k a(A) B = CA^k \left( A^n + a_{n-1} A^{n-1} + \cdots + a_1 A + a_0 I \right) B = CA^{k+n}B + \sum_{j=0}^{n-1} a_j\, CA^{k+j}B,$$

which implies

$$CA^{k+n}B = -\sum_{j=0}^{n-1} a_j\, CA^{k+j}B. \tag{26}$$

Indeed, substitution of (24) into (26) and rearrangement of the indices leads to (15). Additionally, substitution of (24) into the product of $\mathcal{O}$ and $\mathcal{C}$ shows that

$$\mathcal{O}\mathcal{C} = \begin{bmatrix} CB & CAB & CA^2B & \cdots \\ CAB & CA^2B & CA^3B & \cdots \\ CA^2B & CA^3B & CA^4B & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{bmatrix} = \begin{bmatrix} g_1 & g_2 & g_3 & \cdots \\ g_2 & g_3 & g_4 & \cdots \\ g_3 & g_4 & g_5 & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{bmatrix} = H,$$

which confirms our previous statement that H effectively represents the memory of the system. Because

$$\operatorname{rank}(H) = \min\{\operatorname{rank}(\mathcal{O}), \operatorname{rank}(\mathcal{C})\},$$

we see that $\operatorname{rank}(H) = n$ implies that the state-space system (22) is minimal.

If the entries of H are shifted forward by one index to form

$$\bar H = \begin{bmatrix} g_2 & g_3 & g_4 & \cdots \\ g_3 & g_4 & g_5 & \cdots \\ g_4 & g_5 & g_6 & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{bmatrix},$$

then once again substituting (24) reveals

$$\bar H = \mathcal{O} A \mathcal{C}. \tag{27}$$

Thus the row space and column space of $H$ are invariant under a forward shift of the indices, implying the same shift-invariant structure seen in (20).

The appearance of $A$ in (27) hints at a method for constructing a state-space realization from an impulse response. Suppose the impulse response is known exactly, and let $H_r$ be a finite slice of $H$ with $r$ block rows and $L$ block columns,

$$H_r = \begin{bmatrix} g_1 & g_2 & g_3 & \cdots & g_L \\ g_2 & g_3 & g_4 & \cdots & g_{L+1} \\ g_3 & g_4 & g_5 & \cdots & g_{L+2} \\ \vdots & \vdots & \vdots & & \vdots \\ g_r & g_{r+1} & g_{r+2} & \cdots & g_{r+L-1} \end{bmatrix}.$$

Then any appropriately dimensioned factorization

$$H_r = \mathcal{O}_r \mathcal{C}_L = \begin{bmatrix} C \\ CA \\ CA^2 \\ \vdots \\ CA^{r-1} \end{bmatrix} \begin{bmatrix} B & AB & A^2B & \cdots & A^{L-1}B \end{bmatrix} \tag{28}$$

may be used to find A for some arbitrary state basis as

$$A = \mathcal{O}_r^{\dagger} \bar H_r \mathcal{C}_L^{\dagger}, \tag{29}$$

where $\bar H_r$ is $H_r$ with the indices shifted forward once and $(\cdot)^{\dagger}$ is the Moore-Penrose pseudoinverse. $C$ taken from the first block row of $\mathcal{O}_r$, $B$ taken from the first block column of $\mathcal{C}_L$, and $D$ taken from $g_0$ then provide a complete and minimal state-space realization from an impulse response. Because $H_r$ has rank $n$ and $\det(zI - A)$ has degree $n$, we know from Kronecker's theorem that $G(z)$ taken from (23) will be coprime.
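A minimal Python sketch of this exact-data procedure for the running second-order example follows; the order $n$, the dimensions, and the use of the SVD as the rank-$n$ factorization (any factorization works) are assumptions for illustration:

```python
# Ho-Kalman realization (28)-(29) from an exact impulse response.
import numpy as np
from scipy.linalg import hankel

n, r = 2, 4
g = np.zeros(2 * r + 1)                 # exact Markov parameters, g_0 = 0
g[1], g[2] = 1.0, 1.5
for k in range(3, 2 * r + 1):
    g[k] = 1.5 * g[k - 1] - 0.7 * g[k - 2]

Hr    = hankel(g[1:r + 1], g[r:2 * r])             # r x r slice of H
Hr_sh = hankel(g[2:r + 2], g[r + 1:2 * r + 1])     # indices shifted once

U, s, Vt = np.linalg.svd(Hr)
Or = U[:, :n] * np.sqrt(s[:n])          # one possible O_r
Cl = np.sqrt(s[:n])[:, None] * Vt[:n, :]           # matching C_L

A = np.linalg.pinv(Or) @ Hr_sh @ np.linalg.pinv(Cl)   # (29)
B, C, D = Cl[:, :1], Or[:1, :], g[0]    # first block column / row / g_0
print(np.linalg.eigvals(A))             # recovers the poles of G(z)
```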

However, as mentioned before, the impulse response of the system is rarely known exactly. In this case only an estimate $\hat H_r$ with a non-deterministic error term is available:

$$\hat H_r = H_r + E.$$

Because $E$ is non-deterministic, $\hat H_r$ will always have full rank, regardless of the number of rows $r$. Thus $n$ cannot be determined by examining the rank of $\hat H_r$, and even if $n$ is known beforehand, a factorization (28) for $r > n$ will not exist. Thus we must find a way of reducing the rank of $\hat H_r$ in order to find a state-space realization.

3.3. Rank-reduction of the Hankel matrix estimate

If $\hat H_r$ has full rank, or if $n$ is unknown, its rank must be reduced prior to factorization. The obvious tool for reducing the rank of matrices is the singular-value decomposition (SVD). Assume for now that $n$ is known. The SVD of $\hat H_r$ is

$$\hat H_r = U \Sigma V^T,$$

where $U$ and $V^T$ are orthogonal matrices and $\Sigma$ is a diagonal matrix containing the nonnegative singular values $\sigma_i$, ordered from largest to smallest. The SVD of a matrix is guaranteed to exist, its singular values are unique, and the number of nonzero singular values of a matrix is equal to its rank [14].

Because $U$ and $V^T$ are orthogonal, the SVD satisfies

$$\|\hat H_r\|_2 = \|U \Sigma V^T\|_2 = \|\Sigma\|_2 = \sigma_1, \tag{30}$$

where $\|\cdot\|_2$ is the induced matrix 2-norm, and

$$\|\hat H_r\|_F = \|U \Sigma V^T\|_F = \|\Sigma\|_F = \left( \sum_{i} \sigma_i^2 \right)^{1/2}, \tag{31}$$

where $\|\cdot\|_F$ is the Frobenius norm. Equation (30) also shows that the Hankel norm of a system is the maximum singular value of $H_r$. From (30) and (31), we can directly see that if the SVD of $\hat H_r$ is partitioned into

$$\hat H_r = \begin{bmatrix} U_n & U_s \end{bmatrix} \begin{bmatrix} \Sigma_n & 0 \\ 0 & \Sigma_s \end{bmatrix} \begin{bmatrix} V_n^T \\ V_s^T \end{bmatrix},$$

where $U_n$ is the first $n$ columns of $U$, $\Sigma_n$ is the upper-left $n \times n$ block of $\Sigma$, and $V_n^T$ is the first $n$ rows of $V^T$, the solution to the rank-reduction problem is [14]

$$Q = \underset{\operatorname{rank}(Q) = n}{\arg\min} \left\| Q - \hat H_r \right\|_2 = \underset{\operatorname{rank}(Q) = n}{\arg\min} \left\| Q - \hat H_r \right\|_F = U_n \Sigma_n V_n^T.$$

Additionally, the error resulting from the rank reduction is

$$e = \|Q - \hat H_r\|_2 = \sigma_{n+1},$$

which suggests that if the rank of $H_r$ is not known beforehand, it can be determined by examining the nonzero singular values in the deterministic case, or by searching for a significant drop-off in singular values if only a noise-corrupted estimate is available.
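The drop-off search is easy to automate crudely. A sketch, with a hypothetical tolerance ratio standing in for visual inspection:

```python
# Estimate system order from the largest gap in the singular-value sequence.
import numpy as np

def estimate_order(s, tol_ratio=10.0):
    s = np.asarray(s, dtype=float)
    ratios = s[:-1] / s[1:]            # consecutive singular-value ratios
    k = int(np.argmax(ratios))
    return k + 1 if ratios[k] > tol_ratio else len(s)

print(estimate_order([8.2, 3.1, 0.02, 0.015, 0.011]))   # -> 2
```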

3.4. Identifying the state-space realization

From a rank-reduced $\hat H_r$, any factorization

$$\hat H_r = \hat{\mathcal{O}}_r \hat{\mathcal{C}}_L$$

can be used to estimate $\mathcal{O}_r$ and $\mathcal{C}_L$. The error in the state-space realization, however, will depend on the chosen state basis. Generally we would like a state variable with a norm $\|x_k\|_2$ in between $\|u_k\|_2$ and $\|y_k\|_2$. As first proposed in [5], choosing the factorization

$$\hat{\mathcal{O}}_r = U_n \Sigma_n^{1/2} \qquad \text{and} \qquad \hat{\mathcal{C}}_L = \Sigma_n^{1/2} V_n^T \tag{32}$$

results in

$$\|\hat{\mathcal{O}}_r\|_2 = \|\hat{\mathcal{C}}_L\|_2 = \|\hat H_r\|_2^{1/2}, \tag{33}$$

and thus, from a functional perspective, the mappings from input to state and state to output will have equal magnitudes, and the entries of the state vector $x_k$ will have similar magnitudes. State-space realizations that satisfy (33) are sometimes called internally balanced realizations [11]. (Alternative definitions of a “balanced” realization exist, however, and it is generally wise to verify the definition in each context.)

Choosing the factorization (32) also simplifies computation of the estimate $\hat A$, since

$$\hat A = \hat{\mathcal{O}}_r^{\dagger} \hat{\bar H}_r \hat{\mathcal{C}}_L^{\dagger} = \Sigma_n^{-1/2} U_n^T \hat{\bar H}_r V_n \Sigma_n^{-1/2}.$$

By estimating $\hat B$ as the first block column of $\hat{\mathcal{C}}_L$, $\hat C$ as the first block row of $\hat{\mathcal{O}}_r$, and $\hat D$ as $g_0$, a complete state-space realization $(\hat A, \hat B, \hat C, \hat D)$ is identified with this method.
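Assembled, this is Kung's method. The sketch below simulates a noise-corrupted impulse response under assumed dimensions and noise level, rank-reduces its Hankel matrix, and applies the balanced factorization (32):

```python
# SVD-based realization (Kung's method) from a noisy impulse response.
import numpy as np
from scipy.linalg import hankel

rng = np.random.default_rng(0)
n, r = 2, 20
g = np.zeros(2 * r + 1)
g[1], g[2] = 1.0, 1.5
for k in range(3, 2 * r + 1):
    g[k] = 1.5 * g[k - 1] - 0.7 * g[k - 2]
g_hat = g + 0.01 * rng.standard_normal(g.shape)    # "measured" response

Hr    = hankel(g_hat[1:r + 1], g_hat[r:2 * r])
Hr_sh = hankel(g_hat[2:r + 2], g_hat[r + 1:2 * r + 1])

U, s, Vt = np.linalg.svd(Hr)
Un, Sn, Vn = U[:, :n], s[:n], Vt[:n, :]

# A = Sigma^{-1/2} U_n^T Hbar V_n Sigma^{-1/2}, computed element-wise
A = (Un.T @ Hr_sh @ Vn.T) / np.sqrt(np.outer(Sn, Sn))
B = (np.sqrt(Sn)[:, None] * Vn)[:, :1]             # first block column of C_L
C = (Un * np.sqrt(Sn))[:1, :]                      # first block row of O_r
D = g_hat[0]
print(np.abs(np.linalg.eigvals(A)))                # ~ sqrt(0.7) = 0.837 each
```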

3.5. Pitfalls of direct realization from an impulse response

Even though the rank-reduction process allows for realization from a noise-corrupted estimate of an impulse response, identification methods that generate a system estimate from a Hankel matrix constructed from an estimated impulse response have numerous difficulties when applied to noisy measurements. Measuring an impulse response directly is often infeasible; high-frequency damping may result in a measurement that has a very brief response before the signal-to-noise ratio becomes prohibitively small, and a unit pulse will often excite high-frequency nonlinearities that degrade the quality of the resulting estimate.

Taking the inverse Fourier transform of the frequency response guarantees that the estimates of the Markov parameters will converge as the dataset grows only so long as the input is broadband. Generally input signals decay in magnitude at higher frequencies, and calculation of the frequency response by inversion of the input will amplify high-frequency noise. We would prefer an identification method that is guaranteed to provide a system estimate that converges to the true system as the amount of data measured increases, and that avoids inverting the input. Fortunately, the relationship between input and output data in (21) may be used to formulate just such an identification procedure.


4. Realization from input-output data

To avoid the difficulties in constructing a system realization from an estimated impulse response, we will form a realization-based identification procedure applicable to measured input-output data. To sufficiently account for non-deterministic effects in measured data, we add a noise term $v_k \in \mathbb{R}^{n_y}$ to the output to form the noise-perturbed state-space equations

$$\begin{aligned} x_{k+1} &= A x_k + B u_k \\ y_k &= C x_k + D u_k + v_k. \end{aligned} \tag{34}$$

We assume that the noise signal $v_k$ is generated by a stationary stochastic process, which may be either white or colored. This includes the case in which the state is disturbed by process noise, so that the noise process may have the same poles as the deterministic system. (See [15] for a thorough discussion of representations of noise in the identification context.)

4.1. Data-matrix equations

The goal is to construct a state-space realization using the relationships in (29), but doing so requires a complete characterization of the row space of $H_r$. To this end, we expand a finite slice of the future output vector to form a block-Hankel matrix of output data with $r$ block rows,

$$Y = \begin{bmatrix} y_0 & y_1 & y_2 & \cdots & y_L \\ y_1 & y_2 & y_3 & \cdots & y_{L+1} \\ y_2 & y_3 & y_4 & \cdots & y_{L+2} \\ \vdots & \vdots & \vdots & & \vdots \\ y_{r-1} & y_r & y_{r+1} & \cdots & y_{r+L-1} \end{bmatrix}.$$

This matrix is related to a block-Hankel matrix of future input data

$$U_f = \begin{bmatrix} u_0 & u_1 & u_2 & \cdots & u_L \\ u_1 & u_2 & u_3 & \cdots & u_{L+1} \\ u_2 & u_3 & u_4 & \cdots & u_{L+2} \\ \vdots & \vdots & \vdots & & \vdots \\ u_{r-1} & u_r & u_{r+1} & \cdots & u_{r+L-1} \end{bmatrix},$$

a block-Toeplitz matrix of past input data

$$U_p = \begin{bmatrix} u_{-1} & u_0 & u_1 & \cdots & u_{L-1} \\ u_{-2} & u_{-1} & u_0 & \cdots & u_{L-2} \\ u_{-3} & u_{-2} & u_{-1} & \cdots & u_{L-3} \\ \vdots & \vdots & \vdots & & \vdots \end{bmatrix},$$

a finite-dimensional block-Toeplitz matrix

$$T = \begin{bmatrix} g_0 & & & & \\ g_1 & g_0 & & & \\ g_2 & g_1 & g_0 & & \\ \vdots & \vdots & & \ddots & \\ g_{r-1} & g_{r-2} & g_{r-3} & \cdots & g_0 \end{bmatrix},$$

the system Hankel matrix $H$, and a block-Hankel matrix $V$ formed from noise data $v_k$ with the same indices as $Y$, by the equation

$$Y = H U_p + T U_f + V. \tag{35}$$

If the entries of $Y$ are shifted forward by one index to form

$$\bar Y = \begin{bmatrix} y_1 & y_2 & y_3 & \cdots & y_{L+1} \\ y_2 & y_3 & y_4 & \cdots & y_{L+2} \\ y_3 & y_4 & y_5 & \cdots & y_{L+3} \\ \vdots & \vdots & \vdots & & \vdots \\ y_r & y_{r+1} & y_{r+2} & \cdots & y_{r+L} \end{bmatrix},$$

then $\bar Y$ is related to the shifted system Hankel matrix $\bar H$, the past input data $U_p$, $T$ with a block column appended on the left and $U_f$ with a block row appended at the bottom,

$$\bar T = \begin{bmatrix} \begin{matrix} g_1 \\ g_2 \\ \vdots \\ g_r \end{matrix} & T \end{bmatrix}, \qquad \bar U_f = \begin{bmatrix} U_f \\ u_r \;\; u_{r+1} \;\; u_{r+2} \;\; \cdots \;\; u_{r+L} \end{bmatrix},$$

and a block-Hankel matrix $\bar V$ of noise data $v_k$ with the same indices as $\bar Y$, by the equation

$$\bar Y = \bar H U_p + \bar T \bar U_f + \bar V. \tag{36}$$

From (25), the state is equal to the column vectors of $U_p$ multiplied by the entries of the controllability matrix $\mathcal{C}$, which we may represent as the block matrix

$$X = \begin{bmatrix} x_0 & x_1 & x_2 & \cdots & x_L \end{bmatrix} = \mathcal{C} U_p,$$

which is an alternative means of representing the memory of the system at samples 0, 1, .... The two data-matrix equations (35) and (36) may then be written as

$$Y = \mathcal{O}_r X + T U_f + V, \tag{37}$$

$$\bar Y = \mathcal{O}_r A X + \bar T \bar U_f + \bar V. \tag{38}$$

Equation (37) is the basis for the field of system identification methods known as subspace methods. Subspace identification methods typically fall into one of two categories. First, because a shifted observability matrix

$$\bar{\mathcal{O}} = \begin{bmatrix} CA \\ CA^2 \\ CA^3 \\ \vdots \end{bmatrix},$$

satisfies

$$\operatorname{im}(\mathcal{O}) = \operatorname{im}(\bar{\mathcal{O}}),$$

where $\operatorname{im}(\cdot)$ denotes the row space (often called the “image”), the row space of $\mathcal{O}$ is shift-invariant, and $A$ may be identified from estimates $\hat{\mathcal{O}}_r$ and $\hat{\bar{\mathcal{O}}}_r$ as

$$\hat A = \hat{\mathcal{O}}_r^{\dagger} \hat{\bar{\mathcal{O}}}_r.$$

Alternatively, because a forward-propagated sequence of states

$$\bar X = A X$$

satisfies

$$\operatorname{im}(X^T) = \operatorname{im}(\bar X^T),$$

the column space of $X$ is shift-invariant, and $A$ may be identified from estimates $\hat X$ and $\hat{\bar X}$ as

$$\hat A = \hat{\bar X} \hat{X}^{\dagger}.$$

In both instances, the system dynamics are estimated by propagating the indices forward by one step and examining a propagation of linear dynamics, not unlike (15) from Kronecker's theorem. Details of these methods may be found in [16] and [17]. In the next section we present a system identification method that constructs system estimates from the shift-invariant structure of $Y$ itself.

4.2. Identification from shift-invariance of output measurements

Equations (37) and (38) still contain the effects of the future input in $\bar U_f$. To remove these effects from the output, we must first add some assumptions about $\bar U_f$. First, we assume that $\bar U_f$ has full row rank. This is true for any $U_f$ with a smooth frequency response or if $\bar U_f$ is generated from some pseudo-random sequence. Next, we assume that the initial conditions in $X$ do not somehow cancel out the effects of future input. A sufficient condition for this is to require the stacked matrix

$$\begin{bmatrix} X \\ \bar U_f \end{bmatrix}$$

to have full row rank $n + (r+1)n_u$. Although these assumptions might appear restrictive at first, and they are impossible to verify without knowledge of $X$, they hold generally, with the exception of some pathological cases.

Next, we form the null-space projector matrix

$$\Pi = I_{L+1} - \bar U_f^T \left( \bar U_f \bar U_f^T \right)^{-1} \bar U_f, \tag{39}$$

which has the property

$$\bar U_f \Pi = 0.$$

We know the inverse of $(\bar U_f \bar U_f^T)$ exists, since we assume $\bar U_f$ has full row rank. Projector matrices such as (39) have many interesting properties. Their eigenvalues are all 0 or 1, and if they are symmetric, they separate the space of real vectors — in this case, vectors in $\mathbb{R}^{L+1}$ — into a subspace and its orthogonal complement. In fact, it is simple to verify that the null space of $\bar U_f$ is contained in the null space of $U_f$, since

$$\bar U_f \Pi = U_f \Pi = 0.$$

Thus multiplication of (37) and (38) on the right by $\Pi$ results in

$$Y \Pi = \mathcal{O}_r X \Pi + V \Pi, \tag{40}$$

$$\bar Y \Pi = \mathcal{O}_r A X \Pi + \bar V \Pi. \tag{41}$$

It is also unnecessary to compute the projected products $Y\Pi$ and $\bar Y\Pi$ directly, since from the QR-decomposition

$$\begin{bmatrix} \bar U_f^T & Y^T \end{bmatrix} = \begin{bmatrix} Q_1 & Q_2 \end{bmatrix} \begin{bmatrix} R_{11} & R_{12} \\ 0 & R_{22} \end{bmatrix},$$

we have

$$Y = R_{12}^T Q_1^T + R_{22}^T Q_2^T \tag{42}$$

and $\bar U_f = R_{11}^T Q_1^T$. Substitution into (39) reveals

$$\Pi = I - Q_1 Q_1^T. \tag{43}$$

Because the columns of $Q_1$ and $Q_2$ are orthogonal, multiplication of (42) on the right by (43) results in

$$Y \Pi = R_{22}^T Q_2^T.$$

A similar result holds for $\bar Y \Pi$. Taking the QR-decomposition of the data can alternatively be thought of as using the principle of superposition to construct new sequences of input-output data through a Gram-Schmidt-type orthogonalization process. A detailed discussion of this interpretation can be found in [18].
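The shortcut is easy to verify numerically. The sketch below compares the QR route with the explicitly formed projector (39) on random stand-in data matrices of illustrative size:

```python
# Compute Y*Pi via the QR-decomposition (42)-(43) and verify against (39).
import numpy as np

rng = np.random.default_rng(1)
r, L = 3, 200
Uf = rng.standard_normal((r, L + 1))     # stand-in for \bar U_f
Y  = rng.standard_normal((r, L + 1))     # stand-in for Y

Q, R = np.linalg.qr(np.hstack([Uf.T, Y.T]))        # reduced QR
Q1, Q2 = Q[:, :r], Q[:, r:]
R22 = R[r:, r:]
YPi_qr = R22.T @ Q2.T                    # Y*Pi without forming Pi

Pi = np.eye(L + 1) - Uf.T @ np.linalg.solve(Uf @ Uf.T, Uf)   # projector (39)
print(np.allclose(YPi_qr, Y @ Pi))       # True, up to rounding
```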

Thus we have successfully removed the effects of future input on the output while retaining the effects of the past, which is the foundation of the realization process. We still must account for non-deterministic effects in $V$ and $\bar V$. To do so, we look for some matrix $Z$ such that

$$V Z^T \rightarrow 0, \tag{44}$$

$$\bar V Z^T \rightarrow 0 \tag{45}$$

as $L \rightarrow \infty$. This requires the content of $Z$ to be statistically independent of the process that generates $v_k$. The input $u_k$ is just such a signal, so long as the filter output is not a function of the input — that is, the data was measured in open-loop operation. If we begin measuring input before $k = 0$ at some sample $k = -\zeta$ and construct $Z$ as a block-Hankel matrix of past input data,

$$Z = \frac{1}{L} \begin{bmatrix} u_{-\zeta} & u_{-\zeta+1} & u_{-\zeta+2} & \cdots & u_{-\zeta+L} \\ u_{-\zeta+1} & u_{-\zeta+2} & u_{-\zeta+3} & \cdots & u_{-\zeta+L+1} \\ u_{-\zeta+2} & u_{-\zeta+3} & u_{-\zeta+4} & \cdots & u_{-\zeta+L+2} \\ \vdots & \vdots & \vdots & & \vdots \\ u_{-1} & u_0 & u_1 & \cdots & u_{L-1} \end{bmatrix},$$

then multiplication of (40) and (41) on the right by $Z^T$ results in

$$Y \Pi Z^T \rightarrow \mathcal{O}_r X \Pi Z^T, \tag{46}$$

$$\bar Y \Pi Z^T \rightarrow \mathcal{O}_r A X \Pi Z^T \tag{47}$$

as $L \rightarrow \infty$. Note that the factor $\frac{1}{L}$ in $Z$ is necessary to keep (46) and (47) bounded.

Finally we are able to perform our rank-reduction technique directly on measured data without needing to first estimate the impulse response. From the SVD

$$Y \Pi Z^T = U \Sigma V^T,$$

we may estimate the order n by looking for a sudden decrease in singular values. From the partitioning

$$Y \Pi Z^T = \begin{bmatrix} U_n & U_s \end{bmatrix} \begin{bmatrix} \Sigma_n & 0 \\ 0 & \Sigma_s \end{bmatrix} \begin{bmatrix} V_n^T \\ V_s^T \end{bmatrix},$$

we may estimate $\mathcal{O}_r$ and $X \Pi Z^T$ from the factorization

$$\hat{\mathcal{O}}_r = U_n \Sigma_n^{1/2} \qquad \text{and} \qquad \hat X \Pi Z^T = \Sigma_n^{1/2} V_n^T.$$

A may then be estimated as

$$\hat A = \hat{\mathcal{O}}_r^{\dagger}\, \bar Y \Pi Z^T \left( \hat X \Pi Z^T \right)^{\dagger} = \Sigma_n^{-1/2} U_n^T\, \bar Y \Pi Z^T\, V_n \Sigma_n^{-1/2} \approx \mathcal{O}_r^{\dagger}\, \bar H U_p \Pi Z^T \left( \mathcal{C}_L U_p \Pi Z^T \right)^{\dagger} \approx \mathcal{O}_r^{\dagger} \bar H \mathcal{C}_L^{\dagger}.$$

And so we have returned to our original relationship (29).

While $C$ may be estimated from the top block row of $\hat{\mathcal{O}}_r$, our projection has lost the column space of $H_r$ that we previously used to estimate $B$, and initial conditions in $X$ prevent us from estimating $D$ directly. Fortunately, if $A$ and $C$ are known, then the remaining terms $B$, $D$, and an initial condition $x_0$ are linear in the input-output data and may be estimated by solving a linear least-squares problem.
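An end-to-end sketch of this section is given below: open-loop data are simulated from the assumed second-order example, the data matrices are formed, future input is projected out via QR, the result is correlated with past input, and $A$ is estimated from the SVD. All dimensions and noise levels are illustrative assumptions:

```python
# Subspace-style estimation of A from input-output data (Section 4.2 sketch).
import numpy as np
from scipy.linalg import hankel

rng = np.random.default_rng(2)
A = np.array([[1.5, -0.7], [1.0, 0.0]]); B = np.array([1.0, 0.0])
C = np.array([1.0, 0.0]); D = 0.0
n, r, zeta, L = 2, 10, 10, 2000

N = zeta + r + L + 2
u = rng.standard_normal(N)
v = 0.02 * rng.standard_normal(N)              # measurement noise
x, y = np.zeros(2), np.zeros(N)
for k in range(N):
    y[k] = C @ x + D * u[k] + v[k]
    x = A @ x + B * u[k]

def blkhank(w, start, rows, cols):             # Hankel: entry (i,j) = w[start+i+j]
    return hankel(w[start:start + rows], w[start + rows - 1:start + rows - 1 + cols])

Y    = blkhank(y, zeta, r, L + 1)              # "future" output; k = 0 at sample zeta
Ybar = blkhank(y, zeta + 1, r, L + 1)          # shifted output
Ufb  = blkhank(u, zeta, r + 1, L + 1)          # \bar U_f
Z    = blkhank(u, 0, zeta, L + 1) / L          # past input, scaled by 1/L

Q, _ = np.linalg.qr(Ufb.T)
Pi = np.eye(L + 1) - Q @ Q.T                   # projector (43)

U, s, Vt = np.linalg.svd(Y @ Pi @ Z.T)
Un, Sn, Vn = U[:, :n], s[:n], Vt[:n, :]
Ahat = (Un.T @ (Ybar @ Pi @ Z.T) @ Vn.T) / np.sqrt(np.outer(Sn, Sn))
print(np.abs(np.linalg.eigvals(Ahat)))         # ~ sqrt(0.7) for both eigenvalues
```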

4.3. Estimation of B, D, and x0

The input-to-state term $B$ and feed-through term $D$ may be estimated by examining the convolution written with the state-space form of the impulse response. Expanding (24) with the input and including an initial condition $x_0$ results in

$$y_k = CA^k x_0 + \sum_{j=0}^{k-1} CA^{k-j-1}B u_j + D u_k + v_k. \tag{48}$$

Factoring out B and D on the right provides

$$y_k = CA^k x_0 + \left( \sum_{j=0}^{k-1} u_j^T \otimes CA^{k-j-1} \right) \operatorname{vec}(B) + \left( u_k^T \otimes I_{n_y} \right) \operatorname{vec}(D) + v_k,$$

in which $\operatorname{vec}(\cdot)$ is the operation that stacks the columns of a matrix on one another, $\otimes$ is the (coincidentally named) Kronecker product, and we have made use of the identity

$$\operatorname{vec}(AXB) = (B^T \otimes A) \operatorname{vec}(X).$$

Grouping the unknown terms together results in

$$y_k = \begin{bmatrix} CA^k & \sum_{j=0}^{k-1} u_j^T \otimes CA^{k-j-1} & u_k^T \otimes I_{n_y} \end{bmatrix} \begin{bmatrix} x_0 \\ \operatorname{vec}(B) \\ \operatorname{vec}(D) \end{bmatrix} + v_k.$$

Thus by forming the regressor

$$\varphi_k = \begin{bmatrix} \hat C \hat A^k & \sum_{j=0}^{k-1} u_j^T \otimes \hat C \hat A^{k-j-1} & u_k^T \otimes I_{n_y} \end{bmatrix}$$

from the estimates $\hat A$ and $\hat C$, estimates of $B$ and $D$ may be found from the least-squares solution of the linear system of $N$ equations

$$\begin{bmatrix} y_0 \\ y_1 \\ y_2 \\ \vdots \\ y_N \end{bmatrix} = \begin{bmatrix} \varphi_0 \\ \varphi_1 \\ \varphi_2 \\ \vdots \\ \varphi_N \end{bmatrix} \begin{bmatrix} \hat x_0 \\ \operatorname{vec}(\hat B) \\ \operatorname{vec}(\hat D) \end{bmatrix}.$$

Note that $N$ is arbitrary and need not be related to the indices of the data-matrix equations. This can be useful, since for large-dimensional systems, the regressor $\varphi_k$ may become very expensive to compute.
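A sketch of this least-squares step for the single-input, single-output case follows; `estimate_B_D_x0` is a hypothetical helper, and `Ahat` and `Chat` are assumed to come from one of the realization procedures above:

```python
# Least-squares estimation of x0, B, and D given estimates of A and C.
import numpy as np

def estimate_B_D_x0(Ahat, Chat, u, y):
    n, N = Ahat.shape[0], len(y)
    Phi = np.zeros((N, 2 * n + 1))     # columns: [C A^k | sum_j u_j C A^{k-j-1} | u_k]
    CA, Ak = [], np.eye(n)             # cache C A^m for m = 0, 1, ...
    for k in range(N):
        CA.append(Chat @ Ak)
        Phi[k, :n] = CA[-1]                                   # multiplies x0
        Phi[k, n:2 * n] = sum(u[j] * CA[k - j - 1] for j in range(k))  # vec(B)
        Phi[k, -1] = u[k]                                     # vec(D), n_y = n_u = 1
        Ak = Ak @ Ahat
    theta, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return theta[:n], theta[n:2 * n], theta[-1]               # x0, B, D

# e.g.: x0, Bhat, Dhat = estimate_B_D_x0(Ahat, Chat, u[:200], y[:200])
```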


5. Conclusion

Beginning with the construction of a transfer function from an impulse response, we have constructed a method for identification of state-space realizations of linear filters from measured input-output data, introducing the fundamental concepts of realization theory of linear systems along the way. Computing a state-space realization from measured input-output data requires many tools of linear algebra: projections and the QR-decomposition, rank reduction and the singular-value decomposition, and linear least squares. The principles of realization theory provide insight into the different representations of linear systems, as well as the role of rational functions and series expansions in linear algebra.

References

1. Rudolf E. Kalman. On the General Theory of Control Systems. In Proceedings of the First International Congress of Automatic Control, Moscow, 1960. IRE.
2. Rudolf E. Kalman. Mathematical Description of Linear Dynamical Systems. Journal of the Society for Industrial and Applied Mathematics, Series A: Control, 1(2):152, July 1963.
3. B. L. Ho and Rudolf E. Kalman. Effective construction of linear state-variable models from input/output functions. Regelungstechnik, 14:545–548, 1966.
4. Leopold Kronecker. Zur Theorie der Elimination einer Variabeln aus zwei Algebraischen Gleichungen. Monatsberichte der Königlich Preussischen Akademie der Wissenschaften zu Berlin, pages 535–600, 1881.
5. Paul H. Zeiger and Julia A. McEwen. Approximate Linear Realizations of Given Dimension via Ho's Algorithm. IEEE Transactions on Automatic Control, 19(2):153, 1974.
6. Sun-Yuan Kung. A new identification and model reduction algorithm via singular value decomposition. In Proceedings of the 12th Asilomar Conference on Circuits, Systems, and Computers, pages 705–714. IEEE, 1978.
7. Michel Verhaegen. Identification of the deterministic part of MIMO state space models given in innovations form from input-output data. Automatica, 30(1):61–74, January 1994.
8. Peter Van Overschee and Bart De Moor. N4SID: Subspace algorithms for the identification of combined deterministic-stochastic systems. Automatica, 30(1):75–93, January 1994.
9. Jer-Nan Juang and R. S. Pappa. An Eigensystem Realization Algorithm (ERA) for Modal Parameter Identification and Model Reduction. JPL Proceedings of the Workshop on Identification and Control of Flexible Space Structures, 3:299–318, April 1985.
10. Jer-Nan Juang, Minh Q. Phan, Lucas G. Horta, and Richard W. Longman. Identification of Observer/Kalman Filter Markov Parameters: Theory and Experiments. Journal of Guidance, Control, and Dynamics, 16(2):320–329, 1993.
11. Chi-Tsong Chen. Linear System Theory and Design. Oxford University Press, New York, 1st edition, 1984.
12. Felix R. Gantmacher. The Theory of Matrices, Volume Two. Chelsea Publishing Company, New York, 1960.
13. Kemin Zhou, John C. Doyle, and Keith Glover. Robust and Optimal Control. Prentice Hall, August 1995.
14. G. H. Golub and C. F. Van Loan. Matrix Computations. The Johns Hopkins University Press, Baltimore, Maryland, USA, 3rd edition, 1996.
15. Lennart Ljung. System Identification: Theory for the User. Prentice Hall PTR, Upper Saddle River, NJ, 2nd edition, 1999.
16. Michel Verhaegen and Vincent Verdult. Filtering and System Identification: A Least Squares Approach. Cambridge University Press, New York, 1st edition, May 2007.
17. Peter Van Overschee and Bart De Moor. Subspace Identification for Linear Systems: Theory, Implementation, Applications. Kluwer Academic Publishers, London, 1996.
18. Tohru Katayama. Role of LQ Decomposition in Subspace Identification Methods. In Alessandro Chiuso, Stefano Pinzoni, and Augusto Ferrante, editors, Modeling, Estimation and Control, pages 207–220. Springer Berlin / Heidelberg, 2007.
