Open access

Identifiability of Quantized Linear Systems

Written By

Ying Shen and Hui Zhang

Submitted: 08 February 2012 Published: 28 November 2012

DOI: 10.5772/39274

From the Edited Volume

Stochastic Modeling and Control

Edited by Ivan Ganchev Ivanov


1. Introduction

On-line parameter identification is a key problem in adaptive control and also an important part of the self-tuning regulator (STR) (Astrom & Wittenmark, 1994). In widely deployed large-scale systems, distributed systems and remote systems, the plants, controllers, actuators and sensors are connected by communication channels which possess only finite communication capability due to, e.g., data loss, bandwidth constraints, and access constraints. From a heuristic analysis perspective, the existence of communication constraints complicates otherwise well-understood control problems, including traditional methods such as $H_\infty$ control (Fu & Xie, 2005), and even basic theoretic notions such as stability (De Persis & Mazenc, 2010).

Due to the constraints of the communication channels, it is difficult to transmit data with infinite precision. Quantization is an effective way of reducing the use of transmission resources and thereby meeting the bandwidth constraint of the communication channels. However, quantization is a lossy compression method, and hence the performance of parameter identification, and even the validity or effectiveness of identification, may be changed by quantization, along with which the performance of adaptive control may deteriorate. This has attracted considerable research attention. The problem of system identification with quantized observations was investigated in (Wang et al., 2003, 2008, 2010), covering issues of optimal identification errors, time complexity, optimal input design, and the impact of disturbances and unmodeled dynamics on identification accuracy and complexity.

In the light of the fundamental effect of quantization on system identification, it is necessary to pay attention to the parameter identifiability of quantized systems. The concept of identifiability has been defined by a maximal information criterion in (Durgaryan & Pashchenko, 2001): the system is parameter identifiable by the maximal information criterion if the mutual information between the actual output and the model output is greater than zero. However, the identifiability concept of (Durgaryan & Pashchenko, 2001) is defined only in principle, and no practical result is based on it. References (Zhang, 2003; Zhang & Sun, 1996; Baram & Kailath, 1988) have discussed the problem of state estimability, which is closely related to parameter identifiability, since the input-output description of linear systems with Gauss-Markov parameters can be transformed into a state space model, and the problem of parameter identifiability can then be treated as state estimability. Reference (Zhang, 2003) proposed a definition of parameter identifiability under the criterion of minimum mean error entropy (MMEE) estimation, referring to the definition of state estimability, and also obtained some useful conclusions. Reference (Wang & Zhang, 2011) studied the parameter identifiability of linear systems under access constraints. However, there is little work on this topic for quantized systems.

This paper mainly analyzes the parameter identifiability of quantized linear systems with Gauss-Markov parameters from an information theoretic point of view. The definition of parameter identifiability proposed in (Zhang, 2003) is reviewed: the linear system with Gauss-Markov parameters is identifiable if and only if the mutual information between the actual values and the estimates of the parameters is greater than zero. This definition is extended to quantized systems by considering the intrinsic properties of the system. Then the parameter identifiability of linear systems with quantized outputs is analyzed, and a criterion of parameter identifiability is obtained based on the measure of mutual information. Furthermore, the convergence property of the quantized parameter identifiability Gramian is analyzed.

The rest of the paper is organized as follows. Section 2 introduces the model of interest; Section 3 discusses the existing definition of parameter identifiability, proposes our new definition, and gives analytic conclusions for quantized linear systems with Gauss-Markov parameters; the convergence property of the parameter identifiability Gramian for quantized systems is discussed in Section 4; Sections 5 and 6 present an illustrative simulation and the conclusion, respectively.


2. System description

Consider the following SISO linear system expressed by an Auto-Regressive Moving Average (ARMA) model (Astrom & Wittenmark, 1994):

$$y(k)+a_1(k)y(k-1)+\cdots+a_n(k)y(k-n)=b_1(k)u(k-1)+\cdots+b_m(k)u(k-m)+e(k) \tag{1}$$

where $u(k)$ and $y(k)$ are the input and output of the system, respectively; the stochastic noise sequence $\{e(k)\}$ is Gaussian and white with zero mean and covariance $R$, and is uncorrelated with $y(k), y(k-1),\ldots$ and $u(k), u(k-1),\ldots$. Let

$$\theta(k)=\left[\,b_1(k)\ \cdots\ b_m(k)\ \ a_1(k)\ \cdots\ a_n(k)\,\right]^T \tag{2}$$

be the parameter vector, where $(\cdot)^T$ denotes transposition, and let

$$F(k)=\left[\,u(k-1)\ \cdots\ u(k-m)\ \ -y(k-1)\ \cdots\ -y(k-n)\,\right]^T \tag{3}$$

then system (1) can be described as

$$y(k)=F^T(k)\theta(k)+e(k) \tag{4}$$

Suppose that the parameter $\theta(k)$ can be modeled by a Gauss-Markov process, i.e.,

$$\theta(k+1)=A\theta(k)+Bw(k) \tag{5}$$

where $A$, $B$ are known matrices with appropriate dimensions; the noise sequence $\{w(k)\}$ is Gaussian and white with zero mean and covariance $Q$; the initial value of the parameter, $\theta(0)$, is Gaussian with mean $\bar\theta$ and covariance $\Pi(0)$. Suppose that $e(k)$, $w(k)$ and $\theta(0)$ are mutually uncorrelated. Hence, the linear system (1) with Gauss-Markov parameters can be described by (4) and (5), i.e.,

$$\begin{cases}\theta(k+1)=A\theta(k)+Bw(k)\\ y(k)=F^T(k)\theta(k)+e(k)\end{cases} \tag{6}$$
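To make the model concrete, the following is a minimal simulation sketch of system (6) for illustrative purposes; the dimensions, the regressor construction and the specific numerical values (n, m, A, B, Q, R) are assumptions chosen here for illustration and are not taken from the chapter.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions and model data (assumed, not from the chapter)
n, m = 2, 2                      # numbers of a- and b-parameters
dim = n + m                      # dimension of theta(k)
A = 0.9 * np.eye(dim)            # parameter transition matrix in (5)
B = np.eye(dim)
Q = 0.01 * np.eye(dim)           # covariance of w(k)
R = 0.1                          # covariance of e(k)

def simulate(K):
    """Simulate system (6): theta(k+1) = A theta(k) + B w(k), y(k) = F^T(k) theta(k) + e(k)."""
    theta = rng.multivariate_normal(np.ones(dim), np.eye(dim))   # theta(0) ~ N(theta_bar, Pi(0))
    u = 2 * np.sin(np.arange(-m, K)) + 3                          # an assumed persistently exciting input
    y = np.zeros(K + n)                                           # outputs, zeros before k = 0
    ys, thetas = [], []
    for k in range(K):
        # regressor F(k) = [u(k-1) ... u(k-m)  -y(k-1) ... -y(k-n)]^T, cf. (3)
        F = np.concatenate([u[k + m - 1 - np.arange(m)],
                            -y[k + n - 1 - np.arange(n)]])
        yk = F @ theta + np.sqrt(R) * rng.standard_normal()       # output equation (4)
        y[k + n] = yk
        ys.append(yk); thetas.append(theta.copy())
        theta = A @ theta + B @ rng.multivariate_normal(np.zeros(dim), Q)  # parameter equation (5)
    return np.array(ys), np.array(thetas)

y_seq, theta_seq = simulate(50)
print(y_seq[:5])
```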

Due to the bandwidth constraint of the channel, quantization is required. The discussion in the present paper does not focus on a special quantizer, but on general N-level quantization (Curry, 1970; Gray & Neuhoff, 1998), which can be described as:

$$y_q(k)=Q(y(k))=z_l,\quad \text{for } y(k)\in\delta_l,\ \ l=1,2,\ldots,N \tag{7}$$

where $Q(\cdot)$ is the general quantizer, $y_q(k)\in\{z_1,z_2,\ldots,z_N\}$ denotes the quantizer output with $z_l,\ l=1,\ldots,N$, as the reproduction values; $\delta_l=(a_l,a_{l+1}],\ l=1,2,\ldots,N$, denote the quantization intervals, where $\{a_i\}_{i=1}^{N+1}$, $-\infty=a_1<a_2<\cdots<a_{N+1}=+\infty$, are the thresholds of the quantizer.

The channel is assumed to be lossless. $y_q(k)\in\{z_1,z_2,\ldots,z_N\}$ is transmitted and then received at the channel receiver. The $z_i,\ i=1,2,\ldots,N$, are symbols denoting the $i$th quantization interval and are not necessarily real numbers; hence, further decoding is required, as follows

$$y_q^*(k)=D_k(y_q(k)) \tag{8}$$

where $D_k(\cdot)$ is assumed to be a one-to-one mapping. A common decoding method (Curry, 1970) is $D_k(y_q(k))=E\{y(k)\,|\,y_q(k)=z_l\},\ l=1,2,\ldots,N$,

where $E\{\cdot\}$ denotes the operation of expectation.
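As a concrete illustration of (7) and (8), the sketch below implements a general N-level quantizer and a conditional-mean decoder for a Gaussian measurement; the thresholds, the Gaussian parameters and the use of the exact truncated-normal mean are assumptions made here for illustration, not a prescription from the chapter.

```python
import numpy as np
from scipy.stats import norm

# Assumed 4-level quantizer thresholds (a_1 = -inf, ..., a_{N+1} = +inf)
a = np.array([-np.inf, -1.0, 0.0, 1.0, np.inf])
N = len(a) - 1

def quantize(y):
    """N-level quantizer (7): return the 0-based index l such that y in (a_l, a_{l+1}]."""
    return int(np.searchsorted(a, y, side='left')) - 1

def conditional_mean_decoder(l, mean, std):
    """Decoder (8) in the style of D_k(z_l) = E{y | y in (a_l, a_{l+1}]}
    for y ~ N(mean, std^2), using the truncated-normal mean."""
    alpha = (a[l] - mean) / std
    beta = (a[l + 1] - mean) / std
    denom = norm.cdf(beta) - norm.cdf(alpha)
    return mean + std * (norm.pdf(alpha) - norm.pdf(beta)) / denom

# Example: quantize and decode one Gaussian measurement
rng = np.random.default_rng(1)
y = 0.5 + 0.8 * rng.standard_normal()
l = quantize(y)
print(y, l, conditional_mean_decoder(l, mean=0.5, std=0.8))
```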


3. Parameter identifiability

3.1. Definition of parameter identifiability

Reference (Zhang, 2003) proposed the definition of parameter identifiability for system (6) by referring to the definition of state estimability under the MMEE criterion.

Let $\hat\theta(k)$ be the MMEE estimate of $\theta(k)$ based on $F(k)$, and let $\tilde\theta(k)=\theta(k)-\hat\theta(k)$ be the estimation error. Define the prior and posterior mean-square estimation error matrices respectively as
$$\Pi(k)=E\{(\theta(k)-\bar\theta(k))(\theta(k)-\bar\theta(k))^T\},\qquad P(k)=E\{\tilde\theta(k)\tilde\theta^T(k)\}$$

where $\bar\theta(k)$ is the mean of $\theta(k)$, i.e. $\bar\theta(k)=E\{\theta(k)\}$.

Definition 1 (Zhang, 2003): The linear system (1) with Gauss-Markov parameters (i.e. system (6)) is identifiable, if and only if

$$I(\theta(k);\hat\theta(k))>0,\quad k\ge n+m-1 \tag{9}$$

where $I(\cdot\,;\cdot)$ denotes mutual information.

Based on Definition 1, the following conclusion was obtained in (Zhang, 2003).

Lemma 1 (Zhang, 2003): The linear system (1) with Gauss-Markov parameters (i.e. system (6)) is identifiable, if and only if the identifiability Gramian

$$W_k^{id}=\sum_{j=0}^{k}A^{k-j}\Pi(j)F(j)F^T(j)\Pi(j)(A^{k-j})^T,\quad k\ge n+m-1 \tag{10}$$

has full rank, i.e. $\operatorname{rank}(W_k^{id})=n+m,\ k\ge n+m-1$.
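As an illustration of Lemma 1, the following sketch builds the identifiability Gramian of (10) from simulated regressors and checks its rank; the model data and the way $\Pi(j)$ is propagated (through the parameter equation (5) only) are assumptions made for this example.

```python
import numpy as np

def unquantized_gramian(A, Pi0, Q, B, F_seq):
    """Identifiability Gramian W_k^id of (10) for j = 0..k, with the prior covariance
    Pi(j) of theta(j) propagated through the parameter model (5)."""
    dim = A.shape[0]
    k = len(F_seq) - 1
    Pi = Pi0.copy()
    W = np.zeros((dim, dim))
    for j, F in enumerate(F_seq):
        Akj = np.linalg.matrix_power(A, k - j)
        W += Akj @ Pi @ np.outer(F, F) @ Pi @ Akj.T
        Pi = A @ Pi @ A.T + B @ Q @ B.T      # prior covariance recursion for theta(j+1)
    return W

# Assumed illustrative data: dim = n + m = 3, k = 10
rng = np.random.default_rng(2)
dim, k = 3, 10
A = 0.8 * np.eye(dim); B = np.eye(dim); Q = 0.05 * np.eye(dim); Pi0 = np.eye(dim)
F_seq = [rng.standard_normal(dim) for _ in range(k + 1)]   # regressors F(0..k)

W = unquantized_gramian(A, Pi0, Q, B, F_seq)
print("rank(W_k^id) =", np.linalg.matrix_rank(W), "of", dim)   # full rank => identifiable by Lemma 1
```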

In the present paper, we propose an alternative definition of parameter identifiability for the quantized system (6)-(8) from an information theoretic point of view, as follows.

Definition 2: The quantized linear system with Gauss-Markov parameters (6)-(8) is identifiable, if and only if

$$I(\theta(k);Y_q^{*k})>0,\quad k\ge n+m-1 \tag{11}$$

where

$$Y_q^{*k}=\left[\,y_q^*(0)\ \ y_q^*(1)\ \cdots\ y_q^*(k)\,\right]^T.$$

Remark 1:

  1. From information theory (Cover & Thomas, 2006), condition (11) is equivalent to

$$H(\theta(k))>H(\hat\theta(k)),\quad k\ge n+m-1 \tag{12}$$

where $H(\cdot)$ denotes entropy, i.e. the prior error entropy $H(\theta(k))$ is strictly greater than the posterior error entropy $H(\hat\theta(k))$;

  2. Definition 1 considers the mutual information between the actual values and the estimates of the parameters, so it relies on the estimation principle, while Definition 2 takes into account the intrinsic properties of the system, independent of the estimator used. If Definition 2 is adopted to analyze the unquantized system (6), the identifiability condition (11) turns into

$$I(\theta(k);Y^k)>0,\quad k\ge n+m-1 \tag{13}$$

where $Y^k=\left[\,y(0)\ \ y(1)\ \cdots\ y(k)\,\right]^T$, and condition (9) is equivalent to (13) for the linear Gaussian system (6) (Zhang, 2003); a standard Gaussian identity recalled below makes this connection explicit. Hence, in some sense, Definition 2 is more general than Definition 1.
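As background for the equivalence just cited, recall the standard identity for jointly Gaussian vectors (Cover & Thomas, 2006); it is quoted here only as a sketch of why conditions such as (13) compare prior and posterior uncertainty, and it is not part of the original derivation:

$$I(\theta(k);Y^k)=H(\theta(k))-H(\theta(k)\,|\,Y^k)=\frac12\ln\frac{\det\Pi(k)}{\det P(k)},$$

where, for the unquantized linear Gaussian system (6), $P(k)$ is the error covariance of the conditional-mean estimate. Hence $I(\theta(k);Y^k)>0$ holds exactly when the measurements strictly reduce the parameter uncertainty, i.e. $\det P(k)<\det\Pi(k)$.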

3.2. Identifiability analysis of quantized linear systems

Mutual information $I(\cdot\,;\cdot)$ (Cover & Thomas, 2006) is a measure of the amount of information shared by, and the statistical dependence between, two random variables; $I(\cdot\,;\cdot)\ge 0$, with equality if and only if the two random variables are independent. Therefore, condition (11) of the information theoretic Definition 2 indicates that the system is parameter identifiable if and only if no direction of the parameter space is orthogonal to all the past (quantized) measurements (Baram & Kailath, 1988), i.e., for all $g\in\mathbb{R}^{n+m}$, $g\ne 0$,

$$g^TE\{(\theta(k)-\bar\theta(k))(y_q^*(j)-\bar y_q^*(j))\}\ne 0,\quad \text{for some } j\le k,\ \ k\ge n+m-1 \tag{14}$$

where $\bar y_q^*(j)$ is the mean of $y_q^*(j)$, i.e. $\bar y_q^*(j)=E\{y_q^*(j)\}$.

From the Gauss-Markov property of the parameters, we have

$$\theta(k)=A^{k-j}\theta(j)+\sum_{i=0}^{k-j-1}A^{k-j-i-1}Bw(j+i) \tag{15}$$
$$\bar\theta(k)=A^{k-j}\bar\theta(j) \tag{16}$$

where $\sum_{i=0}^{-1}A^{-i-1}Bw(k+i)=0$ when $j=k$; then

$$\begin{aligned}E\{(\theta(k)-\bar\theta(k))(y_q^*(j)-\bar y_q^*(j))\}&=E\{(\theta(k)-\bar\theta(k))y_q^*(j)\}\\ &=E\Big\{\Big[A^{k-j}(\theta(j)-\bar\theta(j))+\sum_{i=0}^{k-j-1}A^{k-j-i-1}Bw(j+i)\Big]y_q^*(j)\Big\}\\ &=A^{k-j}E\{(\theta(j)-\bar\theta(j))y_q^*(j)\}\\ &=A^{k-j}E\{\theta'(j)y_q^*(j)\}\end{aligned} \tag{17}$$

where $\theta'(j)=\theta(j)-\bar\theta(j)$. Let $\theta'\triangleq\theta'(j)$, $y_q^*\triangleq y_q^*(j)$, and omit the time variable $j$ of the other relevant time-variant quantities (e.g. $F(j)$, $\Pi(j)$) for notational simplicity; then

$$\begin{aligned}E\{\theta'(j)y_q^*(j)\}\triangleq E\{\theta'y_q^*\}&=\sum_{l=1}^{N}\int_{\theta'\in\mathbb{R}^{n+m}}\theta'\,D_j(z_l)\,p(\theta',y_q^*=D_j(z_l))\,d\theta'\\ &=\sum_{l=1}^{N}D_j(z_l)\int_{\theta'\in\mathbb{R}^{n+m}}\theta'\,p(\theta',y_q^*=D_j(z_l))\,d\theta'\end{aligned} \tag{18}$$

By Bayes' rule,

$$\begin{aligned}p(\theta',y_q^*=D_j(z_l))&=p(y_q^*=D_j(z_l)\,|\,\theta')\,p(\theta')\\ &=p(a_l<y\le a_{l+1}\,|\,\theta')\,p(\theta')\\ &=p(a_l<F^T\theta+e\le a_{l+1}\,|\,\theta')\,p(\theta')\\ &=p(a_l<F^T(\theta'+\bar\theta)+e\le a_{l+1}\,|\,\theta')\,p(\theta')\\ &=p(a_l-F^T\bar\theta-F^T\theta'<e\le a_{l+1}-F^T\bar\theta-F^T\theta'\,|\,\theta')\,p(\theta')\\ &=p(a_l-F^T\bar\theta-F^T\theta'<e\le a_{l+1}-F^T\bar\theta-F^T\theta')\,p(\theta')\end{aligned} \tag{19}$$

where the last equality is based on the fact that $\theta'(j)$ and $e(j)$ are stochastically independent; then

$$\begin{aligned}\int_{\theta'\in\mathbb{R}^{n+m}}\theta'\,p(\theta',y_q^*=D_j(z_l))\,d\theta'&=\int_{\theta'\in\mathbb{R}^{n+m}}\theta'\,p(a_l-F^T\bar\theta-F^T\theta'<e\le a_{l+1}-F^T\bar\theta-F^T\theta')\,p(\theta')\,d\theta'\\ &=\int_{\theta'\in\mathbb{R}^{n+m}}\theta'\left(T\!\left(\frac{a_{l+1}-F^T\bar\theta-F^T\theta'}{\sqrt{R}}\right)-T\!\left(\frac{a_l-F^T\bar\theta-F^T\theta'}{\sqrt{R}}\right)\right)G(\theta';0,\Pi)\,d\theta'\end{aligned} \tag{20}$$

where $T(\cdot)$ is the cumulative distribution function of the standard normal distribution (note that $T(\cdot)$ is different from the tail function defined in (Ribeiro et al., 2006; You et al., 2011)), and $G(\theta';0,\Pi)$ denotes the Gaussian probability density function of the stochastic vector $\theta'$ with zero mean and covariance $\Pi$. Following a similar line of argument as in (Ribeiro et al., 2006; You et al., 2011), we calculate the integral (20) and then obtain

$$\int_{\theta'\in\mathbb{R}^{n+m}}\theta'\,p(\theta',y_q^*=D_j(z_l))\,d\theta'=\left(\phi\!\left(\frac{a_l-F^T\bar\theta}{\sqrt{F^T\Pi F+R}}\right)-\phi\!\left(\frac{a_{l+1}-F^T\bar\theta}{\sqrt{F^T\Pi F+R}}\right)\right)\frac{\Pi F}{\sqrt{F^T\Pi F+R}} \tag{21}$$

where $\phi(\cdot)$ is the probability density function of the standard normal distribution. Substituting (21) into (18), we have

$$E\{\theta'(j)y_q^*(j)\}=\sum_{l=1}^{N}\frac{D_j(z_l)}{\sqrt{F^T(j)\Pi(j)F(j)+R}}\left(\phi\!\left(\frac{a_l-F^T(j)\bar\theta(j)}{\sqrt{F^T(j)\Pi(j)F(j)+R}}\right)-\phi\!\left(\frac{a_{l+1}-F^T(j)\bar\theta(j)}{\sqrt{F^T(j)\Pi(j)F(j)+R}}\right)\right)\Pi(j)F(j). \tag{22}$$

Combining (22) with (17) and (14), we find that $g^TE\{(\theta(k)-\bar\theta(k))(y_q^*(j)-\bar y_q^*(j))\}\ne 0$ is equivalent to

$$\begin{gathered}g^T\sum_{l=1}^{N}\frac{D_j(z_l)}{\sqrt{F^T(j)\Pi(j)F(j)+R}}\left(\phi\!\left(\frac{a_l-F^T(j)\bar\theta(j)}{\sqrt{F^T(j)\Pi(j)F(j)+R}}\right)-\phi\!\left(\frac{a_{l+1}-F^T(j)\bar\theta(j)}{\sqrt{F^T(j)\Pi(j)F(j)+R}}\right)\right)A^{k-j}\Pi(j)F(j)\ne 0,\\ \text{for some } j\le k,\quad k\ge n+m-1. \end{gathered} \tag{23}$$

Let

$$\psi(j)\triangleq\sum_{l=1}^{N}\frac{D_j(z_l)}{\sqrt{F^T(j)\Pi(j)F(j)+R}}\left(\phi\!\left(\frac{a_l-F^T(j)\bar\theta(j)}{\sqrt{F^T(j)\Pi(j)F(j)+R}}\right)-\phi\!\left(\frac{a_{l+1}-F^T(j)\bar\theta(j)}{\sqrt{F^T(j)\Pi(j)F(j)+R}}\right)\right) \tag{24}$$

Then (23) is equivalent to the requirement that

$$W_k^{idq}=\sum_{j=0}^{k}\psi^2(j)A^{k-j}\Pi(j)F(j)F^T(j)\Pi(j)(A^{k-j})^T,\quad k\ge n+m-1 \tag{25}$$

has full rank. We conclude the above analysis as follows.

Theorem 1: The quantized linear system with Gauss-Markov parameters (6)-(8) is parameter identifiable, if and only if

$$\operatorname{rank} W_k^{idq}=n+m,\quad k\ge n+m-1 \tag{26}$$
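The following sketch turns Theorem 1 into a numerical check: it evaluates the weights $\psi(j)$ of (24) for a given quantizer and accumulates the Gramian $W_k^{idq}$ of (25); the model data, the quantizer thresholds and the decoded reproduction values are assumptions chosen for this example.

```python
import numpy as np
from scipy.stats import norm

def psi_weight(F, theta_bar, Pi, R, a, D):
    """Weight psi(j) of (24): sum over cells of D_j(z_l)/s * (phi((a_l - F^T theta_bar)/s)
    - phi((a_{l+1} - F^T theta_bar)/s)), with s = sqrt(F^T Pi F + R)."""
    s = np.sqrt(F @ Pi @ F + R)
    alpha = (np.asarray(a, dtype=float) - F @ theta_bar) / s
    return float(np.sum(np.asarray(D) / s * (norm.pdf(alpha[:-1]) - norm.pdf(alpha[1:]))))

def quantized_gramian(A, B, Q, Pi0, theta_bar0, F_seq, R, a, D):
    """Quantized identifiability Gramian W_k^idq of (25), j = 0..k."""
    dim = A.shape[0]
    k = len(F_seq) - 1
    Pi, theta_bar = Pi0.copy(), theta_bar0.copy()
    W = np.zeros((dim, dim))
    for j, F in enumerate(F_seq):
        psi = psi_weight(F, theta_bar, Pi, R, a, D)
        Akj = np.linalg.matrix_power(A, k - j)
        W += psi**2 * Akj @ Pi @ np.outer(F, F) @ Pi @ Akj.T
        # propagate prior mean and covariance of theta through (5)
        theta_bar = A @ theta_bar
        Pi = A @ Pi @ A.T + B @ Q @ B.T
    return W

# Assumed illustrative data: a fixed 4-level quantizer and a 3-dimensional parameter
rng = np.random.default_rng(3)
dim, k = 3, 10
A = 0.8 * np.eye(dim); B = np.eye(dim); Q = 0.05 * np.eye(dim)
Pi0 = np.eye(dim); theta_bar0 = np.ones(dim); R = 0.1
a = [-np.inf, -1.0, 0.0, 1.0, np.inf]          # thresholds a_1..a_5
D = [-1.5, -0.45, 0.45, 1.5]                   # decoded reproduction values D_j(z_l)
F_seq = [rng.standard_normal(dim) for _ in range(k + 1)]

W = quantized_gramian(A, B, Q, Pi0, theta_bar0, F_seq, R, a, D)
print("rank(W_k^idq) =", np.linalg.matrix_rank(W), "of", dim)   # full rank <=> condition (26)
```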

Remark 2:

  1. In Theorem 1, $\psi(j),\ j=0,1,2,\ldots,k$, is determined by the quantizer, while $A^{k-j}\Pi(j)F(j)F^T(j)\Pi(j)(A^{k-j})^T,\ j=0,1,2,\ldots,k$, which is the same term as in the unquantized case, reflects the intrinsic properties of the system. Hence, the full-rank requirement on $W_k^{idq}$ shows that the parameter identifiability of the quantized system is determined jointly by the quantizer and the intrinsic properties of the system;

  2. When the quantization level $N=1$, i.e. $a_1=-\infty$, $a_2=+\infty$, we have $\psi(j)\equiv 0$ and thus $W_k^{idq}\equiv 0$; condition (26) is not satisfied and the system is not identifiable, which is consistent with intuition. From (10) and (25), it can be observed that the difference between the unquantized identifiability Gramian $W_k^{id}$ and the quantized identifiability Gramian $W_k^{idq}$ is that the latter includes the additional weights $\psi^2(j),\ j=0,1,2,\ldots,k$. As a result, besides the case of quantization level $N=1$, the matrix $W_k^{idq}$ may become singular due to the properties of $\psi^2(j)$, even though $W_k^{id}$ has full rank;

  3. The quantizer in Theorem 1 is time-invariant. However, by using the above analysis method, a conclusion similar to Theorem 1 can be derived for a time-variant quantizer, except that the weights in the identifiability Gramian reflect the time-variant property of the quantizer, i.e.,

$$\psi(j)\triangleq\sum_{l=1}^{N}\frac{D_j(z_l(j))}{\sqrt{F^T(j)\Pi(j)F(j)+R}}\left(\phi\!\left(\frac{a_l(j)-F^T(j)\bar\theta(j)}{\sqrt{F^T(j)\Pi(j)F(j)+R}}\right)-\phi\!\left(\frac{a_{l+1}(j)-F^T(j)\bar\theta(j)}{\sqrt{F^T(j)\Pi(j)F(j)+R}}\right)\right) \tag{27}$$
  4. Condition (26) is equivalent to the requirement that the matrix sequence $\{\psi(j)A^{k-j}\Pi(j)F(j),\ j\le k\}$ has column rank $n+m$. This sequence can be decomposed as $\{A^{k-j}\Pi(j)F(j),\ j\le k\}\,\mathrm{diag}\{\psi(0),\psi(1),\ldots,\psi(k)\}$, where $\mathrm{diag}\{\cdot\}$ denotes a diagonal matrix. Hence the parameter identifiability of the original system is preserved if $\psi(j)\ne 0,\ j=0,1,2,\ldots$; in particular, condition (26) is then equivalent to the condition of Lemma 1 for system (6). Suppose we design a time-variant quantizer as follows:

$$\begin{gathered}D_j(z_l(j))=c_l\sqrt{F^T(j)\Pi(j)F(j)+R},\quad l=1,2,\ldots,N,\\ a_l(j)=d_l\sqrt{F^T(j)\Pi(j)F(j)+R}+F^T(j)\bar\theta(j),\quad l=1,2,\ldots,N+1,\ \ j=0,1,2,\ldots,k, \end{gathered} \tag{28}$$

where $c_l$ and $d_l$ are constants which make

$$\psi(j)=\sum_{l=1}^{N}c_l\left(\phi(d_l)-\phi(d_{l+1})\right)\ne 0 \tag{29}$$

be the same for every $j$; thus,

$$W_k^{idq}=\left(\sum_{l=1}^{N}c_l\left(\phi(d_l)-\phi(d_{l+1})\right)\right)^2\sum_{j=0}^{k}A^{k-j}\Pi(j)F(j)F^T(j)\Pi(j)(A^{k-j})^T. \tag{30}$$

By comparing (30) with (10), we find that such a time-variant quantizer does not change the parameter identifiability of the original system if and only if (29) is satisfied;

  5. The parameter identifiability of the system can be preserved even if the quantization level is low, as long as the quantizer is designed reasonably. In particular, when the quantization level $N=2$, set $c_1=-1$, $c_2=1$, $d_1=-\infty$, $d_2=0$, $d_3=+\infty$ in formula (28); then $\psi(j)\equiv\sqrt{2/\pi},\ j=0,1,2,\ldots$, i.e. a coarse quantizer of 1 bit can preserve the parameter identifiability of the original system (see the short calculation after this remark).
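For completeness, the one-bit case of Remark 2 can be verified by direct substitution into (29); this is simply the arithmetic behind the value quoted above:

$$\psi(j)=c_1\big(\phi(-\infty)-\phi(0)\big)+c_2\big(\phi(0)-\phi(+\infty)\big)=(-1)\Big(0-\tfrac{1}{\sqrt{2\pi}}\Big)+(1)\Big(\tfrac{1}{\sqrt{2\pi}}-0\Big)=\frac{2}{\sqrt{2\pi}}=\sqrt{\frac{2}{\pi}}\approx 0.7979\ne 0.$$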


4. Convergence analysis

In this section, we discuss the convergence property of the Gramian in Theorem 1, i.e., the convergence property of $\psi(j),\ j=0,1,2,\ldots$.

We know that

$$\sum_{l=1}^{N}\frac{F^T(j)\bar\theta(j)}{\sqrt{F^T(j)\Pi(j)F(j)+R}}\left(\phi\!\left(\frac{a_l-F^T(j)\bar\theta(j)}{\sqrt{F^T(j)\Pi(j)F(j)+R}}\right)-\phi\!\left(\frac{a_{l+1}-F^T(j)\bar\theta(j)}{\sqrt{F^T(j)\Pi(j)F(j)+R}}\right)\right)=0 \tag{31}$$

by the property of $\phi(\cdot)$ (the sum telescopes and $\phi(\pm\infty)=0$); then $\psi(j)$ can be re-expressed as

$$\psi(j)=\sum_{l=1}^{N}\frac{D_j(z_l)-F^T(j)\bar\theta(j)}{\sqrt{F^T(j)\Pi(j)F(j)+R}}\left(\phi\!\left(\frac{a_l-F^T(j)\bar\theta(j)}{\sqrt{F^T(j)\Pi(j)F(j)+R}}\right)-\phi\!\left(\frac{a_{l+1}-F^T(j)\bar\theta(j)}{\sqrt{F^T(j)\Pi(j)F(j)+R}}\right)\right) \tag{32}$$

Let $\Delta=\sup_{1\le l\le N}\Delta_l$, where $\Delta_l=a_{l+1}-a_l,\ l=1,2,\ldots,N$. Let $\tilde a_l(j)\triangleq\dfrac{a_l-F^T(j)\bar\theta(j)}{\sqrt{F^T(j)\Pi(j)F(j)+R}}$ and $z_l^*(j)\triangleq\dfrac{D_j(z_l)-F^T(j)\bar\theta(j)}{\sqrt{F^T(j)\Pi(j)F(j)+R}}$, so that $\tilde\Delta_l\triangleq\tilde a_{l+1}(j)-\tilde a_l(j)=\Delta_l\big/\sqrt{F^T(j)\Pi(j)F(j)+R}$. Consider the convergence property of the Gramian $W_k^{idq}$ as $\Delta\to 0$. Note that $N\to\infty$ as $\Delta\to 0$; then

$$\begin{aligned}\lim_{N\to\infty}\psi(j)&=\lim_{N\to\infty}\sum_{l=1}^{N}z_l^*(j)\left(\phi(\tilde a_l(j))-\phi(\tilde a_{l+1}(j))\right)\\ &=\sum_{l}\lim_{\tilde\Delta_l\to 0}z_l^*(j)\left(\phi(\tilde a_l(j))-\phi(\tilde a_l(j)+\tilde\Delta_l)\right)\\ &=\sum_{l}\lim_{\tilde\Delta_l\to 0}z_l^*(j)\,\frac{\phi(\tilde a_l(j))-\phi(\tilde a_l(j)+\tilde\Delta_l)}{-\tilde\Delta_l}\,(-\tilde\Delta_l)\\ &=\sum_{l}\lim_{\tilde\Delta_l\to 0}z_l^*(j)\,\frac{d\phi(s)}{ds}\Big|_{s=\tilde a_l(j)}(-\tilde\Delta_l)\\ &=\sum_{l}\lim_{\tilde\Delta_l\to 0}\frac{(z_l^*(j))^2\exp\{-\tfrac12(z_l^*(j))^2\}}{\sqrt{2\pi}}\,\tilde\Delta_l\\ &=\int_{\mathbb{R}}\frac{(z_l^*(j))^2\exp\{-\tfrac12(z_l^*(j))^2\}}{\sqrt{2\pi}}\,dz_l^*(j)\end{aligned} \tag{33}$$

where $\frac{d\phi(s)}{ds}=-s\phi(s)$ has been used, together with the fact that $z_l^*(j)$ and $\tilde a_l(j)$ coincide in the limit $\tilde\Delta_l\to 0$, and the last line is the limit of a Riemann sum. Let $r=z_l^*(j)$; then $dr=dz_l^*(j)$ and

$$\lim_{N\to\infty}\psi(j)=\int_{\mathbb{R}}\frac{r^2e^{-r^2/2}}{\sqrt{2\pi}}\,dr=1. \tag{34}$$

Remark 3: Equation (34) implies the convergence of Theorem 1 to Lemma 1 as $\Delta\to 0$, i.e. the quantized identifiability Gramian $W_k^{idq}$ converges to the unquantized identifiability Gramian $W_k^{id}$.
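The limit (34) can be checked numerically: for a uniform quantizer on the normalized scale with conditional-mean reproduction points, the weight of (32) approaches 1 as the cell width shrinks. The sketch below is such a check; the uniform thresholds, the truncation of the grid to a finite range and the use of exact truncated-normal means are assumptions of this example.

```python
import numpy as np
from scipy.stats import norm

def psi_uniform(delta, span=10.0):
    """Normalized weight sum_l z_l^* (phi(a_l) - phi(a_{l+1})), cf. (32)-(33), for a
    uniform quantizer of cell width `delta` on [-span, span] (outer cells extend to +/-inf),
    with z_l^* chosen as the conditional mean of a standard normal in each cell."""
    edges = np.concatenate(([-np.inf], np.arange(-span, span + delta / 2, delta), [np.inf]))
    lo, hi = edges[:-1], edges[1:]
    mass = norm.cdf(hi) - norm.cdf(lo)
    keep = mass > 0
    z_star = (norm.pdf(lo[keep]) - norm.pdf(hi[keep])) / mass[keep]   # E{Z | Z in cell}
    return float(np.sum(z_star * (norm.pdf(lo[keep]) - norm.pdf(hi[keep]))))

for delta in (2.0, 1.0, 0.5, 0.1, 0.01):
    print(delta, psi_uniform(delta))   # tends to 1 as delta -> 0
```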


5. Simulation

In order to illustrate our main conclusion, the following system is simulated in Matlab: $y(k)=b(k)u(k-1)+e(k)$,

where $b(k)$ is the parameter to be identified and can be modeled as a Gauss-Markov process. The system model can then be transformed to
$$\begin{cases}\theta(k+1)=a\theta(k)+w(k)\\ y(k)=F^T(k)\theta(k)+e(k)\end{cases}$$

where $\theta(k)=b(k)$ and $a=0.5$; $e(k)$, $w(k)$ and $\theta(0)$ are mutually statistically uncorrelated, their covariances are $Q=1$, $R=0.1$ and $\Pi(0)=1$, respectively, and the mean of $\theta(0)$ is $\bar\theta=1$. Here we set $F(k)=u(k-1)=2\sin(k)+3$ as the assumed system input (i.e., the control signal, which can be considered to be generated, for example, by an adaptive controller), where the additive term "3" plays the role of avoiding the "turn-off" problem (Astrom & Wittenmark, 1994).

To carry out the illustrative simulation, an optimal filter is required, although the analysis of parameter identifiability is independent of the estimator used. The discussed linear system with Gauss-Markov parameters is transformed into a state space model, and the problem of parameter identification is then treated as state estimation. A number of quantized state estimators have been proposed in various areas; we choose the Gaussian fit algorithm (Curry, 1970) as the filter in this section, because this filter, which is based on the Gaussian assumption, is near optimal and convenient to implement. Note that, in this simulated model, $F(k)$ is determined completely by $u(k-1)$, so it is known at the channel receiver; however, in the general model (1), $a_i(k),\ i=1,2,\ldots,n$, and $b_i(k),\ i=1,2,\ldots,m$, are to be identified, so $F(k)$ is determined jointly by $u(k-1),\ldots,u(k-m)$ and $-y(k-1),\ldots,-y(k-n)$. In general, the quantized signals $y_q(k-1),\ldots,y_q(k-n)$, instead of the actual outputs $y(k-1),\ldots,y(k-n)$, are received at the channel receiver; thus $y(k-1),\ldots,y(k-n)$ in $F(k)$ should be replaced by their estimates.
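For readers who want to reproduce the experiment, the sketch below implements one possible Gaussian-approximate measurement update for a quantized observation, in the spirit of the Gaussian fit idea (Curry, 1970) and of quantized Kalman filters such as (Ribeiro et al., 2006; You et al., 2011). It is not claimed to be the exact algorithm used by the authors, and the scalar example data only loosely mirror the simulation setup of this section (in particular, a fixed 1-bit threshold is used here instead of the time-variant one).

```python
import numpy as np
from scipy.stats import norm

def quantized_update(theta_bar, Pi, F, R, a_lo, a_hi):
    """Gaussian-approximate update of (theta_bar, Pi) given the quantized observation
    y in (a_lo, a_hi], where y = F^T theta + e, e ~ N(0, R).
    Uses the truncated-normal moments of the normalized innovation."""
    s = np.sqrt(F @ Pi @ F + R)
    alpha = (a_lo - F @ theta_bar) / s
    beta = (a_hi - F @ theta_bar) / s
    mass = norm.cdf(beta) - norm.cdf(alpha)
    lam = (norm.pdf(alpha) - norm.pdf(beta)) / mass                 # E{normalized y | event}
    # Var{normalized y | event}; terms like alpha*pdf(alpha) vanish at infinite thresholds
    t_lo = alpha * norm.pdf(alpha) if np.isfinite(alpha) else 0.0
    t_hi = beta * norm.pdf(beta) if np.isfinite(beta) else 0.0
    var = 1.0 + (t_lo - t_hi) / mass - lam**2
    gain = Pi @ F / s
    return theta_bar + gain * lam, Pi - np.outer(gain, gain) * (1.0 - var)

def time_update(theta_bar, Pi, A, Q):
    """Prediction step through the parameter model (5), with B = I."""
    return A @ theta_bar, A @ Pi @ A.T + Q

# Scalar example with assumed values loosely following this section's setup
A = np.array([[0.5]]); Q = np.array([[1.0]]); R = 0.1
theta_bar = np.array([1.0]); Pi = np.array([[1.0]])
rng = np.random.default_rng(4)
theta = rng.normal(1.0, 1.0, size=1)
thresholds = np.array([-np.inf, 0.0, np.inf])       # a fixed 1-bit (N = 2) quantizer on y(k)
for k in range(1, 30):
    F = np.array([2 * np.sin(k) + 3])               # F(k) = u(k-1)
    y = F @ theta + np.sqrt(R) * rng.standard_normal()
    l = int(np.searchsorted(thresholds, y)) - 1     # quantization cell containing y(k)
    theta_bar, Pi = quantized_update(theta_bar, Pi, F, R, thresholds[l], thresholds[l + 1])
    theta = A @ theta + rng.multivariate_normal(np.zeros(1), Q)
    theta_bar, Pi = time_update(theta_bar, Pi, A, Q)
print("final estimate:", theta_bar, "final parameter:", theta)
```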

The analysis of parameter identifiability of quantized systems holds for any reasonably designed quantizer. Here the Max-Lloyd quantizer (Proakis, 2001), widely adopted in communication and signal processing, is employed. In what follows, the cases of quantization level N=4 and N=2 in (7) are simulated. The thresholds of the 4-level Max-Lloyd quantizer are $\{-\infty, -0.9816, 0, 0.9816, +\infty\}$ and the outputs of the quantizer are $\{-1.51, -0.4528, 0.4528, 1.51\}$ when the signal to be quantized has a standard normal distribution. In the case of the 2-level quantizer, the thresholds are $\{-\infty, 0, +\infty\}$ and the outputs of the quantizer are $\{-0.7979, 0.7979\}$. Hence the thresholds of the time-variant quantizers with 4 and 2 levels are respectively $\sigma_y(k)\times\{-\infty, -0.9816, 0, 0.9816, +\infty\}+E_y(k)\times\{1, 1, 1, 1, 1\}$ and $\sigma_y(k)\times\{-\infty, 0, +\infty\}+E_y(k)\times\{1, 1, 1\}$, and the outputs of the quantizers are $\sigma_y(k)\times\{-1.51, -0.4528, 0.4528, 1.51\}+E_y(k)\times\{1, 1, 1, 1\}$ and $\sigma_y(k)\times\{-0.7979, 0.7979\}+E_y(k)\times\{1, 1\}$, where $E_y(k)$ and $\sigma_y(k)$ are the mean and standard deviation of the output $y(k)$, respectively.

It is obvious that the above model is parameter identifiable by Lemma 1 when it is unquantized. By calculating the weight $\psi(j)$ in equation (27), we get $\psi(j)\approx 0.8823$ when the 4-level time-variant Max-Lloyd quantizer is used, and $\psi(j)\approx 0.6366$ when the quantization level is N=2. Hence, according to Remark 2, the parameter identifiability is not changed by the quantization, i.e. the quantized system is still parameter identifiable, theoretically; a short numerical check of these weights is given below. The simulation results of the quantized system shown in Fig. 1 (N=4) and Fig. 2 (N=2) illustrate this conclusion.
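The two weights quoted above follow directly from (29) with the normalized Max-Lloyd thresholds and reproduction values; the short check below recomputes them (values as listed in this section).

```python
import numpy as np
from scipy.stats import norm

def psi_normalized(c, d):
    """psi = sum_l c_l (phi(d_l) - phi(d_{l+1})), cf. (29), for a quantizer whose
    thresholds/outputs are expressed on the normalized (standard normal) scale."""
    c, d = np.asarray(c), np.asarray(d, dtype=float)
    return float(np.sum(c * (norm.pdf(d[:-1]) - norm.pdf(d[1:]))))

# 4-level Max-Lloyd quantizer for a standard normal signal
c4 = [-1.51, -0.4528, 0.4528, 1.51]
d4 = [-np.inf, -0.9816, 0.0, 0.9816, np.inf]
# 2-level (1-bit) Max-Lloyd quantizer
c2 = [-0.7979, 0.7979]
d2 = [-np.inf, 0.0, np.inf]

print(psi_normalized(c4, d4))   # approx 0.8823
print(psi_normalized(c2, d2))   # approx 0.6366 = 2/pi
```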

In Fig. 1(a) and Fig. 2(a), the actual values of the parameter are denoted by solid lines and the estimates are denoted by dotted lines. The estimation errors are shown in Fig. 1(b) and Fig. 2(b). Fig. 1 and Fig. 2 show that the estimate can track the real value of the parameter even when the outputs are quantized coarsely. The curves of prior error entropy and posterior error entropy are shown in Fig. 1(c) and Fig. 2(c). The entropy is calculated by

$$H(x)=\frac{n}{2}\ln 2\pi e+\frac12\ln|C| \tag{35}$$

where $x\in\mathbb{R}^n$ is a Gaussian vector with covariance $C$ and $|\cdot|$ denotes the determinant. For quantized systems, the probability distribution of the estimation error $\tilde\theta(k)$ is unknown, but it is supposed to make the entropy of $\tilde\theta(k)$ maximal according to the "maximum entropy principle" of Jaynes (Jaynes, 1957); namely, the uncertainty of $\tilde\theta(k)$ is supposed to be maximal in the absence of prior information. Hence $\tilde\theta(k)$ is assumed to be Gaussian, and thus (35) can be adopted to calculate the entropy of $\tilde\theta(k)$ in this simulation. We can observe from Fig. 1(c) and Fig. 2(c) that the posterior error entropy is strictly smaller than the prior error entropy from the initial time instant. This indicates that this quantized system is parameter identifiable, and these observations agree with the analysis above. Besides, we can observe that the estimation error when the quantization level is N=2 is greater than that in the case of quantization level N=4, although the system is parameter identifiable in both quantization cases. This shows that different quantizers lead to different estimation precision, even though all of the corresponding systems remain parameter identifiable when reasonably designed quantizers are used.

Figure 1.

(a) Actual value and estimate of b(k), N=4; (b) estimation error of b(k), N=4; (c) prior and posterior error entropy, N=4.

Figure 2.

(a) Actual value and estimate of b(k), N=2; (b) estimation error of b(k), N=2; (c) prior and posterior error entropy, N=2.


6. Conclusion

This paper has discussed the parameter identifiability of quantized linear systems with Gauss-Markov parameters from an information theoretic point of view. The existing definition concerning this property is reviewed and a new definition is proposed for quantized systems. The resulting criterion, the parameter identifiability Gramian for quantized systems, is analyzed based on the measure of mutual information. The derived conclusions agree well with intuition and also provide an intrinsic perspective for quantizer design. The analysis shows that the Gramian of quantized systems converges to that of unquantized systems when the quantization intervals tend to zero, and that a well designed quantizer can preserve the identifiability of the original system even if the quantizer is as coarse as one bit. The analysis is verified by an illustrative simulation.

References

  1. Astrom, K. J. & Wittenmark, B. (1994). Adaptive Control (2nd edition), Addison-Wesley Publishing Company, ISBN 0-20155-866-1, MA, USA.
  2. Baram, Y. & Kailath, T. (1988). Estimability and Regulability of Linear Systems. IEEE Transactions on Automatic Control, Vol. 33, No. 12, (December 1988), pp. 1116-1121, ISSN 0018-9286.
  3. Cover, T. M. & Thomas, J. A. (2006). Elements of Information Theory (2nd edition), John Wiley & Sons, Inc., ISBN-10 0471241954, New Jersey.
  4. Curry, R. E. (1970). Estimation and Control with Quantized Measurements (1st edition), The M.I.T. Press, ISBN 0-26253-216-6.
  5. De Persis, C. & Mazenc, F. (2010). Stability of Quantized Time-Delay Nonlinear Systems: A Lyapunov-Krasowskii-Functional Approach. Mathematics of Control, Signals, & Systems, Vol. 21, No. 4, (August 2010), pp. 337-370, ISSN 0932-4194.
  6. Durgaryan, I. S. & Pashchenko, F. F. (2001). Identification of Objects by the Maximal Information Criterion. Automation & Remote Control, Vol. 62, No. 7, (July 2001), pp. 1104-1114, ISSN 0005-1179.
  7. Fu, M. Y. & Xie, L. H. (2005). The Sector Bound Approach to Quantized Feedback Control. IEEE Transactions on Automatic Control, Vol. 50, No. 11, (November 2005), pp. 1698-1711, ISSN 0018-9286.
  8. Gray, R. M. & Neuhoff, D. L. (1998). Quantization. IEEE Transactions on Information Theory, Vol. 44, No. 6, (October 1998), pp. 2325-2383, ISSN 0018-9448.
  9. Jaynes, E. T. (1957). Information Theory and Statistical Mechanics, I & II. Physical Review, Vol. 106, No. 4, (May 1957), pp. 620-630, and Vol. 108, No. 2, (October 1957), pp. 171-190, ISSN 0031-899X.
  10. Proakis, J. G. (2001). Digital Communications (4th edition), The McGraw-Hill Companies, ISBN 0-07232-111-3, New York, NY.
  11. Ribeiro, A., Giannakis, G. B. & Roumeliotis, S. I. (2006). SOI-KF: Distributed Kalman Filtering with Low-Cost Communications Using the Sign of Innovations. IEEE Transactions on Signal Processing, Vol. 54, No. 12, (December 2006), pp. 4782-4795, ISSN 1053-587X.
  12. Wang, L. J. & Zhang, H. (2011). On Parameter Identifiability of Linear Multivariable Systems with Communication Access Constraints. Journal of Zhejiang University (Engineering Science), Vol. 45, No. 12, (December 2011), pp. 102-107, ISSN 1008-973X, in Chinese.
  13. Wang, L. Y., Zhang, J. F. & Yin, G. G. (2003). System Identification Using Binary Sensors. IEEE Transactions on Automatic Control, Vol. 48, No. 11, (November 2003), pp. 1892-1907, ISSN 0018-9286.
  14. Wang, L. Y., Yin, G. G., Zhao, Y. L. & Zhang, J. F. (2008). Identification Input Design for Consistent Parameter Estimation of Linear Systems With Binary-Valued Output Observations. IEEE Transactions on Automatic Control, Vol. 53, No. 4, (May 2008), pp. 867-880, ISSN 0018-9286.
  15. Wang, L. Y., Yin, G. G. & Xu, C. Z. (2010). Identification of Cascaded Systems with Linear and Quantized Observations. Asian Journal of Control, Vol. 12, No. 1, (January 2010), pp. 1-14, ISSN 1561-8625.
  16. You, K. Y., Xie, L. H., Sun, S. L. & Xiao, W. D. (2011). Quantized Filtering of Linear Stochastic Systems. Transactions of the Institute of Measurement and Control, Vol. 33, No. 6, (August 2011), pp. 683-698, ISSN 0142-3312.
  17. Zhang, H. (2003). Information Descriptions and Approaches in Control Systems, Ph.D. Dissertation, Department of Control Science & Engineering, Zhejiang University, in Chinese.
  18. Zhang, H. & Sun, Y. X. (1996). Estimability of Stochastic Systems: An Information Theory Approach. Control Theory and Applications, Vol. 13, No. 5, (October 1996), pp. 567-572, ISSN 1000-8152, in Chinese.
