Open access peer-reviewed conference paper

Distributed Consensus‐Based Estimation with Random Delays

Written By

Dou Liya

Reviewed: 09 November 2016 Published: 01 February 2017

DOI: 10.5772/66784

From the Proceedings

Proceedings of the 2nd Czech-China Scientific Conference 2016

Edited by Jaromir Gottvald and Petr Praus


Abstract

In this chapter, we investigate distributed estimation for linear time-invariant systems subject to network-induced delays and packet dropouts. The methodology is based on local Luenberger-like observers combined with consensus strategies. Only neighboring nodes are allowed to communicate, and the random network-induced delays are modeled as Markov chains. Necessary and sufficient conditions for the stochastic stability of the observation error system are established, and the design problem is solved via an iterative linear matrix inequality (LMI) approach. Simulation examples illustrate the effectiveness of the proposed method.

1. Introduction

The convergence of sensing, computing, and communication in low-cost, low-power devices is enabling a revolution in the way we interact with the physical world. Advances in wireless communication make it possible to integrate many such devices into flexible, robust, and easily configurable wireless sensor networks (WSNs). This chapter is devoted to the estimation problem in such networks.

Since sensor networks are usually large-scale systems, centralized estimation is difficult and costly because of the large communication burden. Therefore, one must employ distributed or decentralized estimation techniques. Conventional decentralized estimation schemes involve all-to-all communication [1]. Distributed schemes fit better: the system is divided into several smaller subsystems, each governed by a different agent, which may or may not share information with the rest. There is a vast literature studying distributed estimation for sensor networks in which the dynamics induced by the communication network (mainly time-varying delays and data losses) are taken into account [2–10]. Millan et al. [6] studied the distributed state estimation problem for a class of linear time-invariant systems over sensor networks subject to network-induced delays, which are assumed to take values in $[0, \tau_M]$.

One of the constraints is the network-induced time delay, which can degrade performance or even cause instability. Various methodologies have been proposed for the modeling and stability analysis of networked systems in the presence of network-induced time delays and packet dropouts. Markov chains can effectively model the network-induced time delays in sensor networks. In Ref. [11], the time delays of networked control systems are modeled by Markov chains, and an output feedback controller design method is proposed.

The rest of the chapter is organized as follows. In Section 2, we analyze the available delay information and formulate the observer design problem. In Section 3, necessary and sufficient conditions guaranteeing stochastic stability are presented first, and then equivalent LMI conditions with nonconvex constraints are derived. Simulation examples are given in Section 4 to illustrate the effectiveness of the proposed method.

Notation: Consider a network with $p$ sensors. Let $\upsilon = \{1, 2, \ldots, p\}$ be the index set of the $p$ sensor nodes and $\varepsilon \subseteq \upsilon \times \upsilon$ be the link set of paired sensor nodes. Then the directed graph $G = (\upsilon, \varepsilon)$ represents the sensing topology. The link $(i, j)$ implies that node $i$ receives information from node $j$. The cardinality of $\varepsilon$ is equal to $l$. Define $q = g(i, j)$ as the link index. $N_i = \{ j \in \upsilon \mid (i, j) \in \varepsilon \}$ denotes the subset of nodes communicating with node $i$.


2. Problem formulation

Assume a sensor network intended to collectively estimate the state of a linear plant in a distributed way. Every observer computes a local estimate of the plant's state based on local measurements and the information received from neighboring nodes. Observers periodically collect some outputs of the plant and broadcast some information about their own estimates. Because the information is transmitted through the network, network-induced time delays and dropouts may occur.

In this work, the system to be observed is assumed to be an autonomous linear time‐invariant plant given by the following equations:

$$x(k+1) = A x(k) \tag{1}$$
$$y_i(k) = C_i x(k), \quad i = 1, 2, \ldots, p, \tag{2}$$

where $x(k) \in \mathbb{R}^n$ is the state of the plant, $y_i(k) \in \mathbb{R}^{m_i}$ is the output measured by sensor $i$, and $p$ is the number of observers. Assume $(A, C)$ is observable, where $C = [C_1, \ldots, C_p]$.

Besides the system output $y_i(k)$, observer $i$ receives the estimated outputs $\hat{y}_{ij}(k) = C_{ij} \hat{x}_j(k)$ from each neighbor $j \in N_i$. The matrix $C_{ij}$ is assumed to be known to both nodes. Define $\bar{C}_i$ as the matrix stacking $C_i$ and the matrices $C_{ij}$ for all $j \in N_i$. It is assumed that $(A, \bar{C}_i)$ is observable for every $i$.

2.1. Delays modeled by Markov chains

The communication links between neighbors may be affected by delays and/or packet dropouts. The equivalent delay $\tau_{ij}(k) \in \mathbb{N}$ (or $\tau_q(k)$, with $q = g(i,j) \in \{1, \ldots, l\}$) represents the time difference between the current time instant $k$ and the instant when the last packet sent by node $j$ was received at node $i$. This delay includes the effects of sampling, communication delay, and packet dropouts. The number of consecutive packet dropouts and the network-induced delays are assumed to be bounded, so $\tau_{ij}(k)$ is also bounded.

A Markov chain is a discrete-time stochastic process with the Markov property. One way to model the delays is to use finite-state Markov chains as in Refs. [7–9]. The main advantage of the Markov model is that the dependencies between delays are taken into account, since in real networks the current time delay is usually related to the previous delays [8]. In this note, the delays $\tau_{ij}(k)$ ($i \in \upsilon$, $j \in N_i$) are modeled as $l$ different Markov chains that take values in $W = \{0, 1, \ldots, \tau_M\}$, with transition probability matrices $\Lambda_q = [\lambda_q^{rs}]$, $q = 1, 2, \ldots, l$. That is, $\tau_q(k)$ jumps from mode $r$ to mode $s$ with probability $\lambda_q^{rs}$:

$$\lambda_q^{rs} = \Pr\big(\tau_q(k+1) = s \mid \tau_q(k) = r\big), \quad q = 1, 2, \ldots, l, \tag{3}$$

where $\lambda_q^{rs} \ge 0$ for all $r, s \in W$ and $\sum_{s=0}^{\tau_M} \lambda_q^{rs} = 1$ for all $r \in W$.

Remark 1: In real networks, the network-induced delays are difficult to measure exactly, so modeling them by a stochastic process is more practical. Moreover, in sensor networks the communication links between different pairs of nodes differ, so data on different links may experience different time delays. It is therefore more reasonable to model the delays by different Markov chains.
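
To make the delay model concrete, the following Python sketch samples one delay sequence $\tau_q(k)$ from a finite-state Markov chain with a given transition matrix. The function name and the illustrative transition matrix are our own choices and are not taken from the chapter.

```python
import numpy as np

def sample_markov_delays(Lambda, tau0, steps, rng=None):
    """Sample a delay sequence tau_q(k) on {0, ..., tau_M} from a Markov chain.

    Lambda[r, s] = Pr(tau(k+1) = s | tau(k) = r); each row must sum to 1.
    """
    rng = np.random.default_rng() if rng is None else rng
    tau = np.empty(steps, dtype=int)
    tau[0] = tau0
    for k in range(1, steps):
        tau[k] = rng.choice(Lambda.shape[1], p=Lambda[tau[k - 1]])
    return tau

# Illustrative two-state chain (tau_M = 1): rows are transition probability vectors.
Lambda_q = np.array([[0.4, 0.6],
                     [0.5, 0.5]])
print(sample_markov_delays(Lambda_q, tau0=0, steps=20))
```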

2.2. Observation error system

The structure of the observers described in the following is inspired by that given in Ref. [6]. To estimate the state of the plant, every node is assumed to run an estimator of the plant's state as:

$$\hat{x}_i(k+1) = A \hat{x}_i(k) + M_i\big(\hat{y}_i(k) - y_i(k)\big) + \sum_{j \in N_i} N_{ij}\big(C_{ij}\hat{x}_j(k - \tau_{ij}(k)) - C_{ij}\hat{x}_i(k - \tau_{ij}(k))\big) \tag{4}$$
$$\hat{y}_i(k) = C_i \hat{x}_i(k), \quad i = 1, 2, \ldots, p. \tag{5}$$

The observer dynamics combine a local Luenberger-like correction, weighted by the matrices $M_i$, with a consensus term, weighted by the matrices $N_{ij}$, which takes into account the information received from the neighboring nodes.
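
As an illustration of how Eqs. (4) and (5) can be implemented at a single node, the sketch below performs one observer update, keeping short buffers of past estimates so that the delayed terms $\hat{x}_j(k - \tau_{ij}(k))$ can be retrieved. The function name, argument layout, and buffering convention are our own assumptions; only the update law itself comes from Eqs. (4) and (5).

```python
import numpy as np

def observer_step(A, Ci, Mi, neighbors, yi, xhat_i_hist, xhat_j_hist, tau):
    """One update of observer i according to Eqs. (4)-(5).

    neighbors   : dict j -> (N_ij, C_ij)
    yi          : current plant output measured by node i
    xhat_i_hist : list with xhat_i_hist[d] = xhat_i(k - d)  (index 0 = current estimate)
    xhat_j_hist : dict j -> list with the same convention for neighbor j
    tau         : dict j -> current delay tau_ij(k) on link (i, j)
    """
    xhat_i = xhat_i_hist[0]
    yhat_i = Ci @ xhat_i                                  # Eq. (5)
    x_next = A @ xhat_i + Mi @ (yhat_i - yi)              # Luenberger-like part of Eq. (4)
    for j, (Nij, Cij) in neighbors.items():
        d = tau[j]                                        # delayed neighbor information
        x_next = x_next + Nij @ (Cij @ xhat_j_hist[j][d] - Cij @ xhat_i_hist[d])
    return x_next
```

Here $M_i$ and $N_{ij}$ are the gains to be designed in Section 3.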

The observation error of observer $i$ is defined as $e_i(k) = \hat{x}_i(k) - x(k)$. From Eqs. (1)–(5), the dynamics of the observation errors can be written as:

$$e_i(k+1) = (A + M_i C_i)\, e_i(k) - \sum_{j \in N_i} N_{ij} C_{ij}\, e_i(k - \tau_{ij}(k)) + \sum_{j \in N_i} N_{ij} C_{ij}\, e_j(k - \tau_{ij}(k)) \tag{6}$$

Define $e(k) = [e_1^T(k)\;\, e_2^T(k)\;\, \cdots\;\, e_p^T(k)]^T$ and $X(k) = [e^T(k)\;\, e^T(k-1)\;\, \cdots\;\, e^T(k-\tau_M)]^T$; then we have the observation error system:

$$X(k+1) = \big(\Psi(M) + \Phi(N, \tau_1(k), \ldots, \tau_l(k))\big) X(k) \tag{7}$$

where

$$\Psi(M) = \begin{bmatrix} \varphi(M) & 0 & \cdots & 0 & 0 \\ I & 0 & \cdots & 0 & 0 \\ 0 & I & \cdots & 0 & 0 \\ \vdots & & \ddots & & \vdots \\ 0 & 0 & \cdots & I & 0 \end{bmatrix}, \qquad \Phi(N, \tau_1(k), \ldots, \tau_l(k)) = \begin{bmatrix} \phi(N, \tau_1(k), \ldots, \tau_l(k)) \\ 0 \\ \vdots \\ 0 \end{bmatrix},$$
$$\varphi(M) = \begin{bmatrix} A + M_1 C_1 & 0 & \cdots & 0 \\ 0 & A + M_2 C_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & A + M_p C_p \end{bmatrix},$$
$$\phi(N, \tau_1(k), \ldots, \tau_l(k)) = \phi_1(N, \tau_1(k)) + \cdots + \phi_l(N, \tau_l(k)),$$
$$\phi_q(N, \tau_q(k)) = \begin{bmatrix} 0 & \cdots & 0 & \Pi_q & 0 & \cdots & 0 \end{bmatrix}, \quad \text{where the } (1 + \tau_q(k))\text{-th block is } \Pi_q, \quad q = 1, 2, \ldots, l. \tag{8}$$

Here $\Pi_q$ is the block matrix corresponding to link $q$, which connects observer $i$ with observer $j$; its only nonzero blocks are $-N_{ij}C_{ij}$ and $N_{ij}C_{ij}$ in the $(i,i)$ and $(i,j)$ block positions, respectively. $M = \{M_i,\, i \in \upsilon\}$ and $N = \{N_{ij},\, i \in \upsilon,\, j \in N_i\}$ are the observer matrices to be designed.
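
To see the construction of Eq. (8) in practice, the following sketch assembles $\Psi(M)$ and $\Phi(N, \tau_1, \ldots, \tau_l)$ numerically from given observer matrices. The function signature, the link bookkeeping (which pair $(i, j)$ each index $q$ refers to), and the example gains at the end are illustrative assumptions; only the block layout follows Eq. (8).

```python
import numpy as np

def lifted_matrices(A, C, M, N, links, taus, tau_M):
    """Assemble Psi(M) and Phi(N, tau_1, ..., tau_l) as in Eq. (8).

    A     : (n, n) plant matrix
    C     : dict i -> C_i   (m_i, n)
    M     : dict i -> M_i   (n, m_i)
    N     : dict (i, j) -> N_ij   (n, m_ij)
    links : list of (i, j, C_ij); the list order fixes the link index q
    taus  : list of current delays tau_q(k), one per link
    tau_M : maximum delay
    """
    n, p = A.shape[0], len(C)
    blk = n * p                       # size of one block row/column of X(k)
    dim = blk * (tau_M + 1)

    # varphi(M) = diag(A + M_i C_i)
    phiM = np.zeros((blk, blk))
    for i in range(p):
        phiM[i*n:(i+1)*n, i*n:(i+1)*n] = A + M[i] @ C[i]

    # Psi(M): varphi(M) in the (1, 1) block, identities on the block subdiagonal
    Psi = np.zeros((dim, dim))
    Psi[:blk, :blk] = phiM
    for b in range(tau_M):
        Psi[(b+1)*blk:(b+2)*blk, b*blk:(b+1)*blk] = np.eye(blk)

    # Phi(N, tau): Pi_q occupies the (1 + tau_q)-th block column of the top block row
    Phi = np.zeros((dim, dim))
    for (i, j, Cij), tq in zip(links, taus):
        Pi_q = np.zeros((blk, blk))
        Pi_q[i*n:(i+1)*n, i*n:(i+1)*n] = -N[(i, j)] @ Cij
        Pi_q[i*n:(i+1)*n, j*n:(j+1)*n] = N[(i, j)] @ Cij
        Phi[:blk, tq*blk:(tq+1)*blk] += Pi_q
    return Psi, Phi

# Two-node illustration (gains here are placeholders, not a designed solution):
A = np.array([[0.99, 0.0], [0.0, 1.01]])
C = {0: np.array([[1.0, 0.0]]), 1: np.array([[1.0, 1.0]])}
M = {0: np.array([[-0.5], [0.0]]), 1: np.array([[-0.3], [-0.3]])}
N = {(0, 1): np.array([[0.1], [0.1]]), (1, 0): np.array([[0.1], [-0.1]])}
links = [(0, 1, C[1]), (1, 0, C[0])]
Psi, Phi = lifted_matrices(A, C, M, N, links, taus=[0, 1], tau_M=1)
print(Psi.shape, Phi.shape)   # (8, 8) (8, 8)
```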

Remark 2: The observation error system (Eq. (7)) depends on the delays $\tau_1(k), \ldots, \tau_l(k)$. This makes the analysis and design more challenging. The objective of this note is to design the observers to guarantee the stochastic stability of Eq. (7).

Definition 1 [7]: The system in Eq. (7) is stochastically stable if, for every finite $X_0 = X(0)$ and every initial mode $\tau_1(0), \ldots, \tau_l(0) \in W$, there exists a finite $Z > 0$ such that the following holds:

$$E\left\{ \sum_{k=0}^{\infty} \|X(k)\|^2 \;\Big|\; X_0, \tau_1(0), \ldots, \tau_l(0) \right\} < X_0^T Z X_0 \tag{9}$$


3. Observers’ design

In this section, we first derive necessary and sufficient conditions guaranteeing the stochastic stability of the system in Eq. (7) in the sense of Definition 1. For ease of presentation, when the system's delays are

$$\tau_1(k) = r_1, \; \ldots, \; \tau_l(k) = r_l \quad (r_1, \ldots, r_l \in W), \tag{10}$$

we denote $\Phi(N, \tau_1(k), \ldots, \tau_l(k))$ by $\Phi(N, r_1, \ldots, r_l)$.

Theorem 1: Under the observer (Eqs. (4) and (5)), the observation error system (Eq. (7)) is stochastically stable if and only if there exist symmetric matrices $P(r_1, r_2, \ldots, r_l) > 0$ such that the following matrix inequality

$$L(r_1, r_2, \ldots, r_l) = \sum_{s_1=0}^{\tau_M} \sum_{s_2=0}^{\tau_M} \cdots \sum_{s_l=0}^{\tau_M} \lambda_1^{r_1 s_1} \lambda_2^{r_2 s_2} \cdots \lambda_l^{r_l s_l}\, \big[\Psi(M) + \Phi(N, r_1, r_2, \ldots, r_l)\big]^T P(s_1, s_2, \ldots, s_l)\, \big[\Psi(M) + \Phi(N, r_1, r_2, \ldots, r_l)\big] - P(r_1, r_2, \ldots, r_l) < 0 \tag{11}$$

holds for all $r_1, r_2, \ldots, r_l \in W$.

Proof: Sufficiency: For the system Eq. (7), construct the Lyapunov function

$$V(X(k), k) = X(k)^T P(\tau_1(k), \tau_2(k), \ldots, \tau_l(k)) X(k) \tag{12}$$

Calculating the difference of $V(X(k), k)$ along the trajectories of the system in Eq. (7) and taking the mathematical expectation, we have

$$\begin{aligned} E\{\Delta V(X(k), k)\} &= E\{V(X(k+1), k+1) - V(X(k), k)\} \\ &= E\big\{X(k+1)^T P(\tau_1(k+1), \ldots, \tau_l(k+1)) X(k+1) \,\big|\, X_k, \tau_1(k) = r_1, \ldots, \tau_l(k) = r_l\big\} \\ &\quad - X(k)^T P(\tau_1(k), \ldots, \tau_l(k)) X(k) \end{aligned} \tag{13}$$

Define $\tau_1(k+1) = s_1, \ldots, \tau_l(k+1) = s_l$. To evaluate the first term in Eq. (13), we need to apply the transition probability matrices for $\tau_1(k) \to \tau_1(k+1), \ldots, \tau_l(k) \to \tau_l(k+1)$, which are $\Lambda_q$, $q = 1, 2, \ldots, l$.

Then, Eq. (13) can be evaluated as

$$E\{\Delta V(X(k), k)\} = X(k)^T \Big\{ \sum_{s_1=0}^{\tau_M} \sum_{s_2=0}^{\tau_M} \cdots \sum_{s_l=0}^{\tau_M} \lambda_1^{r_1 s_1} \lambda_2^{r_2 s_2} \cdots \lambda_l^{r_l s_l}\, \big[\Psi(M) + \Phi(N, r_1, r_2, \ldots, r_l)\big]^T P(s_1, s_2, \ldots, s_l)\, \big[\Psi(M) + \Phi(N, r_1, r_2, \ldots, r_l)\big] - P(r_1, r_2, \ldots, r_l) \Big\} X(k) \tag{14}$$

Thus, if $L(r_1, r_2, \ldots, r_l) < 0$, then

$$E\{\Delta V(X(k), k)\} = X(k)^T L(r_1, r_2, \ldots, r_l) X(k) \le -\lambda_{\min}\big(-L(r_1, r_2, \ldots, r_l)\big) X(k)^T X(k) \le -\beta \|X(k)\|^2 \tag{15}$$

where $\beta = \inf\big\{\lambda_{\min}\big(-L(r_1, r_2, \ldots, r_l)\big)\big\} > 0$. From Eq. (15), we can see that for any $T \ge 1$

$$E\{V(X(T+1), T+1)\} - E\{V(X_0, 0)\} \le -\beta\, E\Big\{\sum_{t=0}^{T} \|X(t)\|^2\Big\} \tag{16}$$

Then we have

$$E\Big\{\sum_{t=0}^{T} \|X(t)\|^2\Big\} \le \frac{1}{\beta}\Big(E\{V(X_0, 0)\} - E\{V(X(T+1), T+1)\}\Big) \le \frac{1}{\beta}\, E\{V(X_0, 0)\} = \frac{1}{\beta}\, X(0)^T P(\tau_1(0), \ldots, \tau_l(0)) X(0) \tag{17}$$

According to Definition 1, the observation error system Eq. (7) is stochastically stable.

Necessity: For necessity, we need to show that if the system in Eq. (7) is stochastically stable, then there exist symmetric matrices $P(r_1, \ldots, r_l) > 0$ such that Eq. (11) holds. It suffices to prove that for any bounded $Q(\tau_1(k), \ldots, \tau_l(k)) > 0$, there exists a set of matrices $P(\tau_1(k), \ldots, \tau_l(k)) > 0$ such that

$$\sum_{s_1=0}^{\tau_M} \sum_{s_2=0}^{\tau_M} \cdots \sum_{s_l=0}^{\tau_M} \lambda_1^{r_1 s_1} \lambda_2^{r_2 s_2} \cdots \lambda_l^{r_l s_l}\, \big[\Psi(M) + \Phi(N, r_1, r_2, \ldots, r_l)\big]^T P(s_1, s_2, \ldots, s_l)\, \big[\Psi(M) + \Phi(N, r_1, r_2, \ldots, r_l)\big] - P(r_1, r_2, \ldots, r_l) = -Q(r_1, r_2, \ldots, r_l) \tag{18}$$

Define

$$X(t)^T \tilde{P}(T - t, \tau_1(t), \ldots, \tau_l(t)) X(t) = E\Big\{ \sum_{k=t}^{T} X(k)^T Q(\tau_1(k), \ldots, \tau_l(k)) X(k) \;\Big|\; X_t, \tau_1(t), \ldots, \tau_l(t) \Big\} \tag{19}$$

Assuming that $X(k) \neq 0$, since $Q(\tau_1(k), \ldots, \tau_l(k)) > 0$, the quantity $X(t)^T \tilde{P}(T-t, \tau_1(t), \ldots, \tau_l(t)) X(t)$ is monotonically increasing as $T$ increases, or else it increases monotonically until $E\{X(k)^T Q(\tau_1(k), \ldots, \tau_l(k)) X(k) \mid X_t, \tau_1(t), \ldots, \tau_l(t)\} = 0$ for all $k \ge k_1 \ge t$. From Eq. (9), $X(t)^T \tilde{P}(T-t, \tau_1(t), \ldots, \tau_l(t)) X(t)$ is bounded. Furthermore, its limit exists:

$$X(t)^T P(r_1, \ldots, r_l) X(t) = \lim_{T \to \infty} X(t)^T \tilde{P}(T - t, \tau_1(t) = r_1, \ldots, \tau_l(t) = r_l) X(t) = \lim_{T \to \infty} E\Big\{\sum_{k=t}^{T} X(k)^T Q(\tau_1(k), \ldots, \tau_l(k)) X(k) \;\Big|\; X_t, \tau_1(t), \ldots, \tau_l(t)\Big\} \tag{20}$$

Since it is valid for any $X(t)$, we have

$$P(r_1, \ldots, r_l) = \lim_{T \to \infty} \tilde{P}(T - t, \tau_1(t) = r_1, \ldots, \tau_l(t) = r_l). \tag{21}$$

From Eq. (20), we obtain $P(r_1, \ldots, r_l) > 0$ since $Q(\tau_1(k), \ldots, \tau_l(k)) > 0$. Consider

$$E\Big\{X(t)^T \tilde{P}(T - t, \tau_1(t), \ldots, \tau_l(t)) X(t) - X(t+1)^T \tilde{P}(T - t - 1, \tau_1(t+1), \ldots, \tau_l(t+1)) X(t+1) \;\Big|\; X_t, \tau_1(t) = r_1, \ldots, \tau_l(t) = r_l\Big\} = X(t)^T Q(r_1, \ldots, r_l) X(t) \tag{22}$$

The second term in Eq. (22) equals

$$E\Big\{X(t+1)^T \tilde{P}(T-t-1, \tau_1(t+1), \tau_2(t+1), \ldots, \tau_l(t+1)) X(t+1) \;\Big|\; X_t, \tau_1(t) = r_1, \tau_2(t) = r_2, \ldots, \tau_l(t) = r_l\Big\} = X(t)^T \Big\{\sum_{s_1=0}^{\tau_M} \sum_{s_2=0}^{\tau_M} \cdots \sum_{s_l=0}^{\tau_M} \lambda_1^{r_1 s_1} \lambda_2^{r_2 s_2} \cdots \lambda_l^{r_l s_l}\, \big[\Psi(M)+\Phi(N, r_1, r_2, \ldots, r_l)\big]^T \tilde{P}(T-t-1, s_1, \ldots, s_l)\, \big[\Psi(M)+\Phi(N, r_1, r_2, \ldots, r_l)\big]\Big\} X(t) \tag{23}$$

Substituting Eq. (23) into Eq. (22) gives rise to

$$X(t)^T \Big\{\tilde{P}(T-t, \tau_1(t), \ldots, \tau_l(t)) - \sum_{s_1=0}^{\tau_M} \sum_{s_2=0}^{\tau_M} \cdots \sum_{s_l=0}^{\tau_M} \lambda_1^{r_1 s_1} \lambda_2^{r_2 s_2} \cdots \lambda_l^{r_l s_l}\, \big[\Psi(M)+\Phi(N, r_1, r_2, \ldots, r_l)\big]^T \tilde{P}(T-t-1, s_1, \ldots, s_l)\, \big[\Psi(M)+\Phi(N, r_1, r_2, \ldots, r_l)\big]\Big\} X(t) = X(t)^T Q(r_1, r_2, \ldots, r_l) X(t) \tag{24}$$

Letting $T \to \infty$ and noting Eq. (21), it follows that Eq. (18), and hence Eq. (11), holds. This completes the proof.

As is clearly seen from Eq. (11), the matrix inequality that must be solved to design the observers is nonlinear in the decision variables. To handle this, Proposition 1 gives equivalent LMI conditions with nonconvex constraints, which can be solved by several existing iterative LMI algorithms. The product reduction algorithm in Ref. [10] is employed here to solve the following conditions.

Proposition 1: There exist observers (Eqs. (4) and (5)) such that the observation error system (Eq. (7)) is stochastically stable if and only if there exist matrices $\varphi(M)$, $\phi_1(N, r_1), \phi_2(N, r_2), \ldots, \phi_l(N, r_l)$ and symmetric matrices $\bar{X}(s_1, s_2, \ldots, s_l) > 0$, $P(r_1, r_2, \ldots, r_l) > 0$ satisfying

$$\begin{bmatrix} -P(r_1, r_2, \ldots, r_l) & V(r_1, r_2, \ldots, r_l)^T \\ V(r_1, r_2, \ldots, r_l) & -X(r_1, r_2, \ldots, r_l) \end{bmatrix} < 0 \tag{25}$$
$$\bar{X}(s_1, s_2, \ldots, s_l)\, P(s_1, s_2, \ldots, s_l) = I \tag{26}$$

for all $r_1, \ldots, r_l \in W$, with

$$\begin{aligned} V(r_1, \ldots, r_l) &= \big[V_0(r_1, \ldots, r_l)^T \;\; \cdots \;\; V_{\tau_M}(r_1, \ldots, r_l)^T\big]^T \\ V_{s_1}(r_1, \ldots, r_l) &= \big[V_{s_1,0}(r_1, \ldots, r_l)^T \;\; \cdots \;\; V_{s_1,\tau_M}(r_1, \ldots, r_l)^T\big]^T \\ V_{s_1 s_2}(r_1, \ldots, r_l) &= \big[V_{s_1 s_2,0}(r_1, \ldots, r_l)^T \;\; \cdots \;\; V_{s_1 s_2,\tau_M}(r_1, \ldots, r_l)^T\big]^T \\ &\;\;\vdots \\ V_{s_1 \cdots s_{l-1}}(r_1, \ldots, r_l) &= \begin{bmatrix} \big(\lambda_1^{r_1 s_1} \cdots \lambda_{l-1}^{r_{l-1} s_{l-1}} \lambda_l^{r_l 0}\big)^{1/2}\,\big[\Psi(M) + \Phi(N, r_1, \ldots, r_l)\big] \\ \big(\lambda_1^{r_1 s_1} \cdots \lambda_{l-1}^{r_{l-1} s_{l-1}} \lambda_l^{r_l 1}\big)^{1/2}\,\big[\Psi(M) + \Phi(N, r_1, \ldots, r_l)\big] \\ \vdots \\ \big(\lambda_1^{r_1 s_1} \cdots \lambda_{l-1}^{r_{l-1} s_{l-1}} \lambda_l^{r_l \tau_M}\big)^{1/2}\,\big[\Psi(M) + \Phi(N, r_1, \ldots, r_l)\big] \end{bmatrix} \\ X(r_1, \ldots, r_l) &= \mathrm{diag}\{X_0(r_1, \ldots, r_l), \ldots, X_{\tau_M}(r_1, \ldots, r_l)\} \\ X_{s_1}(r_1, \ldots, r_l) &= \mathrm{diag}\{X_{s_1,0}(r_1, \ldots, r_l), \ldots, X_{s_1,\tau_M}(r_1, \ldots, r_l)\} \\ &\;\;\vdots \\ X_{s_1 \cdots s_{l-1}}(r_1, \ldots, r_l) &= \mathrm{diag}\{\bar{X}(s_1, \ldots, s_{l-1}, 0), \ldots, \bar{X}(s_1, \ldots, s_{l-1}, \tau_M)\}. \end{aligned} \tag{27}$$

Proof: Since $\bar{X}(s_1, \ldots, s_l) > 0$, we have $X(r_1, \ldots, r_l) > 0$ by construction. Applying the Schur complement, Eq. (25) is equivalent to

$$-P(r_1, \ldots, r_l) + V(r_1, \ldots, r_l)^T X^{-1}(r_1, \ldots, r_l)\, V(r_1, \ldots, r_l) < 0 \tag{28}$$

Since $\bar{X}(s_1, \ldots, s_l) = P(s_1, \ldots, s_l)^{-1}$, Eq. (11) follows. This completes the proof.
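
The coupling in Eq. (26) is what makes the design problem nonconvex. One common way to handle such a constraint, sketched below with cvxpy, is a cone-complementarity-style linearization: the equality $\bar{X}P = I$ is relaxed to the LMI $\begin{bmatrix}\bar{X} & I \\ I & P\end{bmatrix} \ge 0$, and $\mathrm{tr}(\bar{X}P)$ is pushed toward its minimum by repeatedly minimizing a linearized objective. This is only an illustration of the general idea, not the specific product reduction algorithm of Ref. [10]; for readability a single pair $(P, \bar{X})$ is used, whereas the actual design needs one pair per mode combination $(r_1, \ldots, r_l)$, and `build_design_lmis` is a hypothetical hook standing in for the constraints of Eq. (25).

```python
import numpy as np
import cvxpy as cp

def ccl_relaxation(n, build_design_lmis, max_iter=30, eps=1e-6, tol=1e-4):
    """Iteratively relax the nonconvex coupling Xbar * P = I of Eq. (26).

    build_design_lmis(P, Xbar) must return a list of cvxpy constraints that
    encode the problem-specific LMIs of Eq. (25) (hypothetical user-supplied hook).
    """
    P_k, X_k = np.eye(n), np.eye(n)                      # linearization point
    for _ in range(max_iter):
        P = cp.Variable((n, n), symmetric=True)
        Xbar = cp.Variable((n, n), symmetric=True)
        cons = [P >> eps * np.eye(n), Xbar >> eps * np.eye(n),
                cp.bmat([[Xbar, np.eye(n)], [np.eye(n), P]]) >> 0]   # relaxes Eq. (26)
        cons += build_design_lmis(P, Xbar)
        # Linearize trace(Xbar @ P) around the previous iterate and minimize
        prob = cp.Problem(cp.Minimize(cp.trace(X_k @ P + P_k @ Xbar)), cons)
        prob.solve(solver=cp.SCS)
        P_k, X_k = P.value, Xbar.value
        if np.linalg.norm(X_k @ P_k - np.eye(n)) < tol:  # coupling (26) nearly satisfied
            break
    return P_k, X_k
```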


4. Numerical example

Consider a plant whose dynamics are given by:

$$x(k+1) = \begin{bmatrix} 0.99 & 0 \\ 0 & 1.01 \end{bmatrix} x(k). \tag{29}$$

Assume the network has two nodes and two links: one from node 1 to node 2 and the other from node 2 to node 1. The matrices are given as follows:

$$C_1 = \begin{bmatrix} 1 & 0 \end{bmatrix}, \quad C_2 = \begin{bmatrix} 1 & 1 \end{bmatrix}, \quad C_{12} = C_2, \quad C_{21} = C_1. \tag{30}$$

The random delays are assumed to satisfy $\tau_q(k) \in \{0, 1\}$ ($q = 1, 2$), and their transition probability matrices are given by

$$\Lambda_1 = \begin{bmatrix} 0.4 & 0.6 \\ 0.5 & 0.5 \end{bmatrix}, \quad \Lambda_2 = \begin{bmatrix} 0.3 & 0.7 \\ 0.4 & 0.6 \end{bmatrix}. \tag{31}$$

Figure 1 shows part of a simulation run of the delay $\tau_2(k)$ governed by its transition probability matrix $\Lambda_2$.

Figure 1.

The random delays $\tau_2(k)$.

By using Proposition 1, we design the observers with the following matrices:

$$M_1 = \begin{bmatrix} 0.9900 \\ 0.0673 \end{bmatrix}, \quad M_2 = \begin{bmatrix} 0.4620 \\ 0.5387 \end{bmatrix}, \quad N_{12} = \begin{bmatrix} 0.0071 \\ 0.3865 \end{bmatrix}, \quad N_{21} = \begin{bmatrix} 0.1320 \\ 0.1347 \end{bmatrix} \tag{32}$$

The initial values of the plant and the observers are $x(0) = [2\;\; 0.5]^T$, $\hat{x}_1(0) = \hat{x}_2(0) = [0\;\; 0]^T$, and $\hat{x}_1(-1) = \hat{x}_2(-1) = [0\;\; 0]^T$. Figure 2 shows the evolution of the plant's states (solid lines) and the estimated states (dashed lines) for observer 2. It can be observed that the estimates of the observers converge to the plant's state.
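
For readers who wish to reproduce a run of this kind, the sketch below assembles the complete simulation: the plant of Eq. (29), the two Markov delay chains of Eq. (31), and the observers of Eqs. (4) and (5). The observer gains are left as placeholders to be replaced by the designed values of Eq. (32); the assignment of $\Lambda_1$ and $\Lambda_2$ to the two links, the random seed, and the error-norm printout are our own assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Plant of Eq. (29) and output matrices of Eq. (30)
A = np.array([[0.99, 0.0], [0.0, 1.01]])
C1, C2 = np.array([[1.0, 0.0]]), np.array([[1.0, 1.0]])
C12, C21 = C2, C1

# Delay Markov chains of Eq. (31); tau_q(k) takes values in {0, 1}
Lam1 = np.array([[0.4, 0.6], [0.5, 0.5]])
Lam2 = np.array([[0.3, 0.7], [0.4, 0.6]])

# Observer gains: placeholders only; substitute the values designed in Eq. (32).
M1, M2 = np.array([[-0.9], [0.0]]), np.array([[-0.4], [-0.5]])
N12, N21 = np.array([[0.01], [0.3]]), np.array([[0.1], [-0.1]])

T, tau_M = 60, 1
x = np.zeros((T + 1, 2)); x[0] = [2.0, 0.5]      # plant trajectory, x(0) as in the text
xh1 = np.zeros((T + 1 + tau_M, 2))               # xh1[tau_M + k] = xhat_1(k); zero initial estimates
xh2 = np.zeros((T + 1 + tau_M, 2))
tau1 = tau2 = 0                                  # delays on links (1,2) and (2,1) (assignment assumed)

for k in range(T):
    tau1 = rng.choice(2, p=Lam1[tau1])           # Markov transition of each link delay
    tau2 = rng.choice(2, p=Lam2[tau2])
    i = tau_M + k                                # index of time k in the estimate buffers
    y1, y2 = C1 @ x[k], C2 @ x[k]
    # Observer updates, Eq. (4): Luenberger correction plus delayed consensus term
    xh1[i + 1] = A @ xh1[i] + M1 @ (C1 @ xh1[i] - y1) \
                 + N12 @ (C12 @ xh2[i - tau1] - C12 @ xh1[i - tau1])
    xh2[i + 1] = A @ xh2[i] + M2 @ (C2 @ xh2[i] - y2) \
                 + N21 @ (C21 @ xh1[i - tau2] - C21 @ xh2[i - tau2])
    x[k + 1] = A @ x[k]                          # plant, Eq. (29)

# Observation errors e_i(k) = xhat_i(k) - x(k)
e1 = xh1[tau_M:tau_M + T + 1] - x
e2 = xh2[tau_M:tau_M + T + 1] - x
print(np.linalg.norm(e1, axis=1)[:5], np.linalg.norm(e2, axis=1)[:5])
```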

Figure 2.

Evolution of the estimates for observer 2.


5. Conclusion

This chapter addresses the problem of distributed estimation in the presence of random network-induced delays and packet dropouts. The delays are modeled by Markov chains. The observers combine local Luenberger-like observers with consensus terms that weight the information received from neighboring nodes. The resulting observation error system is a special discrete-time jump linear system. Necessary and sufficient conditions for the stochastic stability of the observation error system are derived in the form of a set of LMIs with nonconvex constraints. Simulation examples verify the effectiveness of the proposed method.

References

  1. R. Olfati-Saber, "Distributed Kalman filter with embedded consensus filters," in Proceedings of the 44th IEEE Conference on Decision and Control and the European Control Conference, Seville, Spain, December 12–15, 2005, pp. 8179–8184.
  2. R. Olfati-Saber, "Distributed Kalman filtering for sensor networks," in Proceedings of the 46th IEEE Conference on Decision and Control, New Orleans, Louisiana, USA, December 2007, pp. 5492–5498.
  3. M. Kamgarpour, C. Tomlin, "Convergence properties of a decentralized Kalman filter," in Proceedings of the 47th IEEE Conference on Decision and Control, Cancun, Mexico, December 2008, pp. 3205–3210.
  4. J.M. Maestre, D. Munoz de la Pena, E.F. Camacho, "Wireless sensor network analysis through a coalitional game: application to a distributed Kalman filter," in Proceedings of the 8th IEEE International Conference on Networking, Sensing and Control, 2011, pp. 227–232.
  5. M. Farina, G. Ferrari-Trecate, R. Scattolini, "Distributed moving horizon estimation for linear constrained systems," IEEE Transactions on Automatic Control, 55(11), 2462–2475, 2010.
  6. P. Millan, L. Orihuela, C. Vivas, et al., "Distributed consensus-based estimation considering network induced delays and dropouts," Automatica, 48(10), 2726–2729, 2012.
  7. J. Nilsson, Real-Time Control Systems with Delays, Department of Automatic Control, Lund Institute of Technology, vol. 1049, 1998.
  8. L. Zhang, Y. Shi, T. Chen, et al., "A new method for stabilization of networked control systems with random delays," IEEE Transactions on Automatic Control, 50(8), 1177–1181, 2005.
  9. L. Zhang, B. Huang, J. Lam, "H∞ model reduction of Markovian jump linear systems," Systems and Control Letters, 50(2), 103–118, 2003.
  10. L. Yan, X. Zhang, Z. Zhang, Y. Yang, "Distributed state estimation in sensor networks with event-triggered communication," Nonlinear Dynamics, 76(1), 169–181, 2014.
  11. S. Yang, Y. Bo, "Output feedback stabilization of networked control systems with random delays modeled by Markov chains," IEEE Transactions on Automatic Control, 54(7), 1668–1674, 2009.
