
Engineering » Civil Engineering » "Proceedings of the 2nd Czech-China Scientific Conference 2016", book edited by Jaromir Gottvald and Petr Praus, ISBN 978-953-51-2858-8, Print ISBN 978-953-51-2857-1, Published: February 1, 2017 under CC BY 3.0 license. © The Author(s).

Chapter 7

Distributed Consensus‐Based Estimation with Random Delays

By Dou Liya
DOI: 10.5772/66784



In this chapter we investigate the distributed estimation of linear time-invariant systems with network-induced delays and packet dropouts. The methodology is based on local Luenberger-like observers combined with consensus strategies. Only neighbors are allowed to communicate, and the random network-induced delays are modeled as Markov chains. Then, the sufficient and necessary conditions for the stochastic stability of the observation error system are established. Furthermore, the design problem is solved via an iterative linear matrix inequality approach. Simulation examples illustrate the effectiveness of the proposed method.


1. Introduction

The convergence of sensing, computing, and communication in low cost, low power devices is enabling a revolution in the way we interact with the physical world. The technological advances in wireless communication make possible the integration of many devices allowing flexible, robust, and easily configurable systems of wireless sensor networks (WSNs). This chapter is devoted to the estimation problem in such networks.

Since sensor networks are usually large-scale systems, centralization is difficult and costly due to large communication costs. Therefore, one must employ distributed or decentralized estimation techniques. Conventional decentralized estimation schemes involve all-to-all communication [1]. Distributed schemes seem to fit better. In this class of schemes, the system is divided into several smaller subsystems, each governed by a different agent, which may or may not share information with the rest. There exists a vast literature that studies distributed estimation for sensor networks in which the dynamics induced by the communication network (mainly time-varying delays and data losses) are taken into account [2–10]. Millan et al. [6] have studied the distributed state estimation problem for a class of linear time-invariant systems over sensor networks subject to network-induced delays, which are assumed to take values in [0, τ_M].

One of the main constraints is the network-induced time delay, which can degrade performance or even cause instability. Various methodologies have been proposed for the modeling and stability analysis of networked systems in the presence of network-induced time delays and packet dropouts. Markov chains can effectively model the network-induced time delays in sensor networks. In Ref. [11], the time delays of networked control systems are modeled by Markov chains, and an output feedback controller design method is proposed.

The rest of the chapter is organized as follows. In Section 2, we analyze the available delay information and formulate the observer design problem. In Section 3, the sufficient and necessary conditions to guarantee the stochastic stability are presented first and the equivalent LMI conditions with constraints are derived. Simulation examples are given to illustrate the effectiveness of the proposed method in Section 4.

Notation: Consider a network with p sensors. Let υ = {1, 2, …, p} be the index set of the p sensor nodes and ε ⊆ υ × υ be the link set of paired sensor nodes. Then the directed graph G = (υ, ε) represents the sensing topology. The link (i, j) implies that node i receives information from node j. The cardinality of ε is equal to l. Define q = g(i, j) as the link index. N_i = {j ∈ υ | (i, j) ∈ ε} denotes the subset of nodes communicating to node i.
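The notation above can be made concrete with a small sketch (the node set, link set, and link index below are illustrative, not taken from the chapter):

```python
# Sensing topology G = (v, eps): nodes v = {1, ..., p}, directed links eps,
# link index q = g(i, j), neighbor set N_i. Values here are illustrative.
p = 3
eps = [(1, 2), (2, 1), (2, 3), (3, 2)]          # (i, j): node i receives from node j
g = {link: q for q, link in enumerate(eps, 1)}  # link index q = g(i, j) in {1, ..., l}
l = len(eps)                                    # cardinality of eps

def neighbors(i):
    """N_i: the subset of nodes j communicating to node i."""
    return {j for (a, j) in eps if a == i}

print(neighbors(2))  # nodes that send information to node 2
```

The link set is directed, so (1, 2) and (2, 1) are distinct links with distinct indices q.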

2. Problem formulation

Assume a sensor network is intended to collectively estimate the state of a linear plant in a distributed way. Every observer computes a local estimate of the plant's state based on local measurements and the information received from neighboring nodes. Observers periodically collect some outputs of the plant and broadcast information about their own estimates. The information is transmitted through the network, so network-induced time delays and dropouts may occur.

In this work, the system to be observed is assumed to be an autonomous linear time-invariant plant given by the following equations:

\[
x(k+1) = A\,x(k) \tag{1}
\]
\[
y_i(k) = C_i\,x(k), \qquad i = 1, \dots, p \tag{2}
\]

where x(k) ∈ R^n is the state of the plant, y_i(k) ∈ R^{m_i} are the system's outputs, and p is the number of observers. Assume (A, C) is observable, where C = [C_1, …, C_p].

Besides the system's output y_i(k), observer i receives some estimated outputs ŷ_ij(k) = C_ij x̂_j(k) from each neighbor j ∈ N_i. The matrix C_ij is assumed to be known to both nodes. Define C̄_i as a matrix stacking the matrix C_i and the matrices C_ij for all j ∈ N_i. It is assumed that (A, C̄_i) is observable for all i.

2.1. Delays modeled by Markov chains

The communication links between neighbors may be affected by delays and/or packet dropouts. The equivalent delay τ_ij(k) ∈ ℕ (or τ_q(k), with q = g(i, j) ∈ {1, …, l}) represents the time difference between the current instant k and the instant when the last packet sent by node j was received at node i. This delay includes the effects of sampling, communication delay, and packet dropouts. The number of consecutive packet dropouts and the network-induced delays are assumed to be bounded, so τ_ij(k) is also bounded.

The Markov chain is a discrete-time stochastic process with the Markov property. One way to model the delays is to use finite-state Markov chains, as in Refs. [7–9]. The main advantage of the Markov model is that the dependencies between delays are taken into account, since in real networks the current time delays are usually related to the previous ones [8]. In this note, the delays τ_ij(k) (i ∈ υ, j ∈ N_i) are modeled as l different Markov chains that take values in W = {0, 1, …, τ_M}, with transition probability matrices Λ_q = [λ_q^{rs}], q = 1, 2, …, l. That is, τ_q(k) jumps from mode r to mode s with probability λ_q^{rs}:

\[
\lambda_q^{rs} = \Pr\bigl\{\tau_q(k+1) = s \mid \tau_q(k) = r\bigr\} \tag{3}
\]

where λ_q^{rs} ≥ 0 and ∑_{s=0}^{τ_M} λ_q^{rs} = 1 for all r, s ∈ W.
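A delay chain of this kind is straightforward to simulate. The sketch below assumes τ_M = 2 and an illustrative transition matrix Λ_q (not one from the chapter):

```python
import random

# One delay chain tau_q(k) on W = {0, 1, ..., tau_M}, here tau_M = 2, driven by
# a row-stochastic transition matrix: Lambda_q[r][s] = Pr{tau_q(k+1)=s | tau_q(k)=r}.
Lambda_q = [[0.6, 0.3, 0.1],
            [0.4, 0.4, 0.2],
            [0.5, 0.3, 0.2]]  # illustrative values

def simulate_delays(Lmb, tau0, steps, rng=random.Random(0)):
    """Draw a sample path tau_q(0), ..., tau_q(steps) of the Markov delay chain."""
    tau, seq = tau0, [tau0]
    for _ in range(steps):
        tau = rng.choices(range(len(Lmb)), weights=Lmb[tau])[0]
        seq.append(tau)
    return seq

seq = simulate_delays(Lambda_q, tau0=0, steps=200)
assert all(0 <= t <= 2 for t in seq)  # the delay never leaves W
```

Each of the l links would run its own chain with its own Λ_q, matching Remark 1 below.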

Remark 1: In real networks, the network-induced delays are difficult to measure, so using a stochastic process to model them is more practical. For sensor networks, the communication links between different pairs of nodes differ, so the data may experience different time delays. It is therefore more reasonable to model the delays by different Markov chains.

2.2. Observation error system

The structure of the observers described in the following is inspired by that given in Ref. [6]. To estimate the state of the plant, every node is assumed to run an estimator of the plant's state as:

\[
\hat{x}_i(k+1) = A\hat{x}_i(k) + M_i\bigl(y_i(k) - \hat{y}_i(k)\bigr) + \sum_{j \in N_i} N_{ij}\Bigl(\hat{y}_{ij}\bigl(k - \tau_{ij}(k)\bigr) - C_{ij}\hat{x}_i\bigl(k - \tau_{ij}(k)\bigr)\Bigr) \tag{4}
\]
\[
\hat{y}_i(k) = C_i\hat{x}_i(k) \tag{5}
\]

The observers' dynamics are based both on local Luenberger-like correction terms weighted with the matrices M_i and on consensus terms with weighting matrices N_ij, which take into account the (delayed) information received from the neighboring nodes.
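A minimal sketch of one such update for a hypothetical two-state plant with a single neighbor link (the matrices A, C_i, C_ij, M_i, N_ij below are illustrative placeholders, not designed gains; the delayed neighbor output ŷ_ij(k−τ) arrives as an argument):

```python
import numpy as np

# Hypothetical plant and gains: M_i weights the local Luenberger correction,
# N_ij weights the consensus correction built from delayed neighbor information.
A   = np.array([[1.0, 0.1], [0.0, 1.0]])
Ci  = np.array([[1.0, 0.0]])   # local output matrix of observer i
Cij = np.array([[0.0, 1.0]])   # shared output matrix for link (i, j)
Mi  = np.array([[0.5], [0.1]])
Nij = np.array([[0.0], [0.3]])

def observer_step(xhat_i, y_i, yhat_ij_delayed, xhat_i_delayed):
    """One update of observer i: Luenberger term + consensus term on link (i, j)."""
    luenberger = Mi @ (y_i - Ci @ xhat_i)
    consensus  = Nij @ (yhat_ij_delayed - Cij @ xhat_i_delayed)
    return A @ xhat_i + luenberger + consensus
```

Note that the consensus term compares the neighbor's delayed estimated output against the observer's own estimate at the same delayed instant, so both sides of the difference refer to time k − τ_ij(k).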

The observation error of observer i is defined as e_i(k) = x̂_i(k) − x(k). From Eqs. (1)–(5), the dynamics of the observation errors can be written as:

\[
e_i(k+1) = (A - M_iC_i)\,e_i(k) + \sum_{j \in N_i} N_{ij}C_{ij}\Bigl(e_j\bigl(k - \tau_{ij}(k)\bigr) - e_i\bigl(k - \tau_{ij}(k)\bigr)\Bigr) \tag{6}
\]
Define e(k) = [e_1^T(k), e_2^T(k), …, e_p^T(k)]^T and X(k) = [e^T(k), e^T(k−1), …, e^T(k−τ_M)]^T; then we have the observation error system:

\[
X(k+1) = \Phi\bigl(N, \tau_1(k), \dots, \tau_l(k)\bigr)\, X(k) \tag{7}
\]

with

\[
\Phi\bigl(N, \tau_1(k), \dots, \tau_l(k)\bigr) =
\begin{bmatrix}
A_D & 0 & \cdots & 0 \\
I & 0 & \cdots & 0 \\
\vdots & \ddots & & \vdots \\
0 & \cdots & I & 0
\end{bmatrix}
+ \sum_{q=1}^{l} \Pi_q\bigl(\tau_q(k)\bigr), \qquad
A_D = \operatorname{diag}\bigl(A - M_1C_1, \dots, A - M_pC_p\bigr) \tag{8}
\]

where Π_q(τ_q(k)) places the block matrix Π_q in the block column corresponding to the current delay τ_q(k) of link q. The Π_q are block matrices in correspondence with each of the links q connecting observer i with j, in which the only nonzero blocks are −N_ijC_ij and N_ijC_ij in the (i, i) and (i, j) positions, respectively. M = {M_i, i ∈ υ} and N = {N_ij, i ∈ υ, j ∈ N_i} are the observer matrices to be designed.

Remark 2: The observation error system (Eq. (7)) depends on the delays τ_1(k), …, τ_l(k). This makes the analysis and design more challenging. The objective of this note is to design the observers to guarantee the stochastic stability of Eq. (7).

Definition 1 [7]: The system in Eq. (7) is stochastically stable if, for every finite X_0 = X(0) and every initial mode τ_1(0), …, τ_l(0) ∈ W, there exists a finite Z > 0 such that the following holds:

\[
\mathbb{E}\Bigl\{\sum_{k=0}^{\infty} \|X(k)\|^2 \,\Big|\, X_0, \tau_1(0), \dots, \tau_l(0)\Bigr\} < X_0^T\, Z\, X_0 \tag{9}
\]
3. Observers’ design

In this section, we first derive the sufficient and necessary conditions that guarantee the stochastic stability of system Eq. (7) in the sense of Definition 1. For ease of presentation, when the system's delays are

\[
\tau_1(k) = r_1, \quad \tau_2(k) = r_2, \quad \dots, \quad \tau_l(k) = r_l \tag{10}
\]

we denote Φ(N, τ_1(k), …, τ_l(k)) as Φ(N, r_1, …, r_l).

Theorem 1: Under the observer (Eqs. (4) and (5)), the observation error system Eq. (7) is stochastically stable if and only if there exists a symmetric P(r_1, r_2, …, r_l) > 0 such that the following matrix inequality

\[
L(r_1, \dots, r_l) \triangleq \sum_{s_1=0}^{\tau_M} \cdots \sum_{s_l=0}^{\tau_M} \lambda_1^{r_1 s_1} \cdots \lambda_l^{r_l s_l}\, \Phi^T(N, r_1, \dots, r_l)\, P(s_1, \dots, s_l)\, \Phi(N, r_1, \dots, r_l) - P(r_1, \dots, r_l) < 0 \tag{11}
\]

holds for all r_1, r_2, …, r_l ∈ W.
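Once Φ and candidate matrices P are fixed, the condition of Theorem 1 is easy to check numerically. The sketch below does this for a single delay chain (l = 1, so L(r) = Σ_s λ^{rs} Φ_r^T P_s Φ_r − P_r) with illustrative data, not the chapter's example:

```python
import numpy as np

# Mode-dependent dynamics Phi_r, transition matrix Lam, and candidate P_s
# (all illustrative).
Phi = [np.array([[0.5, 0.1], [0.0, 0.6]]),
       np.array([[0.4, 0.0], [0.2, 0.5]])]
Lam = np.array([[0.8, 0.2], [0.3, 0.7]])
P   = [np.eye(2), 2.0 * np.eye(2)]

def theorem1_holds(Phi, Lam, P):
    """Check L(r) = sum_s Lam[r,s] Phi_r^T P_s Phi_r - P_r < 0 for every mode r."""
    for r in range(len(Phi)):
        L = sum(Lam[r, s] * Phi[r].T @ P[s] @ Phi[r] for s in range(len(P))) - P[r]
        if np.max(np.linalg.eigvalsh(L)) >= 0:   # L is symmetric
            return False
    return True

print(theorem1_holds(Phi, Lam, P))
```

For l > 1 the same loop runs over tuples (r_1, …, r_l) ∈ W^l, with the product of transition probabilities weighting each summand.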

Proof: Sufficiency: For the system Eq. (7), construct the Lyapunov function

\[
V(X(k), k) = X^T(k)\, P\bigl(\tau_1(k), \dots, \tau_l(k)\bigr)\, X(k) \tag{12}
\]
Calculating the difference of V(X(k), k) along system Eq. (7) and taking the mathematical expectation, we have

\[
\mathbb{E}\{\Delta V(X(k), k)\} = \mathbb{E}\bigl\{X^T(k+1)\, P\bigl(\tau_1(k+1), \dots, \tau_l(k+1)\bigr)\, X(k+1) \,\big|\, X(k), \tau_1(k), \dots, \tau_l(k)\bigr\} - X^T(k)\, P\bigl(\tau_1(k), \dots, \tau_l(k)\bigr)\, X(k) \tag{13}
\]
Define τ_1(k+1) = s_1, …, τ_l(k+1) = s_l. To evaluate the first term in Eq. (13), we apply the transition probability matrices for the jumps τ_1(k) → τ_1(k+1), …, τ_l(k) → τ_l(k+1), which are Λ_q, q = 1, 2, …, l. Then, Eq. (13) can be evaluated as

\[
\mathbb{E}\{\Delta V(X(k), k)\} = X^T(k)\Bigl[\sum_{s_1=0}^{\tau_M} \cdots \sum_{s_l=0}^{\tau_M} \lambda_1^{r_1 s_1} \cdots \lambda_l^{r_l s_l}\, \Phi^T(N, r_1, \dots, r_l)\, P(s_1, \dots, s_l)\, \Phi(N, r_1, \dots, r_l) - P(r_1, \dots, r_l)\Bigr] X(k) = X^T(k)\, L(r_1, \dots, r_l)\, X(k) \tag{14}
\]
Thus, if L(r_1, r_2, …, r_l) < 0, then

\[
\mathbb{E}\{\Delta V(X(k), k)\} = X^T(k)\, L(r_1, \dots, r_l)\, X(k) \le -\lambda_{\min}\bigl(-L(r_1, \dots, r_l)\bigr)\, X^T(k)\, X(k) \le -\beta\, \|X(k)\|^2 \tag{15}
\]

where β = inf{λ_min(−L(r_1, …, r_l))} > 0. From Eq. (15), we can see that for any T ≥ 1

\[
\mathbb{E}\{V(X(T+1), T+1)\} - V(X(0), 0) \le -\beta\, \mathbb{E}\Bigl\{\sum_{k=0}^{T} \|X(k)\|^2\Bigr\} \tag{16}
\]
Then we have

\[
\mathbb{E}\Bigl\{\sum_{k=0}^{T} \|X(k)\|^2 \,\Big|\, X_0, \tau_1(0), \dots, \tau_l(0)\Bigr\} \le \frac{1}{\beta}\Bigl(V(X(0), 0) - \mathbb{E}\{V(X(T+1), T+1)\}\Bigr) \le \frac{1}{\beta}\, X_0^T\, P\bigl(\tau_1(0), \dots, \tau_l(0)\bigr)\, X_0 \tag{17}
\]

Letting T → ∞, the bound in Definition 1 holds with Z = (1/β) P(τ_1(0), …, τ_l(0)).
According to Definition 1, the observation error system Eq. (7) is stochastically stable.
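Definition 1 can also be probed empirically. For a toy scalar jump system with two delay modes (illustrative numbers, not the chapter's system), the cumulative energy Σ_k ‖X(k)‖², averaged over sample paths, stays bounded when every mode is contracting:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy scalar jump system X(k+1) = Phi[tau(k)] X(k) with a two-state Markov delay.
Phi = {0: 0.5, 1: 0.9}                     # mode-dependent dynamics (illustrative)
Lam = np.array([[0.7, 0.3], [0.4, 0.6]])   # delay transition matrix

def cumulative_energy(x0, tau0, horizon=200):
    """Sample sum_{k=0}^{horizon-1} ||X(k)||^2 along one random delay path."""
    x, tau, total = x0, tau0, 0.0
    for _ in range(horizon):
        total += x * x
        x = Phi[tau] * x
        tau = rng.choice(2, p=Lam[tau])
    return total

# A bounded average over many runs is consistent with stochastic stability.
avg = np.mean([cumulative_energy(1.0, 0) for _ in range(500)])
assert np.isfinite(avg) and avg < 50.0
```

Here both modes satisfy |Φ_r| < 1, so every path's energy is bounded by a geometric series; the Monte Carlo average merely illustrates the expectation in Eq. (9).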

Necessity: For necessity, we need to show that if the system Eq. (7) is stochastically stable, then there exists a symmetric P(r_1, …, r_l) > 0 such that Eq. (11) holds. It suffices to prove that for any bounded Q(τ_1(k), …, τ_l(k)) > 0, there exists a set of P(τ_1(k), …, τ_l(k)) such that

\[
\Phi^T\bigl(N, \tau_1(k), \dots, \tau_l(k)\bigr)\, \mathbb{E}\bigl\{P\bigl(\tau_1(k+1), \dots, \tau_l(k+1)\bigr)\bigr\}\, \Phi\bigl(N, \tau_1(k), \dots, \tau_l(k)\bigr) - P\bigl(\tau_1(k), \dots, \tau_l(k)\bigr) = -Q\bigl(\tau_1(k), \dots, \tau_l(k)\bigr) \tag{18}
\]

To this end, define

\[
X^T(t)\, \tilde{P}\bigl(T - t, \tau_1(t), \dots, \tau_l(t)\bigr)\, X(t) = \mathbb{E}\Bigl\{\sum_{k=t}^{T} X^T(k)\, Q\bigl(\tau_1(k), \dots, \tau_l(k)\bigr)\, X(k) \,\Big|\, X_t, \tau_1(t), \dots, \tau_l(t)\Bigr\} \tag{19}
\]

Assuming that X(k) ≠ 0, since Q(τ_1(k), …, τ_l(k)) > 0, as T increases, X^T(t) P̃(T−t, τ_1(t), …, τ_l(t)) X(t) is monotonically increasing, or else it increases monotonically until E{X^T(k) Q(τ_1(k), …, τ_l(k)) X(k) | X_t, τ_1(t), …, τ_l(t)} = 0 for all k ≥ k_1 ≥ t. From Eq. (9), X^T(t) P̃(T−t, τ_1(t), …, τ_l(t)) X(t) is bounded. Furthermore, its limit exists:

\[
\lim_{T \to \infty} X^T(t)\, \tilde{P}\bigl(T - t, r_1, \dots, r_l\bigr)\, X(t) = X^T(t)\, P(r_1, \dots, r_l)\, X(t) \tag{20}
\]
Since this is valid for any X(t), we have

\[
\lim_{T \to \infty} \tilde{P}\bigl(T - t, r_1, \dots, r_l\bigr) = P(r_1, \dots, r_l) \tag{21}
\]
From Eq. (20), we obtain P(r_1, …, r_l) > 0 since Q(τ_1(k), …, τ_l(k)) > 0. Consider

\[
X^T(t)\, \tilde{P}\bigl(T - t, \tau_1(t), \dots, \tau_l(t)\bigr)\, X(t) = X^T(t)\, Q\bigl(\tau_1(t), \dots, \tau_l(t)\bigr)\, X(t) + \mathbb{E}\Bigl\{\sum_{k=t+1}^{T} X^T(k)\, Q\bigl(\tau_1(k), \dots, \tau_l(k)\bigr)\, X(k) \,\Big|\, X_t, \tau_1(t), \dots, \tau_l(t)\Bigr\} \tag{22}
\]
The second term in Eq. (22) equals

\[
\mathbb{E}\Bigl\{X^T(t+1)\, \tilde{P}\bigl(T - t - 1, \tau_1(t+1), \dots, \tau_l(t+1)\bigr)\, X(t+1) \,\Big|\, X_t, \tau_1(t), \dots, \tau_l(t)\Bigr\} = X^T(t)\, \Phi^T\bigl(N, \tau_1(t), \dots, \tau_l(t)\bigr)\, \mathbb{E}\bigl\{\tilde{P}\bigl(T - t - 1, \tau_1(t+1), \dots, \tau_l(t+1)\bigr)\bigr\}\, \Phi\bigl(N, \tau_1(t), \dots, \tau_l(t)\bigr)\, X(t) \tag{23}
\]
Substituting Eq. (23) into Eq. (22) gives rise to

\[
X^T(t)\, \tilde{P}\bigl(T - t, \tau_1(t), \dots, \tau_l(t)\bigr)\, X(t) = X^T(t)\Bigl[Q\bigl(\tau_1(t), \dots, \tau_l(t)\bigr) + \Phi^T\bigl(N, \tau_1(t), \dots, \tau_l(t)\bigr)\, \mathbb{E}\bigl\{\tilde{P}\bigl(T - t - 1, \tau_1(t+1), \dots, \tau_l(t+1)\bigr)\bigr\}\, \Phi\bigl(N, \tau_1(t), \dots, \tau_l(t)\bigr)\Bigr] X(t) \tag{24}
\]
Letting T and noticing Eq. (21), it is shown that Eq. (11) holds. This completes the proof.

As is clearly seen from Eq. (11), the matrix inequality to be solved in order to design the observers is nonlinear. To handle this, Proposition 1 gives equivalent LMI conditions with nonconvex constraints, which can be solved by several existing iterative LMI algorithms; the product reduction algorithm in Ref. [10] is employed to solve the following conditions.

Proposition 1: There exist observers (Eqs. (4) and (5)) such that the observation error system Eq. (7) is stochastically stable if and only if there exist matrices φ(M), ϕ_1(N, r_1), ϕ_2(N, r_2), …, ϕ_l(N, r_l), and symmetric matrices X̄(s_1, s_2, …, s_l) > 0, P(r_1, r_2, …, r_l) > 0, satisfying

\[
\begin{bmatrix}
-P(r_1, \dots, r_l) & \Xi^T(N, r_1, \dots, r_l) \\
\Xi(N, r_1, \dots, r_l) & -X(r_1, \dots, r_l)
\end{bmatrix} < 0 \tag{25}
\]

for all r_1, …, r_l ∈ W, with

\[
\Xi(N, r_1, \dots, r_l) = \operatorname{col}\Bigl\{\sqrt{\lambda_1^{r_1 s_1} \cdots \lambda_l^{r_l s_l}}\, \Phi(N, r_1, \dots, r_l)\Bigr\}_{(s_1, \dots, s_l) \in W^l}, \qquad X(r_1, \dots, r_l) = \operatorname{diag}\bigl\{\bar{X}(s_1, \dots, s_l)\bigr\}_{(s_1, \dots, s_l) \in W^l} \tag{26}
\]

and the nonconvex constraints X̄(s_1, …, s_l) = P(s_1, …, s_l)^{−1}.
Proof: Since X̄(s_1, …, s_l) > 0, we have X(r_1, …, r_l) > 0 by its construction. Applying the Schur complement, Eq. (25) is equivalent to

\[
\Xi^T(N, r_1, \dots, r_l)\, X^{-1}(r_1, \dots, r_l)\, \Xi(N, r_1, \dots, r_l) - P(r_1, \dots, r_l) < 0 \tag{27}
\]

Since X̄(s_1, …, s_l) = P(s_1, …, s_l)^{−1}, we can derive Eq. (11).

4. Numerical example

Consider a plant whose dynamics is given by:


Assume the network has two nodes and two links: one from node 1 to node 2, and the other from node 2 to node 1. The matrices are given as follows:


The random delays are assumed to satisfy τ_q(k) ∈ {0, 1} (q = 1, 2), and their transition probability matrices are given by


Figure 1 shows part of the simulation run of the delay τ2(k) governed by its transition probability matrix Λ2.


Figure 1.

The random delays τ2(k).

By using Proposition 1, we design the observers with the following matrices:


The initial values of the plant and the observers are x(0) = [2, 0.5]^T, x̂_1(0) = x̂_2(0) = [0, 0]^T, and x̂_1(−1) = x̂_2(−1) = [0, 0]^T. Figure 2 represents the evolution of the plant's states (solid lines) and the estimated states (dashed lines) for observer 2. It is observed that the estimates of the observers converge to the plant's states.
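Since the chapter's matrices appear in omitted displays, the experiment can only be re-created qualitatively. The sketch below uses hypothetical plant, output, and gain matrices together with two-mode delay chains to reproduce the structure of the simulation (two observers, delayed consensus exchange):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical matrices (NOT the chapter's): stable autonomous plant, two
# observers, cross links (1,2) and (2,1), delays tau_q(k) in {0, 1}.
A   = np.array([[0.95, 0.2], [0.0, 0.9]])
C   = [np.array([[1.0, 0.0]]), np.array([[0.0, 1.0]])]   # C_1, C_2
Cij = [np.array([[0.0, 1.0]]), np.array([[1.0, 0.0]])]   # C_12, C_21
M   = [np.array([[0.4], [0.0]]), np.array([[0.0], [0.4]])]
N   = [np.array([[0.0], [0.2]]), np.array([[0.2], [0.0]])]
Lam = np.array([[0.7, 0.3], [0.4, 0.6]])                 # same chain for both links

x  = np.array([[2.0], [0.5]])
xh = [np.zeros((2, 1)), np.zeros((2, 1))]
hist = [[xh[0].copy(), xh[0].copy()],                    # [x̂_i(k), x̂_i(k-1)]
        [xh[1].copy(), xh[1].copy()]]
tau = [0, 0]

for k in range(60):
    y = [C[i] @ x for i in range(2)]
    new = []
    for i, j in ((0, 1), (1, 0)):
        d = tau[i]                                       # current delay on link (i, j)
        cons = N[i] @ (Cij[i] @ hist[j][d] - Cij[i] @ hist[i][d])
        new.append(A @ xh[i] + M[i] @ (y[i] - C[i] @ xh[i]) + cons)
    x = A @ x
    for i in range(2):
        hist[i] = [new[i].copy(), hist[i][0]]
        xh[i] = new[i]
    tau = [rng.choice(2, p=Lam[t]) for t in tau]

print(np.linalg.norm(xh[1] - x))  # estimation error of observer 2 after 60 steps
```

With these placeholder gains the error matrices A − M_iC_i are Schur stable and the consensus coupling is weak, so the estimation errors decay despite the random delays, mirroring the convergence reported for Figure 2.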


Figure 2.

Evolution of the estimates for observer 2.

5. Conclusion

This chapter addresses the problem of distributed estimation under random network-induced delays and packet dropouts. The delays are modeled by Markov chains. The observers combine local Luenberger-like terms with consensus terms that weight the information received from neighboring nodes, and the resulting observation error system is a special discrete-time jump linear system. The sufficient and necessary conditions for its stochastic stability are derived in the form of a set of LMIs with nonconvex constraints. A simulation example verifies the effectiveness of the proposed design.


1 - R. Olfati‐Saber, “Distributed Kalman filter with embedded consensus filters,” in 44th Conference on Decision and Control and the European Control Conference, Seville, Spain, December 12–15, 2005, 8179–8184.
2 - R. Olfati‐Saber, “Distributed Kalman filtering for sensor networks,” in 46th Conference on Decision and Control, New Orleans, Louisiana, USA, December 2007, 5492–5498.
3 - M. Kamgarpour, C. Tomlin, “Convergence properties of a decentralized Kalman filter,” in 47th Conference on Decision and Control, Cancun, Mexico, December 2008, 3205–3210.
4 - J.M. Maestre, D. Munoz de la Pena, E.F. Camacho, “Wireless sensor network analysis through a coalitional game: application to a distributed Kalman filter,” in 8th IEEE International Networking, Sensing and Control, 2011, 227–232.
5 - M. Farina, G. Ferrari‐Trecate, R. Scattolini, Distributed moving horizon estimation for linear constrained systems, IEEE Transactions on Automatic Control, 55(11), 2462–2475, 2010.
6 - P. Millan, L. Orihuela, C. Vivas, et al., Distributed consensus‐based estimation considering network induced delays and dropouts, Automatica, 48(10), 2726–2729, 2012.
7 - J. Nilsson, Real‐Time Control Systems with Delays, Department of Automatic Control, Lund Institute of Technology, vol. 1049, 138 p, 1998.
8 - L. Zhang, Y. Shi, T. Chen, et al., A new method for stabilization of networked control systems with random delays, IEEE Transactions on Automatic Control, 50(8), 1177–1181, 2005.
9 - L. Zhang, B. Huang, J. Lam, H∞ model reduction of Markovian jump linear systems, Systems and Control Letters, 50(2), 103–118, 2003.
10 - L. Yan, X. Zhang, Z. Zhang, Y. Yang, Distributed state estimation in sensor networks with event‐triggered communication, Nonlinear Dynamics, 76(1), 169–181, 2014.
11 - S. Yang, Y. Bo, Output feedback stabilization of networked control systems with random delays modeled by Markov chains, IEEE Transactions on Automatic Control, 54(7), 1668–1674, 2009.