Distributed Consensus-Based Estimation with Random Delays

In this chapter we investigate the distributed estimation of linear time-invariant systems with network-induced delays and packet dropouts. The methodology is based on local Luenberger-like observers combined with consensus strategies. Only neighbors are allowed to communicate, and the random network-induced delays are modeled as Markov chains. Necessary and sufficient conditions for the stochastic stability of the observation error system are then established, and the design problem is solved via an iterative linear matrix inequality approach. Simulation examples illustrate the effectiveness of the proposed method.


Introduction
The convergence of sensing, computing, and communication in low-cost, low-power devices is enabling a revolution in the way we interact with the physical world. Technological advances in wireless communication make it possible to integrate many such devices into flexible, robust, and easily configurable wireless sensor networks (WSNs). This chapter is devoted to the estimation problem in such networks.
Since sensor networks are usually large-scale systems, centralization is difficult and costly due to the large communication overhead it entails. Therefore, one must employ distributed or decentralized estimation techniques. Conventional decentralized estimation schemes involve all-to-all communication [1]; distributed schemes seem to fit better. In this class of schemes, the system is divided into several smaller subsystems, each governed by a different agent, which may or may not share information with the rest. There exists a vast literature that studies distributed estimation for sensor networks in which the dynamics induced by the communication network (mainly time-varying delays and data losses) are taken into account [2][3][4][5][6][7][8][9][10]. Millan et al. [6] have studied the distributed state estimation problem for a class of linear time-invariant systems over sensor networks subject to network-induced delays, which are assumed to take values in [0, τ_M].
One of the main constraints is the network-induced time delay, which can degrade performance or even cause instability. Various methodologies have been proposed for the modeling and stability analysis of networked systems in the presence of network-induced time delays and packet dropouts. Markov chains can be used effectively to model the network-induced time delays in sensor networks. In Ref. [11], the time delays of networked control systems are modeled by Markov chains, and an output feedback controller design method is proposed.
The rest of the chapter is organized as follows. In Section 2, we analyze the available delay information and formulate the observer design problem. In Section 3, necessary and sufficient conditions guaranteeing stochastic stability are presented first, and then equivalent LMI conditions with constraints are derived. Simulation examples illustrating the effectiveness of the proposed method are given in Section 4.
Notation: Consider a network with p sensors. Let υ = {1, 2, ⋯, p} be the index set of the p sensor nodes and ε ⊂ υ × υ be the link set of paired sensor nodes. Then the directed graph G = (υ, ε) represents the sensing topology. A link (i, j) means that node i receives information from node j. The cardinality of ε is l. Define q = g(i, j) as the link index, q ∈ {1, ⋯, l}. N_i = {j ∈ υ | (i, j) ∈ ε} denotes the subset of nodes communicating with node i.
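The notation above amounts to simple bookkeeping that can be sketched in a few lines of Python; the two-link edge list below is a hypothetical example chosen only for illustration, not taken from the chapter:

```python
# Sketch: representing the sensing topology G = (v, e) and neighbor sets N_i.
# The edge list below is an illustrative assumption.

p = 2                                    # number of sensor nodes
edges = [(1, 2), (2, 1)]                 # link (i, j): node i receives from node j
l = len(edges)                           # cardinality of the link set

# Link index q = g(i, j), here simply the position of (i, j) in the edge list.
g = {link: q for q, link in enumerate(edges, start=1)}

# N_i = {j : (i, j) in edges}: the nodes that communicate to node i.
N = {i: [j for (a, j) in edges if a == i] for i in range(1, p + 1)}

print(g)   # {(1, 2): 1, (2, 1): 2}
print(N)   # {1: [2], 2: [1]}
```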

Problem formulation
Assume a sensor network is intended to collectively estimate the state of a linear plant in a distributed way. Every observer computes a local estimate of the plant's state based on local measurements and the information received from neighboring nodes. Observers periodically sample some outputs of the plant and broadcast information about their own estimates. Since this information is transmitted through the network, network-induced time delays and dropouts may occur.
In this work, the system to be observed is assumed to be an autonomous linear time-invariant plant given by

x(k+1) = A x(k),
y_i(k) = C_i x(k), i = 1, ⋯, p,

where x(k) ∈ R^n is the state of the plant, y_i(k) ∈ R^{m_i} is the output measured by node i, and p is the number of observers. Assume (A, C) is observable, where C = [C_1, ⋯, C_p].
Besides the system's output y_i(k), observer i receives the estimated outputs ŷ_ij(k) = C_ij x̂_j(k) from each neighbor j ∈ N_i. The matrix C_ij is assumed to be known to both nodes. Define C̄_i as the matrix stacking the matrix C_i and the matrices C_ij for all j ∈ N_i. It is assumed that (A, C̄_i) is observable for all i.
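A minimal sketch of the signals involved in one sampling instant follows; the matrices A, C_1, C_12 and the state values are assumptions chosen only to show the shapes of y_i(k) and ŷ_ij(k):

```python
import numpy as np

# Illustrative (assumed) plant and output matrices for a two-node network.
A = np.array([[0.9, 0.1], [0.0, 0.8]])   # autonomous plant: x(k+1) = A x(k)
C1 = np.array([[1.0, 0.0]])              # local output matrix of node 1
C12 = np.array([[0.0, 1.0]])             # output that node 2 shares with node 1

x = np.array([1.0, -1.0])                # plant state x(k)
xhat2 = np.array([0.5, -0.5])            # node 2's current estimate of x(k)

x_next = A @ x                           # the plant evolves autonomously
y1 = C1 @ x                              # measurement available at node 1
yhat12 = C12 @ xhat2                     # estimated output node 1 receives from node 2
```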

Delays modeled by Markov chains
The communication links between neighbors may be affected by delays and/or packet dropouts. The equivalent delay τ_ij(k) ∈ N (or τ_q(k), with q = g(i, j) ∈ {1, ⋯, l}) represents the time difference between the current time instant k and the instant when the last packet sent by j was received at node i. This delay includes the effects of sampling, communication delay, and packet dropouts. The number of consecutive packet dropouts and the network-induced delays are assumed to be bounded, so τ_ij(k) is also bounded.
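How dropouts fold into the equivalent delay can be illustrated with a toy bookkeeping function (hypothetical, for illustration only): the delay is simply the age of the freshest packet held at node i, and a dropped packet increases that age by one step.

```python
# Sketch: the equivalent delay tau_ij(k) is the age of the freshest packet
# from neighbor j available at node i; dropped packets never update it.
def equivalent_delay(k, last_received_sample_time):
    """Time difference between the current instant k and the sample time of
    the last packet received from neighbor j (hypothetical bookkeeping)."""
    return k - last_received_sample_time

# A packet sampled at k=3 arrives after a 2-step delay; the packets sent at
# k=4 and k=5 are lost, so the stored sample simply keeps aging.
assert equivalent_delay(5, 3) == 2   # at k=5 we still hold the k=3 sample
assert equivalent_delay(6, 3) == 3   # each dropout increases the age by one
```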
A Markov chain is a discrete-time stochastic process with the Markov property. One way to model the delays is to use finite-state Markov chains, as in Refs. [7][8][9]. The main advantage of the Markov model is that the dependencies between delays are taken into account, since in real networks the current time delays are usually related to the previous ones [8]. In this note, the delays τ_ij(k) (∀i, j ∈ N_i) are modeled as l different Markov chains that take values in W = {0, 1, ⋯, τ_M}, and their transition probability matrices are Λ_q = [λ_qrs], q = 1, 2, ⋯, l. That is, τ_ij(k) jumps from mode r to mode s with probability

λ_qrs = Pr{τ_q(k+1) = s | τ_q(k) = r},

where λ_qrs ≥ 0 for all r, s ∈ W and ∑_{s=0}^{τ_M} λ_qrs = 1 for all r ∈ W.
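A delay chain of this kind is straightforward to simulate. The sketch below draws a sequence τ_q(k) from an assumed 2×2 transition matrix (i.e., τ_M = 1); the matrix entries are illustrative, not taken from the chapter:

```python
import numpy as np

# Sketch: simulating one delay process tau_q(k) as a finite-state Markov
# chain on W = {0, 1, ..., tau_M}. The transition matrix is an assumption.
rng = np.random.default_rng(0)

Lam = np.array([[0.7, 0.3],     # row r holds the probabilities of jumping r -> s
                [0.4, 0.6]])
assert np.allclose(Lam.sum(axis=1), 1.0)   # each row must sum to one

def simulate_delays(Lam, steps, tau0=0, rng=rng):
    """Draw a delay sequence tau(0), ..., tau(steps-1) from the chain."""
    tau = tau0
    seq = [tau]
    for _ in range(steps - 1):
        tau = int(rng.choice(len(Lam), p=Lam[tau]))   # jump r -> s w.p. Lam[r, s]
        seq.append(tau)
    return seq

print(simulate_delays(Lam, 10))
```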
Remark 1: In real networks, the network-induced delays are difficult to measure, so using a stochastic process to model them is more practical. In sensor networks, the communication links between different pairs of nodes also differ, so the data may experience different time delays. It is therefore more reasonable to model the delays by different Markov chains.

Observation error system
The structure of the observers described in the following is inspired by that given in Ref. [6].
To estimate the state of the plant, every node i is assumed to run an estimator of the plant's state of the form

x̂_i(k+1) = A x̂_i(k) + M_i (y_i(k) − C_i x̂_i(k)) + ∑_{j∈N_i} N_ij (ŷ_ij(k − τ_ij(k)) − C_ij x̂_i(k − τ_ij(k))).

The observers' dynamics are thus based on local Luenberger-like observers weighted with the matrices M_i, and on consensus terms with weighting matrices N_ij, which take into account the information received from the neighboring nodes.
The observation error of observer i is defined as e_i(k) = x̂_i(k) − x(k). From Eqs. (1)-(5), the dynamics of the observation errors can be written as

e_i(k+1) = (A − M_i C_i) e_i(k) + ∑_{j∈N_i} N_ij C_ij (e_j(k − τ_ij(k)) − e_i(k − τ_ij(k))).

Define e(k) = [e_1^T(k) e_2^T(k) ⋯ e_p^T(k)]^T and X(k) = [e^T(k) e^T(k−1) ⋯ e^T(k−τ_M)]^T; then we have the observation error system

e(k+1) = Ā e(k) + ∑_{q=1}^{l} Π_q e(k − τ_q(k)),  (7)

where Ā = diag(A − M_1 C_1, ⋯, A − M_p C_p) and the Π_q are block matrices in correspondence with each of the links q connecting observer i with j, in which the only nonzero blocks are −N_ij C_ij and N_ij C_ij in the (i, i) and (i, j) positions, respectively. M = {M_i, i ∈ υ} and N = {N_ij, i ∈ υ, j ∈ N_i} are the observer matrices to be designed.
Remark 2: The observation error system (Eq. (7)) depends on the delays τ 1 ðkÞ, ⋯, τ l ðkÞ. This makes the analysis and design more challenging. The objective of this note is to design the observers to guarantee the stochastic stability of Eq. (7).
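One step of an observer of this structure can be sketched as follows; the gains M_1, N_12 and the output matrices below are illustrative assumptions, not a designed solution of the LMI problem:

```python
import numpy as np

# Sketch of one update of observer 1 (Luenberger correction plus delayed
# consensus term); all matrices are illustrative assumptions.
A = np.array([[0.9, 0.1], [0.0, 0.8]])
C1 = np.array([[1.0, 0.0]])
C12 = np.array([[0.0, 1.0]])
M1 = np.array([[0.5], [0.1]])            # Luenberger gain of node 1
N12 = np.array([[0.2], [0.2]])           # consensus gain on link (1, 2)

def observer_step(xhat_i, y_i, yhat_ij_delayed, xhat_i_delayed):
    """x_hat_i(k+1) = A x_hat_i + M_i (y_i - C_i x_hat_i)
                    + N_ij (y_hat_ij(k - tau) - C_ij x_hat_i(k - tau))."""
    innovation = y_i - C1 @ xhat_i                     # local output error
    consensus = yhat_ij_delayed - C12 @ xhat_i_delayed # delayed neighbor term
    return A @ xhat_i + M1 @ innovation + N12 @ consensus
```

Note how the consensus term compares the delayed neighbor output with the node's own delayed estimate, which is exactly what produces the delayed-error terms in Eq. (7).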

Numerical example
Consider a plant whose dynamics are given by: . Assume the network has two nodes and two links: one from node 1 to node 2, and the other from node 2 to node 1. The matrices are given as follows: . The random delays are assumed to satisfy τ_q(k) ∈ {0, 1} (q = 1, 2), and their transition probability matrices are given by: . Figure 1 shows part of a simulation run of the delay τ_2(k) governed by its transition probability matrix Λ_2.
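Under hypothetical scalar dynamics with the structure of Eq. (7) (all numerical values below are assumptions for illustration, not the chapter's example data), a short simulation shows the observation error decaying despite the randomly switching delays:

```python
import numpy as np

# Sketch: simulating scalar observation-error dynamics for a two-node
# network with random delays tau_q(k) in {0, 1}.
rng = np.random.default_rng(1)

a1, a2 = 0.5, 0.6          # closed-loop terms A - M_1 C_1 and A - M_2 C_2
n12, n21 = 0.1, 0.1        # consensus couplings N_12 C_12 and N_21 C_21

Lam = np.array([[0.7, 0.3],     # assumed transition matrix, shared by both chains
                [0.4, 0.6]])
tau = [0, 0]                    # current modes of the two delay chains

e = {-1: np.array([1.0, -1.0]), 0: np.array([1.0, -1.0])}  # initial errors
for k in range(0, 50):
    tau = [int(rng.choice(2, p=Lam[t])) for t in tau]      # both chains jump
    d1, d2 = e[k - tau[0]], e[k - tau[1]]                  # delayed error vectors
    e1 = a1 * e[k][0] + n12 * (d1[1] - d1[0])              # link (1, 2)
    e2 = a2 * e[k][1] + n21 * (d2[0] - d2[1])              # link (2, 1)
    e[k + 1] = np.array([e1, e2])

print(np.abs(e[50]))            # the error magnitudes decay toward zero
```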

Conclusion
This chapter addresses the problem of distributed estimation under random network-induced delays and packet dropouts. The delays are modeled by Markov chains. The observers are based on local Luenberger-like observers and consensus terms that weight the information received from neighboring nodes. The resulting observation error system is a special discrete-time jump linear system. Necessary and sufficient conditions for the stochastic stability of the observation error system are derived in the form of a set of LMIs with nonconvex constraints. Simulation examples verify the effectiveness of the proposed design.