Kalman filtering in its distributed information form is reviewed and applied to a network of receivers tracking Global Navigation Satellite Systems (GNSS). We show, by employing consensus-based data-fusion rules between GNSS receivers, how the consensus-based Kalman filter (CKF) of individual receivers can deliver GNSS parameter solutions whose precision is comparable to that of their network-derived, fusion-center-dependent counterparts. This is relevant as the proliferation of low-cost receivers will, in the near future, give rise to a significant increase in the number of GNSS users. With the CKF or other distributed filtering techniques, GNSS users can therefore achieve high-precision solutions without having to rely on a centralized computing center.
*Address all correspondence to: amir.khodabandeh@curtin.edu.au
1. Introduction
Kalman filtering in its decentralized and distributed forms has received increasing attention in the sensor network community and has been extensively studied in recent years, see e.g. [1, 2, 3, 4, 5, 6, 7, 8]. While in the traditional centralized Kalman filter setup all sensor nodes have to send their measurements to a computing (fusion) center to obtain the state estimate, in distributed filtering schemes the nodes share limited information only with their neighboring nodes (i.e. a subset of all other nodes) and yet obtain state estimates that are comparable to those of the centralized filter in the minimum-mean-squared-error sense. This particular feature of distributed filters can make the data communication between the nodes cost-effective and exploit the nodes' capacity to perform parallel computations.
Next to sensor networks, distributed filtering can therefore benefit several other applications such as formation flying of aerial vehicles [9], cooperative robotics [10] and disciplines that concern the Global Navigation Satellite Systems (GNSS). The latter is the topic of the present contribution. GNSS have proven to be an efficient tool for the determination of time-varying parameters that are of importance to Earth-science disciplines such as positioning, deformation, timing and atmospheric studies [11, 12]. Parameter estimation in GNSS often relies on the data processing of a network of receivers that collect measurements from visible GNSS satellites. In the context of sensor networks, GNSS network receivers therefore serve as sensor nodes, providing their data to a computing center that computes network-based parameter solutions in a (near) real-time manner. In this contribution we intend to demonstrate how consensus algorithms [13] and the corresponding consensus-based Kalman filter (CKF), as a popular means for distributed filtering, can take an important role in GNSS applications for which a network of receivers is to be processed. Although each single receiver can run its own local filter to deliver GNSS-derived solutions, the precision of such single-receiver solutions is generally much lower than that of their network-derived counterparts, see e.g. [14, 15]. It will be shown, through a CKF setup, that single-receiver parameter solutions can achieve precision performances similar to those of their network-based versions, provided that a sufficient number of iterative communications between the neighboring receivers are established. The importance of such consensus-based single-receiver solutions is well appreciated in the light of the recent development of new GNSS constellations as well as the proliferation of low-cost mass-market receivers [16, 17, 18]. With the increase in the number and types of GNSS receivers, many more GNSS users can establish their own measurement setup to determine parameters that suit their needs. By taking recourse to the CKF or other distributed filtering techniques, GNSS users can therefore potentially deliver high-precision parameter solutions without the need for a computing center.
The structure of this contribution is as follows. We first briefly review the principles of the standard Kalman filter and its information form in Section 2. The additivity property of the information filter that makes it particularly useful for distributed processing is also highlighted. In Section 3 we discuss the average-consensus rules by which the sensor nodes agree to fuse each other's information. Different consensus protocols are discussed and a 'probabilistic' measure for the evaluation of their convergence rates is proposed. Section 4 is devoted to the CKF algorithmic steps. Its two-time-scale nature is remarked upon and a three-step recursion for evaluating the consensus-based error variance matrix is developed. In Section 5 we apply the CKF theory to a small-scale network of GNSS receivers collecting ionospheric observables over time. Conducting a precision analysis, we compare the precision of the network-based ionospheric solutions with that of their single-receiver and consensus-based counterparts. It is shown how the CKF of each receiver responds to an increase in the number of iterative communications between the neighboring nodes. Concluding remarks and a future outlook are provided in Section 6.
2. The Kalman filter and its information form

Consider a time series of observable random vectors $y_1,\ldots,y_t$. The goal is to predict the unobservable random state-vectors $x_1,\ldots,x_t$. By the term 'prediction', we mean that the observables $y_1,\ldots,y_t$ are used to estimate realizations of the random vectors $x_1,\ldots,x_t$. Accordingly, the means of the state-vectors $x_1,\ldots,x_t$ can be known, while their unknown realizations still need to be guessed (predicted) through observed realizations of $y_1,\ldots,y_t$. In the following, to show on which set of observables prediction is based, we use the notation $\hat x_{t|\tau}$ for the predictor of $x_t$ when based on $y^\tau = (y_1^T,\ldots,y_\tau^T)^T$. The expectation, covariance and dispersion operators are denoted by $E(\cdot)$, $C(\cdot,\cdot)$ and $D(\cdot)$, respectively. The capital $Q$ is reserved for (co)variance matrices. Thus $C(x_t, y^\tau) = Q_{x_t y^\tau}$.
2.1. The Kalman filter standard assumptions
To predict the state-vectors in an optimal sense, one often uses the minimum mean squared error (MMSE) principle as the optimality criterion, see e.g. [19, 20, 21, 22, 23, 24, 25]. In case no restrictions are placed on the class of predictors, the MMSE predictor $\hat x_{t|\tau}$ is given by the conditional mean $E(x_t|y^\tau)$, known as the Best Predictor (BP). The BP is unbiased, but generally nonlinear, with exceptions, for instance, in the Gaussian case. In case $x_t$ and $y^\tau$ are jointly Gaussian, the BP becomes linear and identical to its linear counterpart, i.e. the Best Linear Predictor (BLP)

$\hat x_{t|\tau} = E(x_t) + Q_{x_t y^\tau}\, Q_{y^\tau y^\tau}^{-1}\,\bigl(y^\tau - E(y^\tau)\bigr) \qquad (1)$

Eq. (1) implies that (1) the BLP is unbiased, i.e. $E(\hat x_{t|\tau}) = E(x_t)$, and that (2) the prediction error of a BLP is always uncorrelated with the observables on which the BLP is based, i.e. $C(x_t - \hat x_{t|\tau},\, y^\tau) = 0$. These two basic properties can alternatively be used to uniquely specify a BLP [26].

The Kalman filter is a recursive BP (Gaussian case) or a recursive BLP. A recursive predictor, say $\hat x_{t|t}$, can be obtained from the previous predictor $\hat x_{t|t-1}$ and the newly collected observable vector $y_t$. Recursive prediction is thus very suitable for applications that require real-time determination of temporally varying parameters. We now state the standard assumptions that make the Kalman filter recursion feasible.
The dynamic model: The linear dynamic model, describing the time-evolution of the state-vectors $x_t$, is given as

$x_t = \Phi_{t,t-1}\,x_{t-1} + d_t, \qquad t = 1,2,\ldots \qquad (2)$

with

$E(x_0) = x_{0|0}, \qquad D(x_0) = Q_{x_0 x_0} \qquad (3)$

and

$E(d_t) = 0, \qquad C(d_t, d_s) = S_t\,\delta_{t,s}, \qquad C(d_t, x_0) = 0 \qquad (4)$

for the time instances $t,s = 1,2,\ldots$, with $\delta_{t,s}$ being the Kronecker delta. The nonsingular matrix $\Phi_{t,t-1}$ denotes the transition matrix and the random vector $d_t$ is the system noise. The system noise $d_t$ is thus assumed to have a zero mean, to be uncorrelated in time and to be uncorrelated with the initial state-vector $x_0$. The transition matrix from epoch $s$ to $t$ is denoted by $\Phi_{t,s}$. Thus $\Phi_{t,s}^{-1} = \Phi_{s,t}$ and $\Phi_{t,t} = I$ (the identity matrix).

The measurement model: The link between the observables $y_t$ and the state-vectors $x_t$ is assumed given as

$y_t = A_t\,x_t + \varepsilon_t, \qquad t = 1,2,\ldots \qquad (5)$

with

$E(\varepsilon_t) = 0, \qquad C(\varepsilon_t, \varepsilon_s) = R_t\,\delta_{t,s}, \qquad C(\varepsilon_t, x_0) = 0, \qquad C(\varepsilon_t, d_s) = 0 \qquad (6)$

for $t,s = 1,2,\ldots$, with $A_t$ being the known design matrix. Thus the zero-mean measurement noise $\varepsilon_t$ is assumed to be uncorrelated in time and to be uncorrelated with the initial state-vector $x_0$ and the system noise $d_t$.
2.2. The three-step recursion
Initialization: As the mean of $x_0$ is known, the best predictor of $x_0$ in the absence of data is the mean $E(x_0) = x_{0|0}$. Hence, the initialization is simply given by

$\hat x_{0|0} = x_{0|0}, \qquad P_{0|0} = Q_{x_0 x_0} \qquad (7)$

That the initial error variance matrix $P_{0|0} = D(x_0 - \hat x_{0|0})$ is identical to the variance matrix $Q_{x_0 x_0}$ follows from the equality $D(x_0 - x_{0|0}) = D(x_0)$.

Time update: Let us choose $\Phi_{t,t-1}\hat x_{t-1|t-1}$ as a candidate for the BLP $\hat x_{t|t-1}$. According to Eq. (1), the candidate would be the BLP if it fulfills two conditions: (1) it must be unbiased and (2) it must have a prediction error uncorrelated with the previous data $y^{t-1}$. The first condition, i.e. $E(\Phi_{t,t-1}\hat x_{t-1|t-1}) = E(x_t)$, follows from Eq. (2) and the equalities $E(\hat x_{t-1|t-1}) = E(x_{t-1})$ and $E(d_t) = 0$. The second condition, i.e. $C(x_t - \Phi_{t,t-1}\hat x_{t-1|t-1},\, y^{t-1}) = 0$, follows from the fact that the prediction error $x_t - \Phi_{t,t-1}\hat x_{t-1|t-1}$ is a function of the previous BLP prediction error $x_{t-1} - \hat x_{t-1|t-1}$ and the system noise $d_t$, i.e. (cf. 2)

$x_t - \Phi_{t,t-1}\hat x_{t-1|t-1} = \Phi_{t,t-1}\,(x_{t-1} - \hat x_{t-1|t-1}) + d_t \qquad (8)$

both of which are uncorrelated with the previous data $y^{t-1}$. Hence, the time update is given by

$\hat x_{t|t-1} = \Phi_{t,t-1}\,\hat x_{t-1|t-1}, \qquad\text{with}\qquad P_{t|t-1} = \Phi_{t,t-1}\,P_{t-1|t-1}\,\Phi_{t,t-1}^T + S_t \qquad (9)$

The error variance matrix $P_{t|t-1} = D(x_t - \hat x_{t|t-1})$ follows by applying the covariance propagation law to (8), together with $C(x_{t-1} - \hat x_{t-1|t-1},\, d_t) = 0$.
Measurement update: In the presence of the new data $y_t$, one may yet offer $\hat x_{t|t-1}$ as a candidate for the BLP $\hat x_{t|t}$. Such a candidate fulfills the unbiasedness condition $E(\hat x_{t|t-1}) = E(x_t)$, but not necessarily the zero-correlation condition, that is, $C(x_t - \hat x_{t|t-1},\, y_t) \neq 0$. Note, however, that $C(x_t - \hat x_{t|t-1},\, y^{t-1}) = 0$. Thus the zero-correlation condition $C(x_t - \hat x_{t|t-1},\, y^t) = 0$ would have been met if the most recent data $y_t$ of $y^t = (y^{t-1\,T}, y_t^T)^T$ were a function of the previous data $y^{t-1}$, thereby fully predicted by $y^{t-1}$. Since an observable is its own best predictor, this implies $y_t = A_t\hat x_{t|t-1}$, where $A_t\hat x_{t|t-1}$ is the BLP of $y_t$. But this would require the zero-mean quantity $v_t = y_t - A_t\hat x_{t|t-1}$ to be identically zero, which is generally not the case. It is therefore the presence of $v_t$ that violates the zero-correlation condition. Note that $v_t$ is a function of the prediction error $x_t - \hat x_{t|t-1}$ and the measurement noise $\varepsilon_t$, i.e. (cf. 5)

$v_t = A_t\,(x_t - \hat x_{t|t-1}) + \varepsilon_t \qquad (10)$

both of which are uncorrelated with $y^{t-1}$. Therefore, $v_t$ cannot be predicted by the previous data $y^{t-1}$, showing that $v_t$ contains truly new information. That is why $v_t$ is sometimes referred to as the innovation of $y_t$, see e.g. [27, 28, 29]. We now amend our earlier candidate $\hat x_{t|t-1}$ by adding a linear function of $v_t$ to it: $\hat x_{t|t} = \hat x_{t|t-1} + K_t v_t$, with $K_t$ an unknown matrix to be chosen such that the zero-correlation condition is met. Such a matrix, known as the Kalman gain matrix, is uniquely specified by

$K_t = P_{t|t-1}\,A_t^T\,Q_{v_tv_t}^{-1} \;\Leftrightarrow\; C(x_t - \hat x_{t|t-1} - K_tv_t,\; y^t) = 0 \qquad (11)$

since $C(x_t - \hat x_{t|t-1},\, y_t) = P_{t|t-1}A_t^T$ and $C(v_t,\, y_t) = Q_{v_tv_t}$. The measurement update then reads

$\hat x_{t|t} = \hat x_{t|t-1} + K_tv_t, \qquad\text{with}\qquad P_{t|t} = P_{t|t-1} - K_t\,Q_{v_tv_t}\,K_t^T \qquad (12)$

The error variance matrix $P_{t|t} = D(x_t - \hat x_{t|t})$ follows by an application of the covariance propagation law, together with $C(x_t - \hat x_{t|t-1},\, v_t) = P_{t|t-1}A_t^T$. Application of the covariance propagation law to (10) gives the variance matrix of $v_t$ as

$Q_{v_tv_t} = A_t\,P_{t|t-1}\,A_t^T + R_t \qquad (13)$

since $C(x_t - \hat x_{t|t-1},\, \varepsilon_t) = 0$.
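To make the three-step recursion concrete, a minimal sketch of one Kalman cycle, Eqs. (9) and (12)-(13), is given below in Python/NumPy. This is an illustration only, not the implementation used later in this contribution; the model matrices Phi, S, A and R are assumed given, and the recursion is started from the initialization (7).

```python
import numpy as np

def kalman_step(x_prev, P_prev, y, Phi, S, A, R):
    """One Kalman cycle: time update (9) followed by measurement update (12)."""
    # Time update: x_pred = Phi x_prev,  P_pred = Phi P Phi^T + S_t
    x_pred = Phi @ x_prev
    P_pred = Phi @ P_prev @ Phi.T + S
    # Innovation v_t and its variance Q_vv, Eq. (13)
    v = y - A @ x_pred
    Qv = A @ P_pred @ A.T + R
    # Kalman gain, Eq. (11), and measurement update, Eq. (12)
    K = P_pred @ A.T @ np.linalg.inv(Qv)
    return x_pred + K @ v, P_pred - K @ Qv @ K.T
```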
2.3. A remark on the filter initialization
In the derivation of the Kalman filter one assumes the mean of the random initial state-vector $x_0$ in Eq. (3) to be known, see e.g. [30, 31, 32, 33, 34, 35, 36, 37]. This is because of the BLP structure (1), which needs knowledge of the means $E(x_t)$ and $E(y^\tau)$. Since in many, if not most, applications the means of the state-vectors $x_1,\ldots,x_t$ are unknown, such a derivation is then not appropriate. As shown in Ref. [38], one can do away with the need to have both the initial mean $x_{0|0}$ and variance matrix $Q_{x_0 x_0}$, given in Eq. (3), known. The corresponding three-step recursion would then follow the Best Linear Unbiased Prediction (BLUP) principle and not that of the BLP. The BLUP is also an MMSE predictor, but within a more restrictive class of predictors. It replaces the means $E(x_t)$ and $E(y^\tau)$ by their corresponding Best Linear Unbiased Estimators (BLUEs). Within such a BLUP recursion, the initialization Eq. (7) is revised and takes place at time instance $t = 1$ in the presence of the data $y_1$. Provided that matrix $A_1$ is of full column rank, the predictor $\hat x_{1|1}$ follows from solving the normal equations

$N_1\,\hat x_{1|1} = r_1, \qquad\text{with}\qquad N_1 = A_1^T R_1^{-1} A_1, \quad r_1 = A_1^T R_1^{-1} y_1 \qquad (14)$

Thus

$\hat x_{1|1} = N_1^{-1}\, r_1, \qquad\text{and}\qquad P_{1|1} = N_1^{-1} \qquad (15)$

The above error variance matrix $P_{1|1}$ is thus not dependent on the variance matrix of $x_1$, i.e. $Q_{x_1 x_1} = \Phi_{1,0}\, Q_{x_0 x_0}\, \Phi_{1,0}^T + S_1$. This is, however, not the case with the variance matrix of the predictor $\hat x_{1|1}$ itself, i.e. $Q_{\hat x_{1|1}\hat x_{1|1}} = D(\hat x_{1|1})$. This variance matrix is given by [38]

$Q_{\hat x_{1|1}\hat x_{1|1}} = Q_{x_1 x_1} + P_{1|1} \qquad (16)$

showing that $P_{1|1} \neq Q_{\hat x_{1|1}\hat x_{1|1}}$. The matrices $P_{t|t}$ and $Q_{\hat x_{t|t}\hat x_{t|t}}$ ($t = 1,2,\ldots$) serve two different purposes. The error variance matrix $P_{t|t} = D(x_t - \hat x_{t|t})$ is a measure of 'closeness' of $\hat x_{t|t}$ to its target random vector $x_t$, and is thereby meant to describe the 'quality' of prediction, i.e. the precision of the prediction error $x_t - \hat x_{t|t}$. The variance matrix $Q_{\hat x_{t|t}\hat x_{t|t}} = D(\hat x_{t|t})$, however, is a measure of closeness of $\hat x_{t|t}$ to the nonrandom vector $E(x_t)$, as $D(\hat x_{t|t}) = D(E(x_t) - \hat x_{t|t})$. Thus $Q_{\hat x_{t|t}\hat x_{t|t}}$ does not describe the quality of prediction, but instead the precision of the predictor $\hat x_{t|t}$.

The MMSE of the BLUP recursion is never smaller than that of the Kalman filter, as the Kalman filter makes use of additional information, namely, the known mean $x_{0|0}$ and variance matrix $Q_{x_0 x_0}$. When the stated information is available, the BLUP recursion is shown to encompass the Kalman filter as a special case [39]. In the following we therefore assume that the means of the state-vectors $x_1,\ldots,x_t$ are unknown, a situation that often applies to GNSS applications.
2.4. Filtering in information form
The three-step recursion presented in Eqs. (7), (9) and (12) concerns the time-evolution of the predictor $\hat x_{t|t}$ and the error variance matrix $P_{t|t}$. As shown in Eq. (15), both $P_{1|1}$ and $\hat x_{1|1}$ can be determined from the normal matrix $N_1 = P_{1|1}^{-1}$ and the right-hand-side vector $r_1 = P_{1|1}^{-1}\hat x_{1|1}$. One can therefore alternatively develop a recursion concerning the time-evolution of $P_{t|t}^{-1}$ and $P_{t|t}^{-1}\hat x_{t|t}$. From a computational point of view, such a recursion is found to be very suitable when the inverse-variance or information matrices $S_t^{-1}$ and $R_t^{-1}$ serve as input rather than the variance matrices $S_t$ and $R_t$. To that end, one may define [34]

information vector: $i_{t|\tau} := P_{t|\tau}^{-1}\,\hat x_{t|\tau}$, $\qquad$ information matrix: $I_{t|\tau} := P_{t|\tau}^{-1}$ $\qquad (17)$

Given the definition above, the information filter recursion concerning the time-evolution of $i_{t|t}$ and $I_{t|t}$ follows from the recursion Eqs. (15), (9) and (12), along with the matrix-inversion equalities

$I_{t|t-1} = M_t - M_t\bigl(M_t + S_t^{-1}\bigr)^{-1}M_t = \bigl(\Phi_{t,t-1}\,I_{t-1|t-1}^{-1}\,\Phi_{t,t-1}^T + S_t\bigr)^{-1}, \qquad M_t := \Phi_{t,t-1}^{-T}\,I_{t-1|t-1}\,\Phi_{t,t-1}^{-1} \qquad (18)$

The algorithmic steps of the information filter are presented in Figure 1. In the absence of data, the filter is initialized by the zero information $i_{1|0} = 0$ and $I_{1|0} = 0$. In the presence of the data $y_t$, the corresponding normal matrix $N_t$ and right-hand-side vector $r_t$ are added to the time update information $i_{t|t-1}$ and $I_{t|t-1}$ to obtain the measurement update information $i_{t|t}$ and $I_{t|t}$. The transition matrix $\Phi_{t,t-1}$ and inverse-variance matrix $S_t^{-1}$ are then used to time update the previous information $i_{t-1|t-1}$ and $I_{t-1|t-1}$.
Figure 1.
Algorithmic steps of the information filter recursion concerning the time-evolution of the information vector $i_{t|t}$ and matrix $I_{t|t}$.
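For concreteness, the two updates of Figure 1 may be sketched as follows. This is a minimal reading of the figure, assuming an invertible $S_t$ and a nonsingular $\Phi_{t,t-1}$ (the singular-$S_t$ case is treated next); the variable names mirror the symbols of the text.

```python
import numpy as np

def info_measurement_update(i_pred, I_pred, A, R_inv, y):
    """Additive measurement update: add N_t and r_t to the predicted information."""
    N = A.T @ R_inv @ A      # normal matrix N_t
    r = A.T @ R_inv @ y      # right-hand-side vector r_t
    return i_pred + r, I_pred + N

def info_time_update(i_upd, I_upd, Phi, S_inv):
    """Information time update via the matrix-inversion identity (18)."""
    Phi_inv = np.linalg.inv(Phi)
    M = Phi_inv.T @ I_upd @ Phi_inv                # M_t of Eq. (18)
    I_pred = M - M @ np.linalg.inv(M + S_inv) @ M  # predicted information matrix
    x_upd = np.linalg.solve(I_upd, i_upd)          # recover the updated state
    return I_pred @ (Phi @ x_upd), I_pred          # predicted information vector
```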
Singular matrix $S_t$: In the first expression of Eq. (18) one assumes the variance matrix $S_t$ to be nonsingular and invertible. There are, however, situations where some of the elements of the state-vector $x_t$ are nonrandom, i.e. the corresponding system noise is identically zero. As a consequence, the variance matrix $S_t$ becomes singular and the inverse-matrix $S_t^{-1}$ does not exist. An example concerns the presence of GNSS carrier-phase ambiguities in the filter state-vector, which are treated as constant in time. In such cases the information time update in Figure 1 must be generalized so as to accommodate singular variance matrices $S_t$. Let $\tilde S_t$ be an invertible sub-matrix of $S_t$ that has the same rank as $S_t$. Then there exists a full-column rank matrix $H_t$ such that

$S_t = H_t\,\tilde S_t\,H_t^T \qquad (19)$

Matrix $H_t$ can, for instance, be structured by the columns of the identity matrix $I$ corresponding to the columns of $S_t$ on which the sub-matrix $\tilde S_t$ is positioned. The special case

$S_t = \begin{pmatrix} \tilde S_t & 0 \\ 0 & 0 \end{pmatrix} = \begin{pmatrix} I \\ 0 \end{pmatrix}\tilde S_t\begin{pmatrix} I \\ 0 \end{pmatrix}^T \;\Rightarrow\; H_t = \begin{pmatrix} I \\ 0 \end{pmatrix} \qquad (20)$

shows an example of the representation (19). With Eq. (19), a generalization of the time update (Figure 1) can be shown to be given by

$I_{t|t-1} = M_t - M_t H_t\bigl(H_t^T M_t H_t + \tilde S_t^{-1}\bigr)^{-1} H_t^T M_t \qquad (21)$

Thus instead of $S_t^{-1}$, the inverse-matrix $\tilde S_t^{-1}$ and $H_t$ are assumed available.
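A corresponding sketch of the generalized time update Eq. (21), under the representation (19); here S_tilde_inv stands for $\tilde S_t^{-1}$ and H for $H_t$:

```python
import numpy as np

def info_time_update_singular(I_upd, Phi, H, S_tilde_inv):
    """Information time update (21), accommodating a singular S_t = H S~ H^T."""
    Phi_inv = np.linalg.inv(Phi)
    M = Phi_inv.T @ I_upd @ Phi_inv                 # M_t as in Eq. (18)
    G = H.T @ M @ H + S_tilde_inv                   # small invertible block
    return M - M @ H @ np.linalg.solve(G, H.T @ M)  # predicted information matrix
```

Note that only the low-dimensional matrix G needs to be inverted, which is attractive when few state components are driven by system noise (e.g., the three TEC parameters of Section 5, with the DCBs kept constant).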
2.5. Additivity property of the information measurement update
As stated previously, the information filter delivers outcomes equivalent to those of the Kalman filter recursion. Thus any particular preference for the information filter must be attributed to the computational effort required for obtaining the outcomes. For instance, if handling the matrix inversions requires low computational complexity when working with the input inverse-matrices $S_t^{-1}$ and $R_t^{-1}$, the information filter appears more suitable. In this subsection we will highlight yet another property of the information filter that makes this recursion particularly useful for distributed processing.
As shown in Figure 1, the information measurement update is additive in the sense that the measurement information $N_t$ and $r_t$ is added to the information states $I_{t|t-1}$ and $i_{t|t-1}$. We now make a start in showing how this additivity property lends itself to distributed filtering. Let the measurement model Eq. (5) be partitioned as

$y_t = \bigl(y_{1,t}^T,\ldots,y_{n,t}^T\bigr)^T, \qquad A_t = \bigl(A_{1,t}^T,\ldots,A_{n,t}^T\bigr)^T, \qquad \varepsilon_t = \bigl(\varepsilon_{1,t}^T,\ldots,\varepsilon_{n,t}^T\bigr)^T \qquad (22)$
Accordingly, the observable vector $y_t$ is partitioned into $n$ sub-vectors $y_{i,t}$ ($i = 1,\ldots,n$), each having its own design matrix $A_{i,t}$ and measurement noise vector $\varepsilon_{i,t}$. One can think of a network of $n$ sensor nodes where each collects its own observable vector $y_{i,t}$, while aiming to determine a common state-vector $x_t$. Let us further assume that the nodes collect observables independently from one another. This yields

$C(\varepsilon_{i,t},\, \varepsilon_{j,t}) = R_{i,t}\,\delta_{i,j}, \qquad\text{for } i,j = 1,\ldots,n, \text{ and } t = 1,2,\ldots \qquad (23)$

Thus the measurement noise vectors $\varepsilon_{i,t}$ ($i = 1,\ldots,n$) are assumed to be mutually uncorrelated. With the extra assumption Eq. (23), the normal matrix $N_t = A_t^T R_t^{-1} A_t$ and right-hand-side vector $r_t = A_t^T R_t^{-1} y_t$ can then be, respectively, expressed as

$N_t = \sum_{i=1}^{n} N_{i,t}, \qquad\text{and}\qquad r_t = \sum_{i=1}^{n} r_{i,t} \qquad (24)$

where

$N_{i,t} = A_{i,t}^T R_{i,t}^{-1} A_{i,t}, \qquad\text{and}\qquad r_{i,t} = A_{i,t}^T R_{i,t}^{-1}\, y_{i,t} \qquad (25)$

According to Eq. (24), the measurement information of each node, say $N_{i,t}$ and $r_{i,t}$, is individually added to the information states $I_{t|t-1}$ and $i_{t|t-1}$, that is

$I_{t|t} = I_{t|t-1} + \sum_{i=1}^{n} N_{i,t}, \qquad i_{t|t} = i_{t|t-1} + \sum_{i=1}^{n} r_{i,t} \qquad (26)$

Now consider the situation where each node runs its own local information filter, thus having its own information states $I_{i,t|t}$ and $i_{i,t|t}$ ($i = 1,\ldots,n$). The task is to recursively update the local states $I_{i,t|t}$ and $i_{i,t|t}$ in such a way that they remain equal to their central counterparts $I_{t|t}$ and $i_{t|t}$ given in Eq. (26). Suppose that such equalities hold at the time update, i.e. $I_{i,t|t-1} = I_{t|t-1}$ and $i_{i,t|t-1} = i_{t|t-1}$. Given the number of contributing nodes $n$, each node then just needs to be provided with the average quantities

$\bar N_t = \frac{1}{n}\sum_{i=1}^{n} N_{i,t}, \qquad\text{and}\qquad \bar r_t = \frac{1}{n}\sum_{i=1}^{n} r_{i,t} \qquad (27)$

The local states $I_{i,t|t-1}$ and $i_{i,t|t-1}$ would then be measurement updated as (cf. 26)

$I_{i,t|t} = I_{i,t|t-1} + n\,\bar N_t, \qquad i_{i,t|t} = i_{i,t|t-1} + n\,\bar r_t \qquad (28)$

which are equal to the central states $I_{t|t}$ and $i_{t|t}$, respectively. In this way one has multiple distributed local filters $i = 1,\ldots,n$, each of which recursively delivers results identical to those of a central filter.

To compute the average quantities $\bar N_t$ and $\bar r_t$, node $i$ would need to receive all other information $N_{j,t}$ and $r_{j,t}$ ($j \neq i$). In other words, node $i$ would require direct connections to all other nodes $j \neq i$, a situation that makes data communication and processing power very expensive (particularly for a large number of nodes). In the following, cheaper ways of evaluating the averages $\bar N_t$ and $\bar r_t$ are discussed.
3. Average-consensus rules

In the previous section, we briefly discussed the potential applicability of the information filter as a tool for handling the measurement model Eq. (22) in a distributed manner. With the representation Eq. (28), however, one may be inclined to conclude that such applicability is limited to the case where the nodes $i = 1,\ldots,n$ have 'direct' communication connections to one another in order to receive/send their measurement information $N_{i,t}$ and $r_{i,t}$ ($i = 1,\ldots,n$).

Instead of having direct connections, the idea is now to relax this stringent requirement by assuming that the nodes are linked to each other at least through a 'path', so that information can flow from each node to all other nodes. It is therefore assumed that each node along the path plays the role of an agent transferring information to other nodes. To reach the averages $\bar N_t$ and $\bar r_t$, the nodes then agree on specific 'fusion rules' or consensus protocols, see e.g. [6, 8, 40]. Note that each node exchanges information with its neighboring nodes (i.e. those to which it has direct connections) and not with all nodes. A repeated application of the consensus protocols is therefore required. The notion is made precise below.
3.1. Communication graphs
The way the nodes interact with each other to transfer information is referred to as the interaction topology between the nodes. The interaction topology is often described by a directed graph whose vertices and edges, respectively, represent the nodes and communication links [4]. The interaction topology may also undergo a finite number of changes over sessions $k = 1,\ldots,k_o$. In case of one-way links, the directions of the edges face toward the receiving nodes (vertices). Here we assume that the communication links between the nodes are two-way, thus having undirected (or bidirectional) graphs. Examples representing a network of 20 nodes with their interaction links are shown in Figure 2. Let an undirected graph at session $k$ be denoted by $G_k = (V, E_k)$, where $V = \{1,\ldots,n\}$ is the vertex set and $E_k \subset \{\{i,j\}\,|\, i,j \in V\}$ is the edge set. We assume that the nodes remain unchanged over time, which is why the subscript $k$ is omitted for $V$. This is generally not the case with their interaction links though, i.e. the edge set $E_k$ depends on $k$. As in Figure 2(b), the number of links between the nodes can be different for different sessions $k = 1,\ldots,k_o$. Each session represents a graph that may not be connected. In a 'connected' graph, every vertex is linked to all other vertices at least through one path. In order for information to flow from each node to all other nodes, the union of the graphs $G_k$ ($k = 1,\ldots,k_o$), i.e.

$G = (V, E) \qquad\text{with}\qquad E = \bigcup_{k=1}^{k_o} E_k \qquad (k_o: \text{a finite number}) \qquad (29)$

is therefore assumed to be connected. We define the neighbors of node $i$ as those nodes to which node $i$ has direct links. For every session $k$, they are collected in the set $N_{i,k} = \{j\,|\,\{j,i\} \in E_k\}$. For instance, for network (a) of Figure 2 we have only one session, i.e. $k_o = 1$, in which $N_{2,1} = \{1,3,4,5\}$ represents the neighbors of node 2. In case of network (b), however, we have different links over four sessions, i.e. $k_o = 4$. In this case, the neighbors of node 2 are given by four sets: $N_{2,1} = \{5\}$ in session 1 (red), $N_{2,2} = \emptyset$ in session 2 (yellow), $N_{2,3} = \{4\}$ in session 3 (green) and $N_{2,4} = \emptyset$ in session 4 (blue).
Figure 2.
Communication graphs of 20 sensor nodes. The edges represent two-way communication links between the nodes. (a) Network with 49 links. (b) Network with different numbers of links over four sessions: 7 links in session 1 (R), 6 links in session 2 (Y), 8 links in session 3 (G) and 7 links in session 4 (B).
3.2. Consensus protocols
Given the right-hand-side vector $r_{i,t}$, suppose that node $i$ aims to obtain the average $\bar r_t$, for which all other vectors $r_{j,t}$ ($\forall j \neq i$) are required to be available (cf. 27). But node $i$ only has access to those of its neighbors, i.e. the vectors $r_{j,t}$ ($j \in N_{i,k}$). For the first session $k = 1$, it would then seem reasonable to compute a weighted average of the available vectors, i.e.

$r_{i,t}^{(1)} = \sum_{j \in \{i\} \cup N_{i,1}} w_{ij}^{(1)}\, r_{j,t} \qquad (30)$

as an approximation of $\bar r_t$, where the scalars $w_{ij}^{(1)}$ ($j \in \{i\} \cup N_{i,1}$) denote the corresponding weights at session $k = 1$. Now assume that all other nodes $j \neq i$ agree to apply the fusion rule Eq. (30) to their own neighbors. Thus the neighboring nodes $j \in N_{i,k}$ also have their own weighted averages $r_{j,t}^{(1)}$. But they may have access to nodes to which node $i$ has no direct links. In other words, the weighted averages $r_{j,t}^{(1)}$ ($j \in N_{i,1}$) contain information on the nodes to which node $i$ has no access. For the next session $k = 2$, it is therefore reasonable for node $i$ to repeat the fusion rule Eq. (30), but now over the new vectors $r_{j,t}^{(1)}$ ($j \in \{i\} \cup N_{i,2}$), aiming to improve on the earlier approximation $r_{i,t}^{(1)}$. This yields the following iterative computations

$r_{i,t}^{(k)} = \sum_{j \in \{i\} \cup N_{i,k}} w_{ij}^{(k)}\, r_{j,t}^{(k-1)}, \qquad k = 1,2,\ldots \qquad (31)$

with $r_{j,t}^{(0)} := r_{j,t}$. Choosing a set of weights $w_{ij}^{(k)}$, the nodes $i = 1,\ldots,n$ agree on the consensus protocol (31) to iteratively fuse their information vectors $r_{i,t}^{(k)}$. Here and in the following, we use the letter '$k$' both for the 'session number' $k = 1,\ldots,k_o$ (cf. 29) and for the 'number of iterative communications' $k = 1,\ldots,k_n$ (cf. 34). The maximum iteration $k_n$ is assumed to be not smaller than the maximum session number $k_o$, i.e. $k_n \geq k_o$.
The question that now comes to the fore is how to choose the weights $w_{ij}^{(k)}$ such that the approximation $r_{i,t}^{(k)}$ gets close to $\bar r_t$ through the iteration Eq. (31). More precisely, the stated iteration becomes favorable if $r_{i,t}^{(k)} \to \bar r_t$ when $k \to \infty$ for all nodes $i = 1,\ldots,n$. To address this question, we use a multivariate formulation. Let $p$ be the size of the vectors $r_{i,t}$ ($i = 1,\ldots,n$). We define the higher-dimensioned vector $r = (r_{1,t}^T,\ldots,r_{n,t}^T)^T$. The multivariate version of Eq. (31) then reads

$r^{(k)} = \bigl(W_k \otimes I_p\bigr)\, r^{(k-1)}, \qquad k = 1,2,\ldots \qquad (32)$

The $n \times n$ weight matrix $W_k$ is structured by $w_{ij}^{(k)}$ ($j \in \{i\} \cup N_{i,k}$) and $w_{ij}^{(k)} = 0$ ($j \notin \{i\} \cup N_{i,k}$). The symbol $\otimes$ denotes the Kronecker matrix product [41]. Repeated application of Eq. (32) links, after $k_n$ iterations, the most recent iterated vector $r^{(k_n)}$ to the initial vector $r^{(0)}$ by

$r^{(k_n)} = \Bigl(\Bigl[\prod_{k=1}^{k_n} W_k\Bigr] \otimes I_p\Bigr)\, r^{(0)} \qquad (33)$

Thus the vectors $r_{i,t}^{(k)}$ ($i = 1,\ldots,n$) converge to $\bar r_t$ when

$L^{(k_n)} := \prod_{k=1}^{k_n} W_k \;\to\; \frac{1}{n}\,e_n e_n^T, \qquad\text{as } k_n \to \infty \qquad (34)$

where the $n$-vector $e_n$ contains ones. If the condition Eq. (34) is met, the set of nodes $\{1,\ldots,n\}$ can asymptotically reach average consensus [4]. It can be shown that (34) holds if the weight matrices $W_k$ ($k = 1,\ldots,k_o$) have bounded nonnegative entries with positive diagonals, i.e. $w_{ij}^{(k)} \geq 0$ and $w_{ii}^{(k)} > 0$, having row- and column-sums equal to one, i.e. $\sum_{j=1}^{n} w_{ij}^{(k)} = 1$ and $\sum_{i=1}^{n} w_{ij}^{(k)} = 1$ ($i,j = 1,\ldots,n$), see e.g. [3, 5, 40, 42, 43].
Examples of such consensus protocols are given in Table 1. As shown, the weights form a symmetric weight matrix $W_k$, i.e. $w_{ji}^{(k)} = w_{ij}^{(k)}$. In all protocols presented, the self-weights $w_{ii}^{(k)}$ are chosen so that the condition $\sum_{j=1}^{n} w_{ij}^{(k)} = 1$ is satisfied. The weights of Protocols 1 and 2 belong to the class of 'maximum-degree' weights, while those of Protocol 3 are referred to as 'Metropolis' weights [8]. The weights of Protocols 1 and 3 are driven by the degrees (numbers of neighbors) of the nodes $i = 1,\ldots,n$, denoted by $dg_i^{(k)} = \#N_{i,k}$. For instance, in network (a) of Figure 2 we have $dg_1^{(1)} = 4$ as node 1 has 4 neighbors, while $dg_{14}^{(1)} = 7$ as node 14 has 7 neighbors. Protocol 4 is only applicable to networks like (b) in Figure 2, i.e. when each node has at most one neighbor at a session [4]. In this case, each node exchanges its information with just one neighbor per session. Thus for two neighboring nodes $i$ and $j$ we have $w_{ii}^{(k)} = w_{jj}^{(k)} = w_{ij}^{(k)} = 0.5$, each averaging $r_{i,t}^{(k-1)}$ and $r_{j,t}^{(k-1)}$ to obtain $r_{i,t}^{(k)} = r_{j,t}^{(k)}$.
Protocols  | $w_{ij}^{(k)}$, $\{i,j\} \in E_k$                      | $w_{ii}^{(k)}$
Protocol 1 | $1/\max_{u \in \{1,\ldots,n\}} dg_u^{(k)}$             | $1 - \sum_{u \neq i} w_{iu}^{(k)}$
Protocol 2 | $1/n$                                                  | $1 - \sum_{u \neq i} w_{iu}^{(k)}$
Protocol 3 | $1/\bigl(1 + \max\{dg_i^{(k)},\, dg_j^{(k)}\}\bigr)$   | $1 - \sum_{u \neq i} w_{iu}^{(k)}$
Protocol 4 | $1/2$                                                  | $1 - \sum_{u \neq i} w_{iu}^{(k)}$
otherwise $w_{ij}^{(k)} = 0$

Table 1.
Examples of average-consensus protocols forming the weights $w_{ij}^{(k)}$ in Eq. (31).
The degree (number of neighbors) of node $i$ is denoted by $dg_i^{(k)} = \#N_{i,k}$. Protocol 4 is only applicable when each node has at most one neighbor at a session.
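As an illustration of Table 1, the following sketch builds the Protocol 3 ('Metropolis') weight matrix from a 0/1 adjacency matrix and iterates the fusion rule (31) on a toy 4-node path graph; the graph and the initial values are of course placeholders rather than the 20-node networks of Figure 2.

```python
import numpy as np

def metropolis_weights(adj):
    """Protocol 3: w_ij = 1/(1 + max(dg_i, dg_j)) on the edges,
    self-weights chosen so that every row (and column) sums to one."""
    n = adj.shape[0]
    deg = adj.sum(axis=1)
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if adj[i, j]:
                W[i, j] = 1.0 / (1.0 + max(deg[i], deg[j]))
        W[i, i] = 1.0 - W[i].sum()
    return W

adj = np.array([[0, 1, 0, 0],          # 4-node path graph: 1-2-3-4
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]])
W = metropolis_weights(adj)
r = np.array([2.0, 4.0, 6.0, 8.0])     # initial values, average 5
for k in range(30):                    # iterative communications, Eq. (31)
    r = W @ r
print(np.round(r, 3))                  # all entries approach 5.0
```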
To provide insight into the applicability of the protocols given in Table 1, we apply them to the networks of Figure 2. Twenty values (scalars), say $r_i$ ($i = 1,\ldots,20$), are generated whose average equals 5, i.e. $\bar r = 5$. Each value is assigned to its corresponding node. For network (a), Protocols 1, 2 and 3 are separately applied, whereas Protocol 4 is only applied to network (b). The corresponding results, up to 30 iterations, are presented in Figure 3. As shown, the iterated values $r_i^{(k)}$ ($i = 1,\ldots,20$) get closer to their average (i.e. $\bar r = 5$) as the number of iterative communications increases.
Figure 3.
Performances of Protocols 1, 2 and 3 (network (a) of Figure 2) and Protocol 4 (network (b)) in delivering the average of 20 values (scalars). The iterated values get closer to their average (i.e. $\bar r = 5$) as the number of iterative communications increases.
3.3. On convergence of consensus states
Figure 3 shows that the states $r_{i,t}^{(k)}$ ($i = 1,\ldots,n$) converge to their average $\bar r_t$, but with different rates. The convergence rate depends on the initial states $r_{i,t}^{(0)} = r_{i,t}$ and on the consensus protocol employed. From the figure it seems that the convergence rates of Protocols 1 and 3 are about the same, higher than those of Protocols 2 and 4. Note that the stated results are obtained on the basis of specific 'realizations' of $r_{i,t}$ ($i = 1,\ldots,n$). Consider the states $r_{i,t}$ to be random vectors. In that case, can the results still be representative for judging the convergence performances of the protocols? To answer this question, let us define the difference vectors $dr_{i,t}^{(k)} = r_{i,t}^{(k)} - \bar r_t$, which are collected in the higher-dimensioned vector $dr^{(k)} = (dr_{1,t}^{(k)T},\ldots,dr_{n,t}^{(k)T})^T$. The more iterations, the smaller the norm of $dr^{(k)}$ becomes. According to Eq. (33), after $k_n$ iterations the difference vector $dr^{(k_n)}$ is linked to $r = (r_{1,t}^T,\ldots,r_{n,t}^T)^T$ through

$dr^{(k_n)} = \Bigl(\Bigl[L^{(k_n)} - \frac{1}{n}e_n e_n^T\Bigr] \otimes I_p\Bigr)\, r \qquad (35)$

Now let the initial states $r_{i,t}$ have the same mean and the same variance matrix $D(r_{i,t}) = Q$ ($i = 1,\ldots,n$), while being mutually uncorrelated. An application of the covariance propagation law to (35), together with $L^{(k_n)} e_n = e_n$, gives

$D\bigl(dr^{(k_n)}\bigr) = \Bigl(L^{(k_n)2} - \frac{1}{n}e_n e_n^T\Bigr) \otimes Q \qquad (36)$

Thus the closer the squared matrix $L^{(k_n)2}$ is to $\frac{1}{n}e_n e_n^T$, the smaller the variance matrix Eq. (36) becomes. In the limit when $k_n \to \infty$, the stated variance matrix tends to zero. This is what one would expect, since $dr^{(k_n)} \to 0$. Under the conditions stated in Eq. (34), the matrices $W_k$ have $\lambda_n = 1$ as the largest absolute value of their eigenvalues [42]. A symmetric weight matrix $W$ can then be expressed in its spectral form as

$W = \sum_{i=1}^{n-1} \lambda_i\, u_i u_i^T + \frac{1}{n}e_n e_n^T \qquad (37)$

with the eigenvalues $\lambda_1 \leq \ldots \leq \lambda_{n-1} < \lambda_n = 1$ and the corresponding orthogonal unit eigenvectors $u_1,\ldots,u_{n-1}$, $u_n = (1/\sqrt{n})\,e_n$. By a repeated application of the protocol $W$, we get $L^{(k_n)} = W^{k_n}$. Substitution into Eq. (36), together with Eq. (37), finally gives

$D\bigl(dr^{(k_n)}\bigr) = \sum_{i=1}^{n-1} \lambda_i^{2k_n}\, u_i u_i^T \otimes Q \;\leq\; \lambda_{n-1}^{2k_n}\, I_n \otimes Q \qquad (38)$

The above equation shows that the entries of the variance matrix (36) are largely driven by the second largest eigenvalue of $W$, i.e. $\lambda_{n-1}$. The smaller the scalar $|\lambda_{n-1}|$, the faster the quantity $\lambda_{n-1}^{2k_n}$ tends to zero as $k_n \to \infty$. The scalar $|\lambda_{n-1}|$ is thus often used as a measure to judge the convergence performances of the protocols [7]. For the networks of Figure 2, $|\lambda_{n-1}|$ of Protocols 1, 2 and 3 is about 0.92, 0.97 and 0.91, respectively. As Protocol 3 has the smallest $|\lambda_{n-1}|$, it is therefore expected to have the best performance. Note, for Protocol 4, that the weight matrix $W_k$ varies in every session, so its performance cannot be judged by a single eigenvalue $\lambda_{n-1}$. One can therefore think of another means of measuring the convergence performance. Due to the randomness of the information vectors $r_{i,t}$ ($i = 1,\ldots,n$), one may propose 'probabilistic' measures such as

$P\Bigl(\max_i \|dr_{i,t}^{(k_n)}\|_Q \leq q\Bigr), \qquad q > 0 \qquad (39)$

to evaluate the convergence rates of the protocols, where $\|dr_{i,t}\|_Q^2 := dr_{i,t}^T\, Q^{-1}\, dr_{i,t}$. Eq. (39) refers to the probability that the maximum norm of the difference vectors $dr_{i,t}^{(k_n)} = r_{i,t}^{(k_n)} - \bar r_t$ ($i = 1,\ldots,n$) is not larger than a given positive scalar $q$ for a fixed number of iterations $k_n$. The higher the probability Eq. (39), the better the performance of a protocol. For the scalar case $Q = \sigma^2$, Eq. (39) reduces to

$P\Bigl(\max_i |dr_{i,t}^{(k_n)}| \leq q\,\sigma\Bigr) \qquad (40)$
which is the probability that the absolute differences $|dr_{i,t}^{(k_n)}|$ ($i = 1,\ldots,n$) are not larger than $q$ times the standard-deviation $\sigma$. For the networks of Figure 2, 100,000 normally distributed vectors, as samples of $r = (r_1,\ldots,r_{20})^T$, are simulated to evaluate the probability (40). The results for Protocols 1, 2, 3 and 4 are presented in Figure 4. The stated probability is plotted as a function of $q$ for three numbers of iterative communications, $k_n = 10, 20$ and 30. As shown, Protocol 3 gives rise to the highest probabilities, while Protocol 2 delivers the lowest. After 10 iterations, the probability of having absolute differences smaller than one-fifth of the standard-deviation $\sigma$ (i.e. $q = 0.2$) is about 80% for Protocol 1, whereas it is less than 5% for Protocol 2. After 30 iterations, the stated probability increases to 80% for Protocol 2, but is close to 100% for Protocols 1 and 3.
Figure 4.
Probability Eq. (40) as a function of $q$ for three numbers of iterative communications, $k_n = 10, 20$ and 30 (Protocols 1 (P.1), 2 (P.2), 3 (P.3) and 4 (P.4)). It refers to the probability that the absolute differences $|r_{i,t}^{(k_n)} - \bar r_t|$ ($i = 1,\ldots,20$) are not larger than $q$ times the standard-deviation $\sigma$ (cf. Figure 2).
Figure 4 demonstrates that the convergence performance of Protocol 4 is clearly better than that of Protocol 2, as it delivers higher probabilities (for the networks of Figure 2). Such a conclusion, however, cannot be drawn on the basis of the results of Figure 3. This shows that results obtained on the basis of specific 'realizations' of $r_{i,t}$ ($i = 1,\ldots,n$) are not necessarily representative.
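The probability (40) has no simple closed form for general graphs, but it is straightforward to estimate by simulation, in the spirit of the experiment behind Figure 4. A minimal sketch, assuming a single fixed protocol matrix W (Protocols 1-3; for Protocol 4 the per-session matrices $W_k$ would be multiplied instead of taking a matrix power):

```python
import numpy as np

def prob_within_q(W, kn, q, sigma=1.0, samples=100_000, seed=1):
    """Monte Carlo estimate of Eq. (40): P(max_i |dr_i^(kn)| <= q*sigma)."""
    rng = np.random.default_rng(seed)
    n = W.shape[0]
    L = np.linalg.matrix_power(W, kn)            # L^(kn) of Eq. (34)
    r = rng.normal(0.0, sigma, (samples, n))     # initial states, one row per sample
    d = r @ L.T - r.mean(axis=1, keepdims=True)  # difference vectors dr^(kn)
    return np.mean(np.max(np.abs(d), axis=1) <= q * sigma)
```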
4. The consensus-based Kalman filter (CKF)

In Section 2.5 we discussed how the additivity property of the measurement update Eq. (26) offers possibilities for developing multiple distributed local filters $i = 1,\ldots,n$, each delivering local states $I_{i,t|t}$ and $i_{i,t|t}$ equal to their central counterparts $I_{t|t}$ and $i_{t|t}$. In doing so, each node has to evaluate the averages $\bar N_t$ and $\bar r_t$ at every time instance $t$. Since in practice the nodes do not necessarily have direct connections to each other, options such as the consensus-based fusion rules (cf. Section 3) can alternatively be employed to 'approximate' $\bar N_t$ and $\bar r_t$. As illustrated in Figures 3 and 4, such a consensus-based approximation requires a number of iterative communications between the nodes in order to reach the averages $\bar N_t$ and $\bar r_t$. These iterative communications clearly require some time to be carried out and must take place during every time interval $[t,\ t+1)$ (see Figure 5). We distinguish between the sampling rate $\Delta$ and the sending rate $\delta$. The sampling rate refers to the frequency with which node $i$ collects its observables $y_{i,t}$ ($t = 1,2,\ldots$), while the sending rate refers to the frequency with which node $i$ sends/receives the information $N_{j,t}^{(k)}$ and $r_{j,t}^{(k)}$ ($k = 1,\ldots,k_n$) to/from its neighboring nodes. As shown in Figure 5, the sending rate $\delta$ should therefore be reasonably smaller than the sampling rate $\Delta$ so as to be able to incorporate consensus protocols into the information filter setup. Such a consensus-based Kalman filter (CKF) is thus generally of a two-time-scale nature [2]: the data sampling time-scale $t = 1,2,\ldots$, versus the data sending time-scale $k = 1,\ldots,k_n$. The CKF is a suitable tool for handling real-time data processing in a distributed manner for applications in which the state-vectors $x_t$ ($t = 1,2,\ldots$) change rather slowly over time (i.e. $\Delta$ can take large values) and/or for cases where the sensor nodes transfer their data rather quickly (i.e. $\delta$ can take small values).
Figure 5.
The two time-scale nature of a consensus-based Kalman filter (CKF): The data sampling time-scale t=1,2,…, versus the data sending time-scale k=1,…,kn. The sending rate δ must be reasonably smaller than the sampling rate Δ so as to be able to incorporate consensus protocols into the CKF setup.
4.1. The CKF recursion

Under the assumption $\delta \leq \Delta$, the CKF recursion follows from the Kalman filter recursion by considering an extra step, namely, the 'consensus update'. The algorithmic steps of the CKF in information form are presented in Figure 6. Compare the recursion with that of the information filter given in Figure 1. Similar to the information filter, the CKF at node $i$ is initialized by the zero information $I_{i,1|0} = 0$ and $i_{i,1|0} = 0$. In the presence of the data $y_{i,t}$, node $i$ computes its local normal matrix $N_{i,t}$ and right-hand-side vector $r_{i,t}$ and sends them to its neighboring nodes $j \in N_{i,k}$ ($k = 1,\ldots,k_n$). In the consensus update, iterative communications between the neighboring nodes $\{i\} \cup N_{i,k}$ are carried out to approximate the averages $\bar N_t$ and $\bar r_t$ by $N_{i,t}^{(k_n)}$ and $r_{i,t}^{(k_n)}$, respectively. After a finite number of communications $k_n$, the consensus states $N_{i,t}^{(k_n)}$ and $r_{i,t}^{(k_n)}$ are, respectively, added to the time update information $I_{i,t|t-1}$ and $i_{i,t|t-1}$ to obtain their measurement update versions $I_{i,t|t}$ and $i_{i,t|t}$ at node $i$ (cf. 28). The time update goes along the same lines as that of the information filter.
Figure 6.
Algorithmic steps of the CKF in information form concerning the time-evolution of the local information vector $i_{i,t|t}$ and matrix $I_{i,t|t}$ of node $i$.
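Read as pseudocode, one measurement epoch of Figure 6 amounts to the routine below. For the purpose of a design study the whole network is emulated centrally, so the consensus stage is written as a multiplication with the full weight matrices $W_k$; in an actual deployment each node $i$ would evaluate only its own row of $W_k$, i.e. exchange with its neighbors only. The time update of each node then proceeds as in the information filter.

```python
import numpy as np

def ckf_measurement_epoch(i_pred, I_pred, ys, As, R_invs, Ws):
    """Consensus stage + additive measurement update (cf. 28) for all n nodes.

    i_pred, I_pred : lists of the nodes' predicted information pairs
    Ws             : weight matrices W_1, ..., W_kn, one per communication
    """
    n = len(ys)
    N = [A.T @ Rinv @ A for A, Rinv in zip(As, R_invs)]         # N_{i,t}
    r = [A.T @ Rinv @ y for A, Rinv, y in zip(As, R_invs, ys)]  # r_{i,t}
    for W in Ws:  # k = 1, ..., kn applications of the fusion rule (31)
        N = [sum(W[i, j] * N[j] for j in range(n)) for i in range(n)]
        r = [sum(W[i, j] * r[j] for j in range(n)) for i in range(n)]
    I_upd = [I_pred[i] + n * N[i] for i in range(n)]  # Eq. (28), consensus states
    i_upd = [i_pred[i] + n * r[i] for i in range(n)]
    return i_upd, I_upd
```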
4.2. Time evolution of the CKF error covariances
With the consensus-based information filter presented in Figure 6, it is therefore feasible to develop multiple distributed filters, all running in parallel over time. By taking recourse to an average-consensus protocol, not all nodes need to be directly linked, thereby allowing non-neighboring nodes to also benefit from each other's information states. The price one has to pay for this attractive feature of the CKF is that the local predictors

$\hat x_{i,t|t} = I_{i,t|t}^{-1}\, i_{i,t|t}, \qquad i = 1,\ldots,n \qquad (41)$

will have a poorer precision performance than that of their central counterpart $\hat x_{t|t}$. This is due to the fact that the consensus states $N_{i,t}^{(k_n)}$ and $r_{i,t}^{(k_n)}$ ($i = 1,\ldots,n$) are just approximations of the averages $\bar N_t$ and $\bar r_t$. Although they reach the stated averages as $k_n \to \infty$, one of course always works with a finite number of communications $k_n$. As a consequence, while the inverse-matrix $I_{t|t}^{-1}$ represents the error variance matrix $P_{t|t} = D(x_t - \hat x_{t|t})$ (cf. 17), the inverse-matrices $I_{i,t|t}^{-1}$ ($i = 1,\ldots,n$) do not represent the error variance matrices $P_{i,t|t} = D(x_t - \hat x_{i,t|t})$. To see this, consider the local prediction errors $x_t - \hat x_{i,t|t}$, which can be expressed as (Figure 6)

$x_t - \hat x_{i,t|t} = I_{i,t|t}^{-1}\Bigl[\, I_{i,t|t-1}\,(x_t - \hat x_{i,t|t-1}) - n\sum_{j=1}^{n} l_{ij}\, A_{j,t}^T R_{j,t}^{-1}\,\varepsilon_{j,t} \Bigr] \qquad (42)$

with $l_{ij}$ the entries of the product matrix $L^{(k_n)}$ in Eq. (34). An application of the covariance propagation law to (42) gives the error variance matrix

$P_{i,t|t} = I_{i,t|t}^{-1}\Bigl[\, I_{i,t|t-1}\, P_{i,t|t-1}\, I_{i,t|t-1} + n^2\sum_{j=1}^{n} l_{ij}^2\, N_{j,t} \Bigr]\, I_{i,t|t}^{-1} \qquad (43)$

that is not necessarily equal to $I_{i,t|t}^{-1}$ (see the following discussion on Eqs. (47) and (48)).
In Figure 7 we present the three-step recursion of the error variance matrix $P_{i,t|t}$ (for node $i$). As shown, node $i$ would need an extra input, i.e. the term $\sum_{j=1}^{n} l_{ij}^2 N_{j,t}$, in order to be able to compute $P_{i,t|t}$. In practice, however, such additional information is absent in the CKF setup. This means that node $i$ does not have enough information to evaluate the error variance matrix $P_{i,t|t}$. Despite this restriction, it will be shown in Section 5 how the recursion of $P_{i,t|t}$ conveys useful information about the performance of the local filters $i = 1,\ldots,n$, thereby allowing one to a-priori design and analyze sensor networks with different numbers of iterative communications.

Figure 7.
The three-step recursion of the error variance matrix $P_{i,t|t} = D(x_t - \hat x_{i,t|t})$ for node $i$. The extra term $\sum_{j=1}^{n} l_{ij}^2 N_{j,t}$ would be required to compute $P_{i,t|t}$. The entries of the product matrix $L^{(k_n)}$ in Eq. (34) are denoted by $l_{ij}$.
To better appreciate the recursion given in Figure 7, let us consider a special case where a stationary state-vector $x_t$ is to be predicted over time. Thus $\Phi_{t,t-1} = I$ and $S_t = 0$ ($t = 1,2,\ldots$). Moreover, we assume that all nodes deliver the same normal matrices $N_{i,t} = N$ ($i = 1,\ldots,n$).

The central error variance matrix $P_{t|t}$ would then simply follow by inverting the sum of all normal matrices over the $n$ nodes and $t$ time instances. Collecting observables up to and including time instance $t$, the stated variance matrix reads $P_{t|t} = (1/tn)\,N^{-1}$. We now compare $P_{t|t}$ with its consensus-based local counterpart at node $i$, i.e. $P_{i,t|t}$. The aforementioned assumptions, together with $\sum_{j=1}^{n} l_{ij} = 1$, give

$N_{i,t}^{(k_n)} = \sum_{j=1}^{n} l_{ij}\,N_{j,t} = N, \qquad\text{and}\qquad n^2\sum_{j=1}^{n} l_{ij}^2\,N_{j,t} = \alpha\,n\,N \qquad (45)$

in which the scalar $\alpha$ is given by

$\alpha := n\sum_{j=1}^{n} l_{ij}^2 \qquad (46)$

Substitution into the stated recursion provides us with the time-evolution of the error variance matrix $P_{i,t|t}$ as follows (Figure 7)

$P_{i,t|t} = \frac{\alpha}{tn}\,N^{-1}, \qquad t = 1,2,\ldots \qquad (47)$

This shows that the consensus-based error variance matrix $P_{i,t|t}$ is $\alpha$ times its central counterpart $P_{t|t} = (1/tn)\,N^{-1}$. With the vector $l := (l_{i1},\ldots,l_{in})^T$, an application of the Cauchy-Schwarz inequality gives the lower bound

$\alpha = (e_n^T e_n)\,(l^T l) \;\geq\; (l^T e_n)^2 = 1 \qquad (48)$

as $l^T e_n = 1$. Thus the scalar $\alpha$ is never smaller than 1, i.e. $P_{i,t|t} \geq P_{t|t}$, showing that the performance of the consensus-based predictor $\hat x_{i,t|t}$ is never better than that of its central version $\hat x_{t|t}$. The lower bound Eq. (48) is reached when $l = (1/n)\,e_n$, i.e. when $l_{ij} = 1/n$ ($j = 1,\ldots,n$). According to Eq. (34), this can be realized if $L^{(k_n)} \to (1/n)\,e_n e_n^T$, for which the number of iterations $k_n$ may need to be reasonably large. The conclusion therefore reads that the local filters at nodes $i = 1,\ldots,n$ generate information matrices $I_{i,t|t}$ whose inverses differ from the actual error variance matrices of the predictors $\hat x_{i,t|t}$, i.e. $I_{i,t|t}^{-1} \neq P_{i,t|t}$.
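The scalar $\alpha$ of Eq. (46) is easily computed for a concrete protocol, which gives a quick a-priori feel for how many iterations are worthwhile. A toy check on the 4-node path graph with Metropolis weights used earlier (the numbers are illustrative only):

```python
import numpy as np

W = np.array([[2/3, 1/3, 0,   0  ],   # Metropolis weights of a 4-node path graph
              [1/3, 1/3, 1/3, 0  ],
              [0,   1/3, 1/3, 1/3],
              [0,   0,   1/3, 2/3]])
n = W.shape[0]
for kn in (1, 5, 20, 50):
    L = np.linalg.matrix_power(W, kn)  # L^(kn) of Eq. (34)
    alpha = n * np.sum(L[0]**2)        # Eq. (46) for node i = 1
    print(kn, round(float(alpha), 4))  # stays above 1, approaching it as kn grows
```

In line with Eq. (48), $\alpha \geq 1$ throughout, so the local error variance $P_{i,t|t}$ approaches its central counterpart only as the number of iterative communications grows.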
5. Application to a network of GNSS receivers

The purpose of this section is to demonstrate how the CKF theory discussed in Section 4 can play a pivotal role in applications for which the GNSS measurements of a network of receivers are to be processed in a real-time manner. In a GNSS network setup, each receiver serves as a sensor node, receiving observables from visible GNSS satellites to determine a range of different parameters such as positions and velocities in an Earth-centered, Earth-fixed coordinate system, atmospheric delays, timing and instrumental biases, see e.g. [11, 12]. As the observation equations of the receivers have satellite-specific parameters in common, the receivers' observables are often integrated through a computing (fusion) center to provide network-derived parameter solutions that are more precise than their single-receiver versions. The idea now is to deliver GNSS parameter solutions without the need for a computing center, such that their precision performance is still comparable to that of the network-derived solutions.
As previously discussed, consensus-based algorithms and in particular the CKF can be employed to process network data in a distributed filtering scheme, i.e. no computing center is required. In order to illustrate this applicability, we simulate a network of 13 GNSS receivers located in Perth, Western Australia (Figure 8). As shown in the figure, each node (white circle) represents a receiver having data links (red lines) to its neighbors, with inter-station distances up to 4 km. We therefore assume that the receivers receive each other's data within ranges not longer than 4 km. For instance, receiver 1 is directly connected to receivers 2 and 6, but not to receiver 3 (the inter-station distance between receivers 1 and 3 is about 8 km).
Figure 8.
A network of 13 GNSS receivers simulated over Perth, Western Australia. Each node (white circle) represents a receiver tracking GNSS satellites. The receivers have data links to their neighbors with inter-station distances up to 4 km. The data links are shown by red lines.
5.1. GNSS ionospheric observables: Dynamic and measurement models
Although the GNSS observables contain information on various positioning and non-positioning parameters, here we restrict ourselves to the ionospheric observables of the GPS pseudo-range measurements only [44]. One should, however, bear in mind that this restriction is made just for the sake of presentation and illustration of the theory discussed in Sections 3 and 4. Would one, for instance, make use of the very precise carrier-phase measurements and/or formulate a multi-GNSS measurement setup, solutions of higher precision would be expected.
Let the scalar $y_{i,t}^s$ denote the pseudo-range ionospheric observable that receiver $i$ collects from satellite $s$ at time instance $t$. The corresponding measurement model, formed by the between-satellite differences $y_{i,t}^{ps} := y_{i,t}^s - y_{i,t}^p$ ($s \neq p$), reads (cf. 5)

$y_{i,t}^{ps} = \bigl(a_{i,t;o}^{ps}\,\nu_{o,t} + a_{i,t;\phi}^{ps}\,\nu_{\phi,t} + a_{i,t;\psi}^{ps}\,\nu_{\psi,t}\bigr) + b_t^{ps} + \varepsilon_{i,t}^{ps} \qquad (49)$

where the term within $(\cdot)$ refers to the first-order slant ionospheric delays, and $b_t^{ps}$ denotes the between-satellite differential code biases (DCBs). We use a regional single-layer model [45, 46] to represent the slant ionospheric delays in terms of (1) $\nu_{o,t}$, the vertical total electron content (TEC), and (2) $\nu_{\phi,t}$ and (3) $\nu_{\psi,t}$, the south-to-north and west-to-east spatial gradients of $\nu_{o,t}$, respectively. The corresponding known coefficients follow from Ref. [47]

$a_{i,t;o}^{s} = \frac{1}{\cos z_{i,t}^{s}}, \qquad a_{i,t;\phi}^{s} = \frac{\phi_{i,t}^{s} - \phi_{o,t}}{\cos z_{i,t}^{s}}, \qquad a_{i,t;\psi}^{s} = \frac{\psi_{i,t}^{s} - \psi_{o,t}}{\cos z_{i,t}^{s}} \qquad (50)$
with $(\cdot)^{ps} := (\cdot)^s - (\cdot)^p$. The angles $\psi_{i,t}^s$ and $\phi_{i,t}^s$, respectively, denote the longitude and latitude of the ionospheric piercing point (IPP) corresponding to the receiver-to-satellite line-of-sight $i$-$s$ (see Figure 9). They are computed with respect to those of the reference IPP at time instance $t$, i.e. $\psi_{o,t}$ and $\phi_{o,t}$. The angle $z_{i,t}^s$ denotes the zenith angle of the IPPs. These angles are computed based on the mean Earth radius of 6378.137 km and a layer height of 450 km. The measurement noises $\varepsilon_{i,t}^s$ are assumed to be mutually uncorrelated, with the dispersion (cf. 6)

$D\bigl(\varepsilon_{i,t}^s\bigr) = \frac{1.02^2}{0.02 + \sin^2\theta_{i,t}^s}\;\sigma^2 \qquad (51)$

forming the variance matrices $R_t$ in Eq. (6), where $\theta_{i,t}^s$ is the satellite elevation angle. The scalar $\sigma$ is set to $\sigma \approx 65.6$ cm, the zenith-referenced standard-deviation of the GPS 'geometry-free' pseudo-range measurements [48].
Figure 9.
Longitude (ψ) and latitude (ϕ) of an ionospheric piercing point (IPP) corresponding to a receiver-to-satellite line-of-sight. The distance scales are exaggerated.
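As a minimal sketch of how the elevation-dependent dispersion Eq. (51) translates into the variance matrices $R_t$, consider the helper below; the elevation angles are in radians, the constants mirror the formula above, and the diagonal structure follows from the mutually uncorrelated observables (cf. Eq. (23)).

```python
import numpy as np

SIGMA_ZENITH = 0.656   # zenith-referenced standard-deviation [m], from the text

def iono_obs_variance(elev, sigma=SIGMA_ZENITH):
    """Variance of an ionospheric observable, Eq. (51); elev in radians."""
    return (1.02**2 / (0.02 + np.sin(elev)**2)) * sigma**2

def build_R(elevs, sigma=SIGMA_ZENITH):
    """Diagonal variance matrix R_t for mutually uncorrelated observables."""
    return np.diag([iono_obs_variance(e, sigma) for e in elevs])
```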
Suppose that $m$ satellites $s = 1,\ldots,m$ are tracked by the network receivers $i = 1,\ldots,n = 13$ during the observational campaign. The state-vector sought is structured as

$x_t = \bigl(\nu_{o,t},\ \nu_{\phi,t},\ \nu_{\psi,t},\ b_t^{p1},\ b_t^{p2},\ \ldots,\ b_t^{pm}\bigr)^T \qquad (52)$

Thus the state-vector $x_t$ contains the three TEC parameters $\nu_{o,t}$, $\nu_{\phi,t}$, $\nu_{\psi,t}$ and the $m-1$ between-satellite DCBs $b_t^{ps}$ ($s \neq p$). The dynamic model is assumed to be given by (cf. 2, 4 and 21)

$x_t = x_{t-1} + d_t, \qquad d_t = \bigl(d_{o,t},\ d_{\phi,t},\ d_{\psi,t},\ 0,\ \ldots,\ 0\bigr)^T, \qquad t = 1,2,\ldots \qquad (53)$

i.e. $\Phi_{t,t-1} = I$, with a system noise vector $d_t$ whose DCB components are identically zero, so that the variance matrix $S_t$ is singular and the generalized time update Eq. (21) applies. Thus the DCBs $b_t^{ps}$ are assumed constant in time, while the temporal behavior of the TEC parameters $\nu_{o,t}$, $\nu_{\phi,t}$, $\nu_{\psi,t}$ is captured by a random-walk process. The corresponding zero-mean process noises are assumed to be mutually uncorrelated, having the standard-deviations $\sigma_{d_o} = 1$ mm/sec and $\sigma_{d_\phi} = \sigma_{d_\psi} = 5$ mm/rad/sec [49].
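To connect Eq. (53) with the filter sketches given earlier, the model matrices may be assembled as below. We assume here that the quoted standard-deviations act as random-walk noise densities, so that the process-noise variance accumulated over a sampling interval $\Delta$ is $\sigma^2\Delta$; this scaling convention, as well as the unit handling, are our assumptions for the sketch.

```python
import numpy as np

def iono_dynamic_model(m, Delta, sig_o=1e-3, sig_g=5e-3):
    """Transition matrix and (singular) process-noise variance for the state (52).

    m     : number of tracked satellites (3 TEC + m-1 DCB state entries)
    Delta : sampling interval [s]; sig_o in m/sqrt(s), sig_g in m/(rad sqrt(s))
    """
    p = 3 + (m - 1)
    Phi = np.eye(p)                        # Eq. (53): constant/random-walk states
    S = np.zeros((p, p))
    S[0, 0] = sig_o**2 * Delta             # vertical TEC nu_o
    S[1, 1] = S[2, 2] = sig_g**2 * Delta   # gradients nu_phi, nu_psi
    H = np.vstack([np.eye(3), np.zeros((p - 3, 3))])  # H_t of Eqs. (19)-(20)
    return Phi, S, H
```

Since S is singular (its DCB rows and columns are zero), the generalized time update Eq. (21) applies, with $\tilde S_t$ the leading 3x3 block of S and $H_t$ selecting the TEC components.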
5.2. Observational campaign
The network receivers $i = 1,\ldots,n$ ($n = 13$) shown in Figure 8 are assumed to track GPS satellites over 16 hours, from 8:00 to 24:00 Perth local time, on 02-06-2016. The observation sampling rate is set to $\Delta = 1$ minute. Thus the number of observational epochs (time instances) is 960. As to the data sending rate $\delta$ (cf. Figure 5), we assume three different sending rates, $\delta = 15, 10$ and 5 seconds. The number of iterative communications between the neighboring receivers thus takes the values $k_n = 4, 6$ and 12, respectively. The consensus Protocol 3 (Table 1) is applied to the CKF of each receiver.

As the satellites revolve around the Earth, not all of them are simultaneously visible to the 'small-scale' network of Figure 8. Their visibility over time is shown in Figure 10 (left panel), in which the satellites with elevation angles smaller than 10 degrees are excluded. There are 31 GPS satellites (i.e. $m = 31$), with PRN 4 absent (PRN refers to the satellite identifier). PRN 22 has the maximum duration of visibility, while PRN 21 has the minimum. Note also that PRNs 2, 6, 16, 17, 19, 26 and 32 disappear (set) and reappear (re-rise). That is why their visibility is shown via two separate time intervals. Figure 10 (right panel) shows the trajectories of the ionospheric pierce points on the ionospheric single layer that are made by the receiver-to-satellite line-of-sight paths. It is the spatial distribution of these points that drives the coefficients $a_{i,t;o}^{ps}$, $a_{i,t;\phi}^{ps}$, $a_{i,t;\psi}^{ps}$ in Eq. (49).
Figure 10.
Left: GPS satellites' visibility over time, viewed from Perth, Western Australia. Right: Trajectories of the corresponding IPPs made by receiver-to-satellite line-of-sight paths. The satellites are indicated by different colors.
In the following we present precision analyses of the measurement update solutions of $x_t$ in Eq. (52), given the network and satellite configurations shown in Figures 8 and 10, respectively. Throughout the text, PRN 10 is chosen as the pivot satellite $p$ (cf. (49)). By the term 'standard-deviation', we mean the square-root of the prediction errors' variance.
5.3. Central (network-based) versus local (single-receiver) solutions
Before discussing the precision performance of the CKF solutions, we first compare the network-based (central) TEC solutions with the solutions obtained from the data of one single receiver only (referred to as the local solutions). At the filter initialization, the standard-deviations of the local TEC solutions are $\sqrt{13} \approx 3.6$ times larger than those of the central TEC solutions (i.e. the square-root of the number of nodes). This is because each of the 13 network receivers independently provides equally precise solutions. In that case, the central solution follows by averaging all 13 local solutions. Due to the common dynamic model Eq. (53), however, the local solutions become correlated over time. After the filter initialization, the central solution would therefore not follow as the average of its local versions. The standard-deviation results, after one hour of the filter initialization, are presented in Figure 11. Only the results of receiver 1 are shown as local solutions (in red). As shown, the standard-deviations become stable over time as the filters reach their steady state. On the right panel of the figure, the local-to-central standard-deviation ratios are also presented. In case of the vertical TECs $\nu_{o,t}$, the ratios vary from 1.5 to 3. For the horizontal gradients $\nu_{\phi,t}$ and $\nu_{\psi,t}$, the ratios are about 2 and 2.5, respectively.
Figure 11.
Left: Standard-deviations of the central (green) and local (red) solutions of the TEC parameters $\nu_{o,t}$ (top), $\nu_{\phi,t}$ (middle) and $\nu_{\psi,t}$ (bottom) as functions of time. Right: The corresponding local-to-central standard-deviation ratios.
5.4. Role of CKF in improving local solutions
With the results of Figure 11, we observed that the central TEC solutions considerably outperform their local counterparts in the sense of delivering more precise outcomes, i.e. the local-to-central standard-deviation ratios are considerably larger than 1. We now employ the CKF for each node (receiver) $i = 1,\ldots,13$ to improve the local solutions' precision performance via consensus-based iterative communications between the receivers. In doing so, we make use of the three-step recursion given in Figure 7 to evaluate the error variance matrices $P_{i,t|t}$ ($i = 1,\ldots,13$), thereby computing the CKF-to-central standard-deviation ratios. The stated ratios are presented in Figure 12 for two different data sending rates: $\delta = 15$ seconds (left panel) and $\delta = 5$ seconds (right panel). In both cases, the CKF-to-central standard-deviation ratios are smaller than their local-to-central versions shown in Figure 11 (right panel), illustrating that employing the CKF does indeed improve the local solutions' precision. Since more iterative communications take place for $\delta = 5$, the corresponding ratios are very close to 1. In that case, the CKF of each receiver is expected to have a precision performance similar to that of the central (network-based) filter. For the case $\delta = 15$, however, the CKF performance of each receiver very much depends on the number of the receiver's neighbors. This is because only 4 iterative communications between the receivers take place (i.e. $k_n = 4$). The receivers with the minimum number of neighbors, i.e. receivers 1, 3 and 13 (Figure 8), have the worst precision performance, as the corresponding ratios take the largest values. On the other hand, the receivers with the maximum number of neighbors, i.e. receivers 4, 7, 9 and 8, have the best performance, as the corresponding ratios are close to 1.
Figure 12.
Time-series of the CKF-to-central standard-deviation ratios corresponding to the TEC parameters $\nu_{o,t}$ (top), $\nu_{\phi,t}$ (middle) and $\nu_{\psi,t}$ (bottom). Left: The data sending rate is set to $\delta = 15$ seconds (i.e. 4 iterative communications). Right: The data sending rate is set to $\delta = 5$ seconds (i.e. 12 iterative communications). The results of the nodes (receivers) are indicated by different colors.
Next to the solutions of the TEC parameters $\nu_{o,t}$, $\nu_{\phi,t}$ and $\nu_{\psi,t}$, we also analyze the CKF solutions of the between-satellite DCBs $b_t^{ps}$ ($s \neq p$) in Eq. (52). Because of the differences in the satellites' visibility over time (cf. Figure 10), the DCBs' standard-deviations are quite distinct and very much depend on the duration of the satellites' visibility. The longer a pair of satellites $p$-$s$ is visible, the smaller the expected standard-deviation. We now consider the time required to obtain between-satellite DCB solutions with standard-deviations smaller than 0.5 nanoseconds. Because of the stated differences in the standard-deviations, each between-satellite DCB corresponds to a different required time. For the central filter, the minimum value of this required time is 7 minutes, with the 25th percentile at 12, the median at 38, the 75th percentile at 63 and the maximum at 84 minutes. Thus after 84 minutes of the filter initialization, all central DCB solutions have standard-deviations smaller than 0.5 nanoseconds. Such percentiles can be represented by a 'boxplot'. We compute the stated percentiles for all the CKF solutions and compare their boxplots with the central one in Figure 13. The results are presented for three different data sending rates: $\delta = 15$ seconds (top), $\delta = 10$ seconds (middle) and $\delta = 5$ seconds (bottom). As shown, the more iterative communications take place, the more similar the boxplots become, i.e. the nodes (receivers) are reaching consensus. Similar to the TEC solutions, the DCB precision performance of the CKF corresponding to receivers 4, 7, 9 and 8 is nearly identical to that of the central filter, irrespective of the number of iterative communications. This follows from the fact that the stated receivers have the maximum number of neighbors (Figure 8), thus efficiently approximating the averages $\bar N_t$ and $\bar r_t$ in Eq. (28) after a few iterations. On the other hand, the receivers with the minimum number of neighbors require more iterative communications in order for their CKF precision performance to become similar to that of the central filter.
Figure 13.
Boxplots of the required time (minutes) to obtain between-satellite DCB solutions with standard-deviations smaller than 0.5 nanoseconds. The performance of the CKF of each node (receiver) is compared to that of the central filter (Cen.). In each boxplot, the horizontal lines from bottom to top show the minimum (black), 25th percentile (blue), median (green), 75th percentile (blue) and maximum (black) of the stated required time. The data sending rate is set to, Top: $\delta = 15$ seconds (i.e. 4 iterative communications), Middle: $\delta = 10$ seconds (i.e. 6 iterative communications), and Bottom: $\delta = 5$ seconds (i.e. 12 iterative communications).
6. Concluding remarks and future outlook

In this contribution we reviewed Kalman filtering in its information form and showed how the additive measurement update (28) can be realized by employing average-consensus rules, even when not all nodes are directly connected, thus allowing the sensor nodes to develop their own distributed filters. The nodes are assumed to be linked to each other at least through a 'path', so that information can flow from each node to all other nodes. Under this assumption, average-consensus protocols can deliver the consensus states $N_{i,t}^{(k_n)}$ and $r_{i,t}^{(k_n)}$ as approximations of the averages $\bar N_t$ and $\bar r_t$ in Eq. (28) at every time instance $t = 1,2,\ldots$, thus allowing one to establish a CKF recursion at every node $i = 1,\ldots,n$. To improve the stated approximation, the neighboring nodes have to establish a number of iterative data communications to transfer and receive their consensus states. This makes the CKF implementation applicable only to applications in which the state-vectors change rather slowly over time (i.e. the sampling rate $\Delta$ can take large values) and/or to cases where the sensor nodes transfer their data rather quickly (i.e. the sending rate $\delta$ can take small values).
We developed a three-step recursion of the CKF error variance matrix (
Figure 7
). This recursion conveys useful information about the precision performance of the local filters i=1,…,n, thereby enabling one to design and analyze, a priori, sensor networks with different numbers of iterative communications. As an illustrative example, we applied the stated recursion to a small-scale network of GNSS receivers and showed the role taken by the CKF in improving the precision of the solutions at each single receiver. In the near future, the proliferation of low-cost receivers will give rise to an increase in the number of GNSS users. Employing the CKF or other distributed filtering techniques, GNSS users can therefore potentially deliver high-precision parameter solutions without the need of a computing center.
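A variance-only version of such a recursion can be sketched as follows in Python. Because no measurement data enter the error variance matrix, the recursion can be evaluated before any data are collected, which is precisely what enables the a priori design analysis. The decomposition into the three steps shown (time update, consensus iterations on the information matrices, information-form measurement update) is our hedged reading of the recursion, not a reproduction of Figure 7.

import numpy as np

def ckf_variance_recursion(P0, Phi, Q, N_local, W, k_n, n_epochs):
    # P0: initial error variance matrix (common to all nodes)
    # Phi, Q: state transition and process noise matrices
    # N_local[i]: node i's measurement information matrix A_i^T R_i^{-1} A_i
    # W: consensus weight matrix, k_n: iterative communications per epoch
    n = len(N_local)
    P = [P0.copy() for _ in range(n)]
    for _ in range(n_epochs):
        # step 1: time update of the error variance at every node
        P = [Phi @ Pi @ Phi.T + Q for Pi in P]
        # step 2: k_n consensus iterations on the information matrices
        N = [Ni.copy() for Ni in N_local]
        for _ in range(k_n):
            N = [sum(W[i, j] * N[j] for j in range(n)) for i in range(n)]
        # step 3: information-form measurement update, treating the consensus
        # states as if they were the true network averages
        P = [np.linalg.inv(np.linalg.inv(P[i]) + n * N[i]) for i in range(n)]
    return P   # per-node error variance matrices after n_epochs epochs

Sweeping k_n (equivalently the data sending rate δ) over candidate values then reveals how many iterative communications a given network needs before every node's variance matrix approaches that of the central filter.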
The second author is the recipient of an Australian Research Council Federation Fellowship (project number FF0883188). This support is gratefully acknowledged.
References
1.Cattivelli FS, Sayed AH. Diffusion strategies for distributed Kalman filtering and smoothing. IEEE Transactions on Automatic Control. 2010;55(9):2069-2084
2.Das S, Moura JMF. Consensus+innovations distributed Kalman filter with optimized gains. IEEE Transactions on Signal Processing. 2017;65(2):467-481
3.Jadbabaie A, Lin J, Morse AS. Coordination of groups of mobile autonomous agents using nearest neighbor rules. IEEE Transactions on Automatic Control. 2003;48(6):988-1001
4.Kingston DB, Beard RW. Discrete-time average-consensus under switching network topologies. In: American Control Conference, 2006. IEEE; 2006. pp. 3551-3556
5.Moreau L. Stability of multiagent systems with time-dependent communication links. IEEE Transactions on Automatic Control. 2005;50(2):169-182
6.Olfati-Saber R, Murray RM. Consensus problems in networks of agents with switching topology and time-delays. IEEE Transactions on Automatic Control. 2004;49(9):1520-1533
7.Scherber DS, Papadopoulos HC. Locally constructed algorithms for distributed computations in ad-hoc networks. In: Proceedings of the 3rd International Symposium on Information Processing in Sensor Networks. ACM; 2004. pp. 11-19
8.Xiao L, Boyd S, Lall S. A scheme for robust distributed sensor fusion based on average consensus. In: Proceedings of the 4th International Symposium on Information Processing in Sensor Networks. IEEE Press; 2005. pp. 63-70
9.Rigatos GG. Distributed filtering over sensor networks for autonomous navigation of UAVs. Intelligent Service Robotics. 2012;5(3):179-198
10.Sugar TG, Kumar V. Control of cooperating mobile manipulators. IEEE Transactions on Robotics and Automation. 2002;18(1):94-103
11.Hofmann-Wellenhof B, Lichtenegger H, Wasle E. GNSS: Global Navigation Satellite Systems: GPS, Glonass, Galileo, and More. New York: Springer; 2008
12.Teunissen PJG, Montenbruck O, editors. Springer Handbook of Global Navigation Satellite Systems. Switzerland: Springer; 2017
13.Casbeer DW, Beard R. Distributed information filtering using consensus filters. In: American Control Conference, 2009. ACC’09. IEEE; 2009. pp. 1882-1887
14.Khodabandeh A, Teunissen PJG. An analytical study of PPP-RTK corrections: Precision, correlation and user-impact. Journal of Geodesy. 2015;89(11):1109-1132
15.Li W, Nadarajah N, Teunissen PJG, Khodabandeh A, Chai Y. Array-aided single-frequency state-space RTK with combined GPS, Galileo, IRNSS, and QZSS L5/E5a observations. Journal of Surveying Engineering. 2017;143(4):04017006
16.Li X, Ge M, Dai X, Ren X, Fritsche M, Wickert J, Schuh H. Accuracy and reliability of multi-GNSS real-time precise positioning: GPS, GLONASS, BeiDou, and Galileo. Journal of Geodesy. 2015;89(6):607-635
17.Odolinski R, Teunissen PJG. Single-frequency, dual-GNSS versus dual-frequency, single-GNSS: A low-cost and high-grade receivers GPS-BDS RTK analysis. Journal of Geodesy. 2016;90(11):1255-1278
18.Zaminpardaz S, Teunissen PJG, Nadarajah N. GLONASS CDMA L3 ambiguity resolution and positioning. GPS Solutions. 2016:1-15
19.Brammer K, Siffling G. Kalman-Bucy Filters. Norwood, MA: Artech House; 1989
20.Candy JV. Signal Processing: The Model-Based Approach. New Jersey: McGraw-Hill, Inc.; 1986
21.Gelb A. Applied Optimal Estimation. London: MIT Press; 1974
22.Gibbs BP. Advanced Kalman Filtering, Least-Squares and Modeling: A Practical Handbook. New Jersey: Wiley; 2011
24.Kailath T. Lectures on Wiener and Kalman Filtering. Number 140. Vienna: Springer; 1981
25.Kalman RE. A new approach to linear filtering and prediction problems. Journal of Basic Engineering. 1960;82(1):35-45
26.Teunissen PJG. Best prediction in linear models with mixed integer/real unknowns: Theory and application. Journal of Geodesy. 2007;81(12):759-780
27.Bode HW, Shannon CE. A simplified derivation of linear least square smoothing and prediction theory. Proceedings of the IRE. 1950;38(4):417-425
28.Kailath T. An innovations approach to least-squares estimation, part I: Linear filtering in additive white noise. IEEE Transactions on Automatic Control. 1968;13(6):646-655
29.Zadeh LA, Ragazzini JR. An extension of Wiener’s theory of prediction. Journal of Applied Physics. 1950;21(7):645-655
30.Anderson BDO, Moore JB. Optimal Filtering. Vol. 11. Englewood Cliffs, New Jersey: Prentice-Hall; 1979
31.Bar-Shalom Y, Li XR. Estimation and Tracking: Principles, Techniques, and Software. Norwood, MA: Artech House; 1993
32.Grewal MS, Andrews AP. Kalman Filtering: Theory and Practice Using MATLAB. 3rd ed. New Jersey: John Wiley and Sons; 2008
33.Kailath T, Sayed AH, Hassibi B. Linear Estimation. New Jersey: Prentice-Hall; 2000
34.Maybeck PS. Stochastic Models, Estimation, and Control. Vol. 1. New York: Academic Press; 1979. Republished 1994
35.Simon D. Optimal State Estimation: Kalman, H∞, and Nonlinear Approaches. New Jersey: John Wiley and Sons; 2006
36.Sorenson HW. Kalman filtering techniques. In: Leondes CT, editor. Advances in Control Systems Theory and Applications. Vol. 3; 1966. pp. 219-292
37.Stark H, Woods JW. Probability, Random Processes, and Estimation Theory for Engineers. Englewood Cliffs, New Jersey: Prentice-Hall; 1986
38.Teunissen PJG, Khodabandeh A. BLUE, BLUP and the Kalman filter: Some new results. Journal of Geodesy. 2013;87(5):461-473
39.Khodabandeh A, Teunissen PJG. A recursive linear MMSE filter for dynamic systems with unknown state vector means. GEM - International Journal on Geomathematics. 2014;5(1):17-31
40.Ren W, Beard RW. Consensus of information under dynamically changing interaction topologies. In: American Control Conference, 2004. Proceedings of the 2004, Vol. 6. IEEE; 2004. pp. 4939-4944
41.Henderson HV, Pukelsheim F, Searle SR. On the history of the Kronecker product. Linear and Multilinear Algebra. 1983;14(2):113-120
42.Horn RA, Johnson CR. Matrix Analysis. New York: Cambridge UP; 1985
43.Wolfowitz J. Products of indecomposable, aperiodic, stochastic matrices. Proceedings of the American Mathematical Society. 1963;14(5):733-737
44.Blewitt G. An automatic editing algorithm for GPS data. Geophysical Research Letters. 1990;17(3):199-202
45.Mannucci AJ, Wilson BD, Yuan DN, Ho CH, Lindqwister UJ, Runge TF. A global mapping technique for GPS-derived ionospheric total electron content measurements. Radio Science. 1998;33(3):565-582
46.Schaer S. Mapping and predicting the Earth’s ionosphere using the global positioning system. PhD thesis. Bern, Switzerland: University of Bern; 1999
47.Brunini C, Azpilicueta FJ. Accuracy assessment of the GPS-based slant total electron content. Journal of Geodesy. 2009;83(8):773-785
48.Khodabandeh A, Teunissen PJG. Array-aided multifrequency GNSS Ionospheric sensing: Estimability and precision analysis. IEEE Transactions on Geoscience and Remote Sensing. 2016;54(10):5895-5913
49.Julien O, Macabiau C, Issler JL. Ionospheric delay estimation strategies using Galileo E5 signals only. In: GNSS 2009, 22nd International Technical Meeting of The Satellite Division of the Institute of Navigation, Savannah; 2009. pp. 3128-3141
Written By
Amir Khodabandeh, Peter J.G. Teunissen and Safoora Zaminpardaz
Submitted: 09 May 2017. Reviewed: 20 September 2017. Published: 20 December 2017