
# Consensus-Based Distributed Filtering for GNSS

By Amir Khodabandeh, Peter J.G. Teunissen and Safoora Zaminpardaz

Submitted: May 9th 2017 | Reviewed: September 20th 2017 | Published: December 20th 2017

DOI: 10.5772/intechopen.71138

## Abstract

Kalman filtering in its distributed information form is reviewed and applied to a network of receivers tracking Global Navigation Satellite Systems (GNSS). We show, by employing consensus-based data-fusion rules between GNSS receivers, how the consensus-based Kalman filter (CKF) of individual receivers can deliver GNSS parameter solutions whose precision is comparable to that of their network-derived, fusion-center-dependent counterparts. This is relevant since, in the near future, the proliferation of low-cost receivers will give rise to a significant increase in the number of GNSS users. With the CKF or other distributed filtering techniques, GNSS users can therefore achieve high-precision solutions without relying on a centralized computing center.

### Keywords

• distributed filtering
• consensus-based Kalman filter (CKF)
• global navigation satellite systems (GNSS)
• GNSS networks
• GNSS ionospheric observables

## 1. Introduction

Kalman filtering in its decentralized and distributed forms has received increasing attention in the sensor network community and has been studied extensively in recent years, see e.g. [1, 2, 3, 4, 5, 6, 7, 8]. While in the traditional centralized Kalman filter setup all sensor nodes have to send their measurements to a computing (fusion) center to obtain the state estimate, in distributed filtering schemes the nodes share only limited information with their neighboring nodes (i.e. a subset of all other nodes) and yet obtain state estimates that are comparable to those of the centralized filter in a minimum-mean-squared-error sense. This particular feature of distributed filters potentially makes the data communication between the nodes cost-effective and expands the nodes' capacity to perform parallel computations.

The structure of this contribution is as follows. We first briefly review the principles of the standard Kalman filter and its information form in Section 2. The additivity property of the information filter that makes it particularly useful for distributed processing is also highlighted. In Section 3 we discuss the average consensus rules by which the sensor nodes agree to fuse each other's information. Different consensus protocols are discussed and a 'probabilistic' measure for the evaluation of their convergence rates is proposed. Section 4 is devoted to the CKF algorithmic steps. Its two-time-scale nature is remarked upon and a three-step recursion for evaluating the consensus-based error variance matrix is developed. In Section 5 we apply the CKF theory to a small-scale network of GNSS receivers collecting ionospheric observables over time. Conducting a precision analysis, we compare the precision of the network-based ionospheric solutions with that of their single-receiver and consensus-based counterparts. It is shown how the CKF of each receiver responds to an increase in the number of iterative communications between the neighboring nodes. Concluding remarks and a future outlook are provided in Section 6.

## 2. Kalman filtering

Consider a time series of observable random vectors $y_1,\ldots,y_t$. The goal is to predict the unobservable random state-vectors $x_1,\ldots,x_t$. By the term 'prediction', we mean that the observables $y_1,\ldots,y_t$ are used to estimate realizations of the random vectors $x_1,\ldots,x_t$. Accordingly, the means of the state-vectors $x_1,\ldots,x_t$ can be known, while their unknown realizations still need to be guessed (predicted) through observed realizations of $y_1,\ldots,y_t$. In the following, to show on which set of observables prediction is based, we use the notation $\hat{x}_{t|\tau}$ for the predictor of $x_t$ when based on $y^{\tau} = [y_1^T,\ldots,y_\tau^T]^T$. The expectation, covariance and dispersion operators are denoted by $E(\cdot)$, $C(\cdot,\cdot)$ and $D(\cdot)$, respectively. The capital $Q$ is reserved for (co)variance matrices. Thus $C(x_t, y^{\tau}) = Q_{x_t y^{\tau}}$.

### 2.1. The Kalman filter standard assumptions

To predict the state-vectors in an optimal sense, one often uses the minimum mean squared error (MMSE) principle as the optimality criterion, see e.g. [19, 20, 21, 22, 23, 24, 25]. In case no restrictions are placed on the class of predictors, the MMSE predictor $\hat{x}_{t|\tau}$ is given by the conditional mean $E(x_t|y^{\tau})$, known as the Best Predictor (BP). The BP is unbiased, but generally nonlinear, with exceptions, for instance, in the Gaussian case. In case $x_t$ and $y^{\tau}$ are jointly Gaussian, the BP becomes linear and identical to its linear counterpart, the Best Linear Predictor (BLP)

$$
\hat{x}_{t|\tau} = E(x_t) + Q_{x_t y^{\tau}}\, Q_{y^{\tau} y^{\tau}}^{-1}\left(y^{\tau} - E(y^{\tau})\right) \tag{1}
$$

Eq. (1) implies that (1) the BLP is unbiased, i.e. $E(\hat{x}_{t|\tau}) = E(x_t)$, and that (2) the prediction error of a BLP is always uncorrelated with the observables on which the BLP is based, i.e. $C(x_t - \hat{x}_{t|\tau},\, y^{\tau}) = 0$. These two basic properties can alternatively be used to uniquely specify a BLP [26].

The Kalman filter is a recursive BP (Gaussian case) or a recursive BLP. A recursive predictor, say $\hat{x}_{t|t}$, can be obtained from the previous predictor $\hat{x}_{t|t-1}$ and the newly collected observable vector $y_t$. Recursive prediction is thus very suitable for applications that require real-time determination of temporally varying parameters. We now state the standard assumptions that make the Kalman filter recursion feasible.

The dynamic model: The linear dynamic model, describing the time-evolution of the state-vectors $x_t$, is given as

$$
x_t = \Phi_{t,t-1}\, x_{t-1} + d_t,\qquad t = 1, 2, \ldots \tag{2}
$$

with

$$
E(x_0) = x_{0|0},\qquad D(x_0) = Q_{x_0 x_0} \tag{3}
$$

and

$$
E(d_t) = 0,\qquad C(d_t, d_s) = S_t\,\delta_{t,s},\qquad C(d_t, x_0) = 0 \tag{4}
$$

for the time instances $t, s = 1, 2, \ldots$, with $\delta_{t,s}$ being the Kronecker delta. The nonsingular matrix $\Phi_{t,t-1}$ denotes the transition matrix and the random vector $d_t$ is the system noise. The system noise $d_t$ is thus assumed to have a zero mean, to be uncorrelated in time and to be uncorrelated with the initial state-vector $x_0$. The transition matrix from epoch $s$ to $t$ is denoted as $\Phi_{t,s}$. Thus $\Phi_{t,s}^{-1} = \Phi_{s,t}$ and $\Phi_{t,t} = I$ (the identity matrix).

The measurement model: The link between the observables $y_t$ and the state-vectors $x_t$ is assumed given as

$$
y_t = A_t\, x_t + \varepsilon_t,\qquad t = 1, 2, \ldots \tag{5}
$$

with

$$
E(\varepsilon_t) = 0,\qquad C(\varepsilon_t, \varepsilon_s) = R_t\,\delta_{t,s},\qquad C(\varepsilon_t, x_0) = 0,\qquad C(\varepsilon_t, d_s) = 0 \tag{6}
$$

for $t, s = 1, 2, \ldots$, with $A_t$ being the known design matrix. Thus the zero-mean measurement noise $\varepsilon_t$ is assumed to be uncorrelated in time and to be uncorrelated with the initial state-vector $x_0$ and the system noise $d_t$.

### 2.2. The three-step recursion

Initialization: As the mean of $x_0$ is known, the best predictor of $x_0$ in the absence of data is the mean $E(x_0) = x_{0|0}$. Hence, the initialization is simply given by

$$
\hat{x}_{0|0} = x_{0|0},\qquad P_{0|0} = Q_{x_0 x_0} \tag{7}
$$

That the initial error variance matrix $P_{0|0} = D(x_0 - \hat{x}_{0|0})$ is identical to the variance matrix $Q_{x_0 x_0}$ follows from the equality $D(x_0 - x_{0|0}) = D(x_0)$.

Time update: Let us choose $\Phi_{t,t-1}\hat{x}_{t-1|t-1}$ as a candidate for the BLP $\hat{x}_{t|t-1}$. According to Eq. (1), the candidate would be the BLP if it fulfills two conditions: (1) it must be unbiased and (2) it must have a prediction error uncorrelated with the previous data $y^{t-1}$. The first condition, i.e. $E(\Phi_{t,t-1}\hat{x}_{t-1|t-1}) = E(x_t)$, follows from Eq. (2) and the equalities $E(\hat{x}_{t-1|t-1}) = E(x_{t-1})$ and $E(d_t) = 0$. The second condition, i.e. $C(x_t - \Phi_{t,t-1}\hat{x}_{t-1|t-1},\, y^{t-1}) = 0$, follows from the fact that the prediction error $x_t - \Phi_{t,t-1}\hat{x}_{t-1|t-1}$ is a function of the previous BLP prediction error $x_{t-1} - \hat{x}_{t-1|t-1}$ and the system noise $d_t$, i.e. (cf. 2)

$$
x_t - \Phi_{t,t-1}\hat{x}_{t-1|t-1} = \Phi_{t,t-1}\left(x_{t-1} - \hat{x}_{t-1|t-1}\right) + d_t \tag{8}
$$

both of which are uncorrelated with the previous data $y^{t-1}$. Hence, the time update is given by

$$
\hat{x}_{t|t-1} = \Phi_{t,t-1}\hat{x}_{t-1|t-1},\qquad P_{t|t-1} = \Phi_{t,t-1}\, P_{t-1|t-1}\,\Phi_{t,t-1}^T + S_t \tag{9}
$$

The error variance matrix $P_{t|t-1} = D(x_t - \hat{x}_{t|t-1})$ follows by applying the covariance propagation law to (8), together with $C(x_{t-1} - \hat{x}_{t-1|t-1},\, d_t) = 0$.

Measurement update: In the presence of new data $y_t$, one may yet offer $\hat{x}_{t|t-1}$ as a candidate for the BLP $\hat{x}_{t|t}$. Such a candidate fulfills the unbiasedness condition $E(\hat{x}_{t|t-1}) = E(x_t)$, but not necessarily the zero-correlation condition, that is, $C(x_t - \hat{x}_{t|t-1},\, y^{t}) \neq 0$. Note, however, that $C(x_t - \hat{x}_{t|t-1},\, y^{t-1}) = 0$. Thus the zero-correlation condition $C(x_t - \hat{x}_{t|t-1},\, y^{t}) = 0$ would have been met if the most recent data $y_t$ of $y^{t} = [y^{t-1\,T},\, y_t^T]^T$ were a function of the previous data $y^{t-1}$, thereby fully predicted by $y^{t-1}$. Since an observable is its own best predictor, this would imply $y_t = A_t\hat{x}_{t|t-1}$, where $A_t\hat{x}_{t|t-1}$ is the BLP of $y_t$. But this would require the zero-mean quantity $v_t = y_t - A_t\hat{x}_{t|t-1}$ to be identically zero, which is generally not the case. It is therefore the presence of $v_t$ that violates the zero-correlation condition. Note that $v_t$ is a function of the prediction error $x_t - \hat{x}_{t|t-1}$ and the measurement noise $\varepsilon_t$, i.e. (cf. 5)

$$
v_t = A_t\left(x_t - \hat{x}_{t|t-1}\right) + \varepsilon_t \tag{10}
$$

both of which are uncorrelated with $y^{t-1}$. Therefore, $v_t$ cannot be predicted by the previous data $y^{t-1}$, showing that $v_t$ contains truly new information. That is why $v_t$ is sometimes referred to as the innovation of $y_t$, see e.g. [27, 28, 29]. We now amend our earlier candidate $\hat{x}_{t|t-1}$ by adding a linear function of $v_t$ to it. It reads $\hat{x}_{t|t} = \hat{x}_{t|t-1} + K_t v_t$, with $K_t$ being an unknown matrix to be chosen such that the zero-correlation condition is met. Such a matrix, known as the Kalman gain matrix, is uniquely specified by

$$
K_t = P_{t|t-1} A_t^T\, Q_{v_t v_t}^{-1} \;\Longrightarrow\; C\!\left(x_t - \hat{x}_{t|t-1} - K_t v_t,\; y^{t}\right) = 0 \tag{11}
$$

since $C(x_t - \hat{x}_{t|t-1},\, y_t) = P_{t|t-1} A_t^T$ and $C(v_t, y_t) = Q_{v_t v_t}$. The measurement update then reads

$$
\hat{x}_{t|t} = \hat{x}_{t|t-1} + K_t v_t,\qquad\text{with}\qquad P_{t|t} = P_{t|t-1} - K_t\, Q_{v_t v_t} K_t^T \tag{12}
$$

The error variance matrix $P_{t|t} = D(x_t - \hat{x}_{t|t})$ follows by an application of the covariance propagation law, together with $C(x_t - \hat{x}_{t|t-1},\, v_t) = P_{t|t-1} A_t^T$. Application of the covariance propagation law to (10) gives the variance matrix of $v_t$ as

$$
Q_{v_t v_t} = A_t\, P_{t|t-1} A_t^T + R_t \tag{13}
$$

since $C(x_t - \hat{x}_{t|t-1},\, \varepsilon_t) = 0$.
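As a concrete illustration, the three-step recursion of Eqs. (7), (9) and (12)–(13) can be sketched for a one-dimensional random-walk state. All numerical values below (transition matrix, noise variances) are our own illustrative assumptions, not taken from the chapter:

```python
import numpy as np

# Minimal sketch of the three-step Kalman recursion (Eqs. 7, 9, 12, 13)
# for a 1-state random-walk model; all model values are illustrative.
rng = np.random.default_rng(1)

Phi = np.array([[1.0]])   # transition matrix Phi_{t,t-1}
S   = np.array([[0.01]])  # system-noise variance S_t
A   = np.array([[1.0]])   # design matrix A_t
R   = np.array([[0.25]])  # measurement-noise variance R_t

# Initialization (Eq. 7): known initial mean and variance
x_hat = np.array([0.0])
P     = np.array([[1.0]])

x_true = 0.0
for t in range(50):
    x_true = x_true + rng.normal(0.0, np.sqrt(S[0, 0]))            # Eq. (2)
    y = A @ np.array([x_true]) + rng.normal(0.0, np.sqrt(R[0, 0])) # Eq. (5)

    # Time update (Eq. 9)
    x_hat = Phi @ x_hat
    P = Phi @ P @ Phi.T + S

    # Measurement update (Eqs. 10-13)
    v = y - A @ x_hat                  # innovation v_t
    Qv = A @ P @ A.T + R               # innovation variance (Eq. 13)
    K = P @ A.T @ np.linalg.inv(Qv)    # Kalman gain (Eq. 11)
    x_hat = x_hat + K @ v              # Eq. (12)
    P = P - K @ Qv @ K.T

print(float(P[0, 0]))  # error variance settles at its steady-state value
```

After a few epochs the error variance $P_{t|t}$ converges to the fixed point of the scalar Riccati recursion implied by Eqs. (9) and (12).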

### 2.3. A remark on the filter initialization

In the derivation of the Kalman filter one assumes the mean of the random initial state-vector $x_0$, in Eq. (3), to be known, see e.g. [30, 31, 32, 33, 34, 35, 36, 37]. This is because of the BLP structure (1), which needs knowledge of the means $E(x_t)$ and $E(y^{\tau})$. Since in many, if not most, applications the means of the state-vectors $x_1,\ldots,x_t$ are unknown, such a derivation is therefore not appropriate. As shown in Ref. [38], one can do away with the need to have both the initial mean $x_{0|0}$ and variance matrix $Q_{x_0 x_0}$, given in Eq. (3), known. The corresponding three-step recursion would then follow the Best Linear Unbiased Prediction (BLUP) principle and not that of the BLP. The BLUP is also an MMSE predictor, but within a more restrictive class of predictors. It replaces the means $E(x_t)$ and $E(y^{\tau})$ by their corresponding Best Linear Unbiased Estimators (BLUEs). Within such a BLUP recursion, the initialization Eq. (7) is revised and takes place at time instance $t = 1$ in the presence of the data $y_1$. Provided that matrix $A_1$ is of full column rank, the predictor $\hat{x}_{1|1}$ follows from solving the normal equations

$$
N_1\hat{x}_{1|1} = r_1,\qquad\text{with}\quad N_1 = A_1^T R_1^{-1} A_1,\quad r_1 = A_1^T R_1^{-1} y_1 \tag{14}
$$

Thus

$$
\hat{x}_{1|1} = N_1^{-1} r_1,\qquad\text{and}\qquad P_{1|1} = N_1^{-1} \tag{15}
$$

The above error variance matrix $P_{1|1}$ is thus not dependent on the variance matrix of $x_1$, i.e. $Q_{x_1 x_1} = \Phi_{1,0}\, Q_{x_0 x_0}\Phi_{1,0}^T + S_1$. This is, however, not the case with the variance matrix of the predictor $\hat{x}_{1|1}$ itself, i.e. $Q_{\hat{x}_{1|1}\hat{x}_{1|1}} = D(\hat{x}_{1|1})$. This variance matrix is given by [38]

$$
Q_{\hat{x}_{1|1}\hat{x}_{1|1}} = Q_{x_1 x_1} + P_{1|1} \tag{16}
$$

showing that $P_{1|1} \leq Q_{\hat{x}_{1|1}\hat{x}_{1|1}}$. The matrices $P_{t|t}$ and $Q_{\hat{x}_{t|t}\hat{x}_{t|t}}$ ($t = 1, 2, \ldots$) are used for two different purposes. The error variance matrix $P_{t|t} = D(x_t - \hat{x}_{t|t})$ is a measure of 'closeness' of $\hat{x}_{t|t}$ to its target random vector $x_t$, and is thereby meant to describe the 'quality' of prediction, i.e. the precision of the prediction error $x_t - \hat{x}_{t|t}$. The variance matrix $Q_{\hat{x}_{t|t}\hat{x}_{t|t}} = D(\hat{x}_{t|t})$, however, is a measure of closeness of $\hat{x}_{t|t}$ to the nonrandom vector $E(x_t)$, as $D(\hat{x}_{t|t}) = D(E(x_t) - \hat{x}_{t|t})$. Thus $Q_{\hat{x}_{t|t}\hat{x}_{t|t}}$ does not describe the quality of prediction, but instead the precision of the predictor $\hat{x}_{t|t}$.
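The decomposition of Eq. (16) can be checked by covariance propagation: with a full-column-rank $A_1$, the BLUP satisfies $\hat{x}_{1|1} = x_1 + N_1^{-1}A_1^T R_1^{-1}\varepsilon_1$, so its variance exceeds its error variance by exactly $Q_{x_1 x_1}$. The following sketch verifies this with assumed (illustrative) matrix values:

```python
import numpy as np

# Sketch verifying Eqs. (14)-(16) by covariance propagation; the
# matrices A, R and Q_x1 below are illustrative assumptions.
rng = np.random.default_rng(5)
p, m = 2, 4
A = rng.standard_normal((m, p))            # full column rank (generic)
R = np.diag(rng.uniform(0.2, 1.0, m))      # measurement-noise variance R_1
Q_x1 = np.array([[2.0, 0.3], [0.3, 1.0]])  # variance matrix of x_1

N = A.T @ np.linalg.inv(R) @ A             # normal matrix (Eq. 14)
P = np.linalg.inv(N)                       # error variance P_{1|1} (Eq. 15)

# x1_hat - x1 = N^{-1} A^T R^{-1} eps  ->  propagate the covariance of eps
G = P @ A.T @ np.linalg.inv(R)
D_error = G @ R @ G.T                      # D(x_1 - x1_hat)
D_pred  = Q_x1 + D_error                   # D(x1_hat), Eq. (16)

assert np.allclose(D_error, P)             # error variance equals P_{1|1}
assert np.allclose(D_pred, Q_x1 + P)       # predictor variance, Eq. (16)
print("Eq. (16) verified: D(x1_hat) = Q_{x1 x1} + P_{1|1}")
```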

The MMSE of the BLUP recursion is never smaller than that of the Kalman filter, as the Kalman filter makes use of additional information, namely, the known mean $x_{0|0}$ and variance matrix $Q_{x_0 x_0}$. When the stated information is available, the BLUP recursion is shown to encompass the Kalman filter as a special case [39]. In the following we therefore assume that the means of the state-vectors $x_1,\ldots,x_t$ are unknown, a situation that often applies to GNSS applications.

### 2.4. Filtering in information form

The three-step recursion presented in Eqs. (7), (9) and (12) concerns the time-evolution of the predictor $\hat{x}_{t|t}$ and the error variance matrix $P_{t|t}$. As shown in Eq. (15), both $P_{1|1}$ and $\hat{x}_{1|1}$ can be determined by the normal matrix $N_1 = P_{1|1}^{-1}$ and the right-hand-side vector $r_1 = P_{1|1}^{-1}\hat{x}_{1|1}$. One can therefore alternatively develop a recursion concerning the time-evolution of $P_{t|t}^{-1}$ and $P_{t|t}^{-1}\hat{x}_{t|t}$. From a computational point of view, such a recursion is found to be very suitable when the inverse-variance or information matrices $S_t^{-1}$ and $R_t^{-1}$ serve as input rather than the variance matrices $S_t$ and $R_t$. To that end, one may define [34]

$$
\text{information vector:}\quad i_{t|\tau} := P_{t|\tau}^{-1}\hat{x}_{t|\tau}\qquad\text{and}\qquad\text{information matrix:}\quad I_{t|\tau} := P_{t|\tau}^{-1} \tag{17}
$$

Given the definition above, the information filter recursion concerning the time-evolution of $i_{t|t}$ and $I_{t|t}$ follows from the recursion Eqs. (15), (9) and (12), along with the following matrix-inversion equalities

$$
\begin{aligned}
\text{Time update:}&\quad \left(\Phi_{t,t-1}\, P_{t-1|t-1}\,\Phi_{t,t-1}^T + S_t\right)^{-1} = M_t - M_t\left(M_t + S_t^{-1}\right)^{-1} M_t\\[4pt]
\text{Measurement update:}&\quad \left(P_{t|t-1} - P_{t|t-1} A_t^T\, Q_{v_t v_t}^{-1} A_t\, P_{t|t-1}\right)^{-1} = P_{t|t-1}^{-1} + A_t^T R_t^{-1} A_t
\end{aligned} \tag{18}
$$

where $M_t = \Phi_{t-1,t}^T\, P_{t-1|t-1}^{-1}\,\Phi_{t-1,t}$.

The algorithmic steps of the information filter are presented in Figure 1. In the absence of data, the filter is initialized by the zero information $i_{1|0} = 0$ and $I_{1|0} = 0$. In the presence of the data $y_t$, the corresponding normal matrix $N_t$ and right-hand-side vector $r_t$ are added to the time update information $i_{t|t-1}$ and $I_{t|t-1}$ to obtain the measurement update information $i_{t|t}$ and $I_{t|t}$. The transition matrix $\Phi_{t,t-1}$ and inverse-variance matrix $S_t^{-1}$ are then used to time update the previous information $i_{t-1|t-1}$ and $I_{t-1|t-1}$.
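The two matrix-inversion equalities of Eq. (18) can be checked numerically against the covariance-form updates of Eqs. (9) and (12). The sketch below does so on small random matrices; all dimensions and values are illustrative assumptions:

```python
import numpy as np

# Sketch checking the information-filter identities of Eq. (18)
# numerically; sizes and values are arbitrary assumptions.
rng = np.random.default_rng(2)
m = 3
Phi = np.eye(m) + 0.1 * rng.standard_normal((m, m))  # nonsingular transition
P_prev = np.eye(m)                                   # P_{t-1|t-1}
S = np.diag([0.1, 0.2, 0.3])                         # system-noise variance S_t
A = rng.standard_normal((2, m))                      # design matrix A_t
R = np.diag([0.5, 0.5])                              # measurement-noise variance R_t

# Time update: covariance form (Eq. 9) vs. information form (Eq. 18)
P_pred = Phi @ P_prev @ Phi.T + S
Phi_inv = np.linalg.inv(Phi)                         # Phi_{t-1,t} = Phi_{t,t-1}^{-1}
M = Phi_inv.T @ np.linalg.inv(P_prev) @ Phi_inv      # M_t of Eq. (18)
I_pred = M - M @ np.linalg.inv(M + np.linalg.inv(S)) @ M
assert np.allclose(I_pred, np.linalg.inv(P_pred))

# Measurement update: covariance form (Eq. 12) vs. information form (Eq. 18)
Qv = A @ P_pred @ A.T + R
K = P_pred @ A.T @ np.linalg.inv(Qv)
P_filt = P_pred - K @ Qv @ K.T
I_filt = np.linalg.inv(P_pred) + A.T @ np.linalg.inv(R) @ A
assert np.allclose(np.linalg.inv(I_filt), P_filt)
print("information and covariance forms agree")
```

Both identities are instances of the matrix-inversion (Woodbury) lemma, which is why the information form needs only $S_t^{-1}$ and $R_t^{-1}$ as input.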

Singular matrix $S_t$: In the first expression of Eq. (18) one assumes the variance matrix $S_t$ to be nonsingular and invertible. There are, however, situations where some of the elements of the state-vector $x_t$ are nonrandom, i.e., the corresponding system noise is identically zero. As a consequence, the variance matrix $S_t$ becomes singular and the inverse matrix $S_t^{-1}$ does not exist. An example of such a situation concerns the presence of the GNSS carrier-phase ambiguities in the filter state-vector, which are treated as constant in time. In such cases the information time update in Figure 1 must be generalized so as to accommodate singular variance matrices $S_t$. Let $\tilde{S}_t$ be an invertible sub-matrix of $S_t$ that has the same rank as $S_t$. Then there exists a full-column-rank matrix $H_t$ such that

$$
S_t = H_t\,\tilde{S}_t\, H_t^T \tag{19}
$$

Matrix $H_t$ can, for instance, be structured by the columns of the identity matrix $I$ corresponding to the columns of $S_t$ on which the sub-matrix $\tilde{S}_t$ is positioned. The special case

$$
S_t = \begin{bmatrix}\tilde{S}_t & 0\\ 0 & 0\end{bmatrix} = \begin{bmatrix}I\\ 0\end{bmatrix}\tilde{S}_t\begin{bmatrix}I\\ 0\end{bmatrix}^T \;\Longrightarrow\; H_t = \begin{bmatrix}I\\ 0\end{bmatrix} \tag{20}
$$

shows an example of the representation (19). With Eq. (19), a generalization of the time update (Figure 1) can be shown to be given by

$$
I_{t|t-1} = M_t - M_t H_t\left(H_t^T M_t H_t + \tilde{S}_t^{-1}\right)^{-1} H_t^T M_t \tag{21}
$$

Thus instead of $S_t^{-1}$, the inverse matrix $\tilde{S}_t^{-1}$ and $H_t$ are assumed available.
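The generalized time update of Eq. (21) can be cross-checked against the covariance-form time update of Eq. (9) for a singular $S_t$ of the block form in Eq. (20). The sketch below uses assumed sizes and values (and $\Phi_{t,t-1} = I$, as for time-constant states):

```python
import numpy as np

# Sketch of the generalized information time update (Eq. 21) for a
# singular S_t of the form in Eq. (20); values are illustrative.
m = 4
Phi = np.eye(m)                                # e.g. time-constant states
P_prev = np.eye(m) + 0.2 * np.ones((m, m))     # positive-definite P_{t-1|t-1}
S_tilde = np.diag([0.1, 0.3])                  # invertible sub-matrix of S_t
H = np.vstack([np.eye(2), np.zeros((2, 2))])   # H_t = [I, 0]^T (Eq. 20)
S = H @ S_tilde @ H.T                          # singular system-noise matrix (Eq. 19)

M = np.linalg.inv(Phi @ P_prev @ Phi.T)        # M_t (Phi = I here)
# Eq. (21): only S_tilde^{-1} and H are needed, not S_t^{-1}
I_pred = M - M @ H @ np.linalg.inv(H.T @ M @ H + np.linalg.inv(S_tilde)) @ H.T @ M

# Cross-check against the covariance-form time update (Eq. 9)
P_pred = Phi @ P_prev @ Phi.T + S
assert np.allclose(I_pred, np.linalg.inv(P_pred))
assert np.linalg.matrix_rank(S) == 2           # S_t is indeed singular
print("generalized time update matches Eq. (9)")
```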

### 2.5. Additivity property of the information measurement update

As stated previously, the information filter delivers outcomes equivalent to those of the Kalman filter recursion. Thus any particular preference for the information filter must be attributed to the computational effort required for obtaining the outcomes. For instance, if handling matrix inversion requires low computational complexity when working with the input inverse matrices $S_t^{-1}$ and $R_t^{-1}$, the information filter appears to be more suitable. In this subsection we highlight yet another property of the information filter that makes this recursion particularly useful for distributed processing.

As shown in Figure 1, the information measurement update is additive in the sense that the measurement information $N_t$ and $r_t$ is added to the information states $I_{t|t-1}$ and $i_{t|t-1}$. We now make a start to show how this additivity property lends itself to distributed filtering. Let the measurement model Eq. (5) be partitioned as

$$
y_t = A_t x_t + \varepsilon_t \;\Longleftrightarrow\; \begin{bmatrix} y_{1,t}\\ \vdots\\ y_{i,t}\\ \vdots\\ y_{n,t}\end{bmatrix} = \begin{bmatrix} A_{1,t}\\ \vdots\\ A_{i,t}\\ \vdots\\ A_{n,t}\end{bmatrix} x_t + \begin{bmatrix}\varepsilon_{1,t}\\ \vdots\\ \varepsilon_{i,t}\\ \vdots\\ \varepsilon_{n,t}\end{bmatrix},\qquad t = 1, 2, \ldots \tag{22}
$$

Accordingly, the observable vector $y_t$ is partitioned into $n$ sub-vectors $y_{i,t}$ ($i = 1,\ldots,n$), each having its own design matrix $A_{i,t}$ and measurement noise vector $\varepsilon_{i,t}$. One can think of a network of $n$ sensor nodes where each collects its own observable vector $y_{i,t}$, but all aim to determine a common state-vector $x_t$. Let us further assume that the nodes collect observables independently from one another. This yields

$$
C(\varepsilon_{i,t}, \varepsilon_{j,t}) = R_{i,t}\,\delta_{i,j},\qquad\text{for}\quad i, j = 1,\ldots,n,\quad\text{and}\quad t = 1, 2, \ldots \tag{23}
$$

Thus the measurement noise vectors $\varepsilon_{i,t}$ ($i = 1,\ldots,n$) are assumed to be mutually uncorrelated. With the extra assumption Eq. (23), the normal matrix $N_t = A_t^T R_t^{-1} A_t$ and right-hand-side vector $r_t = A_t^T R_t^{-1} y_t$ can then be, respectively, expressed as

$$
N_t = \sum_{i=1}^{n} N_{i,t},\qquad\text{and}\qquad r_t = \sum_{i=1}^{n} r_{i,t} \tag{24}
$$

where

$$
N_{i,t} = A_{i,t}^T R_{i,t}^{-1} A_{i,t},\qquad\text{and}\qquad r_{i,t} = A_{i,t}^T R_{i,t}^{-1} y_{i,t} \tag{25}
$$

According to Eq. (24), the measurement information of each node, say $N_{i,t}$ and $r_{i,t}$, is individually added to the information states $I_{t|t-1}$ and $i_{t|t-1}$, that is

$$
I_{t|t} = I_{t|t-1} + \sum_{i=1}^{n} N_{i,t},\qquad i_{t|t} = i_{t|t-1} + \sum_{i=1}^{n} r_{i,t} \tag{26}
$$

Now consider the situation where each node runs its own local information filter, thus having its own information states $I_{i,t|t}$ and $i_{i,t|t}$ ($i = 1,\ldots,n$). The task is to recursively update the local states $I_{i,t|t}$ and $i_{i,t|t}$ in such a way that they remain equal to their central counterparts $I_{t|t}$ and $i_{t|t}$ given in Eq. (26). Suppose that such equalities hold at the time update, i.e. $I_{i,t|t-1} = I_{t|t-1}$ and $i_{i,t|t-1} = i_{t|t-1}$. Given the number of contributing nodes $n$, each node then just needs to be provided with the average quantities

$$
\bar{N}_t = \frac{1}{n}\sum_{i=1}^{n} N_{i,t},\qquad\text{and}\qquad \bar{r}_t = \frac{1}{n}\sum_{i=1}^{n} r_{i,t} \tag{27}
$$

The local states $I_{i,t|t-1}$ and $i_{i,t|t-1}$ would then be measurement updated as (cf. 26)

$$
I_{i,t|t} = I_{i,t|t-1} + n\bar{N}_t,\qquad i_{i,t|t} = i_{i,t|t-1} + n\bar{r}_t \tag{28}
$$

which are equal to the central states $I_{t|t}$ and $i_{t|t}$, respectively. In this way one has multiple distributed local filters $i = 1,\ldots,n$, each recursively delivering results identical to those of a central filter.
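The additivity of Eqs. (24)–(26) is easy to verify numerically: stacking all node measurements into one central model gives exactly the sum of the per-node normal matrices and right-hand sides. The sketch below uses assumed network and model sizes:

```python
import numpy as np

# Sketch of the additivity property (Eqs. 22-26): stacking all node
# measurements yields the same normal equations as summing the per-node
# contributions. Sizes and values are illustrative assumptions.
rng = np.random.default_rng(4)
n, p, m = 5, 3, 2                      # nodes, state size, observations per node
A_i = [rng.standard_normal((m, p)) for _ in range(n)]
R_i = [np.diag(rng.uniform(0.5, 1.0, m)) for _ in range(n)]
y_i = [rng.standard_normal(m) for _ in range(n)]

# Per-node contributions (Eq. 25)
N_i = [A.T @ np.linalg.inv(R) @ A for A, R in zip(A_i, R_i)]
r_i = [A.T @ np.linalg.inv(R) @ y for A, R, y in zip(A_i, R_i, y_i)]

# Central (stacked) normal equations, Eq. (22) with block-diagonal R_t
A_c = np.vstack(A_i)
R_c = np.zeros((n * m, n * m))
for i, R in enumerate(R_i):
    R_c[i * m:(i + 1) * m, i * m:(i + 1) * m] = R
y_c = np.concatenate(y_i)
N_c = A_c.T @ np.linalg.inv(R_c) @ A_c
r_c = A_c.T @ np.linalg.inv(R_c) @ y_c

assert np.allclose(N_c, sum(N_i))      # Eq. (24): N_t = sum_i N_{i,t}
assert np.allclose(r_c, sum(r_i))      # Eq. (24): r_t = sum_i r_{i,t}
# A node knowing n and the averages of Eq. (27) recovers the central sums (Eq. 28)
assert np.allclose(n * (sum(N_i) / n), N_c)
print("per-node sums reproduce the central normal equations")
```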

To compute the average quantities $\bar{N}_t$ and $\bar{r}_t$, node $i$ may need to receive all other information $N_{j,t}$ and $r_{j,t}$ ($j \neq i$). In other words, node $i$ would require direct connections to all other nodes $j \neq i$, a situation that makes data communication and processing power very expensive (particularly for a large number of nodes). In the following, cheaper ways of evaluating the averages $\bar{N}_t$ and $\bar{r}_t$ are discussed.

## 3. Average consensus rules

In the previous section, we briefly discussed the potential applicability of the information filter as a tool for handling the measurement model Eq. (22) in a distributed manner. With the representation Eq. (28), however, one may be inclined to conclude that such applicability is limited to the case where the nodes $i = 1,\ldots,n$ have 'direct' communication connections to one another in order to receive/send their measurement information $N_{i,t}$ and $r_{i,t}$ ($i = 1,\ldots,n$).

Instead of having direct connections, the idea is now to relax this stringent requirement by assuming that the nodes are linked to each other at least through a 'path', so that information can flow from each node to all other nodes. It is therefore assumed that each node along the path plays the role of an agent transferring information to other nodes. To reach the averages $\bar{N}_t$ and $\bar{r}_t$, the nodes then agree on specific 'fusion rules' or consensus protocols, see e.g. [6, 8, 40]. Note that each node exchanges information with its neighboring nodes (i.e. those to which it has direct connections) and not with all nodes. A repeated application of the consensus protocols is therefore required. The notion is made precise below.

### 3.1. Communication graphs

The way the nodes interact with each other to transfer information is referred to as the interaction topology between the nodes. The interaction topology is often described by a directed graph whose vertices and edges, respectively, represent the nodes and the communication links [4]. The interaction topology may also undergo a finite number of changes over sessions $k = 1,\ldots,k_o$. In case of one-way links, the directions of the edges face toward the receiving nodes (vertices). Here we assume that the communication links between the nodes are two-way, thus having undirected (or bidirectional) graphs. Examples representing a network of 20 nodes with their interaction links are shown in Figure 2. Let an undirected graph at session $k$ be denoted by $G_k = (V, E_k)$, where $V = \{1,\ldots,n\}$ is the vertex set and $E_k \subset \{(i,j)\,|\, i, j \in V\}$ is the edge set. We assume that the nodes remain unchanged over time, which is why the subscript $k$ is omitted for $V$. This is generally not the case with their interaction links though, i.e. the edge set $E_k$ depends on $k$. As in Figure 2(b), the number of links between the nodes can be different for different sessions $k = 1,\ldots,k_o$. Each session represents a graph that may not be connected. In a 'connected' graph, every vertex is linked to all other vertices at least through one path. In order for information to flow from each node to all other nodes, the union of the graphs $G_k$ ($k = 1,\ldots,k_o$), i.e.

$$
G = (V, E)\qquad\text{with}\qquad E = \bigcup_{k=1}^{k_o} E_k\qquad (k_o:\ \text{a finite number}) \tag{29}
$$

is therefore assumed to be connected. We define the neighbors of node $i$ as those to which node $i$ has direct links. For every session $k$, they are collected in the set $N_{i,k} = \{j \mid (j,i)\in E_k\}$. For instance, for network (a) of Figure 2, we have only one session, i.e. $k_o = 1$, in which $N_{2,1} = \{1, 3, 4, 5\}$ represents the neighbors of node 2. In case of network (b), however, we have different links over four sessions, i.e. $k_o = 4$. In this case, the neighbors of node 2 are given by four sets: $N_{2,1} = \{5\}$ in session 1 (red), $N_{2,2} = \emptyset$ in session 2 (yellow), $N_{2,3} = \{4\}$ in session 3 (green) and $N_{2,4} = \emptyset$ in session 4 (blue).

### 3.2. Consensus protocols

Given the right-hand-side vector $r_{i,t}$, suppose that node $i$ aims to obtain the average $\bar{r}_t$, for which all other vectors $r_{j,t}$ ($j \neq i$) are required to be available (cf. 27). But node $i$ only has access to those of its neighbors, i.e. the vectors $r_{j,t}$ ($j \in N_{i,k}$). For the first session $k = 1$, it would then seem reasonable to compute a weighted average of the available vectors, i.e.

$$
r_{i,t}^{(1)} = \sum_{j\in\{i\}\cup N_{i,1}} w_{ij}^{(1)}\, r_{j,t} \tag{30}
$$

as an approximation of $\bar{r}_t$, where the scalars $w_{ij}^{(1)}$ ($j \in \{i\}\cup N_{i,1}$) denote the corresponding weights at session $k = 1$. Now assume that all other nodes $j \neq i$ agree to apply the fusion rule Eq. (30) to their own neighbors. Thus the neighboring nodes $j \in N_{i,k}$ also have their own weighted averages $r_{j,t}^{(1)}$. But they may have access to nodes to which node $i$ has no direct links. In other words, the weighted averages $r_{j,t}^{(1)}$ ($j \in N_{i,1}$) contain information on the nodes to which node $i$ has no access. For the next session $k = 2$, it is therefore reasonable for node $i$ to repeat the fusion rule Eq. (30), but now over the new vectors $r_{j,t}^{(1)}$ ($j \in \{i\}\cup N_{i,2}$), aiming to improve on the earlier approximation $r_{i,t}^{(1)}$. This yields the following iterative computations

$$
r_{i,t}^{(k)} = \sum_{j\in\{i\}\cup N_{i,k}} w_{ij}^{(k)}\, r_{j,t}^{(k-1)},\qquad k = 1, 2, \ldots \tag{31}
$$

with $r_{j,t}^{(0)} := r_{j,t}$. Choosing a set of weights $w_{ij}^{(k)}$, the nodes $i = 1,\ldots,n$ agree on the consensus protocol (31) to iteratively fuse their information vectors $r_{i,t}^{(k)}$. Here and in the following, we use the letter '$k$' both for the 'session number' $k = 1,\ldots,k_o$ (cf. 29) and for the 'number of iterative communications' $k = 1,\ldots,k_n$ (cf. 34). The maximum iteration number $k_n$ is assumed to be not smaller than the maximum session number $k_o$, i.e. $k_n \geq k_o$.
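The fusion rule of Eq. (31) can be sketched in a few lines. The 6-node topology and the local scalar values below are our own illustrative assumptions; the weights follow the Metropolis rule of Protocol 3 in Table 1:

```python
import numpy as np

# Minimal sketch of the consensus iteration of Eq. (31): each node
# repeatedly replaces its value by a weighted average over itself and
# its neighbors. Topology and local values are illustrative assumptions.
n = 6
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0), (0, 3)]
nbrs = {i: set() for i in range(n)}
for i, j in edges:
    nbrs[i].add(j); nbrs[j].add(i)

W = np.zeros((n, n))
for i, j in edges:                   # Metropolis weights (Protocol 3, Table 1)
    W[i, j] = W[j, i] = 1.0 / (1 + max(len(nbrs[i]), len(nbrs[j])))
for i in range(n):
    W[i, i] = 1.0 - W[i].sum()       # self-weight: each row sums to one

r0 = np.array([2.0, 7.0, 1.0, 9.0, 4.0, 7.0])   # local values r_{i,t}
r = r0.copy()
for k in range(50):                  # k_n = 50 iterative communications
    r = W @ r                        # Eq. (31) for all nodes at once

print(np.round(r, 3), r0.mean())     # every node approaches the average
```

Note that each row of the update uses only the node's own value and those of its direct neighbors, yet all six states approach the network average $\bar{r}$.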

The question that now comes to the fore is how to choose the weights $w_{ij}^{(k)}$ such that the approximation $r_{i,t}^{(k)}$ gets close to $\bar{r}_t$ through the iteration Eq. (31). More precisely, the stated iteration becomes favorable if $r_{i,t}^{(k)} \to \bar{r}_t$ as $k \to \infty$ for all nodes $i = 1,\ldots,n$. To address this question, we use a multivariate formulation. Let $p$ be the size of the vectors $r_{i,t}$ ($i = 1,\ldots,n$). We define the higher-dimensioned vector $r = [r_{1,t}^T,\ldots,r_{n,t}^T]^T$. The multivariate version of Eq. (31) then reads

$$
\begin{bmatrix} r_{1,t}^{(k)}\\ \vdots\\ r_{i,t}^{(k)}\\ \vdots\\ r_{n,t}^{(k)}\end{bmatrix} = \begin{bmatrix} w_{11}^{(k)} I_p & \cdots & w_{1i}^{(k)} I_p & \cdots & w_{1n}^{(k)} I_p\\ \vdots & & \vdots & & \vdots\\ w_{i1}^{(k)} I_p & \cdots & w_{ii}^{(k)} I_p & \cdots & w_{in}^{(k)} I_p\\ \vdots & & \vdots & & \vdots\\ w_{n1}^{(k)} I_p & \cdots & w_{ni}^{(k)} I_p & \cdots & w_{nn}^{(k)} I_p\end{bmatrix}\begin{bmatrix} r_{1,t}^{(k-1)}\\ \vdots\\ r_{i,t}^{(k-1)}\\ \vdots\\ r_{n,t}^{(k-1)}\end{bmatrix},\qquad k = 1, 2, \ldots \tag{32}
$$

or

$$
r^{(k)} = \left(W_k \otimes I_p\right) r^{(k-1)},\qquad k = 1, 2, \ldots \tag{33}
$$

The $n\times n$ weight matrix $W_k$ is structured by $w_{ij}^{(k)}$ ($j \in \{i\}\cup N_{i,k}$) and $w_{ij}^{(k)} = 0$ ($j \notin \{i\}\cup N_{i,k}$). The symbol $\otimes$ is the Kronecker matrix product [41]. According to Eq. (33), after $k_n$ iterations the most recent iterated vector $r^{(k_n)}$ is linked to the initial vector $r^{(0)}$ by $\left(\prod_{k=1}^{k_n} W_k \otimes I_p\right) r^{(0)}$. Thus the vectors $r_{i,t}^{(k)}$ ($i = 1,\ldots,n$) converge to $\bar{r}_t$ when

$$
\lim_{k_n\to\infty}\;\prod_{k=1}^{k_n} W_k = \frac{1}{n}\, e_n e_n^T \tag{34}
$$

where the $n$-vector $e_n$ contains ones. If the condition Eq. (34) is met, the set of nodes $\{1,\ldots,n\}$ can asymptotically reach average consensus [4]. It can be shown that (34) holds if the weight matrices $W_k$ ($k = 1,\ldots,k_o$) have bounded nonnegative entries with positive diagonals, i.e. $w_{ij}^{(k)} \geq 0$ and $w_{ii}^{(k)} > 0$, having row- and column-sums equal to one, i.e. $\sum_{j=1}^{n} w_{ij}^{(k)} = 1$ and $\sum_{i=1}^{n} w_{ij}^{(k)} = 1$ ($i, j = 1,\ldots,n$), see e.g. [3, 5, 40, 42, 43].

Examples of such consensus protocols are given in Table 1. As shown, the weights form a symmetric weight matrix $W_k$, i.e. $w_{ji}^{(k)} = w_{ij}^{(k)}$. In all protocols presented, the self-weights $w_{ii}^{(k)}$ are chosen so that the condition $\sum_{j=1}^{n} w_{ij}^{(k)} = 1$ is satisfied. The weights of Protocols 1 and 2 belong to the class of 'maximum-degree' weights, while those of Protocol 3 are referred to as 'Metropolis' weights [8]. The weights of Protocols 1 and 3 are driven by the degrees (numbers of neighbors) of the nodes $i = 1,\ldots,n$, denoted by $dg_i^{(k)} = \#N_{i,k}$. For instance, in network (a) of Figure 2 we have $dg_1^{(1)} = 4$ as node 1 has 4 neighbors, while $dg_{14}^{(1)} = 7$ as node 14 has 7 neighbors. Protocol 4 is only applicable to networks like (b) in Figure 2, i.e. when each node has at most one neighbor at a session [4]. In this case, each node exchanges its information with just one neighbor at a session. Thus for two neighboring nodes $i$ and $j$ we have $w_{ii}^{(k)} = w_{jj}^{(k)} = w_{ij}^{(k)} = 0.5$, each averaging $r_{i,t}^{(k-1)}$ and $r_{j,t}^{(k-1)}$ to obtain $r_{i,t}^{(k)} = r_{j,t}^{(k)}$.

| Protocol | $w_{ij}^{(k)}$, $(i,j)\in E_k$ | $w_{ii}^{(k)}$ |
| --- | --- | --- |
| Protocol 1 | $\dfrac{1}{\max_{u\in\{1,\ldots,n\}} dg_u^{(k)}}$ | $1-\sum_{u\neq i} w_{iu}^{(k)}$ |
| Protocol 2 | $\dfrac{1}{n}$ | $1-\sum_{u\neq i} w_{iu}^{(k)}$ |
| Protocol 3 | $\dfrac{1}{1+\max\left(dg_i^{(k)},\, dg_j^{(k)}\right)}$ | $1-\sum_{u\neq i} w_{iu}^{(k)}$ |
| Protocol 4 | $\dfrac{1}{2}$ | $1-\sum_{u\neq i} w_{iu}^{(k)}$ |

otherwise $w_{ij}^{(k)} = 0$.

### Table 1.

Examples of average-consensus protocols forming the weights $w_{ij}^{(k)}$ in Eq. (31). The degree (number of neighbors) of node $i$ is denoted by $dg_i^{(k)} = \#N_{i,k}$. Protocol 4 is only applicable when each node has at most one neighbor at a session.

To provide insight into the applicability of the protocols given in Table 1, we apply them to the networks of Figure 2. Twenty values (scalars), say $r_i$ ($i = 1,\ldots,20$), are generated whose average is equal to 5, i.e. $\bar{r} = 5$. Each value is assigned to its corresponding node. For network (a), Protocols 1, 2 and 3 are separately applied, whereas Protocol 4 is only applied to network (b). The corresponding results, up to 30 iterations, are presented in Figure 3. As shown, the iterated values $r_i^{(k)}$ ($i = 1,\ldots,20$) get closer to their average (i.e. $\bar{r} = 5$) as the number of iterative communications increases.

### 3.3. On convergence of consensus states

Figure 3 shows that the states $r_{i,t}^{(k)}$ ($i = 1,\ldots,n$) converge to their average $\bar{r}_t$, but with different rates. The convergence rate depends on the initial states $r_{i,t}^{(0)} = r_{i,t}$ and on the consensus protocol employed. From the figure it seems that the convergence rates of Protocols 1 and 3 are about the same, higher than those of Protocols 2 and 4. Note that the stated results are obtained on the basis of specific 'realizations' of $r_{i,t}$ ($i = 1,\ldots,n$). Consider the states $r_{i,t}$ to be random vectors. In that case, can the results still be representative for judging the convergence performances of the protocols? To answer this question, let us define the difference vectors $dr_{i,t}^{(k)} = r_{i,t}^{(k)} - \bar{r}_t$, collected in the higher-dimensioned vector $dr^{(k)} = [dr_{1,t}^{(k)T},\ldots,dr_{n,t}^{(k)T}]^T$. The more iterations, the smaller the norm of $dr^{(k)}$ becomes. According to Eq. (33), after $k_n$ iterations the difference vector $dr^{(k_n)}$ is linked to $r = [r_{1,t}^T,\ldots,r_{n,t}^T]^T$ through

$$
dr^{(k_n)} = \left[\left(L_{k_n} - \frac{1}{n}\, e_n e_n^T\right)\otimes I_p\right] r,\qquad L_{k_n} = \prod_{k=1}^{k_n} W_k \tag{35}
$$

Now let the initial states $r_{i,t}$ have the same mean and the same variance matrix $D(r_{i,t}) = Q$ ($i = 1,\ldots,n$), but be mutually uncorrelated. An application of the covariance propagation law to (35), together with $L_{k_n} e_n = e_n$, gives

$$
D\!\left(dr^{(k_n)}\right) = \left(L_{k_n}^2 - \frac{1}{n}\, e_n e_n^T\right)\otimes Q \tag{36}
$$

Thus the closer the squared matrix $L_{k_n}^2$ is to $\frac{1}{n} e_n e_n^T$, the smaller the variance matrix Eq. (36) becomes. In the limit as $k_n \to \infty$, the stated variance matrix tends to zero. This is what one would expect, since $dr^{(k_n)} \to 0$. Under the conditions stated in Eq. (34), the matrices $W_k$ have $\lambda_n = 1$ as the largest absolute value of their eigenvalues [42]. A symmetric weight matrix $W$ can then be expressed in its spectral form as

$$
W = \sum_{i=1}^{n-1}\lambda_i\, u_i u_i^T + \frac{1}{n}\, e_n e_n^T \tag{37}
$$

with the eigenvalues $|\lambda_1| \leq \cdots \leq |\lambda_{n-1}| < \lambda_n = 1$ and the corresponding orthogonal unit eigenvectors $u_1,\ldots,u_{n-1}$, $u_n = (1/\sqrt{n})\, e_n$. By a repeated application of the protocol $W$, we get $L_{k_n} = W^{k_n}$. Substitution into Eq. (36), together with Eq. (37), finally gives

$$
D\!\left(dr^{(k_n)}\right) = \sum_{i=1}^{n-1}\lambda_i^{2k_n}\, u_i u_i^T \otimes Q \;\leq\; \lambda_{n-1}^{2k_n}\, I_n \otimes Q \tag{38}
$$

The above equation shows that the entries of the variance matrix (36) are largely driven by the second-largest eigenvalue of $W$, i.e. $\lambda_{n-1}$. The smaller the scalar $\lambda_{n-1}$, the faster the quantity $\lambda_{n-1}^{2k_n}$ tends to zero as $k_n \to \infty$. The scalar $\lambda_{n-1}$ is thus often used as a measure to judge the convergence performances of the protocols [7]. For the networks of Figure 2, $\lambda_{n-1}$ of Protocols 1, 2 and 3 is about 0.92, 0.97 and 0.91, respectively. As Protocol 3 has the smallest $\lambda_{n-1}$, it is expected to have the best performance. Note that in Protocol 4 the weight matrix $W_k$ varies in every session, so its performance cannot be judged by a single eigenvalue $\lambda_{n-1}$. One can therefore think of another means of measuring the convergence performance. Due to the randomness of the information vectors $r_{i,t}$ ($i = 1,\ldots,n$), one may propose 'probabilistic' measures such as

$$
\mathrm{Prob}\left(\max_{i}\,\|dr_{i,t}^{(k_n)}\|_{Q} \leq q\right),\qquad q > 0 \tag{39}
$$

to evaluate the convergence rates of the protocols, where $\|dr_{i,t}\|_{Q}^{2} := dr_{i,t}^T\, Q^{-1}\, dr_{i,t}$. Eq. (39) refers to the probability that the maximum norm of the difference vectors $dr_{i,t}^{(k_n)} = r_{i,t}^{(k_n)} - \bar{r}_t$ ($i = 1,\ldots,n$) is not larger than a given positive scalar $q$ for a fixed number of iterations $k_n$. The higher the probability Eq. (39), the better the performance of a protocol. For the scalar case $Q = \sigma^2$, Eq. (39) reduces to

$$
\mathrm{Prob}\left(\max_{i}\,|dr_{i,t}^{(k_n)}| \leq q\,\sigma\right) \tag{40}
$$

which is the probability that the absolute differences $|dr_{i,t}^{(k_n)}|$ ($i = 1,\ldots,n$) are not larger than $q$ times the standard deviation $\sigma$. For the networks of Figure 2, 100,000 normally distributed vectors as samples of $r = [r_1,\ldots,r_{20}]^T$ are simulated to evaluate the probability (40). The results for Protocols 1, 2, 3 and 4 are presented in Figure 4. The stated probability is plotted as a function of $q$ for three numbers of iterative communications, $k_n = 10, 20$ and 30. As shown, Protocol 3 gives rise to the highest probabilities, while Protocol 2 delivers the lowest. After 10 iterations, the probability of having absolute differences smaller than one-fifth of the standard deviation $\sigma$ (i.e. $q = 0.2$) is about 80% for Protocol 1, whereas it is less than 5% for Protocol 2. After 30 iterations, the stated probability increases to 80% for Protocol 2, but is close to 100% for Protocols 1 and 3.
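Both convergence measures can be sketched in a few lines. The example below computes $\lambda_{n-1}$ for a Metropolis weight matrix (Protocol 3) and evaluates the probability of Eq. (40) by Monte Carlo simulation; the 6-node ring topology and sample size are our own assumptions, not the 20-node networks of Figure 2:

```python
import numpy as np

# Sketch: lambda_{n-1} of a Metropolis weight matrix (Protocol 3) and a
# Monte Carlo estimate of the probability measure of Eq. (40). The ring
# topology and sample size are illustrative assumptions.
rng = np.random.default_rng(6)
n, k_n, q = 6, 10, 0.2
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0)]   # a ring graph
deg = np.zeros(n, int)
for i, j in edges:
    deg[i] += 1; deg[j] += 1
W = np.zeros((n, n))
for i, j in edges:                   # Metropolis weights (Table 1)
    W[i, j] = W[j, i] = 1.0 / (1 + max(deg[i], deg[j]))
W += np.diag(1.0 - W.sum(axis=1))

lam = np.sort(np.abs(np.linalg.eigvalsh(W)))
lam_n1 = lam[-2]                     # second-largest |eigenvalue|

# Monte Carlo evaluation of Eq. (40) for Q = sigma^2 = 1
samples = rng.standard_normal((100_000, n))
Wk = np.linalg.matrix_power(W, k_n)  # L_{k_n} = W^{k_n}
dr = samples @ Wk.T - samples.mean(axis=1, keepdims=True)
prob = np.mean(np.max(np.abs(dr), axis=1) <= q)
print(lam_n1, prob)
```

For this ring all node degrees equal 2, so the Metropolis matrix is circulant and $\lambda_{n-1} = 2/3$; the disagreement variance then decays like $(2/3)^{2k_n}$, which is why the estimated probability is already close to one at $k_n = 10$.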

Figure 4 demonstrates that the convergence performance of Protocol 4 is clearly better than that of Protocol 2, as it delivers higher probabilities (for the networks of Figure 2). Such a conclusion, however, cannot be drawn on the basis of the results of Figure 3. This shows that results obtained on the basis of specific 'realizations' of $r_{i,t}$ ($i = 1,\ldots,n$) are not necessarily representative.

## 4. Consensus-based Kalman filters

### 4.1. Two time-scale approach

In Section 2.5 we discussed how the additivity property of the measurement update Eq. (26) offers possibilities for developing multiple distributed local filters $i = 1,\ldots,n$, each delivering local states $I_{i,t|t}$ and $i_{i,t|t}$ equal to their central counterparts $I_{t|t}$ and $i_{t|t}$. In doing so, each node has to evaluate the averages $\bar{N}_t$ and $\bar{r}_t$ at every time instance $t$. Since in practice the nodes do not necessarily have direct connections to each other, options such as the consensus-based fusion rules (cf. Section 3) can alternatively be employed to 'approximate' $\bar{N}_t$ and $\bar{r}_t$. As illustrated in Figures 3 and 4, such a consensus-based approximation requires a number of iterative communications between the nodes in order to reach the averages $\bar{N}_t$ and $\bar{r}_t$. These iterative communications clearly require some time to be carried out and must take place during every time interval $[t, t+1)$ (see Figure 5). We distinguish between the sampling rate $\Delta$ and the sending rate $\delta$. The sampling rate refers to the frequency with which node $i$ collects its observables $y_{i,t}$ ($t = 1, 2, \ldots$), while the sending rate refers to the frequency with which node $i$ sends/receives the information $N_{j,t}^{(k)}$ and $r_{j,t}^{(k)}$ ($k = 1,\ldots,k_n$) to/from its neighboring nodes. As shown in Figure 5, the sending rate $\delta$ should therefore be reasonably smaller than the sampling rate $\Delta$ so as to be able to incorporate consensus protocols into the information filter setup. Such a consensus-based Kalman filter (CKF) is thus generally of a two-time-scale nature [2]: the data sampling time-scale $t = 1, 2, \ldots$ versus the data sending time-scale $k = 1,\ldots,k_n$. The CKF is a suitable tool for handling real-time data processing in a distributed manner for applications in which the state-vectors $x_t$ ($t = 1, 2, \ldots$) change rather slowly over time (i.e. $\Delta$ can take large values) and/or for cases where the sensor nodes transfer their data rather quickly (i.e. $\delta$ can take small values).

Under the assumption $\delta \ll \Delta$, the CKF recursion follows from the Kalman filter recursion by adding an extra step, namely the 'consensus update'. The algorithmic steps of the CKF in information form are presented in Figure 6; compare this recursion with that of the information filter given in Figure 1. Like the information filter, the CKF at node $i$ is initialized with zero information, $I_{i,1|0}=0$ and $i_{i,1|0}=0$. In the presence of the data $y_{i,t}$, node $i$ computes its local normal matrix $N_{i,t}$ and right-hand-side vector $r_{i,t}$ and sends them to its neighboring nodes $j \in \mathcal{N}_i$. In the consensus update, iterative communications $k=1,\ldots,k_n$ between the neighboring nodes are carried out to approximate the averages $\bar N_t$ and $\bar r_t$ by $N_{i,t}^{(k_n)}$ and $r_{i,t}^{(k_n)}$, respectively. After a finite number of communications $k_n$, the consensus states $N_{i,t}^{(k_n)}$ and $r_{i,t}^{(k_n)}$ are, respectively, added to the time update information $I_{i,t|t-1}$ and $i_{i,t|t-1}$ to obtain their measurement update versions $I_{i,t|t}$ and $i_{i,t|t}$ at node $i$ (cf. Eq. (28)). The time update goes along the same lines as that of the information filter.
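The consensus and measurement updates described above can be sketched in code. The following is a minimal Python sketch, not the authors' implementation: it simulates one measurement epoch for all $n$ nodes at once, uses an assumed row-stochastic weight matrix `W` (playing the role of $L$ in Eq. (34), with zeros for non-neighbors), and follows the additive update of Eq. (28), in which $n$ times the consensus states is added to the predicted information.

```python
import numpy as np

def ckf_measurement_epoch(I_pred, i_pred, N_loc, r_loc, W, k_n):
    """One CKF measurement epoch, run for all n nodes simultaneously.

    I_pred, i_pred : per-node predicted information I_{i,t|t-1}, i_{i,t|t-1}
    N_loc, r_loc   : per-node local normal matrices N_{i,t} and vectors r_{i,t}
    W              : n x n row-stochastic consensus weight matrix (zeros for
                     non-neighbors), so that W^k -> (1/n) e e^T as k grows
    k_n            : number of consensus iterations per sampling interval
    """
    n = len(N_loc)
    N = list(N_loc)
    r = list(r_loc)
    # consensus update: k_n rounds of weighted neighbor averaging,
    # approximating Nbar_t = (1/n) sum_j N_{j,t} and rbar_t likewise
    for _ in range(k_n):
        N = [sum(W[i, j] * N[j] for j in range(n)) for i in range(n)]
        r = [sum(W[i, j] * r[j] for j in range(n)) for i in range(n)]
    # measurement update: add n times the consensus states (cf. Eq. (28))
    I_upd = [I_pred[i] + n * N[i] for i in range(n)]
    i_upd = [i_pred[i] + n * r[i] for i in range(n)]
    return I_upd, i_upd
```

With a fully connected network and $W = \frac{1}{n} e_n e_n^T$, a single iteration already reproduces the central update exactly; with a sparser $W$, more iterations $k_n$ are needed to approach it.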

### 4.2. Time evolution of the CKF error covariances

With the consensus-based information filter presented in Figure 6, it is therefore feasible to develop multiple distributed filters, all running in parallel over time. By taking recourse to an average-consensus protocol, not all nodes need to be directly linked, thereby allowing non-neighboring nodes to also benefit from each other's information states. The price one has to pay for this attractive feature of the CKF is that the local predictors

$$\hat x_{i,t|t} = I_{i,t|t}^{-1}\, i_{i,t|t}, \qquad i=1,\ldots,n \qquad (41)$$

will have a poorer precision performance than their central counterpart $\hat x_{t|t}$. This is due to the fact that the consensus states $N_{i,t}^{(k_n)}$ and $r_{i,t}^{(k_n)}$ $(i=1,\ldots,n)$ are just approximations of the averages $\bar N_t$ and $\bar r_t$. Although they reach the stated averages as $k_n \to \infty$, one of course always works with a finite number of communications $k_n$. As a consequence, while the inverse matrix $I_{t|t}^{-1}$ represents the error variance matrix $P_{t|t} = D(x_t - \hat x_{t|t})$ (cf. 17), the inverse matrices $I_{i,t|t}^{-1}$ $(i=1,\ldots,n)$ do not represent the error variance matrices $P_{i,t|t} = D(x_t - \hat x_{i,t|t})$. To see this, consider the local prediction errors $x_t - \hat x_{i,t|t}$, which can be expressed as (Figure 6)

$$x_t - \hat x_{i,t|t} = I_{i,t|t}^{-1}\left[\, I_{i,t|t-1}\left(x_t - \hat x_{i,t|t-1}\right) - n\left(r_{i,t}^{(k_n)} - N_{i,t}^{(k_n)}\, x_t\right) \right] \qquad (42)$$

Note that the terms $x_t - \hat x_{i,t|t-1}$ and $r_{i,t}^{(k_n)} - N_{i,t}^{(k_n)} x_t$ are uncorrelated. With $l_{ij}$ as the entries of the product matrix $L^{k_n}$ in Eq. (34), one obtains

$$r_{i,t}^{(k_n)} - N_{i,t}^{(k_n)} x_t = \sum_{j=1}^{n} l_{ij}\left(r_{j,t} - N_{j,t}\, x_t\right), \qquad D\!\left(r_{i,t}^{(k_n)} - N_{i,t}^{(k_n)} x_t\right) = \sum_{j=1}^{n} l_{ij}^{2}\, N_{j,t} \qquad (43)$$

since $D(r_{j,t} - N_{j,t}\, x_t) = N_{j,t}$. With this in mind, an application of the covariance propagation law to Eq. (42) results in the error variance matrix

$$P_{i,t|t} = I_{i,t|t}^{-1}\left[\, I_{i,t|t-1}\, P_{i,t|t-1}\, I_{i,t|t-1} + n^{2} \sum_{j=1}^{n} l_{ij}^{2}\, N_{j,t} \right] I_{i,t|t}^{-1} \qquad (44)$$

which is not necessarily equal to $I_{i,t|t}^{-1}$ (see the following discussion on Eqs. (47) and (48)).

In Figure 7 we present the three-step recursion of the error variance matrix $P_{i,t|t}$ (for node $i$). As shown, node $i$ would need an extra input, namely the term $\sum_{j=1}^{n} l_{ij}^{2} N_{j,t}$, in order to compute $P_{i,t|t}$. In practice, however, such additional information is absent in the CKF setup. This means that node $i$ does not have enough information to evaluate its error variance matrix $P_{i,t|t}$. Despite this restriction, it will be shown in Section 5 how the recursion of $P_{i,t|t}$ conveys useful information about the performance of the local filters $i=1,\ldots,n$, thereby allowing one to a-priori design and analyze sensor networks with different numbers of iterative communications.
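The measurement-update step of this error variance recursion, Eq. (44), is straightforward to evaluate numerically. The sketch below is illustrative only (the function name and interface are our own); it assumes the $i$-th row of $L^{k_n}$ and the local normal matrices $N_{j,t}$ are available.

```python
import numpy as np

def ckf_error_variance_update(P_pred, I_pred, I_upd, l_row, N_loc):
    """Error variance measurement update of Eq. (44):
    P_{i,t|t} = I_{i,t|t}^{-1} [ I_{i,t|t-1} P_{i,t|t-1} I_{i,t|t-1}
                + n^2 sum_j l_ij^2 N_{j,t} ] I_{i,t|t}^{-1}

    l_row : i-th row of L^{k_n}; N_loc : local normal matrices N_{j,t}.
    """
    n = len(N_loc)
    # consensus-induced noise term, n^2 sum_j l_ij^2 N_{j,t}
    noise = n**2 * sum(l_row[j]**2 * N_loc[j] for j in range(n))
    I_inv = np.linalg.inv(I_upd)
    return I_inv @ (I_pred @ P_pred @ I_pred + noise) @ I_inv
```

In the stationary special case of Eq. (47), repeated application of this update reproduces $P_{i,t|t} = \alpha\,(tnN)^{-1}$.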

To better appreciate the recursion given in Figure 7, let us consider a special case where a stationary state-vector $x_t$ is to be predicted over time. Thus $\Phi_{t,t-1} = I$ and $S_t = 0$ $(t=1,2,\ldots)$. Moreover, we assume that all nodes deliver the same normal matrices, $N_{i,t} = N$ $(i=1,\ldots,n)$.

The central error variance matrix $P_{t|t}$ would then simply follow by inverting the sum of all normal matrices over the $n$ nodes and $t$ time instances. Collecting observables up to and including time instance $t$, the stated variance matrix reads $P_{t|t} = \frac{1}{tn} N^{-1}$. We now compare $P_{t|t}$ with its consensus-based local counterpart at node $i$, i.e. $P_{i,t|t}$. The aforementioned assumptions, together with $\sum_{j=1}^{n} l_{ij} = 1$, give

$$N_{i,t}^{(k_n)} = \sum_{j=1}^{n} l_{ij}\, N_{j,t} = N, \qquad \text{and} \qquad n^{2} \sum_{j=1}^{n} l_{ij}^{2}\, N_{j,t} = \alpha\, n\, N \qquad (45)$$

in which the scalar $\alpha$ is given by

$$\alpha := n \sum_{j=1}^{n} l_{ij}^{2} \qquad (46)$$

Substitution into the stated recursion provides us with the time-evolution of the error variance matrix $P_{i,t|t}$ as follows (Figure 7):

$$\begin{array}{lll}
I_{i,1|0} = 0, & F_{i,1} = 0 & \\
I_{i,1|1} = nN, & P_{i,1|1} = \alpha\,\dfrac{1}{n}N^{-1} & \\
I_{i,2|1} = I_{i,1|1}, & F_{i,2} = \alpha\, nN, & P_{i,2|1} = P_{i,1|1} \\
\quad\vdots & \quad\vdots & \quad\vdots \\
I_{i,t|t-1} = I_{i,t-1|t-1}, & F_{i,t} = \alpha\,(t-1)\, nN, & P_{i,t|t-1} = P_{i,t-1|t-1} \\
I_{i,t|t} = t\,nN, & P_{i,t|t} = \alpha\,\dfrac{1}{tn}N^{-1} &
\end{array} \qquad (47)$$

This shows that the consensus-based error variance matrix $P_{i,t|t}$ is $\alpha$ times its central counterpart $P_{t|t} = \frac{1}{tn} N^{-1}$. With the vector $l \equiv (l_{i1},\ldots,l_{in})^{T}$, an application of the Cauchy-Schwarz inequality gives the lower-bound

$$\alpha = \frac{\left(e_n^{T} e_n\right)\left(l^{T} l\right)}{\left(l^{T} e_n\right)^{2}} \geq 1 \qquad (48)$$

since $l^{T} e_n = 1$. Thus the scalar $\alpha$ is never smaller than 1, i.e. $P_{i,t|t} \geq P_{t|t}$, showing that the performance of the consensus-based predictor $\hat x_{i,t|t}$ is never better than that of its central version $\hat x_{t|t}$. The lower-bound in Eq. (48) is reached when $l = \frac{1}{n} e_n$, i.e. when $l_{ij} = 1/n$ $(j=1,\ldots,n)$. According to Eq. (34), this can be realized if $L^{k_n} \to \frac{1}{n} e_n e_n^{T}$, for which the number of iterations $k_n$ may need to be reasonably large. The conclusion is therefore that the local filters at nodes $i=1,\ldots,n$ generate information matrices $I_{i,t|t}$ whose inverses differ from the actual error variance matrices of the predictors $\hat x_{i,t|t}$, i.e. $I_{i,t|t}^{-1} \neq P_{i,t|t}$.
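The penalty factor $\alpha$ of Eqs. (46)-(48) is easy to compute for any given row of $L^{k_n}$. The sketch below (illustrative only; the function name is our own) makes the bound concrete:

```python
import numpy as np

def consensus_penalty(l_row):
    """alpha = n * sum_j l_ij^2 (Eq. (46)) for one row of L^{k_n}.

    By the Cauchy-Schwarz inequality (Eq. (48)), alpha >= 1, with
    equality only for the uniform weights l_ij = 1/n.
    """
    l = np.asarray(l_row, dtype=float)
    if not np.isclose(l.sum(), 1.0):
        raise ValueError("rows of L^{k_n} must sum to 1")
    return l.size * float(np.sum(l**2))
```

For example, uniform weights over $n=4$ nodes give $\alpha = 1$, whereas the skewed row $(0.7, 0.1, 0.1, 0.1)$ gives $\alpha = 2.08$, i.e. a local error variance about twice the central one in the stationary case of Eq. (47).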

## 5. Applications to GNSS

The purpose of this section is to demonstrate how the CKF theory, discussed in Section 4, can play a pivotal role in applications for which the GNSS measurements of a network of receivers are to be processed in a real-time manner. In a GNSS network setup, each receiver serves as a sensor node for receiving observables from visible GNSS satellites to determine a range of different parameters such as positions and velocities in an Earth-centered Earth-fixed coordinate system, atmospheric delays, timing and instrumental biases, see e.g. [11, 12]. As the observation equations of the receivers have satellite specific parameters in common, the receivers’ observables are often integrated through a computing (fusion) center to provide network-derived parameter solutions that are more precise than their single-receiver versions. Now the idea is to deliver GNSS parameter solutions without the need of having a computing center, such that their precision performance is still comparable to that of network-derived solutions.

As previously discussed, consensus-based algorithms, and in particular the CKF, can be employed to process network data in a distributed filtering scheme, i.e. no computing center is required. To illustrate this applicability, we simulate a network of 13 GNSS receivers located in Perth, Western Australia (Figure 8). As shown in the figure, each node (white circle) represents a receiver having data links (red lines) to its neighbors, with inter-station distances of up to 4 km. We therefore assume that the receivers receive each other's data within ranges of no more than 4 km. For instance, receiver 1 is directly connected to receivers 2 and 6, but not to receiver 3 (the inter-station distance between receivers 1 and 3 is about 8 km).

### 5.1. GNSS ionospheric observables: Dynamic and measurement models

Although the GNSS observables contain information on various positioning and non-positioning parameters, here we restrict ourselves to ionospheric observables of the GPS pseudo-range measurements only [44]. One should however bear in mind that this restriction is made only for the sake of presentation and illustration of the theory discussed in Sections 3 and 4. If one were, for instance, to make use of the very precise carrier-phase measurements and/or to formulate a multi-GNSS measurement setup, solutions of higher precision would be expected.

Let the scalar $y_{i,t}^{s}$ denote the pseudo-range ionospheric observable that receiver $i$ collects from satellite $s$ at time instance $t$. The corresponding measurement model, formed by the between-satellite differences $y_{i,t}^{ps} \equiv y_{i,t}^{s} - y_{i,t}^{p}$ $(s \neq p)$, reads (cf. Eq. (5))

$$y_{i,t}^{ps} = \left[\, a_{i,t;o}^{ps}\,\nu_{o,t} + a_{i,t;\phi}^{ps}\,\nu_{\phi,t} + a_{i,t;\psi}^{ps}\,\nu_{\psi,t} \,\right] - b_t^{ps} + \varepsilon_{i,t}^{ps} \qquad (49)$$

where the term within $[\cdot]$ refers to the first-order slant ionospheric delays, and $b_t^{ps}$ denotes the between-satellite differential code biases (DCBs). We use a regional single-layer model [45, 46] to represent the slant ionospheric delays in terms of (1) $\nu_{o,t}$, the vertical total electron content (TEC), and (2) $\nu_{\phi,t}$ and (3) $\nu_{\psi,t}$, the south-to-north and west-to-east spatial gradients of $\nu_{o,t}$, respectively. The corresponding known coefficients follow from Ref. [47]:

$$a_{i,t;o}^{s} = \frac{1}{\cos z_{i,t}^{s}}, \qquad a_{i,t;\phi}^{s} = \frac{1}{\cos z_{i,t}^{s}}\left(\phi_{i,t}^{s} - \phi_{o,t}\right), \qquad a_{i,t;\psi}^{s} = \frac{1}{\cos z_{i,t}^{s}}\cos\phi_{i,t}^{s}\left(\psi_{i,t}^{s} - \psi_{o,t}\right) \qquad (50)$$

with $(\cdot)^{ps} \equiv (\cdot)^{s} - (\cdot)^{p}$. The angles $\psi_{i,t}^{s}$ and $\phi_{i,t}^{s}$, respectively, denote the longitude and latitude of the ionospheric piercing point (IPP) corresponding to the receiver-to-satellite line-of-sight $i$-$s$ (see Figure 9). They are computed with respect to those of the reference IPP at time instance $t$, i.e. $\psi_{o,t}$ and $\phi_{o,t}$. The angle $z_{i,t}^{s}$ denotes the zenith angle of the IPP. These angles are computed based on a mean Earth radius of 6378.137 km and a layer height of 450 km. The measurement noises $\varepsilon_{i,t}^{s}$ are assumed to be mutually uncorrelated with the dispersion (cf. Eq. (6))

$$D\!\left(\varepsilon_{i,t}^{s}\right) = \left(\frac{1.02}{0.02 + \sin\theta_{i,t}^{s}}\right)^{2} \sigma^{2} \qquad (51)$$

forming the variance matrices $R_t$ in Eq. (6), where $\theta_{i,t}^{s}$ is the satellite elevation angle. The scalar $\sigma$ is set to $\sigma = 65.6$ cm, the zenith-referenced standard-deviation of the GPS 'geometry-free' pseudo-range measurements [48].
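For illustration, the coefficients of Eq. (50) and the elevation-dependent variance of Eq. (51) can be evaluated as in the following sketch (our own helper functions, under the stated single-layer assumptions; all angles in radians):

```python
import numpy as np

def slm_coefficients(z, phi_ipp, psi_ipp, phi_ref, psi_ref):
    """Single-layer model coefficients of Eq. (50) for one line of sight.

    z        : zenith angle of the IPP
    phi, psi : latitude/longitude of the IPP and of the reference IPP
    """
    mf = 1.0 / np.cos(z)                               # mapping function
    a_o = mf                                           # vertical TEC
    a_phi = mf * (phi_ipp - phi_ref)                   # south-to-north gradient
    a_psi = mf * np.cos(phi_ipp) * (psi_ipp - psi_ref) # west-to-east gradient
    return a_o, a_phi, a_psi

def pseudorange_variance(elev, sigma=0.656):
    """Elevation-dependent variance of Eq. (51), sigma in meters.

    At zenith (elev = pi/2) the weighting factor reduces to 1, so the
    variance equals the zenith-referenced sigma^2.
    """
    return (1.02 / (0.02 + np.sin(elev)))**2 * sigma**2
```

Note how low-elevation observables are down-weighted: at a 10-degree elevation the variance is roughly 28 times its zenith value.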

Suppose that $m$ satellites $s=1,\ldots,m$ are tracked by the network receivers $i=1,\ldots,n$ $(n=13)$ during the observational campaign. The state-vector sought is structured as

$$x_t = \left[\,\nu_{o,t},\ \nu_{\phi,t},\ \nu_{\psi,t},\ b_t^{p1},\ b_t^{p2},\ \ldots,\ b_t^{pm}\,\right]^{T} \qquad (52)$$

Thus the state-vector $x_t$ contains the three TEC parameters $\nu_{o,t}$, $\nu_{\phi,t}$, $\nu_{\psi,t}$ and $m-1$ between-satellite DCBs $b_t^{ps}$ $(s \neq p)$. The dynamic model is assumed to be given by (cf. Eqs. (2), (4) and (21))

$$\begin{bmatrix} \nu_{o,t} \\ \nu_{\phi,t} \\ \nu_{\psi,t} \end{bmatrix} = \begin{bmatrix} \nu_{o,t-1} \\ \nu_{\phi,t-1} \\ \nu_{\psi,t-1} \end{bmatrix} + \begin{bmatrix} d_o \\ d_\phi \\ d_\psi \end{bmatrix}, \qquad \text{and} \qquad b_t^{ps} = b_{t-1}^{ps} \quad (s \neq p) \qquad (53)$$

Thus the DCBs $b_t^{ps}$ are assumed constant in time, while the temporal behavior of the TEC parameters $\nu_{o,t}$, $\nu_{\phi,t}$, $\nu_{\psi,t}$ is captured by a random-walk process. The corresponding zero-mean process noises are assumed to be mutually uncorrelated, with standard-deviations $\sigma_{d_o} = 1$ mm/sec and $\sigma_{d_\phi} = \sigma_{d_\psi} = 5$ mm/rad/sec [49].
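Under the dynamic model of Eq. (53), the transition and process-noise matrices take a simple form. The sketch below is our own construction, not the authors' code; it assumes the random-walk variance grows linearly with the sampling interval (i.e. the stated standard-deviations are interpreted per square-root second), and the state ordering follows Eq. (52):

```python
import numpy as np

def dynamic_model(m, dt, sig_o=1e-3, sig_g=5e-3):
    """Transition matrix Phi_{t,t-1} and process-noise matrix S_t for the
    state [nu_o, nu_phi, nu_psi, b^{p1}, ..., b^{p(m-1)}] of Eq. (52).

    m     : number of tracked satellites (m - 1 between-satellite DCBs)
    dt    : sampling interval in seconds
    sig_o : random-walk std-dev of the vertical TEC (assumed units m/sqrt(s))
    sig_g : random-walk std-dev of the gradients (assumed units m/rad/sqrt(s))
    """
    dim = 3 + (m - 1)
    Phi = np.eye(dim)                   # random walk + time-constant DCBs
    S = np.zeros((dim, dim))
    S[0, 0] = sig_o**2 * dt             # TEC process noise over dt seconds
    S[1, 1] = S[2, 2] = sig_g**2 * dt   # gradient process noise
    return Phi, S                       # DCB rows/columns of S remain zero
```

The identity transition matrix and the zero process noise on the DCB entries encode exactly the constancy assumption $b_t^{ps} = b_{t-1}^{ps}$.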

### 5.2. Observational campaign

The network receivers $i=1,\ldots,n$ $(n=13)$ shown in Figure 8 are assumed to track GPS satellites over 16 hours, from 8:00 to 24:00 Perth local time, on 02-06-2016. The observation sampling rate is set to $\Delta = 1$ minute. Thus the number of observational epochs (time instances) is 960. As to the data sending rate $\delta$ (cf. Figure 5), we assume three different sending rates, $\delta = 15$, 10 and 5 seconds. The number of iterative communications between the neighboring receivers thus takes the values $k_n = \Delta/\delta = 4$, 6 and 12, respectively. The consensus protocol 3 (Table 1) is applied to the CKF of each receiver.
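The relation between the two time-scales is simply the ratio of the sampling and sending intervals; a one-line check (illustrative only):

```python
# sampling interval Delta = 60 s; sending intervals delta = 15, 10, 5 s
k_n = {delta: 60 // delta for delta in (15, 10, 5)}
print(k_n)  # {15: 4, 10: 6, 5: 12}
```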

As the satellites revolve around the Earth, not all of them are simultaneously visible to the 'small-scale' network of Figure 8. Their visibility over time is shown in Figure 10 (left panel), in which satellites with elevation angles smaller than 10 degrees are excluded. There are 31 GPS satellites (i.e. $m=31$), with PRN 4 absent (PRN refers to the satellite identifier). PRN 22 has the longest duration of visibility, while PRN 21 has the shortest. Note also that PRNs 2, 6, 16, 17, 19, 26 and 32 disappear (set) and reappear (re-rise); that is why their visibility is shown via two separate time intervals. Figure 10 (right panel) shows the trajectories of the ionospheric pierce points on the ionospheric single layer made by the receiver-to-satellite line-of-sight paths. It is the spatial distribution of these points that drives the coefficients $a_{i,t;o}^{ps}$, $a_{i,t;\phi}^{ps}$, $a_{i,t;\psi}^{ps}$ in Eq. (49).

In the following we present precision analyses of the measurement update solutions of $x_t$ in Eq. (52), given the network and satellite configurations shown in Figures 8 and 10, respectively. Throughout the text, PRN 10 is chosen as the pivot satellite $p$ (cf. Eq. (49)). By the term 'standard-deviation', we mean the square-root of the prediction error variance.

### 5.3. Central (network-based) versus local (single-receiver) solutions

Before discussing the precision performance of the CKF solutions, we first compare the network-based (central) TEC solutions with the solutions obtained from the data of one single receiver only (referred to as the local solutions). At the filter initialization, the standard-deviations of the local TEC solutions are $\sqrt{13} \approx 3.6$ times larger than those of the central TEC solutions (i.e. the square-root of the number of nodes). This is because each of the 13 network receivers independently provides equally precise solutions; in that case, the central solution follows by averaging all 13 local solutions. Due to the common dynamic model Eq. (53), however, the local solutions become correlated over time. After the filter initialization, the central solution therefore no longer follows from averaging its local versions. The standard-deviation results, from one hour after filter initialization onwards, are presented in Figure 11. Only the results of receiver 1 are shown as local solutions (in red). As shown, the standard-deviations stabilize over time as the filters reach their steady-state. On the right panel of the figure, the local-to-central standard-deviation ratios are also presented. In the case of the vertical TECs $\nu_{o,t}$, the ratios vary from 1.5 to 3. For the horizontal gradients $\nu_{\phi,t}$ and $\nu_{\psi,t}$, the ratios are about 2 and 2.5, respectively.

### 5.4. Role of CKF in improving local solutions

With the results of Figure 11, we observed that the central TEC solutions considerably outperform their local counterparts in the sense of delivering more precise outcomes, i.e. the local-to-central standard-deviation ratios are considerably larger than 1. We now employ the CKF at each node (receiver) $i=1,\ldots,13$ to improve the local solutions' precision via consensus-based iterative communications between the receivers. In doing so, we make use of the three-step recursion given in Figure 7 to evaluate the error variance matrices $P_{i,t|t}$ $(i=1,\ldots,13)$, thereby computing the CKF-to-central standard-deviation ratios. The stated ratios are presented in Figure 12 for two different data sending rates: $\delta = 15$ seconds (left panel) and $\delta = 5$ seconds (right panel). In both cases, the CKF-to-central standard-deviation ratios are smaller than their local-to-central versions shown in Figure 11 (right panel), illustrating that employing the CKF does indeed improve the local solutions' precision. Since more iterative communications take place for $\delta = 5$, the corresponding ratios are very close to 1. In that case, the CKF of each receiver is expected to have a precision performance similar to that of the central (network-based) filter. For the case $\delta = 15$, however, the CKF performance of each receiver depends strongly on the number of the receiver's neighbors. This is because only 4 iterative communications between the receivers take place (i.e. $k_n = 4$). The receivers with the fewest neighbors, i.e. receivers 1, 3 and 13 (Figure 8), have the worst precision performance, as the corresponding ratios take the largest values. On the other hand, the receivers with the most neighbors, i.e. receivers 4, 7, 8 and 9, have the best performance, as the corresponding ratios are close to 1.

Next to the solutions of the TEC parameters $\nu_{o,t}$, $\nu_{\phi,t}$ and $\nu_{\psi,t}$, we also analyze the CKF solutions of the between-satellite DCBs $b_t^{ps}$ $(s \neq p)$ in Eq. (52). Because of the differences in satellite visibility over time (cf. Figure 10), the DCBs' standard-deviations are quite distinct and depend strongly on the duration of the satellites' visibility. The longer a pair of satellites $p$ and $s$ is visible, the smaller the expected standard-deviation. We now consider the time required to obtain between-satellite DCB solutions with standard-deviations smaller than 0.5 nanoseconds. Because of the stated differences in the standard-deviations, each between-satellite DCB corresponds to a different required time. For the central filter, the minimum value of this required time is 7 minutes, with the 25th percentile at 12, the median at 38, the 75th percentile at 63 and the maximum at 84 minutes. Thus, 84 minutes after filter initialization, all central DCB solutions have standard-deviations smaller than 0.5 nanoseconds. Such percentiles can be represented by a 'boxplot'. We compute the stated percentiles for all the CKF solutions and compare their boxplots with the central one in Figure 13. The results are presented for three different data sending rates: $\delta = 15$ seconds (top), $\delta = 10$ seconds (middle) and $\delta = 5$ seconds (bottom). As shown, the more iterative communications take place, the more similar the boxplots become, i.e. the nodes (receivers) reach consensus. Similar to the TEC solutions, the DCB precision performance of the CKFs of receivers 4, 7, 8 and 9 is almost identical to that of the central filter, irrespective of the number of iterative communications. This follows from the fact that the stated receivers have the largest number of neighbors (Figure 8), and thus efficiently approximate the averages $\bar N_t$ and $\bar r_t$ in Eq. (28) after a few iterations. On the other hand, the receivers with the fewest neighbors require more iterative communications in order for their CKF precision performance to approach that of the central filter.

## 6. Concluding remarks and future outlook

In this contribution we reviewed Kalman filtering in its information form and showed how the additive measurement update Eq. (28) can be realized by employing average-consensus rules, even when not all nodes are directly connected, thus allowing the sensor nodes to develop their own distributed filters. The nodes are assumed to be linked to each other at least through a 'path', so that information can flow from each node to all other nodes. Under this assumption, average-consensus protocols can deliver consensus states $(N_{i,t}^{(k_n)}, r_{i,t}^{(k_n)})$ as approximations of the averages $(\bar N_t, \bar r_t)$ in Eq. (28) at every time instance $t=1,2,\ldots$, thus allowing one to establish a CKF recursion at every node $i=1,\ldots,n$. To improve the stated approximation, the neighboring nodes have to establish a number of iterative data communications to transfer and receive their consensus states. This makes the CKF implementation applicable only to applications in which the state-vectors change rather slowly over time (i.e. the sampling rate $\Delta$ can take large values) and/or to cases where the sensor nodes transfer their data rather quickly (i.e. the sending rate $\delta$ can take small values).

We developed a three-step recursion of the CKF error variance matrix (Figure 7). This recursion conveys useful information about the precision performance of the local filters $i=1,\ldots,n$, thereby enabling one to a-priori design and analyze sensor networks with different numbers of iterative communications. As an illustrative example, we applied the stated recursion to a small-scale network of GNSS receivers and showed the role taken by the CKF in improving the precision of the solutions at each single receiver. In the near future the proliferation of low-cost receivers will give rise to an increase in the number of GNSS users. Employing the CKF or other distributed filtering techniques, GNSS users can therefore potentially obtain high-precision parameter solutions without the need for a computing center.

## Acknowledgments

The second author is the recipient of an Australian Research Council Federation Fellowship (project number FF0883188). This support is gratefully acknowledged.

© 2017 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

## How to cite and reference

### Cite this chapter

Amir Khodabandeh, Peter J.G. Teunissen and Safoora Zaminpardaz (December 20th 2017). Consensus-Based Distributed Filtering for GNSS, Kalman Filters - Theory for Advanced Applications, Ginalber Luiz de Oliveira Serra, IntechOpen, DOI: 10.5772/intechopen.71138.
