
Nonlinear State and Parameter Estimation Using Iterated Sigma Point Kalman Filter: Comparative Studies

Written By

Marwa Chaabane, Imen Baklouti, Majdi Mansouri, Nouha Jaoua, Hazem Nounou, Mohamed Nounou, Ahmed Ben Hamida and Marie‐France Destain

Reviewed: 15 April 2016 Published: 19 October 2016

DOI: 10.5772/63728

From the Edited Volume

Nonlinear Systems - Design, Analysis, Estimation and Control

Edited by Dongbin Lee, Tim Burg and Christos Volos


Abstract

In this chapter, iterated sigma-point Kalman filter (ISPKF) methods are used for nonlinear state variable and model parameter estimation. Several conventional state estimation methods, namely the unscented Kalman filter (UKF), the central difference Kalman filter (CDKF), the square-root unscented Kalman filter (SRUKF), the square-root central difference Kalman filter (SRCDKF), the iterated unscented Kalman filter (IUKF), the iterated central difference Kalman filter (ICDKF), the iterated square-root unscented Kalman filter (ISRUKF) and the iterated square-root central difference Kalman filter (ISRCDKF), are evaluated through a simulation example with two comparative studies in terms of estimation accuracy, estimation error and convergence. In the first comparative study, the state variables are estimated from noisy measurements using the various estimation methods. In the second comparative study, both states and parameters are estimated, and the methods are compared by computing the estimation root mean square error (RMSE) with respect to the noise-free data. The impacts of practical challenges (measurement noise and the number of estimated states/parameters) on the performance of the estimation techniques are investigated. The results of both comparative studies reveal that the ISRCDKF method provides better estimation accuracy than the IUKF, ICDKF and ISRUKF, and that these iterated methods in turn provide better accuracy than the UKF, CDKF, SRUKF and SRCDKF techniques. The ISRCDKF outperforms the other estimation techniques because, by iterating an approximate maximum a posteriori estimate around the updated state, it re-linearizes the measurement equation instead of depending on the predicted state. The results also show that estimating more parameters affects the estimation accuracy as well as the convergence of the estimated parameters and states. The ISRCDKF provides better state estimates than the other techniques even with abrupt changes in the estimated states.

Keywords

  • Kalman filter
  • sigma point
  • state estimation
  • parameter estimation
  • nonlinear system

1. Introduction

Dynamic state-space models [1-3] are useful for describing data in many different areas, such as engineering [4-8], biology [9, 10], chemistry [11, 12] and environmental science [8, 13-15]. Estimating the state and model parameters from measurements of the observation process is an essential task when analyzing data with state-space models. Bayesian filtering is a solution of considerable importance for this type of problem, as demonstrated by the many existing algorithms based on it [16-25]. The Kalman filter (KF) [26-29] has been used extensively in science applications such as control, machine learning and neuroscience. The KF provides the optimal solution [28] when the model describing the system is assumed to be linear and Gaussian. However, the KF is limited when the model is nonlinear or involves non-Gaussian modeling assumptions. In order to relax these assumptions, the extended Kalman filter (EKF) [26, 27, 30-32], the unscented Kalman filter (UKF) [33-36], the central difference Kalman filter (CDKF) [37, 38], the square-root unscented Kalman filter (SRUKF) [39, 40], the square-root central difference Kalman filter (SRCDKF) [41], the iterated unscented Kalman filter (IUKF) [42, 43], the iterated central difference Kalman filter (ICDKF) [44, 45], the iterated square-root unscented Kalman filter (ISRUKF) [46] and the iterated square-root central difference Kalman filter (ISRCDKF) [47] have been developed. The EKF [26] linearizes the model describing the system in order to approximate the covariance matrix of the state vector. However, the EKF does not always perform well, especially for highly nonlinear or complex models. Instead of linearizing the model, a class of filters called sigma-point Kalman filters (SPKFs) [48] uses a statistical linearization technique that linearizes a nonlinear function of a random variable through a linear regression between points drawn from the prior distribution of the random variable and the nonlinear function evaluations at those points. The sigma-point family of filters addresses the shortcomings of the EKF by means of a deterministic sampling approach: the state distribution is approximated by a set of deterministically chosen weighted sample points that capture the true mean and covariance of the state vector. These points are propagated through the true nonlinear system and capture the posterior mean and covariance matrix of the state vector accurately to the second order of the Taylor series expansion for any nonlinearity (and to the third order for Gaussian inputs). As part of the SPKF family, the UKF [26, 27, 33] was developed. It uses the unscented transformation, in which a set of samples (sigma points) is deterministically selected and propagated through the nonlinear model, providing more accurate approximations of the mean and covariance matrix of the state vector. However, the UKF is limited by its relatively small number of sigma points, which may not represent complicated distributions adequately. Another filter in the SPKF family is the central difference Kalman filter (CDKF) [37, 38], which is based on the Stirling polynomial interpolation formula. This filter has the benefit over the UKF of using only one tuning parameter to generate the sigma points. To improve numerical stability, the SRUKF and the SRCDKF [41] have been developed; their advantage is that they guarantee the positive semidefiniteness of the state covariance.
The iterated sigma-point Kalman filter (ISPKF) methods employ an iterative procedure within a single measurement update step, resampling the sigma points until a termination criterion, based on minimizing the maximum a posteriori cost, is satisfied.

The objectives of this chapter are threefold: (i) to estimate nonlinear state variables and model parameters using SPKF methods and their extensions through a simulation example; (ii) to investigate the effects of practical challenges (such as measurement noise and the number of estimated states/parameters) on the performance of these techniques. To study the effect of measurement noise, several noise levels are considered and the estimation performance of each technique is evaluated at each level. Similarly, to study the effect of the number of estimated states/parameters, the estimation performance is evaluated for different numbers of estimated states and parameters; and (iii) to apply the techniques to estimate the state variables as well as the model parameters of a second-order LTI system. The performances of the estimation techniques are compared by computing their execution times and the estimation root mean square error (RMSE) with respect to the noise-free data.


2. State estimation problem

Next, we present the formulation of the state estimation problem.

2.1. Problem description and formulation

The state estimation problem for a complex nonlinear system is described as follows:

$$\dot{x} = g(x, u, \theta, w), \qquad y = l(x, u, \theta, v) \tag{1}$$

where $x \in \mathbb{R}^n$ is the state vector, $y \in \mathbb{R}^m$ is the measurement vector, $\theta \in \mathbb{R}^q$ is the unknown parameter vector, $u \in \mathbb{R}^p$ is the input vector, $w \in \mathbb{R}^n$ and $v \in \mathbb{R}^m$ are the process and measurement noise vectors, respectively, and $g$ and $l$ are nonlinear differentiable functions. The discretized form of model (1) is:

$$x_k = f(x_{k-1}, u_{k-1}, \theta_{k-1}, w_{k-1}), \qquad y_k = h(x_k, u_k, \theta_k, v_k) \tag{2}$$

which describes the state variables at a time step $k$ in terms of their values at the previous time step $k-1$. Since we are interested in estimating the state vector $x_k$ as well as the parameter vector $\theta_k$, the parameter vector is assumed to evolve as follows:

$$\theta_k = \theta_{k-1} + \gamma_{k-1} \tag{3}$$

This means that it corresponds to a stationary process, with an identity transition matrix, driven by white noise. In order to include the parameter vector θk into the state estimation problem, let us define a new state vector zk  that augments the state vector xk and the parameter vector θk as follows:

$$z_k = \begin{bmatrix} x_k \\ \theta_k \end{bmatrix} = \begin{bmatrix} f(x_{k-1}, u_{k-1}, \theta_{k-1}, w_{k-1}) \\ \theta_{k-1} + \gamma_{k-1} \end{bmatrix} \tag{4}$$

where $z_k \in \mathbb{R}^{n+q}$. The augmented noise vector is defined as:

$$\varepsilon_{k-1} = \begin{bmatrix} w_{k-1} \\ \gamma_{k-1} \end{bmatrix} \tag{5}$$

The model (2) can be written as:

$$z_k = \mathcal{F}(z_{k-1}, u_{k-1}, \varepsilon_{k-1}) \tag{6}$$
$$y_k = \mathcal{H}(z_k, u_k, v_k) \tag{7}$$

where $\mathcal{F}$ and $\mathcal{H}$ are differentiable nonlinear functions. Thus, the objective here is to estimate the augmented state vector $z_k$ given the measurement vector $y_k$.
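As a concrete illustration of the augmentation in Eqs. (3)-(5), the following Python sketch builds one step of the augmented transition $\mathcal{F}$. The toy process model `f_state`, the state dimension $n = 2$ and the example parameter values are illustrative assumptions, not the chapter's reference implementation.

```python
import numpy as np

def f_state(x, u, theta, w):
    # toy linear process model, used only to make the sketch executable;
    # theta plays the role of unknown entries of the transition matrix
    A = np.array([[theta[0], theta[1]],
                  [1.0,      0.0     ]])
    return A @ x + 0.2 * u + w

def augmented_transition(z, u, eps, n=2):
    """One step of the augmented model z_k = F(z_{k-1}, u_{k-1}, eps_{k-1})."""
    x, theta = z[:n], z[n:]          # split augmented state into x and theta
    w, gamma = eps[:n], eps[n:]      # split augmented noise into w and gamma
    x_next = f_state(x, u, theta, w)
    theta_next = theta + gamma       # random-walk parameter model, Eq. (3)
    return np.concatenate([x_next, theta_next])

# usage: z = [x1, x2, theta1, theta2]
z_next = augmented_transition(np.array([1.0, 0.0, 1.9, -0.96]),
                              u=0.0, eps=np.zeros(4))
```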


3. Description of state estimation methods

3.1. UKF

The UKF is an SPKF that uses the unscented transformation. This transformation is a method for calculating the statistics of a random variable that undergoes a nonlinear mapping. It is built on the premise that "it is easier to approximate a probability distribution than an arbitrary nonlinear function".

The state distribution is represented by a Gaussian random variable (GRV) using a set of deterministically chosen points. These points capture the true mean and covariance of the GRV, and they capture the posterior mean and covariance accurately to the second order for any nonlinearity and to the third order for Gaussian inputs. Suppose that a GRV $z \in \mathbb{R}^L$, characterized by a mean $\bar{z}$ and covariance $P_z$, is transformed by a nonlinear function $y = f(z)$. To compute the statistics of $y$, a set of $2L+1$ sigma vectors is defined as follows:

$$\begin{aligned} z_0 &= \bar{z} \\ z_i &= \bar{z} + \left(\sqrt{(L+\lambda)P_z}\right)_i, & i &= 1, \dots, L \\ z_i &= \bar{z} - \left(\sqrt{(L+\lambda)P_z}\right)_{i-L}, & i &= L+1, \dots, 2L \end{aligned} \tag{8}$$

where $L$ is the dimension of the state $z$, $\lambda = e^2(L+\kappa) - L$ is a scaling parameter and $\left(\sqrt{(L+\lambda)P_z}\right)_i$ denotes the $i$th column of the matrix square root. The constant $e$, with $10^{-4} \le e \le 1$, defines the spread of the sigma points around $\bar{z}$. The constant $\kappa$ is a secondary scaling parameter, usually set to $0$ or $3-L$ [30].

Then, these sigma‐points are propagated through the nonlinear function,

$$Y_i = f(z_i), \qquad i = 0, \dots, 2L \tag{9}$$

The mean and covariance matrix of $y$ can then be approximated as the weighted sample mean and covariance of the transformed sigma points $Y_i$:

$$\bar{y} = \sum_{i=0}^{2L} W_i^{(m)} Y_i \qquad \text{and} \qquad P_{\tilde{y}} = \sum_{i=0}^{2L} W_i^{(c)} \left(Y_i - \bar{y}\right)\left(Y_i - \bar{y}\right)^T \tag{10}$$

where the weights are given by

$$W_0^{(m)} = \frac{\lambda}{\lambda + L}, \qquad W_0^{(c)} = \frac{\lambda}{\lambda + L} + \left(1 - e^2 + \zeta\right), \qquad W_i^{(m)} = W_i^{(c)} = \frac{1}{2(\lambda + L)}, \quad i = 1, \dots, 2L \tag{11}$$

The parameter $\zeta$ is used to incorporate prior knowledge about the distribution of $z$.
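To make Eqs. (8)-(11) concrete, here is a minimal NumPy sketch of the unscented transform. The default tuning values for $e$, $\kappa$ and $\zeta$ are common choices assumed for illustration, not values prescribed by the chapter.

```python
import numpy as np

def unscented_transform(f, z_mean, P_z, e=1e-3, kappa=0.0, zeta=2.0):
    """Propagate (z_mean, P_z) through f using the 2L+1 sigma points of Eq. (8)."""
    L = z_mean.size
    lam = e**2 * (L + kappa) - L
    S = np.linalg.cholesky((L + lam) * P_z)   # columns form the matrix square root
    sigmas = np.vstack([z_mean, z_mean + S.T, z_mean - S.T])
    # weights of Eq. (11)
    Wm = np.full(2 * L + 1, 1.0 / (2 * (L + lam)))
    Wc = Wm.copy()
    Wm[0] = lam / (lam + L)
    Wc[0] = lam / (lam + L) + (1.0 - e**2 + zeta)
    Y = np.array([np.atleast_1d(f(s)) for s in sigmas])   # Eq. (9)
    y_mean = Wm @ Y                                       # Eq. (10), mean
    d = Y - y_mean
    P_y = (Wc[:, None] * d).T @ d                         # Eq. (10), covariance
    return y_mean, P_y
```

For instance, `unscented_transform(lambda z: z**2, np.zeros(2), np.eye(2))` approximates the mean and covariance of an element-wise squared Gaussian vector.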

The UKF algorithm consists of two steps: prediction and update. In the prediction step, the predicted state estimate $\hat{z}_k^-$ and the predicted covariance $P_k^-$ are computed. In the update step, the updated state estimate $\hat{z}_k$ and covariance $P_k$ are computed after calculating the innovation covariance $P_{\tilde{y}_k}$, the cross-covariance $P_{z_k y_k}$ and the Kalman gain $K_k$.

The UKF technique is summarized in Algorithm 1.

3.2. CDKF method

The CDKF is another filter from the SPKF family. It is based on the Stirling polynomial interpolation formula instead of the unscented transformation used in the UKF. The CDKF performs similarly to, or slightly better than, the UKF, and it has the advantage of using only one tuning parameter instead of the three used by the UKF. The CDKF uses a symmetric set of $2L+1$ sigma points, calculated as follows:

$$\begin{aligned} z_0 &= \hat{z} \\ z_i &= \hat{z} + \left(h\sqrt{P_z}\right)_i, & i &= 1, \dots, L \\ z_i &= \hat{z} - \left(h\sqrt{P_z}\right)_{i-L}, & i &= L+1, \dots, 2L \end{aligned} \tag{12}$$

where $L$ is the dimension of the state $z$, $h$ is a scaling parameter (the optimal value for Gaussian priors is $h = \sqrt{3}$) and the subscript $i$ indicates the $i$th column of the matrix.

These sigma‐points are propagated through the nonlinear function to form the set of the posterior sigma‐point,

$$Y_i = f(\Psi_i), \qquad i = 0, \dots, 2L \tag{13}$$

From the above, the Stirling approximation estimates of the mean $\hat{y}$, the covariance $P_y$ and the cross-covariance $P_{z,y}$ are obtained through a linear regression over the weighted points:

$$\hat{y}_k = \sum_{i=0}^{2L} W_i^{(m)} Y_i \tag{14}$$
$$P_{\tilde{y}_k} = \sum_{i=1}^{L} \left[ W_i^{(c_1)} \left( Y_{i,k|k-1} - Y_{L+i,k|k-1} \right)^2 + W_i^{(c_2)} \left( Y_{i,k|k-1} + Y_{L+i,k|k-1} - 2Y_{0,k|k-1} \right)^2 \right] \tag{15}$$
$$P_{z_k y_k} = \sqrt{W_1^{(c_1)}}\, \sqrt{P_z} \left[ Y_{1:L,k|k-1} - Y_{L+1:2L,k|k-1} \right]^T \tag{16}$$

The weights $W_i^{(m)}$ used to compute the posterior mean are defined as:

$$W_0^{(m)} = \frac{h^2 - L}{h^2}, \qquad W_i^{(m)} = \frac{1}{2h^2} \tag{17}$$

and the weights $W_i^{(c_1)}$ and $W_i^{(c_2)}$ used to recover the covariance and the cross-covariance are defined as:

$$W_i^{(c_1)} = \frac{1}{4h^2}, \qquad W_i^{(c_2)} = \frac{h^2 - 1}{4h^4}, \qquad i = 1, \dots, 2L \tag{18}$$

The CDKF technique is summarized in Algorithm 2.

3.3. SRUKF method

One drawback of the UKF is that it requires the computation of a matrix square root $S_k$, with $S_k S_k^T = P_k$, at each time step. A square-root form of the UKF has therefore been developed to reduce the computational complexity: the square-root covariance factor $S_k$ is propagated directly, avoiding the need to refactorize at each time step [34].

The SRUKF is initialized as follows:

$$\hat{z}_0 = E[z_0] \qquad \text{and} \qquad S_0 = \mathrm{chol}\left\{ E\left[ (z_0 - \hat{z}_0)(z_0 - \hat{z}_0)^T \right] \right\} \tag{19}$$
$$\Psi_{k-1} = \left[ \hat{z}_{k-1} \quad \hat{z}_{k-1} + hS_{k-1} \quad \hat{z}_{k-1} - hS_{k-1} \right] \tag{20}$$

Algorithm 1: UKF algorithm

• Initialization step:

$$\hat{z}_0 = E[z_0], \qquad P_{z_0} = E\left[(z - \hat{z}_0)(z - \hat{z}_0)^T\right]$$

• Prediction step:

$$\Psi_{k-1} = \left[\hat{z}_{k-1} \quad \hat{z}_{k-1} + \sqrt{(L+\lambda)P_z} \quad \hat{z}_{k-1} - \sqrt{(L+\lambda)P_z}\right]$$
$$\Psi_{k|k-1} = f(\Psi_{k-1})$$
$$\hat{z}_k^- = \sum_{i=0}^{2L} W_i^{(m)} \Psi_{i,k|k-1}$$
$$P_k^- = \sum_{i=0}^{2L} W_i^{(c)} \left(\Psi_{i,k|k-1} - \hat{z}_k^-\right)\left(\Psi_{i,k|k-1} - \hat{z}_k^-\right)^T$$
$$Y_{k|k-1} = h\left[\Psi_{k|k-1}\right]$$
$$\hat{y}_k = \sum_{i=0}^{2L} W_i^{(m)} Y_{i,k|k-1}$$

• Estimation (update) step:

$$P_{\tilde{y}_k} = \sum_{i=0}^{2L} W_i^{(c)} \left[Y_{i,k|k-1} - \hat{y}_k\right]\left[Y_{i,k|k-1} - \hat{y}_k\right]^T$$
$$P_{z_k y_k} = \sum_{i=0}^{2L} W_i^{(c)} \left[\Psi_{i,k|k-1} - \hat{z}_k^-\right]\left[Y_{i,k|k-1} - \hat{y}_k\right]^T$$
$$K_k = P_{z_k y_k} P_{\tilde{y}_k}^{-1}$$
$$\hat{z}_k = \hat{z}_k^- + K_k\left(y_k - \hat{y}_k\right)$$
$$P_k = P_k^- - K_k P_{\tilde{y}_k} K_k^T$$

Return the augmented state estimation z^k
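A compact Python sketch of one prediction/update cycle of Algorithm 1 follows. It is written for the additive-noise case ($Q$ and $R$ are simply added to the propagated covariances), which is an assumption of this sketch rather than the authors' code.

```python
import numpy as np

def ukf_step(f, h, y, z_hat, P, Q, R, e=1e-3, kappa=0.0, zeta=2.0):
    """One UKF prediction/update cycle following Algorithm 1 (sketch)."""
    L = z_hat.size
    lam = e**2 * (L + kappa) - L
    Wm = np.full(2 * L + 1, 1.0 / (2 * (L + lam)))
    Wc = Wm.copy()
    Wm[0] = lam / (lam + L)
    Wc[0] = Wm[0] + (1.0 - e**2 + zeta)

    # prediction step: draw sigma points around the previous estimate
    S = np.linalg.cholesky((L + lam) * P)
    X = np.vstack([z_hat, z_hat + S.T, z_hat - S.T])
    Xp = np.array([f(s) for s in X])
    z_pred = Wm @ Xp
    dX = Xp - z_pred
    P_pred = (Wc[:, None] * dX).T @ dX + Q

    # push the propagated sigma points through the measurement model
    Y = np.array([np.atleast_1d(h(s)) for s in Xp])
    y_pred = Wm @ Y
    dY = Y - y_pred

    # estimation (update) step
    P_yy = (Wc[:, None] * dY).T @ dY + R        # innovation covariance
    P_zy = (Wc[:, None] * dX).T @ dY            # cross covariance
    K = P_zy @ np.linalg.inv(P_yy)              # Kalman gain
    z_new = z_pred + K @ (np.atleast_1d(y) - y_pred)
    P_new = P_pred - K @ P_yy @ K.T
    return z_new, P_new
```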

Algorithm 2: CDKF algorithm

• Initialization step:

$$\hat{z}_0 = E[z_0], \qquad P_{z_0} = E\left[(z - \hat{z}_0)(z - \hat{z}_0)^T\right]$$

• Prediction step:

$$\Psi_{k-1} = \left[\hat{z}_{k-1} \quad \hat{z}_{k-1} + h\sqrt{P_z} \quad \hat{z}_{k-1} - h\sqrt{P_z}\right]$$
$$\Psi_{k|k-1} = f(\Psi_{k-1})$$
$$\hat{z}_k^- = \sum_{i=0}^{2L} W_i^{(m)} \Psi_{i,k|k-1}$$
$$P_k^- = \sum_{i=1}^{L}\left[W_i^{(c_1)}\left(\Psi_{i,k|k-1} - \Psi_{L+i,k|k-1}\right)^2 + W_i^{(c_2)}\left(\Psi_{i,k|k-1} + \Psi_{L+i,k|k-1} - 2\Psi_{0,k|k-1}\right)^2\right]$$
$$Y_{k|k-1} = h\left[\Psi_{k|k-1}\right]$$
$$\hat{y}_k = \sum_{i=0}^{2L} W_i^{(m)} Y_{i,k|k-1}$$

• Estimation (update) step:

$$P_{\tilde{y}_k} = \sum_{i=1}^{L}\left[W_i^{(c_1)}\left(Y_{i,k|k-1} - Y_{L+i,k|k-1}\right)^2 + W_i^{(c_2)}\left(Y_{i,k|k-1} + Y_{L+i,k|k-1} - 2Y_{0,k|k-1}\right)^2\right]$$
$$P_{z_k y_k} = \sqrt{W_1^{(c_1)}}\sqrt{P_k^-}\left[Y_{1:L,k|k-1} - Y_{L+1:2L,k|k-1}\right]^T$$
$$K_k = P_{z_k y_k} P_{\tilde{y}_k}^{-1}$$
$$\hat{z}_k = \hat{z}_k^- + K_k\left(y_k - \hat{y}_k\right)$$
$$P_k = P_k^- - K_k P_{\tilde{y}_k} K_k^T$$

Return the augmented state estimation z^k
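The central-difference statistics of Algorithm 2 can be sketched in Python as follows; $h$ defaults to $\sqrt{3}$, the optimal value quoted above, and the code is an illustrative sketch rather than the chapter's reference implementation.

```python
import numpy as np

def cdkf_transform(f, z_hat, P_z, h=np.sqrt(3.0)):
    """Mean/covariance through f via central-difference sigma points, Eqs. (12)-(18)."""
    L = z_hat.size
    S = np.linalg.cholesky(P_z)
    sigmas = np.vstack([z_hat, z_hat + h * S.T, z_hat - h * S.T])  # Eq. (12)
    Y = np.array([np.atleast_1d(f(s)) for s in sigmas])            # Eq. (13)
    # weights of Eqs. (17)-(18)
    Wm0, Wmi = (h**2 - L) / h**2, 1.0 / (2 * h**2)
    Wc1, Wc2 = 1.0 / (4 * h**2), (h**2 - 1.0) / (4 * h**4)
    y_mean = Wm0 * Y[0] + Wmi * Y[1:].sum(axis=0)                  # Eq. (14)
    m = Y.shape[1]
    P_y = np.zeros((m, m))
    for i in range(1, L + 1):                                      # Eq. (15)
        d1 = Y[i] - Y[L + i]              # first-order central difference
        d2 = Y[i] + Y[L + i] - 2 * Y[0]   # second-order central difference
        P_y += Wc1 * np.outer(d1, d1) + Wc2 * np.outer(d2, d2)
    return y_mean, P_y
```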

The Cholesky factorization decomposes a symmetric, positive-definite matrix into the product of a lower triangular matrix and its transpose. The resulting factor is used directly to generate the sigma points. The scaling constant $h$ is expressed as $h = \sqrt{L\alpha^2}$, where $\alpha$ is a tunable parameter less than one.

In order to predict the current state based on each sigma point, the sigma points are transformed through the nonlinear process model

$$\Psi_{k|k-1} = f\left[\Psi_{k-1}\right] \tag{21}$$

Then, the predicted state mean and square-root covariance factor are computed from the transformed sigma points as follows:

$$\hat{z}_k^- = \sum_{i=0}^{2L} W_i^{(m)} \Psi_{i,k|k-1} \tag{22}$$
$$S_k^- = \mathrm{qr}\left\{\left[\sqrt{W_1^{(c)}}\left(\Psi_{1:2L,k|k-1} - \hat{z}_k^-\right) \quad \sqrt{R^w}\right]\right\} \tag{23}$$
$$S_k^- = \mathrm{cholupdate}\left\{S_k^-,\ \Psi_{0,k|k-1} - \hat{z}_k^-,\ W_0^{(c)}\right\} \tag{24}$$

where $W_0^{(m)} = 1 - \frac{1}{\alpha^2}$, $W_0^{(c)} = 2 - \frac{1}{\alpha^2} - \alpha^2 + \beta$ and $W_i^{(m)} = W_i^{(c)} = \frac{1}{2L\alpha^2}$; here $\beta$ is a tunable parameter used to incorporate prior knowledge of the distribution. The transformed sigma-point vector is then used to predict the measurements using the measurement model:

$$Y_{k|k-1} = h\left[\Psi_{k|k-1}\right] \tag{25}$$

The expected measurement $\hat{y}_k$ and the square-root covariance of the innovation $\tilde{y}_k = y_k - \hat{y}_k$ are given by the unscented transform expressions, just as for the process model:

$$\hat{y}_k = \sum_{i=0}^{2L} W_i^{(m)} Y_{i,k|k-1} \tag{26}$$
$$S_{\tilde{y}_k} = \mathrm{qr}\left\{\left[\sqrt{W_1^{(c)}}\left(Y_{1:2L,k|k-1} - \hat{y}_k\right) \quad \sqrt{R_k^v}\right]\right\} \tag{27}$$
$$S_{\tilde{y}_k} = \mathrm{cholupdate}\left\{S_{\tilde{y}_k},\ Y_{0,k|k-1} - \hat{y}_k,\ W_0^{(c)}\right\} \tag{28}$$

To determine how much to adjust the predicted state mean and covariance based on the actual measurement, the Kalman gain matrix $K_k$ is calculated as follows:

$$P_{z_k y_k} = \sum_{i=0}^{2L} W_i^{(c)} \left[\Psi_{i,k|k-1} - \hat{z}_k^-\right]\left[Y_{i,k|k-1} - \hat{y}_k\right]^T \tag{29}$$
$$K_k = \left(P_{z_k y_k} / S_{\tilde{y}_k}^T\right) / S_{\tilde{y}_k} \tag{30}$$

Finally, the state mean and covariance are updated using the actual measurement and the Kalman gain matrix:

$$\hat{z}_k = \hat{z}_k^- + K_k\left(y_k - \hat{y}_k\right) \tag{31}$$
$$U = K_k S_{\tilde{y}_k} \tag{32}$$
$$S_k = \mathrm{cholupdate}\left\{S_k^-,\ U,\ -1\right\} \tag{33}$$

where $R^w$ is the process noise covariance, $R^v$ is the measurement noise covariance, chol denotes Cholesky factorization, qr denotes QR matrix decomposition (returning the upper-triangular factor) and cholupdate denotes a rank-1 update of a Cholesky factor.
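Since NumPy and SciPy provide `cholesky` and `qr` but no `cholupdate`, the rank-1 update/downdate has to be written by hand. The following is a small sketch of the classical Givens-style algorithm for an upper-triangular factor $S$ with $S^T S = P$, matching the cholupdate$\{\cdot,\cdot,\pm 1\}$ calls above; it is illustrative, not a library routine.

```python
import numpy as np

def cholupdate(S, u, sign=+1.0):
    """Rank-1 update (sign=+1) or downdate (sign=-1) of an upper-triangular
    Cholesky factor S (S.T @ S = P), i.e. chol(P +/- u u^T)."""
    S = S.copy()
    u = u.astype(float).copy()
    n = u.size
    for i in range(n):
        r2 = S[i, i]**2 + sign * u[i]**2
        if r2 <= 0.0:
            raise np.linalg.LinAlgError("downdate breaks positive definiteness")
        r = np.sqrt(r2)
        c, s = r / S[i, i], u[i] / S[i, i]
        S[i, i] = r
        if i + 1 < n:
            S[i, i+1:] = (S[i, i+1:] + sign * s * u[i+1:]) / c
            u[i+1:] = c * u[i+1:] - s * S[i, i+1:]
    return S
```

The downdate branch (`sign=-1`) corresponds to the measurement update of Eq. (33).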

The SRUKF technique is summarized in Algorithm 3.

3.4. SRCDKF method

As in the SRUKF, the matrix square root $S_k$ is propagated directly, avoiding the computational cost of refactorizing the covariance at each time step of the CDKF. The SRCDKF is initialized with the state mean vector and the square root of the covariance:

$$\hat{z}_0 = E[z_0] \qquad \text{and} \qquad S_0 = \mathrm{chol}\left\{E\left[(z_0 - \hat{z}_0)(z_0 - \hat{z}_0)^T\right]\right\} \tag{34}$$

After the Cholesky factorization we obtain the sigma‐point:

$$\Psi_{k-1} = \left[\hat{z}_{k-1} \quad \hat{z}_{k-1} + hS_{k-1} \quad \hat{z}_{k-1} - hS_{k-1}\right] \tag{35}$$

The sigma-point vector is then propagated through the nonlinear process model, which predicts the current state based on each sigma point:

$$\Psi_{k|k-1} = f\left[\Psi_{k-1}\right] \tag{36}$$

The estimated state mean and square‐root covariance are calculated from the transformed sigma‐point using,

$$\hat{z}_k^- = \sum_{i=0}^{2L} W_i^{(m)} \Psi_{i,k|k-1} \tag{37}$$
$$S_k^- = \mathrm{qr}\left\{\left[\sqrt{W_1^{(c_1)}}\left(\Psi_{1:L,k|k-1} - \Psi_{L+1:2L,k|k-1}\right) \quad \sqrt{W_1^{(c_2)}}\left(\Psi_{1:L,k|k-1} + \Psi_{L+1:2L,k|k-1} - 2\Psi_{0,k|k-1}\right)\right]\right\} \tag{38}$$

where $W_i^{(c_1)} = \frac{1}{4h^2}$, $W_i^{(c_2)} = \frac{h^2-1}{4h^4}$, $W_0^{(m)} = \frac{h^2-L}{h^2}$ and $W_i^{(m)} = \frac{1}{2h^2}$. In the next step, the sigma points for the measurement update are calculated as

$$\Psi_{k|k-1} = \left[\hat{z}_{k|k-1} \quad \hat{z}_{k|k-1} + hS_{k|k-1} \quad \hat{z}_{k|k-1} - hS_{k|k-1}\right] \tag{39}$$

The transformed sigma‐point vector is then used to predict the measurements using the measurement model:

$$Y_{k|k-1} = h\left(\Psi_{k|k-1}\right) \tag{40}$$

The expected measurement $\hat{y}_k$ and the square-root covariance of the innovation $\tilde{y}_k = y_k - \hat{y}_k$ are given by expressions analogous to those for the process model:

$$\hat{y}_k = \sum_{i=0}^{2L} W_i^{(m)} Y_{i,k|k-1} \tag{41}$$
$$S_{\tilde{y}_k} = \mathrm{qr}\left\{\left[\sqrt{W_1^{(c_1)}}\left(Y_{1:L,k|k-1} - Y_{L+1:2L,k|k-1}\right) \quad \sqrt{W_1^{(c_2)}}\left(Y_{1:L,k|k-1} + Y_{L+1:2L,k|k-1} - 2Y_{0,k|k-1}\right)\right]\right\} \tag{42}$$

To determine how much to adjust the predicted state mean and covariance based on the actual measurement, the Kalman gain matrix $K_k$ is calculated as follows:

$$P_{z_k y_k} = \sqrt{W_1^{(c_1)}}\, S_k^- \left[Y_{1:L,k|k-1} - Y_{L+1:2L,k|k-1}\right]^T \tag{43}$$
$$K_k = \left(P_{z_k y_k} / S_{\tilde{y}_k}^T\right) / S_{\tilde{y}_k} \tag{44}$$

Then, the state mean and covariance are updated using the actual measurement and the Kalman gain matrix:

$$\hat{z}_k = \hat{z}_k^- + K_k\left(y_k - \hat{y}_k\right) \tag{45}$$
$$U = K_k S_{\tilde{y}_k} \tag{46}$$
$$S_k = \mathrm{cholupdate}\left\{S_k^-, U, -1\right\} \tag{47}$$

The SRCDKF technique is summarized in Algorithm 4.

3.5. ISPKF

In order to achieve better efficiency and accuracy than the non-iterated statistical linearization methods, the ISPKFs have been developed. These filters include the IUKF, ICDKF, ISRUKF and ISRCDKF. The major difference between the ISPKFs and the non-iterated SPKFs lies in the step where the updated state estimate is computed from the predicted state and the observation. Instead of relying on the predicted state, the observation equation is re-linearized repeatedly by iterating an approximate maximum a posteriori estimate, so the state estimate becomes more accurate.

3.5.1. IUKF

The difference between the UKF and the IUKF lies in the iteration strategy.

After the prediction and update steps have produced the state estimate $\hat{z}_k$ and the covariance matrix $P_k$, an iteration loop is set up with the following initializations:

$\hat{z}_{k,0} = \hat{z}_k^-$, $P_{k,0} = P_k^-$, $\hat{z}_{k,1} = \hat{z}_k$, $P_{k,1} = P_k$ and $j = 2$, where $j$ denotes the iteration index.

In this loop, for each $j$, new sigma points are generated in the same way as in the standard UKF:

$$\Psi_{k,j} = \left[\hat{z}_{k,j-1} \quad \hat{z}_{k,j-1} + \sqrt{(L+\lambda)P_{k,j-1}} \quad \hat{z}_{k,j-1} - \sqrt{(L+\lambda)P_{k,j-1}}\right] \tag{48}$$

Algorithm 3: SRUKF algorithm

• Initialization step:

$$\hat{z}_0 = E[z_0], \qquad S_0 = \mathrm{chol}\left\{E\left[(z_0 - \hat{z}_0)(z_0 - \hat{z}_0)^T\right]\right\}$$

• Prediction step:

$$\Psi_{k-1} = \left[\hat{z}_{k-1} \quad \hat{z}_{k-1} + hS_{k-1} \quad \hat{z}_{k-1} - hS_{k-1}\right]$$
$$\Psi_{k|k-1} = f\left[\Psi_{k-1}\right]$$
$$\hat{z}_k^- = \sum_{i=0}^{2L} W_i^{(m)} \Psi_{i,k|k-1}$$
$$S_k^- = \mathrm{qr}\left\{\left[\sqrt{W_1^{(c)}}\left(\Psi_{1:2L,k|k-1} - \hat{z}_k^-\right) \quad \sqrt{R^w}\right]\right\}$$
$$S_k^- = \mathrm{cholupdate}\left\{S_k^-,\ \Psi_{0,k|k-1} - \hat{z}_k^-,\ W_0^{(c)}\right\}$$
$$Y_{k|k-1} = h\left[\Psi_{k|k-1}\right]$$
$$\hat{y}_k = \sum_{i=0}^{2L} W_i^{(m)} Y_{i,k|k-1}$$

• Estimation (update) step:

$$S_{\tilde{y}_k} = \mathrm{qr}\left\{\left[\sqrt{W_1^{(c)}}\left(Y_{1:2L,k|k-1} - \hat{y}_k\right) \quad \sqrt{R_k^v}\right]\right\}$$
$$S_{\tilde{y}_k} = \mathrm{cholupdate}\left\{S_{\tilde{y}_k},\ Y_{0,k|k-1} - \hat{y}_k,\ W_0^{(c)}\right\}$$
$$P_{z_k y_k} = \sum_{i=0}^{2L} W_i^{(c)}\left[\Psi_{i,k|k-1} - \hat{z}_k^-\right]\left[Y_{i,k|k-1} - \hat{y}_k\right]^T$$
$$K_k = \left(P_{z_k y_k}/S_{\tilde{y}_k}^T\right)/S_{\tilde{y}_k}$$
$$\hat{z}_k = \hat{z}_k^- + K_k\left(y_k - \hat{y}_k\right)$$
$$U = K_k S_{\tilde{y}_k}$$
$$S_k = \mathrm{cholupdate}\left\{S_k^-,\ U,\ -1\right\}$$

Return the augmented state estimation z^k

Algorithm 4: SRCDKF algorithm

• Initialization step:

$$\hat{z}_0 = E[z_0], \qquad S_0 = \mathrm{chol}\left\{E\left[(z_0 - \hat{z}_0)(z_0 - \hat{z}_0)^T\right]\right\}$$

• Prediction step:

$$\Psi_{k-1} = \left[\hat{z}_{k-1} \quad \hat{z}_{k-1} + hS_{k-1} \quad \hat{z}_{k-1} - hS_{k-1}\right]$$
$$\Psi_{k|k-1} = f\left[\Psi_{k-1}\right]$$
$$\hat{z}_k^- = \sum_{i=0}^{2L} W_i^{(m)} \Psi_{i,k|k-1}$$
$$S_k^- = \mathrm{qr}\left\{\left[\sqrt{W_1^{(c_1)}}\left(\Psi_{1:L,k|k-1} - \Psi_{L+1:2L,k|k-1}\right) \quad \sqrt{W_1^{(c_2)}}\left(\Psi_{1:L,k|k-1} + \Psi_{L+1:2L,k|k-1} - 2\Psi_{0,k|k-1}\right)\right]\right\}$$
$$\Psi_{k|k-1} = \left[\hat{z}_{k|k-1} \quad \hat{z}_{k|k-1} + hS_{k|k-1} \quad \hat{z}_{k|k-1} - hS_{k|k-1}\right]$$
$$Y_{k|k-1} = h\left(\Psi_{k|k-1}\right)$$
$$\hat{y}_k = \sum_{i=0}^{2L} W_i^{(m)} Y_{i,k|k-1}$$

• Estimation (update) step:

$$S_{\tilde{y}_k} = \mathrm{qr}\left\{\left[\sqrt{W_1^{(c_1)}}\left(Y_{1:L,k|k-1} - Y_{L+1:2L,k|k-1}\right) \quad \sqrt{W_1^{(c_2)}}\left(Y_{1:L,k|k-1} + Y_{L+1:2L,k|k-1} - 2Y_{0,k|k-1}\right)\right]\right\}$$
$$P_{z_k y_k} = \sqrt{W_1^{(c_1)}}\, S_k^- \left[Y_{1:L,k|k-1} - Y_{L+1:2L,k|k-1}\right]^T$$
$$K_k = \left(P_{z_k y_k}/S_{\tilde{y}_k}^T\right)/S_{\tilde{y}_k}$$
$$\hat{z}_k = \hat{z}_k^- + K_k\left(y_k - \hat{y}_k\right)$$
$$U = K_k S_{\tilde{y}_k}$$
$$S_k = \mathrm{cholupdate}\left\{S_k^-,\ U,\ -1\right\}$$

Return the augmented state estimation z^k

Then the prediction step and the update step are executed as follows:

$$\hat{z}_{k,j}^- = \sum_{i=0}^{2L} W_i^{(m)} \Psi_{i,j} \tag{49}$$
$$Y_{i,j} = h\left[\Psi_{i,j}\right] \tag{50}$$

where $Y_{i,j}$ represents the $i$th component of $Y_j$:

$$\hat{y}_{k,j} = \sum_{i=0}^{2L} W_i^{(m)} Y_{i,j} \tag{51}$$
$$P_{\tilde{y}_{k,j}} = \sum_{i=0}^{2L} W_i^{(c)}\left[Y_{i,j} - \hat{y}_{k,j}\right]\left[Y_{i,j} - \hat{y}_{k,j}\right]^T \tag{52}$$
$$P_{z_{k,j} y_{k,j}} = \sum_{i=0}^{2L} W_i^{(c)}\left[\Psi_{i,j} - \hat{z}_{k,j}^-\right]\left[Y_{i,j} - \hat{y}_{k,j}\right]^T \tag{53}$$
$$K_{k,j} = P_{z_{k,j} y_{k,j}} P_{\tilde{y}_{k,j}}^{-1} \tag{54}$$
$$\hat{z}_{k,j} = \hat{z}_{k,j}^- + g\, K_{k,j}\left(y_k - \hat{y}_{k,j}\right) \tag{55}$$
$$P_{k,j} = P_{k,j-1} - K_{k,j} P_{\tilde{y}_{k,j}} K_{k,j}^T \tag{56}$$

These steps are repeated until the following inequality is no longer satisfied:

$$\tilde{z}_{k,j}^T P_{k,j}^{-1} \tilde{z}_{k,j} + \tilde{y}_{k,j}^T R_k^{-1} \tilde{y}_{k,j} < \tilde{y}_{k,j-1}^T R_k^{-1} \tilde{y}_{k,j-1} \quad \text{and} \quad j < N \tag{57}$$

The IUKF is summarized in Algorithm 5.
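The iteration logic around Eq. (57) can be sketched in Python as below. Here `ukf_update` stands in for a routine that redraws sigma points around the current iterate and performs one UKF measurement update (Eqs. (48)-(56)); that placeholder, the damping factor `eta` and the simplified stopping test (which keeps only the measurement-residual terms of Eq. (57)) are assumptions of this sketch, not the authors' code.

```python
import numpy as np

def iukf_iterations(ukf_update, y, R, z0, P0, N=10, eta=0.9):
    """Iterated measurement update: repeat UKF updates while the weighted
    measurement residual keeps decreasing (simplified form of Eq. (57))."""
    z_prev, P_prev = z0.copy(), P0.copy()
    y_resid_prev = None
    g = 1.0                                     # gain step-size factor of Eq. (55)
    for j in range(2, N + 1):
        z_j, P_j, y_pred = ukf_update(z_prev, P_prev, y, gain_scale=g)
        y_resid = y - y_pred                    # innovation at iteration j
        cost = y_resid @ np.linalg.solve(R, y_resid)
        if y_resid_prev is not None:
            cost_prev = y_resid_prev @ np.linalg.solve(R, y_resid_prev)
            if cost >= cost_prev:               # criterion no longer satisfied
                break
        z_prev, P_prev, y_resid_prev = z_j, P_j, y_resid
        g *= eta                                # damp the gain, as in Algorithm 5
    return z_prev, P_prev
```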

3.5.2. ICDKF

The iterated sigma-point methods can provide better accuracy than other estimation methods since they re-linearize the measurement equation by iterating an approximate maximum a posteriori estimate around the updated state, instead of relying on the predicted state.

In the ICDKF, the prediction step is computed as in the standard CDKF, yielding $\hat{z}_k^-$ and $P_k^-$.

Then the sigma points for the measurement update are calculated as follows:

$$\Psi_{k|k-1} = \left[\hat{z}_{k|k-1} \quad \hat{z}_{k|k-1} + h\sqrt{P_k^-} \quad \hat{z}_{k|k-1} - h\sqrt{P_k^-}\right] \tag{58}$$

After that, the initialization $z_0 = \hat{z}_k^-$ is set and the iteration step is executed; the following equations are repeated $m$ times:

$$\Psi_j = \left[z_j \quad z_j + h\sqrt{P_k} \quad z_j - h\sqrt{P_k}\right] \tag{59}$$
$$Y_j = h\left[\Psi_j\right] \tag{60}$$
$$\hat{y}_{k,j} = \sum_{i=0}^{2L} W_i^{(m)} Y_{i,j} \tag{61}$$
$$P_{\tilde{y}_k} = \sum_{i=1}^{L}\left[W_i^{(c_1)}\left(Y_{i,j} - Y_{L+i,j}\right)^2 + W_i^{(c_2)}\left(Y_{i,j} + Y_{L+i,j} - 2Y_{0,j}\right)^2\right] \tag{62}$$
$$P_{z_k y_k} = \sqrt{W_1^{(c_1)}}\sqrt{P_z}\left[Y_{1:L,j} - Y_{L+1:2L,j}\right]^T \tag{63}$$
$$K_{k,j} = P_{z_k y_k} P_{\tilde{y}_k}^{-1} \tag{64}$$
$$\hat{z}_{k,j} = \hat{z}_k^- + K_{k,j}\left(y_k - \hat{y}_{k,j}\right) \tag{65}$$
$$P_k = P_k^- - K_{k,j} P_{\tilde{y}_k} K_{k,j}^T \tag{66}$$

The algorithm of the ICDKF is summarized in Algorithm 6.

3.5.3. ISRUKF

The ISRUKF follows the same principle as the IUKF. After executing the standard SRUKF, an iteration loop is started. The predicted and updated state estimates $(\hat{z}_k^-, \hat{z}_k)$ and the predicted and updated square-root covariance factors $(S_k^-, S_k)$ obtained from the prediction and update steps serve as initialization inputs for the iteration loop ($\hat{z}_{k,0} = \hat{z}_k^-$, $S_{k,0} = S_k^-$ and $\hat{z}_{k,1} = \hat{z}_k$, $S_{k,1} = S_k$). Also let $j = 2$, where $j$ is the iteration index.

In the iteration loop, and for each j, the new sigma‐point vector is generated as follows:

$$\Psi_{k,j} = \left[\hat{z}_{k,j-1} \quad \hat{z}_{k,j-1} + hS_{k,j-1} \quad \hat{z}_{k,j-1} - hS_{k,j-1}\right] \tag{67}$$

Then, the prediction and the update steps are executed as follows:

$$\Psi_j = f\left(\Psi_{k,j}\right) \tag{68}$$
$$\hat{z}_{k,j}^- = \sum_{i=0}^{2L} W_i^{(m)} \Psi_{i,j} \tag{69}$$
$$S_{k,j}^- = \mathrm{qr}\left\{\left[\sqrt{W_1^{(c)}}\left(\Psi_{1:2L,j} - \hat{z}_{k,j}^-\right) \quad \sqrt{R^w}\right]\right\} \tag{70}$$
$$S_{k,j}^- = \mathrm{cholupdate}\left\{S_{k,j}^-,\ \Psi_{0,j} - \hat{z}_{k,j}^-,\ W_0^{(c)}\right\} \tag{71}$$
$$Y_{i,j} = h\left[\Psi_{i,j}\right] \tag{72}$$
$$\hat{y}_{k,j} = \sum_{i=0}^{2L} W_i^{(m)} Y_{i,j} \tag{73}$$
$$S_{\tilde{y}_{k,j}} = \mathrm{qr}\left\{\left[\sqrt{W_1^{(c)}}\left(Y_{1:2L,j} - \hat{y}_{k,j}\right) \quad \sqrt{R_k^v}\right]\right\} \tag{74}$$
$$S_{\tilde{y}_{k,j}} = \mathrm{cholupdate}\left\{S_{\tilde{y}_{k,j}},\ Y_{0,j} - \hat{y}_{k,j},\ W_0^{(c)}\right\} \tag{75}$$
$$P_{z_k y_k} = \sum_{i=0}^{2L} W_i^{(c)}\left[\Psi_{i,j} - \hat{z}_{k,j}^-\right]\left[Y_{i,j} - \hat{y}_{k,j}\right]^T \tag{76}$$
$$K_{k,j} = \left(P_{z_k y_k}/S_{\tilde{y}_{k,j}}^T\right)/S_{\tilde{y}_{k,j}} \tag{77}$$
$$\hat{z}_{k,j} = \hat{z}_{k,j}^- + K_{k,j}\left(y_k - \hat{y}_{k,j}\right) \tag{78}$$
$$U = K_{k,j} S_{\tilde{y}_{k,j}} \tag{79}$$
$$S_{k,j} = \mathrm{cholupdate}\left\{S_{k,j}^-,\ U,\ -1\right\} \tag{80}$$

The equations in the iterative loop are repeated m times.

The ISRUKF algorithm is summarized in Algorithm 7.

3.5.4. ISRCDKF

The ISRCDKF can achieve better accuracy than the SRCDKF since it re-linearizes the measurement equation by iterating an approximate maximum a posteriori estimate around the updated state, instead of relying on the predicted state.

The ISRCDKF algorithm consists of generating the prediction step as in the standard SRCDKF and then applying $m$ iterations over the update step, described as follows:

$$\Psi_j = \left[z_j \quad z_j + hS_k \quad z_j - hS_k\right] \tag{81}$$
$$Y_j = h\left(\Psi_j\right) \tag{82}$$
$$\hat{y}_{k,j} = \sum_{i=0}^{2L} W_i^{(m)} Y_{i,j} \tag{83}$$
$$S_{\tilde{y}_k} = \mathrm{qr}\left\{\left[\sqrt{W_1^{(c_1)}}\left(Y_{1:L,j} - Y_{L+1:2L,j}\right) \quad \sqrt{W_1^{(c_2)}}\left(Y_{1:L,j} + Y_{L+1:2L,j} - 2Y_{0,j}\right)\right]\right\} \tag{84}$$
$$P_{z_k y_k} = \sqrt{W_1^{(c_1)}}\, S_k\left[Y_{1:L,j} - Y_{L+1:2L,j}\right]^T \tag{85}$$
$$K_{k,j} = \left(P_{z_k y_k}/S_{\tilde{y}_k}^T\right)/S_{\tilde{y}_k} \tag{86}$$
$$\hat{z}_{k,j} = \hat{z}_k^- + K_{k,j}\left(y_k - \hat{y}_{k,j}\right) \tag{87}$$
$$U = K_{k,j} S_{\tilde{y}_k} \tag{88}$$
$$S_k = \mathrm{cholupdate}\left\{S_k^-, U, -1\right\} \tag{89}$$

The ISRCDKF algorithm is summarized in Algorithm 8.
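As a sketch of one square-root measurement update from the ISRCDKF iteration (Eqs. (81)-(89)), the Python fragment below reuses the `cholupdate` helper sketched in Section 3.3. The measurement function `h_meas` and the weight arguments are placeholders, and the triangular solves implement the gain expression $K = (P_{z_k y_k}/S_{\tilde{y}_k}^T)/S_{\tilde{y}_k}$; this is an illustrative reading, not the authors' implementation.

```python
import numpy as np
from scipy.linalg import qr, solve_triangular

def isrcdkf_update(h_meas, y, z_j, S_x, Wm, Wc1, Wc2, h=np.sqrt(3.0)):
    """One iterated square-root update; S_x is the current upper Cholesky
    factor of the state covariance (S_x.T @ S_x = P); Wm is the full
    (2L+1,) CDKF mean-weight vector. Sketch only."""
    L = z_j.size
    sig = np.vstack([z_j, z_j + h * S_x.T, z_j - h * S_x.T])       # Eq. (81)
    Y = np.array([np.atleast_1d(h_meas(s)) for s in sig])          # Eq. (82)
    y_hat = Wm @ Y                                                 # Eq. (83)
    D1 = Y[1:L + 1] - Y[L + 1:]               # first-order differences
    D2 = Y[1:L + 1] + Y[L + 1:] - 2 * Y[0]    # second-order differences
    # innovation square root via QR of the weighted compound matrix, Eq. (84)
    M = np.vstack([np.sqrt(Wc1) * D1, np.sqrt(Wc2) * D2])
    S_y = qr(M, mode='economic')[1]
    P_zy = np.sqrt(Wc1) * S_x @ D1                                 # Eq. (85)
    # gain via two triangular solves, Eq. (86)
    K = solve_triangular(S_y,
                         solve_triangular(S_y.T, P_zy.T, lower=True),
                         lower=False).T
    z_new = z_j + K @ (np.atleast_1d(y) - y_hat)                   # Eq. (87)
    U = K @ S_y.T                                                  # Eq. (88)
    for u in U.T:                                                  # Eq. (89)
        S_x = cholupdate(S_x, u, sign=-1.0)   # rank-1 downdates (Section 3.3 helper)
    return z_new, S_x
```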

In the next section, the SPKF method performances will be assessed and compared to ISPKF methods. The performances of UKF, IUKF, CDKF, ICDKF, SRUKF, ISRUKF, SRCDKF and ISRCDKF methods will be evaluated through a simulation example with two comparative studies in terms of estimation accuracy, convergence and execution times.


4. Simulation results

4.1. State and parameter estimations for a second‐order LTI system

Consider a second-order LTI system described by the following state-space model:

$$x_{k+1} = A x_k + B U_1(k) + v_k \tag{90}$$

where $v_k$ is Gaussian process noise, $\mathcal{N}(v_k; 0, 10^{-1})$, and

$$A = \begin{pmatrix} 1.9223 & -0.9604 \\ 1 & 0 \end{pmatrix}, \qquad B = \begin{bmatrix} 0.2 \\ 0.2 \end{bmatrix}$$

are the state transition matrix and the input matrix, respectively.

Algorithm 5: IUKF algorithm

• Initialization step:

$$\hat{z}_0 = E[z_0], \qquad P_{z_0} = E\left[(z - \hat{z}_0)(z - \hat{z}_0)^T\right]$$

• Prediction step: generate the UKF prediction step and return $\hat{z}_k^-$ and $P_k^-$.

• Estimation (update) step: generate the UKF update step and return $\hat{z}_k$ and $P_k$.

• Iteration: let $\hat{z}_{k,0} = \hat{z}_k^-$, $P_{k,0} = P_k^-$ and $\hat{z}_{k,1} = \hat{z}_k$, $P_{k,1} = P_k$. Also let $g = 1$ and $j = 2$.

$$\Psi_{k,j} = \left[\hat{z}_{k,j-1} \quad \hat{z}_{k,j-1} + \sqrt{(L+\lambda)P_{k,j-1}} \quad \hat{z}_{k,j-1} - \sqrt{(L+\lambda)P_{k,j-1}}\right]$$
$$\hat{z}_{k,j}^- = \sum_{i=0}^{2L} W_i^{(m)} \Psi_{i,j}$$
$$Y_{i,j} = h\left[\Psi_{i,j}\right]$$
$$\hat{y}_{k,j} = \sum_{i=0}^{2L} W_i^{(m)} Y_{i,j}$$
$$P_{\tilde{y}_{k,j}} = \sum_{i=0}^{2L} W_i^{(c)}\left[Y_{i,j} - \hat{y}_{k,j}\right]\left[Y_{i,j} - \hat{y}_{k,j}\right]^T$$
$$P_{z_{k,j} y_{k,j}} = \sum_{i=0}^{2L} W_i^{(c)}\left[\Psi_{i,j} - \hat{z}_{k,j}^-\right]\left[Y_{i,j} - \hat{y}_{k,j}\right]^T$$
$$K_{k,j} = P_{z_{k,j} y_{k,j}} P_{\tilde{y}_{k,j}}^{-1}$$
$$\hat{z}_{k,j} = \hat{z}_{k,j}^- + g\, K_{k,j}\left(y_k - \hat{y}_{k,j}\right)$$
$$P_{k,j} = P_{k,j-1} - K_{k,j} P_{\tilde{y}_{k,j}} K_{k,j}^T$$

Define the quantities:

$$\hat{y}_{k,j} = h\left(\hat{z}_{k,j}\right), \qquad \tilde{z}_{k,j} = \hat{z}_{k,j} - \hat{z}_{k,j-1}, \qquad \tilde{y}_{k,j} = y_k - \hat{y}_{k,j}$$

If the inequality

$$\tilde{z}_{k,j}^T P_{k,j}^{-1} \tilde{z}_{k,j} + \tilde{y}_{k,j}^T R_k^{-1} \tilde{y}_{k,j} < \tilde{y}_{k,j-1}^T R_k^{-1} \tilde{y}_{k,j-1}$$

is fulfilled and $j < N$, then set $g = \eta\, g$ and $j = j + 1$, and return to the iteration loop. Otherwise set $\hat{z}_k = \hat{z}_{k,j}$ and $P_k = P_{k,j}$.

Return the augmented state estimate $\hat{z}_k$

Algorithm 6: ICDKF algorithm

• Initialization step:

$$\hat{z}_0 = E[z_0], \qquad P_{z_0} = E\left[(z - \hat{z}_0)(z - \hat{z}_0)^T\right]$$

• Prediction step:

$$\Psi_{k-1} = \left[\hat{z}_{k-1} \quad \hat{z}_{k-1} + h\sqrt{P_z} \quad \hat{z}_{k-1} - h\sqrt{P_z}\right]$$
$$\Psi_{k|k-1} = f\left(\Psi_{k-1}\right)$$
$$\hat{z}_k^- = \sum_{i=0}^{2L} W_i^{(m)} \Psi_{i,k|k-1}$$
$$P_k^- = \sum_{i=1}^{L}\left[W_i^{(c_1)}\left(\Psi_{i,k|k-1} - \Psi_{L+i,k|k-1}\right)^2 + W_i^{(c_2)}\left(\Psi_{i,k|k-1} + \Psi_{L+i,k|k-1} - 2\Psi_{0,k|k-1}\right)^2\right]$$
$$\Psi_{k|k-1} = \left[\hat{z}_{k|k-1} \quad \hat{z}_{k|k-1} + h\sqrt{P_k^-} \quad \hat{z}_{k|k-1} - h\sqrt{P_k^-}\right]$$

• Estimation (update) step:

$$z_0 = \hat{z}_k^-$$

• Iteration: for $j = 0, \dots, m$

$$\Psi_j = \left[z_j \quad z_j + h\sqrt{P_k} \quad z_j - h\sqrt{P_k}\right]$$
$$Y_j = h\left[\Psi_j\right]$$
$$\hat{y}_{k,j} = \sum_{i=0}^{2L} W_i^{(m)} Y_{i,j}$$
$$P_{\tilde{y}_k} = \sum_{i=1}^{L}\left[W_i^{(c_1)}\left(Y_{i,j} - Y_{L+i,j}\right)^2 + W_i^{(c_2)}\left(Y_{i,j} + Y_{L+i,j} - 2Y_{0,j}\right)^2\right]$$
$$P_{z_k y_k} = \sqrt{W_1^{(c_1)}}\sqrt{P_z}\left[Y_{1:L,j} - Y_{L+1:2L,j}\right]^T$$
$$K_{k,j} = P_{z_k y_k} P_{\tilde{y}_k}^{-1}$$

End

$$\hat{z}_{k,j} = \hat{z}_k^- + K_{k,j}\left(y_k - \hat{y}_{k,j}\right)$$
$$P_k = P_k^- - K_{k,j} P_{\tilde{y}_k} K_{k,j}^T$$

Return the augmented state estimate $\hat{z}_k$

Algorithm 7: ISRUKF algorithm

• Initialization step:

$$\hat{z}_0 = E[z_0], \qquad S_0 = \mathrm{chol}\left\{E\left[(z_0 - \hat{z}_0)(z_0 - \hat{z}_0)^T\right]\right\}$$

• Prediction step: generate the SRUKF prediction step and return $\hat{z}_k^-$ and $S_k^-$.

• Estimation (update) step: generate the SRUKF update step and return $\hat{z}_k$ and $S_k$.

• Iteration: let $\hat{z}_{k,0} = \hat{z}_k^-$, $S_{k,0} = S_k^-$ and $\hat{z}_{k,1} = \hat{z}_k$, $S_{k,1} = S_k$. Also let $j = 2$.

$$\Psi_{k,j} = \left[\hat{z}_{k,j-1} \quad \hat{z}_{k,j-1} + hS_{k,j-1} \quad \hat{z}_{k,j-1} - hS_{k,j-1}\right]$$
$$\Psi_j = f\left(\Psi_{k,j}\right)$$
$$\hat{z}_{k,j}^- = \sum_{i=0}^{2L} W_i^{(m)} \Psi_{i,j}$$
$$S_{k,j}^- = \mathrm{qr}\left\{\left[\sqrt{W_1^{(c)}}\left(\Psi_{1:2L,j} - \hat{z}_{k,j}^-\right) \quad \sqrt{R^w}\right]\right\}$$
$$S_{k,j}^- = \mathrm{cholupdate}\left\{S_{k,j}^-,\ \Psi_{0,j} - \hat{z}_{k,j}^-,\ W_0^{(c)}\right\}$$
$$Y_{i,j} = h\left[\Psi_{i,j}\right]$$
$$\hat{y}_{k,j} = \sum_{i=0}^{2L} W_i^{(m)} Y_{i,j}$$
$$S_{\tilde{y}_{k,j}} = \mathrm{qr}\left\{\left[\sqrt{W_1^{(c)}}\left(Y_{1:2L,j} - \hat{y}_{k,j}\right) \quad \sqrt{R_k^v}\right]\right\}$$
$$S_{\tilde{y}_{k,j}} = \mathrm{cholupdate}\left\{S_{\tilde{y}_{k,j}},\ Y_{0,j} - \hat{y}_{k,j},\ W_0^{(c)}\right\}$$
$$P_{z_k y_k} = \sum_{i=0}^{2L} W_i^{(c)}\left[\Psi_{i,j} - \hat{z}_{k,j}^-\right]\left[Y_{i,j} - \hat{y}_{k,j}\right]^T$$
$$K_{k,j} = \left(P_{z_k y_k}/S_{\tilde{y}_{k,j}}^T\right)/S_{\tilde{y}_{k,j}}$$
$$\hat{z}_{k,j} = \hat{z}_{k,j}^- + K_{k,j}\left(y_k - \hat{y}_{k,j}\right)$$
$$U = K_{k,j} S_{\tilde{y}_{k,j}}$$
$$S_{k,j} = \mathrm{cholupdate}\left\{S_{k,j}^-,\ U,\ -1\right\}$$

End

$$U = K_{k,m} S_{\tilde{y}_k}, \qquad S_k = \mathrm{cholupdate}\left\{S_{k,m}, U, -1\right\}$$

Return the augmented state estimate $\hat{z}_k$

Algorithm 8: ISRCDKF algorithm

• Initialization step:

$$\hat{z}_0 = E[z_0], \qquad S_0 = \mathrm{chol}\left\{E\left[(z_0 - \hat{z}_0)(z_0 - \hat{z}_0)^T\right]\right\}$$

• Prediction step:

$$\Psi_{k-1} = \left[\hat{z}_{k-1} \quad \hat{z}_{k-1} + hS_{k-1} \quad \hat{z}_{k-1} - hS_{k-1}\right]$$
$$\Psi_{k|k-1} = f\left[\Psi_{k-1}\right]$$
$$\hat{z}_k^- = \sum_{i=0}^{2L} W_i^{(m)} \Psi_{i,k|k-1}$$
$$S_k^- = \mathrm{qr}\left\{\left[\sqrt{W_1^{(c_1)}}\left(\Psi_{1:L,k|k-1} - \Psi_{L+1:2L,k|k-1}\right) \quad \sqrt{W_1^{(c_2)}}\left(\Psi_{1:L,k|k-1} + \Psi_{L+1:2L,k|k-1} - 2\Psi_{0,k|k-1}\right)\right]\right\}$$
$$\Psi_{k|k-1} = \left[\hat{z}_{k|k-1} \quad \hat{z}_{k|k-1} + hS_{k|k-1} \quad \hat{z}_{k|k-1} - hS_{k|k-1}\right]$$

• Estimation (update) step:

$$z_0 = \hat{z}_k^-$$

• Iteration: for $j = 0, \dots, m$

$$\Psi_j = \left[z_j \quad z_j + hS_k \quad z_j - hS_k\right]$$
$$Y_j = h\left(\Psi_j\right)$$
$$\hat{y}_{k,j} = \sum_{i=0}^{2L} W_i^{(m)} Y_{i,j}$$
$$S_{\tilde{y}_k} = \mathrm{qr}\left\{\left[\sqrt{W_1^{(c_1)}}\left(Y_{1:L,j} - Y_{L+1:2L,j}\right) \quad \sqrt{W_1^{(c_2)}}\left(Y_{1:L,j} + Y_{L+1:2L,j} - 2Y_{0,j}\right)\right]\right\}$$
$$P_{z_k y_k} = \sqrt{W_1^{(c_1)}}\, S_k\left[Y_{1:L,j} - Y_{L+1:2L,j}\right]^T$$
$$K_{k,j} = \left(P_{z_k y_k}/S_{\tilde{y}_k}^T\right)/S_{\tilde{y}_k}$$
$$\hat{z}_{k,j} = \hat{z}_k^- + K_{k,j}\left(y_k - \hat{y}_{k,j}\right)$$
$$U = K_{k,j} S_{\tilde{y}_k}$$
$$S_k = \mathrm{cholupdate}\left\{S_k^-, U, -1\right\}$$

End

$$U = K_{k,m} S_{\tilde{y}_k}, \qquad S_k = \mathrm{cholupdate}\left\{S_{k,m}, U, -1\right\}$$

Return the augmented state estimate $\hat{z}_k$

The nonstationary observation model is given by,

$$y_k = C x_k + D U_2(k) + n_k \tag{91}$$

where $C = [1 \quad 0]$ and $D = 0.2$. The observation noise $n_k$ is Gaussian, $\mathcal{N}(n_k; 0, 3 \cdot 10^{-1})$. Given only the noisy observations $y_k$, the different filters were used to estimate the underlying clean state sequence $x_k$ for $k = 1, \dots, 100$.
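A sketch of how the noisy data of Eqs. (90)-(91) can be generated is given below; the zero inputs $U_1 = U_2 = 0$ and the random seed are assumptions, since the chapter does not specify the input sequences.

```python
import numpy as np

rng = np.random.default_rng(0)

A = np.array([[1.9223, -0.9604],
              [1.0,     0.0   ]])
B = np.array([0.2, 0.2])
C = np.array([1.0, 0.0])
D = 0.2
q, r = 1e-1, 3e-1                      # process / measurement noise variances

T = 100
x = np.zeros((T + 1, 2))
x[0] = [1.0, 0.0]                      # initial state x_0 = [1 0]^T
y = np.zeros(T)
for k in range(T):
    u1 = u2 = 0.0                      # inputs assumed zero for illustration
    x[k + 1] = A @ x[k] + B * u1 + np.sqrt(q) * rng.standard_normal(2)
    y[k] = C @ x[k + 1] + D * u2 + np.sqrt(r) * rng.standard_normal()
```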

4.1.1. Generation of dynamic data

It must be noted that the simulated states themselves are noise-free; only the measurements are contaminated with Gaussian noise. Given the noisy observations $y_k$, the various Kalman filters were used to estimate the clean state sequence $x_k = [x_1 \quad x_2]^T$ for $k = 1, \dots, 100$. Figure 1 shows the evolution of the state variable $x_1$.

Figure 1.

Simulated data used in estimation: state variable (x1).

Here, the number of sigma points is fixed to 9 for all the techniques ($L = 4$). The process noise is $\mathcal{N}(v_k; 0, 10^{-1})$ and the observation noise is $\mathcal{N}(n_k; 0, 3 \cdot 10^{-1})$. The initial value of the state vector is $x_0 = [1 \quad 0]^T$.

4.1.2. Comparative study: estimation of state variables from noisy measurements

The purpose of this study is to compare the estimation accuracy of the UKF, IUKF, CDKF, ICDKF, SRUKF, ISRUKF, SRCDKF and ISRCDKF methods when they are used to estimate the state variables of the system. Hence, the state vector to be estimated is $z_k = x_k$, and the model parameters $P_1$, $P_2$ are assumed to be known. The simulation results for the estimation of the state variables $x_k$ using the UKF, CDKF, SRUKF and SRCDKF methods and using the IUKF, ICDKF, ISRUKF and ISRCDKF methods are shown in Figures 2 and 3, respectively. The performance comparison of the state estimation techniques in terms of RMSE and execution times is presented in Table 1.

Figure 2.

Estimation of state variables using various state estimation techniques (UKF, CDKF, SRUKF and SRCDKF).

Figure 3.

Estimation of state variables using various state estimation techniques (IUKF, ICDKF, ISRUKF and ISRCDKF).

| Technique | $x_1$ (RMSE) | $x_2$ (RMSE) | Execution time (s) | Technique | $x_1$ (RMSE) | $x_2$ (RMSE) | Execution time (s) |
|-----------|--------------|--------------|--------------------|-----------|--------------|--------------|--------------------|
| UKF       | 0.3539       | 0.4658       | 0.3577             | IUKF      | 0.3342       | 0.4341       | 0.5952             |
| CDKF      | 0.3512       | 0.4583       | 0.3367             | ICDKF     | 0.3265       | 0.4315       | 0.4351             |
| SRUKF     | 0.3495       | 0.4590       | 0.3354             | ISRUKF    | 0.3254       | 0.4256       | 0.5803             |
| SRCDKF    | 0.3324       | 0.4593       | 0.2586             | ISRCDKF   | 0.3121       | 0.4213       | 0.4229             |

Table 1.

Comparison of state estimation techniques.

It is easily observed from Figures 2 and 3, as well as Table 1, that the ISRCDKF method achieves better accuracy than the other methods.
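For reference, the RMSE metric used throughout Tables 1-4 can be computed with a short helper of the following form (a sketch; `x_est` and `x_true` are assumed to be (T, n) arrays of estimated and noise-free state trajectories):

```python
import numpy as np

def rmse(x_est, x_true):
    """Per-state root mean square error with respect to the noise-free data."""
    return np.sqrt(np.mean((x_est - x_true) ** 2, axis=0))

# usage: rmse_x1, rmse_x2 = rmse(x_est, x_true)
```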

4.1.3. Comparative study: simultaneous estimation of state variables and model parameters

The state variables and parameters are estimated simultaneously using the UKF, IUKF, CDKF, ICDKF, SRUKF, ISRUKF, SRCDKF and ISRCDKF techniques. The estimation results for the model parameters using the UKF, CDKF, SRUKF and SRCDKF and using the IUKF, ICDKF, ISRUKF and ISRCDKF are shown in Figures 4 and 5, respectively. It can be seen from the results presented in Figures 4 and 5 that the IUKF, CDKF, ICDKF, SRUKF, ISRUKF, SRCDKF and ISRCDKF methods outperform the UKF method, and that the ISRCDKF shows a relative improvement over all other techniques. These results confirm the results obtained in the first comparative study, where only the state variables are estimated. The advantage of the ISRCDKF over the other techniques can also be seen through its ability to estimate the model parameters.

Figure 4.

Estimation of the model parameters (P1, P2) using UKF, CDKF, SRUKF and SRCDKF.

Figure 5.

Estimation of the model parameters (P1, P2) using IUKF, ICDKF, ISRUKF and ISRCDKF.

4.1.3.1. Root Mean Square Error analysis

The effects of the practical challenges on the performance of the UKF, IUKF, CDKF, ICDKF, SRUKF, ISRUKF, SRCDKF and ISRCDKF for state and parameter estimation are investigated next.

4.1.3.1.1. Effect of number of state and parameter to estimate on the estimation RMSE

To study the effect of the number of states and parameters to be estimated on the estimation performances of UKF, IUKF, CDKF, ICDKF, SRUKF, ISRUKF, SRCDKF and ISRCDKF, the estimation performance is analyzed for different numbers of estimated states and parameters. Here, we will consider two cases, which are summarized below. In all cases, it is assumed that the state xk is measured.

Case 1: the state xk along with the first parameter P1 will be estimated.

Case 2: the state xk along with the two parameters P1 and P2 will be estimated.

The estimation of the state variables and parameter(s) for these two cases is performed using the UKF, IUKF, CDKF, ICDKF, SRUKF, ISRUKF, SRCDKF and ISRCDKF, and the simulation results for the state variables and the model parameters are shown in Tables 2 and 3 for cases 1 and 2, respectively. For case 1, Table 2 compares the estimation RMSEs of the two state variables $x_k$ (with respect to the noise-free data) and the mean of the estimated parameter $P_1$ at steady state (i.e., after convergence of the parameter(s)) for all estimation methods.

| Technique | $x_1$ (RMSE) | $x_2$ (RMSE) | $P_1$ (mean) | Execution time (s) | Technique | $x_1$ (RMSE) | $x_2$ (RMSE) | $P_1$ (mean) | Execution time (s) |
|-----------|--------------|--------------|--------------|--------------------|-----------|--------------|--------------|--------------|--------------------|
| UKF       | 0.4221       | 0.5418       | 2.2453       | 0.3906             | IUKF      | 0.3854       | 0.5093       | 1.9826       | 0.6963             |
| CDKF      | 0.4192       | 0.5205       | 2.2232       | 0.3696             | ICDKF     | 0.3827       | 0.4920       | 1.9786       | 0.5160             |
| SRUKF     | 0.4063       | 0.4978       | 2.2228       | 0.3835             | ISRUKF    | 0.3757       | 0.4748       | 1.9661       | 0.6798             |
| SRCDKF    | 0.3970       | 0.4943       | 2.1858       | 0.3420             | ISRCDKF   | 0.3737       | 0.4720       | 1.9297       | 0.5154             |

Table 2.

Root mean square errors of estimated state variables and mean of estimated parameter: case 1.

The results also show that the number of parameters to be estimated affects the estimation accuracy of the state variables. In other words, for all the techniques, the estimation RMSE of $x_k$ increases from the first comparative study (where only the state variables are estimated) to case 1 (where the states and one parameter, $P_1$, are estimated) to case 2 (where the states and two parameters, $P_1$ and $P_2$, are estimated). For example, the RMSEs obtained using the ISRCDKF for $x_1$ in the first comparative study and in cases 1 and 2 of the second comparative study are 0.3121, 0.3737 and 0.3846, respectively, which increase as the number of estimated parameters increases (see Tables 2 and 3). This observation also holds for the other state estimation techniques.

It can also be shown from Tables 2 and 3 that, for all the techniques, estimating more model parameters affects the estimation accuracy. The ISRCDKF method, however, still provides advantages over other methods in terms of the estimation accuracy.

| Technique | $x_1$ (RMSE) | $x_2$ (RMSE) | $P_1$ (mean) | $P_2$ (mean) | Technique | $x_1$ (RMSE) | $x_2$ (RMSE) | $P_1$ (mean) | $P_2$ (mean) |
|-----------|--------------|--------------|--------------|--------------|-----------|--------------|--------------|--------------|--------------|
| UKF       | 0.1962       | 0.6590       | 1.9484       | -0.9798      | IUKF      | 0.4056       | 0.4927       | 1.9408       | -0.9721      |
| CDKF      | 0.4170       | 0.4932       | 1.9482       | -0.9786      | ICDKF     | 0.4012       | 0.4908       | 1.9389       | -0.9720      |
| SRUKF     | 0.4133       | 0.4977       | 1.9481       | -0.9776      | ISRUKF    | 0.3989       | 0.4843       | 1.9342       | -0.9677      |
| SRCDKF    | 0.4090       | 0.4956       | 1.9436       | -0.9741      | ISRCDKF   | 0.3846       | 0.4875       | 1.9305       | -0.9486      |

Table 3.

Root mean square errors of estimated state variables and mean of estimated parameter: case 2.

4.1.3.1.2. Effect of noise content on the estimation RMSE

It is assumed that noise is added to the measured state variable. In order to show the performance of the estimation algorithms in the presence of noise, three different measurement noise variances, $10^{-1}$, $10^{-2}$ and $10^{-3}$, are considered. The simulation results of estimating the states $x_k$ using the UKF, IUKF, CDKF, ICDKF, SRUKF, ISRUKF, SRCDKF and ISRCDKF methods for noise levels in $\{10^{-1}, 10^{-2}, 10^{-3}\}$ are shown in Table 4.

| Noise level | State | UKF | CDKF | SRUKF | SRCDKF | IUKF | ICDKF | ISRUKF | ISRCDKF |
|-------------|-------|--------|--------|--------|--------|--------|--------|--------|--------|
| $10^{-1}$   | $x_1$ | 0.3539 | 0.3512 | 0.3495 | 0.3324 | 0.3342 | 0.3265 | 0.3254 | 0.3121 |
| $10^{-1}$   | $x_2$ | 0.4658 | 0.4593 | 0.4590 | 0.4593 | 0.4341 | 0.4315 | 0.4256 | 0.4213 |
| $10^{-2}$   | $x_1$ | 0.1293 | 0.1264 | 0.1208 | 0.1174 | 0.1134 | 0.1095 | 0.1075 | 0.1066 |
| $10^{-2}$   | $x_2$ | 0.3564 | 0.3493 | 0.3474 | 0.3457 | 0.3440 | 0.3371 | 0.3355 | 0.3314 |
| $10^{-3}$   | $x_1$ | 0.0460 | 0.0454 | 0.0448 | 0.0446 | 0.0436 | 0.0415 | 0.0394 | 0.0376 |
| $10^{-3}$   | $x_2$ | 0.3426 | 0.3360 | 0.3188 | 0.3062 | 0.2989 | 0.2918 | 0.2875 | 0.2830 |

Table 4.

Root mean square errors (RMSEs) of the estimated states for different noise levels.

In other words, for all the estimation techniques, the estimation RMSEs of $x_k$ decrease as the measurement noise variance decreases from $10^{-1}$ to $10^{-3}$. For example, the RMSEs obtained using the ISRCDKF for $x_1$ at the noise levels $10^{-1}$, $10^{-2}$ and $10^{-3}$ are 0.3121, 0.1066 and 0.0376, respectively (refer to Table 4).


5. Conclusions

In this chapter, various SPKF-based methods were used to estimate nonlinear state variables and model parameters, and their estimation performances were compared in two comparative studies. In the first comparative study, the state variables are estimated from noisy measurements of these variables, and the estimation methods are compared by computing the RMSE with respect to the noise-free data. In the second comparative study, the state variables as well as the model parameters are estimated. In comparing the performances of the various state estimation extensions, the impact of the number of estimated model parameters on the convergence and accuracy of these methods is also evaluated. The results of the second comparative study show that, for all the techniques, estimating more model parameters affects the estimation accuracy as well as the convergence of the estimated states and parameters. The iterated square-root central difference Kalman filter, however, still provides advantages over the other methods in terms of estimation accuracy, convergence and execution time.


Acknowledgments

This work was made possible by NPRP grant NPRP7‐1172‐2‐439 from the Qatar National Research Fund (a member of Qatar Foundation). The statements made herein are solely the responsibility of the authors.

References

  1. W. Guo. Dynamic state-space models. Journal of Time Series Analysis. 2003;24(2):149–158.
  2. G. Kitagawa. Monte Carlo filter and smoother for non-Gaussian nonlinear state space models. Journal of Computational and Graphical Statistics. 1996;5(1):1–25.
  3. J. H. Kotecha and P. M. Djurić. Gaussian sum particle filtering for dynamic state space models. In: Acoustics, Speech and Signal Processing (ICASSP), IEEE; 2001;6:3465–3468.
  4. S.-G. Cao, N. W. Rees and G. Feng. Analysis and design of fuzzy control systems using dynamic fuzzy-state space models. IEEE Transactions on Fuzzy Systems. 1999;7(2):192–200.
  5. C.-J. Kim and C. R. Nelson. State-Space Models with Regime Switching: Classical and Gibbs-Sampling Approaches with Applications. MIT Press, Cambridge; 1999.
  6. A. Nabavi-Niaki and M. R. Iravani. Steady-state and dynamic models of unified power flow controller (UPFC) for power system studies. IEEE Transactions on Power Systems. 1996;11(4):1937–1943.
  7. M. Mansouri, O. Ilham, H. Snoussi and C. Richard. Adaptive quantized target tracking in wireless sensor networks. Wireless Networks. 2011;17(7):1625–1639.
  8. M. Mansouri, M. Mohamed-Seghir, H. N. Nounou, M. N. Nounou and H. Abu-Rub. Bayesian methods for time-varying state and parameter estimation in induction machines. International Journal of Adaptive Control and Signal Processing. 2015;29(7):905–924.
  9. M. Mansouri, H. N. Nounou, M. N. Nounou and A. A. Datta. State and parameter estimation for nonlinear biological phenomena modeled by S-systems. Digital Signal Processing. 2014;28:1–17.
  10. M. Mansouri, H. Nounou and M. Nounou. Modeling of nonlinear biological phenomena modeled by S-systems. Mathematical Biosciences. 2014;249:75–91.
  11. A. Simoglou, E. Martin and A. Morris. Statistical performance monitoring of dynamic multivariate processes using state space modelling. Computers and Chemical Engineering. 2002;26(6):909–920.
  12. M. Mansouri, H. Nounou and M. Nounou. State estimation of a chemical reactor process model: a comparative study. In: 10th International Multi-Conference on Systems, Signals & Devices (SSD), IEEE; 2013. pp. 1–6.
  13. M. Mansouri, B. Dumont and M.-F. Destain. Modeling and prediction of time-varying environmental data using advanced Bayesian methods. In: Exploring Innovative and Successful Applications of Soft Computing; 2013. p. 112.
  14. H. van der Kooij, R. Jacobs, B. Koopman and F. van der Helm. An adaptive model of sensory integration in a dynamic environment applied to human stance control. Biological Cybernetics. 2001;84(2):103–115.
  15. M. Mansouri and M.-F. Destain. Predicting biomass and grain protein content using Bayesian methods. Stochastic Environmental Research and Risk Assessment. 2015;29(4):1167–1177.
  16. G. L. Plett. Extended Kalman filtering for battery management systems of LiPB-based HEV battery packs: Part 3. State and parameter estimation. Journal of Power Sources. 2004;134(2):277–292.
  17. D. Dochain. State and parameter estimation in chemical and biochemical processes: a tutorial. Journal of Process Control. 2003;13(8):801–818.
  18. T. A. Wenzel, K. Burnham, M. Blundell and R. Williams. Dual extended Kalman filter for vehicle state and parameter estimation. Vehicle System Dynamics. 2006;44(2):153–171.
  19. G. Evensen. The ensemble Kalman filter for combined state and parameter estimation. IEEE Control Systems Magazine. 2009;29(3):83–104.
  20. J. Ching, J. L. Beck and K. A. Porter. Bayesian state and parameter estimation of uncertain dynamical systems. Probabilistic Engineering Mechanics. 2006;21(1):81–96.
  21. H. Moradkhani, S. Sorooshian, H. V. Gupta and P. R. Houser. Dual state–parameter estimation of hydrological models using ensemble Kalman filter. Advances in Water Resources. 2005;28(2):135–147.
  22. P. Moireau, D. Chapelle and P. Le Tallec. Joint state and parameter estimation for distributed mechanical systems. Computer Methods in Applied Mechanics and Engineering. 2008;197(6):659–677.
  23. S. Aborhey and D. Williamson. State and parameter estimation of microbial growth processes. Automatica. 1978;14(5):493–498.
  24. S. Rao, M. Buss and V. Utkin. Simultaneous state and parameter estimation in induction motors using first- and second-order sliding modes. IEEE Transactions on Industrial Electronics. 2009;56(9):3369–3376.
  25. A. V. Savkin and I. R. Petersen. Recursive state estimation for uncertain systems with an integral quadratic constraint. IEEE Transactions on Automatic Control. 1995;40(6):1080–1083.
  26. D. Simon. Optimal State Estimation: Kalman, H∞, and Nonlinear Approaches. John Wiley and Sons; 2006. 552 p. ISBN 978-0-471-70858-2.
  27. M. S. Grewal and A. P. Andrews. Kalman Filtering: Theory and Practice Using MATLAB. John Wiley and Sons; 2001.
  28. V. Aidala. Parameter estimation via the Kalman filter. IEEE Transactions on Automatic Control. 1977;22(3):471–472.
  29. L. Matthies, T. Kanade and R. Szeliski. Kalman filter-based algorithms for estimating depth from image sequences. International Journal of Computer Vision. 1989;3(3):209–238.
  30. S. Julier and J. Uhlmann. New extension of the Kalman filter to nonlinear systems. Proceedings of SPIE. 1997;3(1):182–193.
  31. L. Ljung. Asymptotic behavior of the extended Kalman filter as a parameter estimator for linear systems. IEEE Transactions on Automatic Control. 1979;24(1):36–50.
  32. Y. Kim, S. Sul and M. Park. Speed sensorless vector control of induction motor using extended Kalman filter. IEEE Transactions on Industry Applications. 1994;30(5):1225–1233.
  33. E. Wan and R. Van Der Merwe. The unscented Kalman filter for nonlinear estimation. In: Adaptive Systems for Signal Processing, Communications, and Control Symposium (AS-SPCC), IEEE; 2000. pp. 153–158.
  34. R. Van Der Merwe and E. Wan. The square-root unscented Kalman filter for state and parameter-estimation. In: Acoustics, Speech, and Signal Processing (ICASSP), IEEE; 2001;6:3461–3464.
  35. S. Särkkä. On unscented Kalman filtering for state estimation of continuous-time nonlinear systems. IEEE Transactions on Automatic Control. 2007;52(9):1631–1641.
  36. M. Mansouri, H. Nounou, M. Nounou and A. A. Datta. Modeling of nonlinear biological phenomena modeled by S-systems. In: Biomedical Engineering and Sciences (IECBES), IEEE EMBS Conference; 2012. pp. 305–310.
  37. J. Zhu, N. Zheng, Z. Yuan, Q. Zhang, X. Zhang and Y. He. A SLAM algorithm based on the central difference Kalman filter. In: Intelligent Vehicles Symposium, IEEE; 2009. pp. 123–128.
  38. M. M. Andreasen. Non-linear DSGE models and the central difference Kalman filter. Journal of Applied Econometrics. 2013;28(6):929–955.
  39. S. Holmes, G. Klein and D. W. Murray. A square root unscented Kalman filter for visual monoSLAM. In: Robotics and Automation (ICRA), IEEE; 2008. pp. 3710–3716.
  40. M. Huang, W. Li and W. Yan. Estimating parameters of synchronous generators using square-root unscented Kalman filter. Electric Power Systems Research. 2010;80(9):1137–1144.
  41. M. Nørgaard, N. K. Poulsen and O. Ravn. New developments in state estimation for nonlinear systems. Automatica. 2000;36(11):1627–1638.
  42. R. Zhan and J. Wan. Iterated unscented Kalman filter for passive target tracking. IEEE Transactions on Aerospace and Electronic Systems. 2007;43(3):1155–1163.
  43. X. K. Yiyu. Iterated unscented Kalman filter. Journal of Huazhong University of Science and Technology (Nature Science Edition). 2007;11:006.
  44. G. Sibley, G. S. Sukhatme and L. Matthies. The iterated sigma point Kalman filter with applications to long range stereo. In: Robotics: Science and Systems. 2006;8(1):235–244.
  45. G. Sibley, G. Sukhatme and L. Matthies. The iterated sigma point Kalman filter with applications to long range stereo. In: Robotics: Science and Systems. 2006;8(1):235–244.
  46. M. Mansouri, O. Avci, H. Nounou and M. Nounou. A comparative assessment of nonlinear state estimation methods for structural health monitoring. In: Model Validation and Uncertainty Quantification. 2015;3:45–54.
  47. J. Heming, M. Hongwei, G. Wei and C. Yanting. Application of ISRCDKF in SINS initial alignment with large azimuth misalignment angle. Advances in Information Sciences and Service Sciences (AISS). 2013;5:746–755.
  48. R. Van Der Merwe, E. A. Wan, S. Julier et al. Sigma-point Kalman filters for nonlinear estimation and sensor-fusion: applications to integrated navigation. In: Proceedings of the AIAA Guidance, Navigation & Control Conference; 2004. pp. 16–19.
