Abstract
Since the initial developments in state-space theory in the 1950s and 1960s, state estimation has become an extensively researched and applied discipline. All systems that can be modelled mathematically are candidates for state estimators. State estimators reconstruct the states, which represent the internal conditions and status of a system at a specific instant of time, using a mathematical model and the information received from the system sensors. Moreover, the estimator can be extended for system parameter estimation. The resulting Kalman filter (KF) derivatives for state and parameter estimation also require knowledge about the noise statistics of the measurements and the uncertainties of the system model. These are often unknown, and an inaccurate parameterization may lead to decreased filter performance or even divergence. Additionally, insufficient system excitation can cause parameter estimation drifts. In this chapter, a sensitivity-based adaptive square-root unscented KF (SRUKF) is presented. This filter combines an SRUKF and the recursive prediction-error method to estimate system states, parameters and covariances online. Moreover, local sensitivity analysis is performed to prevent parameter estimation drifts while the system is not sufficiently excited. The filter is evaluated on two testbeds based on an axis serial mechanism and compared with the joint state and parameter UKF.
Keywords
unscented Kalman filter; recursive prediction-error method; state estimation; parameter estimation; covariance estimation; sensitivity analysis
1. Introduction
State estimation is applicable to almost all areas of engineering and science. It is interesting to engineers for several reasons, such as controlling a system with a state-feedback controller or monitoring system states that are not measurable with sensors because the sensors are too expensive or too difficult to install. The system states can be defined as variables that provide a representation of the internal conditions and status of a system at a specific instant of time. Applications that include a mathematical model of any system are candidates for state estimation. The estimates can be useful, for example, in car assistance systems [1], predictive maintenance [2], structure health estimation [3], and many other applications (see [4] and references therein).
Different algorithms have been proposed for online state estimation. A historical survey of filtering algorithms can be found in [5]. The Kalman filter (KF) was presented in [6] and is nowadays the most widely applied algorithm for state estimation in linear systems. The KF is a linear optimal estimator [7]; that is, it is the best filter that uses a linear combination of the system measurements and states in order to estimate the latter. The main operation of the KF is the propagation of the mean and covariance of (Gaussian) random variables (RVs) through time. The KF assumes that the model and the noise statistics affecting the system are known. Otherwise, the estimates can degrade.
Different derivatives of the KF have been developed for nonlinear systems during the last decades. The extended Kalman filter (EKF) presented in [8] is the most commonly used estimator for nonlinear systems. This filter linearizes the system and measurement equations at the current estimate. This may lead to poor performance for highly nonlinear or highly noisy systems [9]. To address the linearization errors of the EKF, the unscented Kalman filter (UKF) was presented in [10]. This filter uses the unscented transformation (UT) to pick a minimal set of points around the mean of the Gaussian random variable (GRV). These points capture the true mean and covariance of the GRV, and they are then propagated through the true nonlinear function, capturing the a posteriori mean and covariance more accurately.
Mathematical models usually describe the behaviour of systems, and generally both the structure and the parameters need to be determined. Once the structure is defined, system inputs and measurements can be used to identify the model parameters. This can be performed offline [11, 12]. However, the parameters may vary during operation. In order to monitor these variations online, the nonlinear extensions of the KF can be extended for parameter estimation [9].
The KF derivatives can only achieve good performance under a priori assumptions, for example, accurate system models, knowledge of the noise statistics, and proper initial conditions [7, 9, 13]. If one of these assumptions is not fulfilled, the KF derivative can potentially become unstable and the estimates can diverge [14, 15, 16]. Moreover, tuning the performance of these filters primarily means adjusting the process and measurement noise covariances to match the (unknown) real-system noise statistics. In the last decades, numerous methods have been presented to estimate these unknown covariances. The autocovariance least-squares method was presented in [17, 18], and it was extended (and simplified) in [19]; diagonal process and noise covariances were considered in [20]. This method estimates the noise covariances using least squares, and it can only be used with the KF. The method was extended for nonlinear or time-varying systems using an EKF in [21]. Online covariance estimation for the EKF and the square-root cubature Kalman filter (SRCuKF) was presented in [22]. These methods combine a KF derivative with a recursive prediction-error method (RPEM) to estimate covariances online. In [23], an adaptive UKF was presented to estimate only the covariances online.
In this chapter, a sensitivity-based adaptive square-root unscented Kalman filter (SBaSRUKF) is presented. This filter estimates system states, parameters and covariances online. Using local state sensitivity models (SMs), the filter prevents parameter and covariance estimation drifts while the system is not sufficiently excited. Sensitivity analysis (SA) for the UKF is also presented. The performance of this filter is validated in simulations on two testbeds and compared with that of the joint UKF for parameter and state estimation.
Section 2 covers some algorithms for the recursive estimation of states, parameters, and covariances. The SBaSRUKF is the main topic of this chapter. This filter uses a KF derivative for state estimation. In Section 2.1, the KF for state estimation in linear dynamic systems is presented. The UKF, a nonlinear extension of the KF, is described in Section 2.2 and also extended for estimating system parameters. Section 2.3 covers parameter estimation using the RPEM. The UKF and the RPEM are combined in Section 2.4 to obtain the aSRUKF. In order to identify unknown parameters, the system inputs should be persistently exciting. SMs are presented in this section and are used to evaluate the system excitation and prevent parameter estimation drifts while the system is not sufficiently excited.
Section 3 covers the testbeds used for the filter evaluations. A planar one-link robot system is described in Section 3.1, and a pendulum robot (pendubot) is mathematically modelled in Section 3.2. The first testbed is used for the SM analysis, and the chaotic system is used to compare the filter performance with that of the joint SRUKF. The evaluation results of the SBaSRUKF are presented in Section 4. The SMs are analysed with different system inputs on the first testbed in Section 4.1, and the filter performance for state and parameter estimation is compared with the joint SRUKF in Section 4.2. Section 5 completes the chapter with conclusions.
2. Recursive estimation
This section discusses some recursive approaches to estimate the states, parameters and covariances of a general system. The KF, the optimal linear estimator for linear dynamic systems, is presented. Nonlinear extensions of the KF are discussed, as well as an extension for parameter estimation. A recursive Gauss-Newton method for parameter estimation is also presented in this section. Finally, the last subsection discusses the SBaSRUKF, which is the main topic of this chapter, and the SMs that are used for excitation monitoring.
2.1. Kalman filter (KF)
The KF is the most widely applied algorithm for state estimation in linear dynamic systems that are corrupted by stochastic noise (e.g. Gaussian noise). It uses a parametric mathematical model of the system and a series of (noisy) measurements from, for example, sensors to estimate the system states online [4]. In general, the state distribution of a system can be approximated by random variables (RVs). The main operation of the KF is the propagation of the mean and covariance of these (Gaussian) RVs through time. The KF is an optimal linear filter for these types of systems [7, 9]. It is a recursive algorithm, which enables new measurements to be processed as they arrive in order to correct and update the state and measurement estimates.
In general, a linear discretetime system corrupted by additive noises can be written as follows:
where
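In generic notation (the symbols here are illustrative and may differ from the chapter's own), a linear discrete-time system with additive noise is commonly written as:

```latex
\begin{aligned}
x_{k+1} &= A\,x_k + B\,u_k + w_k, &\qquad w_k &\sim \mathcal{N}(0, Q),\\
y_k &= C\,x_k + v_k, &\qquad v_k &\sim \mathcal{N}(0, R),
\end{aligned}
```

with state $x_k$, input $u_k$, measurement $y_k$, and mutually independent process and measurement noises $w_k$ and $v_k$ with covariances $Q$ and $R$.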
The iterative nature of the KF can be separated into two main steps: the process update and the correction step. In the process update, based on the knowledge of the system dynamics, the state estimate and the estimation error covariance are propagated one step ahead. In the correction step, the new measurement is used to correct these a priori estimates.
The KF is a linear optimal estimator, but it assumes that the system model and noise statistics are known. Otherwise, the filter estimates can degrade. Tuning the performance of the filter primarily means adjusting the process and measurement covariance matrices to match the (unknown) real-system noise statistics. In practical implementations of the KF, the filter tuning is performed online, and empirical values are normally used. Extensive research has been done in this field to estimate the noise covariances from data (see [17, 18, 19, 20] and references therein).
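The two-step recursion described above can be sketched as a minimal scalar example. All numerical values below are illustrative and not taken from the chapter:

```python
# Minimal scalar Kalman filter for x_{k+1} = a*x_k + w_k, y_k = x_k + v_k.
def kf_step(x, p, y, a=1.0, q=0.01, r=1.0):
    # Process update: propagate state estimate and error covariance.
    x_pred = a * x
    p_pred = a * p * a + q
    # Correction step: blend prediction and measurement via the Kalman gain.
    k = p_pred / (p_pred + r)          # Kalman gain
    x_new = x_pred + k * (y - x_pred)  # a posteriori state estimate
    p_new = (1 - k) * p_pred           # a posteriori error covariance
    return x_new, p_new

x, p = 0.0, 1.0
for y in [1.02, 0.98, 1.05, 0.97, 1.01]:  # noisy measurements of a constant near 1
    x, p = kf_step(x, p, y)
```

With each measurement, the estimate moves toward the true value while the error covariance shrinks.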
As mentioned before, the KF is the optimal linear estimator, which estimates the states of a linear dynamic system using the inputs, the measurements and a parametric mathematical model of the system. Even though many systems are close enough to linear that linear estimators give acceptable results, all systems are ultimately nonlinear. Extensions of the KF have been presented in the last decades to deal with nonlinear systems. Some examples are the EKF and the sigma-point Kalman filters (SPKFs).
2.2. Nonlinear filtering
The EKF and the UKF (an SPKF type) are derivatives of the KF for nonlinear systems. The EKF was originally proposed in [8] and is the most commonly applied state estimator for nonlinear systems. However, if the system nonlinearities are severe or the noises affecting the system are large, the EKF can be difficult to tune, often gives wrong estimates and can easily lead to filter divergence. This is because the EKF linearizes the system and measurement models at the current estimate and propagates the mean and covariance of the GRVs through these linearizations. The UKF was presented in [10] and addresses the deficiencies of the EKF linearization by providing a direct and explicit mechanism for approximating and transforming the mean and covariance of the GRVs.
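The difference between the two approaches can be illustrated with a one-dimensional unscented transform. For a GRV with mean 1 and variance 0.5 pushed through the quadratic f(x) = x², the true mean is μ² + σ² = 1.5; the UT recovers it from three sigma points, while a linearization around the mean (EKF-style) returns only f(μ) = 1. The parameters α and κ below follow the common UT scaling convention and are illustrative:

```python
import math

def unscented_mean(mu, var, f, alpha=1.0, kappa=0.0):
    # 1-D unscented transform: 2n+1 = 3 sigma points capture mean and covariance.
    n = 1
    lam = alpha ** 2 * (n + kappa) - n
    d = math.sqrt((n + lam) * var)
    pts = [mu, mu + d, mu - d]                       # sigma points
    w = [lam / (n + lam)] + [1.0 / (2 * (n + lam))] * 2  # mean weights
    # Propagate the sigma points through the true nonlinear function.
    return sum(wi * f(p) for wi, p in zip(w, pts))

f = lambda x: x * x
ut = unscented_mean(1.0, 0.5, f)  # sigma points through the true nonlinearity
lin = f(1.0)                      # EKF-style: propagate only the mean
```

For this quadratic, the UT mean is exact (1.5), whereas the linearized propagation is biased (1.0).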
In general, a discretetime statespace model of a nonlinear system can be described by
where
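In generic notation (symbols illustrative, not necessarily the chapter's), such a model with additive noise reads:

```latex
x_{k+1} = f(x_k, u_k) + w_k, \qquad y_k = h(x_k, u_k) + v_k,
```

with nonlinear process and measurement functions $f$ and $h$ and noise terms $w_k$ and $v_k$ as before.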
The nonlinear extensions of the KF can also estimate the unknown parameters of a system. The UKF was extended for joint state and parameter estimation in [24]. In this case, the system state vector is augmented with the unknown parameters, while the remaining filter equations are essentially unchanged.
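A common construction for joint estimation (generic symbols) augments the state with the parameters, modelled as a random walk:

```latex
x_k^{a} = \begin{bmatrix} x_k \\ \theta_k \end{bmatrix}, \qquad \theta_{k+1} = \theta_k + w_k^{\theta},
```

so that the same filter recursion estimates states and parameters simultaneously; the small artificial noise $w_k^{\theta}$ controls how fast the parameter estimates are allowed to move.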
Square-root (SR) filtering mathematically increases the precision of the KF when hardware precision is not available. In [25], an SR version of the UKF was presented, which uses linear algebra techniques such as the QR decomposition and the Cholesky factorization [26] to calculate the SR of the estimation error covariance. The SR form improves the numerical stability of the filter and guarantees positive semi-definiteness of this covariance. Additionally, the computational complexity for state and parameter estimation is reduced [25].
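The rank-one Cholesky update at the heart of SR filtering can be sketched in pure Python. This is a simplified stand-in for routines such as Matlab's cholupdate, using the lower-triangular convention; the matrices are illustrative:

```python
import math

def cholupdate(L, x):
    """Rank-one update: returns lower-triangular L' with L'L'^T = L L^T + x x^T."""
    x = list(x)
    n = len(x)
    for k in range(n):
        r = math.hypot(L[k][k], x[k])          # new diagonal entry
        c, s = r / L[k][k], x[k] / L[k][k]     # Givens-style rotation parameters
        L[k][k] = r
        for i in range(k + 1, n):
            L[i][k] = (L[i][k] + s * x[i]) / c
            x[i] = c * x[i] - s * L[i][k]
    return L

# P = [[4, 2], [2, 3]] has Cholesky factor L; add the outer product of x = [1, 1].
L = [[2.0, 0.0], [1.0, math.sqrt(2.0)]]
L = cholupdate(L, [1.0, 1.0])
```

Reconstructing L L^T afterwards gives [[5, 3], [3, 4]], i.e. the original covariance plus x x^T, without ever forming the full matrix.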
2.3. Recursive predictionerror method
In this section, the recursive prediction-error method (RPEM) is briefly discussed. This method is extensively analysed in [11, 12] and uses a parameterized predictor that estimates the system outputs at the current time step. The resulting predicted system output is then compared with the actual system measurement, and the predictor parameters are corrected such that the prediction error is minimized.
The quadratic criterion function defined as
is minimized using the stochastic Gauss-Newton method in order to obtain the predictor parameters. The prediction error
where
The recursive solution that minimizes the quadratic criterion function in Eq. (5) is given by the following scheme:
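A common form of this criterion and of the resulting recursive Gauss-Newton scheme, in the spirit of [11, 12] (the exact symbols in the chapter may differ), is:

```latex
\begin{aligned}
V(\theta) &= \tfrac{1}{2}\,\mathbb{E}\!\left[\varepsilon_k^{T} \Lambda^{-1} \varepsilon_k\right],
&\qquad \varepsilon_k &= y_k - \hat{y}_k(\theta),\\
\hat{\theta}_k &= \hat{\theta}_{k-1} + L_k\,\varepsilon_k,
&\qquad L_k &= P_{k-1}\,\psi_k \left(\lambda \Lambda + \psi_k^{T} P_{k-1}\,\psi_k\right)^{-1},\\
P_k &= \left(P_{k-1} - L_k\,\psi_k^{T} P_{k-1}\right)/\lambda,
&\qquad \psi_k &= \frac{\partial \hat{y}_k}{\partial \theta},
\end{aligned}
```

where $\lambda$ is the forgetting factor, $\Lambda$ weights the prediction error, and $\psi_k$ is the output sensitivity with respect to the predictor parameters.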
The user-defined parameter
The selection of the forgetting factor essentially defines which measurements are relevant for the current estimate of the predictor parameters. The most common choice is a constant forgetting factor for systems that change gradually. Other criteria for selecting this factor, as well as the convergence of the RPEM, are discussed extensively in [11, 12].
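The effect of the forgetting factor can be illustrated with a scalar recursive least-squares estimator, a special case of the scheme above for a linear predictor. The data below are synthetic and noise-free; the parameter jumps halfway through to show tracking:

```python
def rls(data, lam=0.95, theta=0.0, p=100.0):
    # Scalar recursive least squares with forgetting factor lam in (0, 1].
    for u, y in data:
        k = p * u / (lam + u * p * u)   # gain
        theta += k * (y - theta * u)    # prediction-error correction
        p = (p - k * u * p) / lam       # covariance update with forgetting
    return theta

# True parameter jumps from 1.0 to 2.0 halfway; input u = 1 throughout.
data = [(1.0, 1.0)] * 20 + [(1.0, 2.0)] * 20
est = rls(data, lam=0.9)
baseline = rls([(1.0, 1.0)] * 20, lam=0.9)
```

With lam = 0.9 the effective memory is roughly 1/(1 - lam) = 10 samples, so the estimate follows the jump within a few tens of steps; lam closer to 1 averages over more data and tracks changes more slowly.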
2.4. Sensitivity-based adaptive square-root unscented Kalman filter
This is the main section of this chapter. The earlier sections provide the methods needed here, and the later sections analyse and test the performance of the filter described in this section.
The aSRUKF is discussed in this section. This filter combines the SRUKF and the RPEM. While the KF derivative estimates the system states and measurements, the RPEM calculates the unknown parameters and covariances.
In this filter, the innovation error in Eq. (6) is calculated and minimized using the recursive scheme presented in Eq. (7) in order to estimate the unknown system parameters and covariances. Besides the matrix
The equations of a SRUKF are then extended in order to calculate the output sensitivity. To simplify the reading flow, the following definitions are presented:
where
The process update step of the aSRUKF is presented in Table 1. After the filter initialization, the sigma points are calculated and propagated through the process model.
Table 1. aSRUKF: filter initialization, sigma-point calculation and filter process update step. The Kronecker product is denoted by ⨂.
aSRUKF: initialization
SRUKF: sigma points; sigma-point propagation
SM: sigma-point derivatives; sensitivity propagation
SRUKF: a priori state estimate; SR estimation error covariance
SM: a priori state sensitivity; derivative of estimation error covariance
The correction step is shown in Table 2. A new set of sigma points is calculated and propagated through the measurement model to obtain the output sigma points and the estimated measurement.
Table 2. aSRUKF correction step.
SRUKF: sigma points (a); output sigma points; estimated measurement
SM: sigma-point derivatives (b); output sigma-point derivatives; output sensitivity (c)
SRUKF: SR innovation error covariance; cross-covariance matrix
SM: derivative of innovation error covariance; derivative of cross-covariance matrix
SRUKF: Kalman gain; a posteriori state estimate; SR estimation error covariance
SM: Kalman gain derivative; a posteriori state sensitivity (d); derivative of estimation error covariance update
RPEM: parameter and covariance estimation (e)
The SR factor of the innovation error covariance (
Local sensitivity analysis can be utilized to determine whether a system input or a system modification excites the system parameters sufficiently to identify them. For this purpose, the a posteriori state sensitivity from Table 2 is used.
The local state sensitivity can be also calculated as follows (cf. [29]):
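In generic form (symbols illustrative), the classical local state sensitivity recursion for a model $x_{k+1} = f(x_k, u_k, \theta)$ propagates the sensitivity $s_k = \partial x_k / \partial \theta$ as:

```latex
s_{k+1} \;=\; \left.\frac{\partial f}{\partial x}\right|_{x_k,\theta} s_k
\;+\; \left.\frac{\partial f}{\partial \theta}\right|_{x_k,\theta},
\qquad s_0 = \frac{\partial x_0}{\partial \theta}.
```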
This sensitivity computation is compared in Section 4 with the a posteriori state sensitivity obtained using the SBaSRUKF.
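Such a sensitivity recursion can be verified against a finite-difference approximation. The toy model below (a stable linear recursion with a single illustrative parameter) is not from the chapter; it only demonstrates that propagating df/dx and df/dtheta alongside the state reproduces the numerical derivative:

```python
def simulate(theta, n=10, u=1.0):
    # Toy model x_{k+1} = theta * x_k + u, starting from x_0 = 0.
    x = 0.0
    for _ in range(n):
        x = theta * x + u
    return x

def sensitivity(theta, n=10, u=1.0):
    # Propagate s_{k+1} = (df/dx) s_k + df/dtheta = theta * s_k + x_k.
    x, s = 0.0, 0.0
    for _ in range(n):
        s = theta * s + x   # uses x_k before it is updated
        x = theta * x + u
    return s

h = 1e-6
fd = (simulate(0.5 + h) - simulate(0.5 - h)) / (2 * h)  # central difference
an = sensitivity(0.5)                                    # analytic recursion
```

The two values agree to within the finite-difference accuracy, confirming the recursion.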
The RPEM can be combined with different KF derivatives to estimate system parameters and covariances. An EKF and a SRCuKF were used to calculate the output sensitivity in [22], which is then used to estimate the unknown values. More information about the aSRUKF can be found in [28, 30].
3. Testbeds
In this section, two testbeds are presented and modelled. These modelled systems are used in Section 4 to test the performance of the SBaSRUKF. The planar onelink robot system is presented and extended with a second arm to form a pendulum robot (pendubot). The pendubot is a chaotic and inherently unstable system.
3.1. Planar onelink robot system
This section describes the planar one-link robot system shown in Figure 1. It consists of a long rectangular aluminium link driven by a DC motor via a shaft and a one-stage toothed gear.
The angular position is measured with an incremental rotary encoder, and the motor torque is determined by measuring the motor current. To simplify the motor model, it is assumed that the measured motor current is directly proportional to the armature current and that the motor torque is proportional to this current by a constant factor. Additionally, the link acceleration is measured using a microelectromechanical systems (MEMS) accelerometer attached to the link. The motor position is controlled by a classical cascade structure with P- and PI-feedback controllers for position and speed.
The corresponding continuous statespace representation of the planar onelink robot system, where the link angular position and acceleration are measured, can be written as follows:
The system states are the link angular position and the link angular speed.
where
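A standard rigid-body model for such a link, under the usual assumptions (the symbols J, m, l_s, d_v and d_c below are generic placeholders, not necessarily the chapter's notation), is:

```latex
J\,\ddot{q} \;=\; \tau \;-\; m\,g\,l_s \sin(q) \;-\; d_v\,\dot{q} \;-\; d_c\,\operatorname{sign}(\dot{q}),
```

with link angle $q$, motor torque $\tau$, inertia $J$, mass $m$, centre-of-mass distance $l_s$, and viscous and Coulomb friction coefficients $d_v$ and $d_c$. With the state vector $x = (q, \dot{q})^T$, this yields the state-space form $\dot{x}_1 = x_2$, $\dot{x}_2 = J^{-1}\left(\tau - m g l_s \sin x_1 - d_v x_2 - d_c \operatorname{sign} x_2\right)$.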
3.2. Pendubot
This section describes the pendulum robot (pendubot) that is presented in
Figure 2
. The pendubot is built adding an underactuated link to the planar onelink robot system described in Section 3.1. The actuated joint (
The system states result as
in which
where
The viscous and Coulomb frictions are described with the parameters
where the
The parameters
4. Experiments
In this section, the SBaSRUKF is tested on the planar one-link robot system and on the pendubot. Both testbed models were discretized using first-order explicit Euler with a sample time of
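The discretization method can be sketched as follows. The one-link dynamics and all parameter values below are illustrative placeholders, not the identified testbed parameters:

```python
import math

def pendulum_ode(x, tau, J=0.1, mgl=0.5, dv=0.05):
    # Continuous-time one-link dynamics (illustrative parameter values).
    q, qd = x
    return [qd, (tau - mgl * math.sin(q) - dv * qd) / J]

def euler_step(x, tau, dt=1e-3):
    # First-order explicit Euler: x_{k+1} = x_k + dt * f(x_k, u_k).
    dx = pendulum_ode(x, tau)
    return [xi + dt * dxi for xi, dxi in zip(x, dx)]

x = [0.0, 0.0]
for _ in range(1000):        # simulate 1 s with a constant input torque
    x = euler_step(x, tau=0.2)
```

Explicit Euler is the simplest choice and is adequate for small sample times relative to the system dynamics; stiffer systems or larger sample times would call for higher-order or implicit schemes.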
4.1. State sensitivity analysis and parameter and covariance estimation
Sensitivity analysis (SA) was performed in simulation using the planar one-link robot system. The system parameters were identified offline on the real testbed using the prediction-error method. The parameters defined as
were used for the simulation. Noise distributions with the following covariance matrices
were added to the simulation to incorporate model and measurement uncertainties. An s-curve profile was considered as the desired link angular position.
The following system states, parameters and covariances were estimated:
The remaining tuning factors for the SBaSRUKF were set as
The values of
The filter initial system states were set to zero, and the initial system parameters were set as the true values multiplied by a random factor between
In order to test the sensitivity-based part of the filter, the link angular position was held at
Figure 3
compares the a posteriori state sensitivity calculated using the SBaSRUKF and the state sensitivity using Eq. (10). The first diagram shows the estimated and true link angular position of the planar onelink robot system. The following diagrams present the normalized SA related to the link angular speed (
Figure 4
shows the state, parameter and covariance estimation of the planar onelink robot system. The aSRUKF was used in two configurations: SBaSRUKF utilized SA to monitor the system excitation while aSRUKF did not. After the initialization, the two estimators needed almost the same time to converge to the offline identified values. The parameters estimated using the SBaSRUKF did not diverge while the link position was held. Because two of the estimated parameters using the aSRUKF diverged, this filter needed more time to converge after the stop phase. The two filters were able to estimate the link mass and length parameter during the stop phase. While the viscous and the Coulomb friction coefficients and the inverse inertia estimated with the SBaSRUKF remained constant during the stop phase, the aSRUKF was able to estimate the Coulomb friction with bias (before
The SBaSRUKF was able to estimate the parameters and covariance of the proposed testbed. The online estimations converged to the offline identified values without bias. The sensitivitybased approach used as a system excitation monitor prevented parameter estimation drifts and did not modify the convergence time of the filter.
4.2. Comparison between SBaSRUKF and joint state and parameter SRUKF
The SBaSRUKF and the joint SRUKF were compared on the pendubot for state and parameter estimation. The SBaSRUKF also identified covariances.
The system parameters were identified offline and used for the simulation as
An s-curve profile was selected as the desired position of the first link. The following states and parameters were estimated online:
The values
To simulate model and measurement uncertainties, noise distributions with the following covariance matrices were added to the system for the simulation:
The tuning parameters for the joint SRUKF were chosen as
and the parameters for the SBaSRUKF were set as
The filter initial system states were set to zero, and the initial system parameters were set as the true values multiplied by a random factor between
The first four values of
The state estimation of the pendubot is presented in
Figure 5
. The SBaSRUKF was slightly faster to reach the true system states (cf. diagrams 1 and 4) and after ca.
Figure 6
shows the parameter estimation of the pendubot for both filters. Using the same tuning parameter set, while the SBaSRUKF estimated the
5. Conclusions
In this chapter, some approaches for state, parameter and covariance estimation were discussed. Moreover, a sensitivity-based adaptive square-root unscented Kalman filter (SBaSRUKF) was presented and its performance was analysed. This filter estimates system states, parameters and covariances online. Additionally, sensitivity models were presented and used as a system excitation monitor to prevent parameter and covariance estimation drifts while the system excitation is not sufficient.
Two testbeds based on an axis serial mechanism were modelled and used for testing the performance of the filter. Two experiments were performed in simulation on these testbeds: a state sensitivity analysis and a comparison between the joint state and parameter square-root unscented Kalman filter (SRUKF) and the SBaSRUKF. Simulation results show that the SBaSRUKF outperforms the joint SRUKF with the same tuning configuration. While the joint SRUKF did not estimate two of the five parameters correctly, the SBaSRUKF identified all the parameters. Moreover, the estimation of covariances reduces the parameter estimation bias. Configuring the right excitation thresholds for the system excitation monitor in Eq. (9) prevented parameter estimation drifts while the input was not persistently exciting and, in some cases, even reduced the convergence time of the filter.
References
1. Wielitzka M, Dagen M, Ortmaier T. State estimation of vehicle's lateral dynamics using unscented Kalman filter. In: 53rd IEEE Conference on Decision and Control (CDC); Los Angeles, USA. 2014. pp. 5015-5020. DOI: 10.1109/CDC.2014.7040172
2. Ding SX. Model-based Fault Diagnosis Techniques: Design Schemes, Algorithms, and Tools. 1st ed. Berlin, Germany: Springer; 2008. 493 p.
3. Simon D, Simon DL. Kalman filtering with inequality constraints for turbofan engine health estimation. IEE Proceedings - Control Theory and Applications. 2006;153(3):371-378. DOI: 10.1049/ip-cta:20050074
4. Grewal MS, Andrews AP. Kalman Filtering: Theory and Practice with MATLAB. 4th ed. Hoboken, NJ, USA: Wiley-IEEE Press; 2014. 640 p.
5. Sorenson HW. Least-squares estimation: From Gauss to Kalman. IEEE Spectrum. 1970;7(7):63-68. DOI: 10.1109/MSPEC.1970.5213471
6. Kalman RE. A new approach to linear filtering and prediction problems. Journal of Basic Engineering. 1960;82(1):35-45. DOI: 10.1115/1.3662552
7. Anderson BDO, Moore JB. Optimal Filtering. 1st ed. Mineola, NY, USA: Dover Publications; 2006. 368 p.
8. Schmidt SF. Applications of state space methods to navigation problems. In: Leondes CT, editor. Advances in Control Systems. Vol. 3. 1966. pp. 293-340
9. Simon D. Optimal State Estimation: Kalman, H Infinity, and Nonlinear Approaches. 1st ed. Hoboken, NJ, USA: Wiley-Interscience; 2006. 526 p.
10. Julier SJ, Uhlmann JK. New extension of the Kalman filter to nonlinear systems. In: Signal Processing, Sensor Fusion, and Target Recognition VI. Orlando, FL, USA: International Society for Optics and Photonics; 1997. pp. 182-194. DOI: 10.1117/12.280797
11. Ljung L, Söderström T. Theory and Practice of Recursive Identification. 1st ed. Cambridge, MA, USA: The MIT Press; 1983. 551 p.
12. Ljung L. System Identification: Theory for the User. 2nd ed. Upper Saddle River, NJ, USA: Prentice Hall; 1999. 672 p.
13. Saab SS, Nasr GE. Sensitivity of discrete-time Kalman filter to statistical modeling errors. Optimal Control Applications and Methods. 1999;20(5):249-259. DOI: 10.1002/(SICI)1099-1514(199909/10)20:5<249::AID-OCA659>3.0.CO;2-2
14. Fitzgerald R. Divergence of the Kalman filter. IEEE Transactions on Automatic Control. 1971;16(6):736-747. DOI: 10.1109/TAC.1971.1099836
15. Price C. An analysis of the divergence problem in the Kalman filter. IEEE Transactions on Automatic Control. 1968;13(6):699-702. DOI: 10.1109/TAC.1968.1099031
16. Ljung L. Asymptotic behavior of the extended Kalman filter as a parameter estimator for linear systems. IEEE Transactions on Automatic Control. 1979;24(1):36-50. DOI: 10.1109/TAC.1979.1101943
17. Odelson B. Estimating Disturbance Covariances from Data for Improved Control Performance [dissertation]. Madison, WI, USA: University of Wisconsin-Madison; 2003. 309 p.
18. Odelson BJ, Rajamani MR, Rawlings JB. A new autocovariance least-squares method for estimating noise covariances. Automatica. 2006;42(6):303-308. DOI: 10.1016/j.automatica.2005.09.006
19. Rajamani MR, Rawlings JB. Estimation of the disturbance structure from data using semidefinite programming and optimal weighting. Automatica. 2009;45(1):142-148. DOI: 10.1016/j.automatica.2008.05.032
20. Riva MH, Díaz Díaz J, Dagen M, Ortmaier T. Estimation of covariances for Kalman filter tuning using autocovariance method with Landweber iteration. In: IASTED International Symposium on Intelligent Systems and Control; Marina del Rey, CA, USA. 2013. DOI: 10.2316/P.2013.807-009
21. Rajamani MR. Data-based Techniques to Improve State Estimation in Model Predictive Control [dissertation]. Madison, WI, USA: University of Wisconsin-Madison; 2007. 262 p.
22. Riva MH, Beckmann D, Dagen M, Ortmaier T. Online parameter and process covariance estimation using adaptive EKF and SR-CuKF approaches. In: 2015 IEEE Conference on Control Applications (CCA); Sydney, Australia. 2015. pp. 1203-1210. DOI: 10.1109/CCA.2015.7320776
23. Han J, Song Q, He Y. Adaptive unscented Kalman filter and its applications in nonlinear control. In: Moreno VM, Pigazo A, editors. Kalman Filter: Recent Advances and Applications. Croatia: InTech; 2009. pp. 1-24. DOI: 10.5772/6799
24. Wan EA, Van der Merwe R. The unscented Kalman filter for nonlinear estimation. In: IEEE 2000 Adaptive Systems for Signal Processing, Communications, and Control Symposium; Lake Louise, Canada. 2000. pp. 153-158. DOI: 10.1109/ASSPCC.2000.882463
25. Van der Merwe R, Wan EA. The square-root unscented Kalman filter for state and parameter estimation. In: IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '01); Salt Lake City, USA. 2001. pp. 3461-3464. DOI: 10.1109/ICASSP.2001.940586
26. Press WH, Flannery BP, Teukolsky SA, Vetterling WT. Numerical Recipes in C: The Art of Scientific Computing. 2nd ed. Cambridge, New York, USA: Cambridge University Press; 1992. 994 p.
27. Henderson HV, Searle SR. The vec-permutation matrix, the vec operator and Kronecker products: A review. Linear and Multilinear Algebra. 1981;9(4):271-288. DOI: 10.1080/03081088108817379
28. Riva MH, Dagen M, Ortmaier T. Adaptive unscented Kalman filter for online state, parameter, and process covariance estimation. In: 2016 American Control Conference (ACC); Boston, MA, USA. 2016. pp. 4513-4519. DOI: 10.1109/ACC.2016.7526063
29. Eykhoff P. System Identification: Parameter and State Estimation. 1st ed. Hoboken, NJ, USA: Wiley-Interscience; 1974. 555 p.
30. Riva MH, Dagen M, Ortmaier T. Comparison of covariance estimation using autocovariance LS method and adaptive SR-UKF. In: 2017 American Control Conference (ACC); Seattle, WA, USA. 2017. pp. 5780-5786. DOI: 10.23919/ACC.2017.7963856
Notes
The hat (^) over a vector represents the estimate of the vector; for example, x̂ describes the estimate of the state vector x.
Matlab does not allow the use of cholupdate in real-time applications; using coder.extrinsic('cholupdate'), it is possible to use the function in Simulink, but the application does not run in real time. The cholupdate line can be replaced with chol(S_xx,k^-T S_xx,k^- + w_0^c (X*_0,k − x̂_k^-)(X*_0,k − x̂_k^-)^T).
The function diag(v) transforms the vector v ∈ R^n into an n × n square matrix with the elements of v on the diagonal of the matrix.