Open access peer-reviewed chapter

Sensitivity-Based Adaptive SRUKF for State, Parameter, and Covariance Estimation on Mechatronic Systems

By Mauro Hernán Riva, Mark Wielitzka and Tobias Ortmaier

Submitted: May 10th 2017. Reviewed: November 14th 2017. Published: December 20th 2017.

DOI: 10.5772/intechopen.72470


Abstract

Since the initial developments in state-space theory in the 1950s and 1960s, state estimation has become an extensively researched and applied discipline. All systems that can be modelled mathematically are candidates for state estimators. State estimators reconstruct the states, which represent the internal conditions and status of a system at a specific instant of time, using a mathematical model and the information received from the system sensors. Moreover, the estimator can be extended for system parameter estimation. The resulting Kalman filter (KF) derivatives for state and parameter estimation also require knowledge about the noise statistics of the measurements and the uncertainties of the system model. These are often unknown, and an inaccurate parameterization may lead to decreased filter performance or even divergence. Additionally, insufficient system excitation can cause parameter estimation drifts. In this chapter, a sensitivity-based adaptive square-root unscented KF (SRUKF) is presented. This filter combines an SRUKF and the recursive prediction-error method to estimate system states, parameters and covariances online. Moreover, local sensitivity analysis is performed to prevent parameter estimation drifts while the system is not sufficiently excited. The filter is evaluated on two testbeds based on a one-axis serial mechanism and compared with the joint state and parameter UKF.

Keywords

  • unscented Kalman filter
  • recursive prediction-error method
  • state estimation
  • parameter estimation
  • covariance estimation
  • sensitivity analysis

1. Introduction

State estimation is applicable to almost all areas of engineering and science. It is interesting to engineers for different reasons, such as controlling a system with a state-feedback controller or monitoring system states that are not measurable with sensors, either because suitable sensors are too expensive or too difficult to install. The system states can be defined as variables that provide a representation of the internal conditions and status of a system at a specific instant of time. Applications that include a mathematical model of any system are candidates for state estimation. The estimates can be useful, for example, in car assistance systems [1], predictive maintenance [2], structure health estimation [3], and many other applications (see [4] and references therein).

Different algorithms have been proposed for online state estimation. A historical survey of the filtering algorithms can be found in [5]. The Kalman filter (KF) was presented in [6] and is nowadays the most widely applied algorithm for state estimation on linear systems. The KF is a linear optimal estimator [7]. This means that the KF is the best filter that uses a linear combination of the system measurements and states in order to estimate the latter. The main operation of the KF is the propagation of the mean and covariance of the (Gaussian) random variables (RVs) through time. The KF assumes that the model and the noise statistics affecting the system are known. Otherwise, the estimates can degrade.

Different derivatives of the KF have been developed for nonlinear systems during the last decades. The extended Kalman filter (EKF) presented in [8] is the most commonly used estimator for nonlinear systems. This filter linearizes the system and measurement equations at the current estimate. This may lead to poor performance for highly nonlinear or highly noisy systems [9]. To address the linearization errors of the EKF, the unscented Kalman filter (UKF) was presented in [10]. This filter uses the unscented transformation (UT) to pick a minimal set of points around the mean of the Gaussian random variable (GRV). These points capture the true mean and covariance of the GRV, and they are then propagated through the true nonlinear function, capturing the a posteriori mean and covariance more accurately.

Mathematical models usually describe the behaviour of systems, and generally both the structure and the parameters need to be determined. Once the structure is defined, system inputs and measurements can be used to identify the model parameters. This can be performed offline [11, 12]. However, the parameters may vary during operation. In order to monitor these variations online, the nonlinear extensions of the KF can be extended for parameter estimation [9].

The KF derivatives can only achieve good performance under a priori assumptions, for example, accurate system models, knowledge of the noise statistics, and proper initial conditions [7, 9, 13]. If one of these assumptions does not hold, the KF derivative can potentially become unstable and the estimates can diverge [14, 15, 16]. Moreover, tuning the performance of these filters primarily implies adjusting the process and measurement noise covariances to match the (unknown) real-system noise statistics. In recent decades, numerous methods have been presented to estimate these unknown covariances. The autocovariance least-squares method was presented in [17, 18]; it was extended (and simplified) in [19], and diagonal process and noise covariances were considered in [20]. This method estimates the noise covariances using least squares and can only be used with the KF. The method was extended to nonlinear or time-varying systems using an EKF in [21]. Online covariance estimation for the EKF and the square-root cubature Kalman filter (SRCuKF) was presented in [22]. These methods implement a combination of a KF derivative and a recursive prediction-error method (RPEM) to estimate covariances online. In [23], an adaptive UKF was presented to estimate only covariances online.

In this chapter, a sensitivity-based adaptive square-root unscented Kalman filter (SB-aSRUKF) is presented. This filter estimates system states, parameters and covariances online. Using local state sensitivity models (SMs), this filter prevents parameter and covariance estimation drifts, while the system is not sufficiently excited. Sensitivity analysis (SA) for the UKF is also presented. The performance of this filter is validated in simulations on two testbeds and compared with the joint UKF for parameter and state estimation.

Section 2 covers some algorithms for recursive estimation of states, parameters, and covariances. The SB-aSRUKF is the main topic of this chapter. This filter uses a KF derivative for state estimation. In Section 2.1, the KF for state estimation in linear dynamic systems is presented. The UKF, a nonlinear extension of the KF, is described in Section 2.2 and also extended for estimating system parameters. Section 2.3 covers parameter estimation using the RPEM. The UKF and the RPEM are combined in Section 2.4 to obtain the aSRUKF. In order to identify unknown parameters, the system inputs should be persistently exciting. Sensitivity models (SMs) are presented in this section and are used to evaluate the system excitation and prevent parameter estimation drifts while the system is not sufficiently excited.

Section 3 covers the testbeds used for the filter evaluations. A planar one-link robot system is described in Section 3.1, and a pendulum robot (pendubot) is mathematically modelled in Section 3.2. The first testbed is used for the SM analysis, and the chaotic system is used to compare the filter performance with the joint SRUKF. The evaluation results of the SB-aSRUKF are presented in Section 4. The SMs are analysed with different system inputs on the first testbed in Section 4.1, and the filter performance for state and parameter estimation is compared with the joint SRUKF in Section 4.2. Section 5 completes the chapter with conclusions.

2. Recursive estimation

This section discusses some recursive approaches to estimate states, parameters and covariances of a general system. The KF as the optimal linear estimator for linear dynamic systems is presented. Nonlinear extensions of the KF are discussed, as well as an extension for parameter estimation. A recursive Gauss-Newton method for parameter estimation is also presented in this section. Finally, the last subsection discusses the SB-aSRUKF, which is the main topic of this chapter, and the SMs that are used for excitation monitoring.

2.1. Kalman filter (KF)

The KF is the most widely applied algorithm for state estimation on linear dynamic systems that are corrupted by stochastic noises (e.g. Gaussian noise). It uses a parametric mathematical model of the system and a series of (noisy) measurements from, for example, sensors to estimate the system states online [4]. In general, the state distribution of a system can be approximated by random variables (RVs). The main operation of the KF is the propagation of the mean and covariance of these (Gaussian) RVs through time. The KF is an optimal linear filter for these types of systems [7, 9]. It is a recursive algorithm, which enables new measurements to be processed as they arrive to correct and update the state and measurement estimates.

In general, a linear discrete-time system corrupted by additive noises can be written as follows:

$$x_k = A\,x_{k-1} + B\,u_{k-1} + w_k,\qquad y_k = C\,x_k + D\,u_k + v_k,\tag{1}$$

where $x_k \in \mathbb{R}^{n_x}$ is the system state vector at discrete time $k$, and $u_k \in \mathbb{R}^{n_u}$ and $y_k \in \mathbb{R}^{n_y}$ correspond to the system input and measurement vectors, respectively. The matrices $A \in \mathbb{R}^{n_x \times n_x}$, $B \in \mathbb{R}^{n_x \times n_u}$, $C \in \mathbb{R}^{n_y \times n_x}$ and $D \in \mathbb{R}^{n_y \times n_u}$ are often called the system, input, output and feedforward matrices, respectively, and describe the system behaviour. The random variable vectors $w_k$ and $v_k$ represent the process and measurement noises. These are considered white, Gaussian, zero-mean and uncorrelated, with covariance matrices $Q_k$ and $R_k$, respectively, as

$$w_k \sim \mathcal{N}(0, Q_k),\qquad v_k \sim \mathcal{N}(0, R_k).\tag{2}$$

The KF's iterative nature can be separated into two main steps: the process update and the correction step. In the process update, based on the knowledge of the system dynamics, the state estimate $\hat{x}_{k-1}^{+}$ from the previous time step $(k-1)$ is used to calculate a new estimate at the current time $k$. This step does not include any information from the system measurements, and the resulting state estimate is called the a priori estimate $\hat{x}_k^{-}$. In the correction step, the a priori estimate is combined with the current system measurement $y_k$ to improve the state estimate. This estimate is called the a posteriori state estimate $\hat{x}_k^{+}$. The vectors $\hat{x}_k^{-}$ and $\hat{x}_k^{+}$ both estimate the same quantity, but the difference between them is that the latter takes the measurement $y_k$ into account. A Kalman gain matrix $K_k$ is calculated at every discrete step and weights the influence of the model and the measurements on the current state estimate. This gain is calculated using the system matrices and the process ($Q_k$) and measurement ($R_k$) covariances. More information about the KF equations and generalizations can be found in [4, 7, 9].
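The predict/correct cycle described above maps almost directly onto a few lines of NumPy. The following is an illustrative sketch of the textbook KF equations for the linear model of Eq. (1), not code from the chapter:

```python
import numpy as np

def kf_step(x, P, u, y, A, B, C, D, Q, R):
    """One Kalman filter iteration: process update followed by correction."""
    # Process update: propagate mean and covariance through the linear model.
    x_prior = A @ x + B @ u
    P_prior = A @ P @ A.T + Q
    # Correction: weight model vs. measurement with the Kalman gain.
    S = C @ P_prior @ C.T + R             # innovation covariance
    K = P_prior @ C.T @ np.linalg.inv(S)  # Kalman gain
    e = y - (C @ x_prior + D @ u)         # innovation (measurement residual)
    x_post = x_prior + K @ e
    P_post = (np.eye(len(x)) - K @ C) @ P_prior
    return x_post, P_post
```

After the correction, the trace of the covariance shrinks relative to the a priori value, reflecting the information gained from the measurement.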

The KF is a linear optimal estimator, but it assumes that the system model and noise statistics are known. Otherwise, the filter estimates can degrade. Tuning the performance of the filter implies primarily adjusting the process and measurement covariance matrices to match the (unknown) real-system noise statistics. In practical implementations of the KF, the filter tuning is performed online, and empirical values are normally used. Extensive research has been done in this field to estimate the noise covariances from data (see [17, 18, 19, 20] and references therein).

As mentioned before, the KF is the optimal linear estimator, which estimates states of a linear dynamic system using the inputs, measurements and a parametric mathematical model of the system. Even though many systems are close enough to linear and linear estimators give acceptable results, all systems are ultimately nonlinear. Extensions of the KF have been presented in the last decades to deal with nonlinear systems. Some examples are the EKF and the sigma-point Kalman filters (SPKFs).

2.2. Nonlinear filtering

The EKF and the UKF (an SPKF type) are derivatives of the KF for nonlinear systems. The EKF was originally proposed in [8] and is the most commonly applied state estimator for nonlinear systems. However, if the system nonlinearities are severe or the noises affecting the system are high, the EKF can be difficult to tune, often gives poor estimates and can easily diverge. This is because the EKF uses linearized system and measurement models at the current estimate and propagates the mean and covariance of the GRVs through these linearizations. The UKF was presented in [10] and addresses the deficiencies of the EKF linearization by providing a direct and explicit mechanism for approximating and transforming the mean and covariance of the GRVs.

In general, a discrete-time state-space model of a nonlinear system can be described by

$$x_k = f(x_{k-1}, \theta_{k-1}, u_{k-1}) + w_{k-1},\qquad y_k = h(x_k, \theta_k, u_k) + v_k,\tag{3}$$

where $\theta_k \in \mathbb{R}^{n_p}$ is the (unknown) parameter vector and $f$ and $h$ are arbitrary vector-valued functions, usually called the system and measurement functions. As a KF derivative, the UKF aims to minimize the covariance of the state estimation error in order to find an optimal estimate of the true dynamic probability density function (pdf) of the state. The main component of this filter is the UT. This transformation uses a set of appropriately selected weighted points to parameterize the mean and covariance of the pdf. The UKF is likewise characterized by two steps. In the process update, the sigma points are calculated and then propagated through the nonlinear system functions to recover the mean and covariance of the new a priori estimates. The estimated measurement $\hat{y}_k$ is calculated in the correction step and, together with the actual measurement, is used to correct the a priori estimate. This results in the a posteriori state estimate. While the UKF matches the true mean of $x_k$ correctly up to the third order, the EKF only matches it up to the first order. Both filters approximate the true covariance of $x_k$ up to the third order. However, the UKF correctly approximates the sign of the terms of fourth power and higher, meaning that the resulting error should be smaller [7, 9].
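As an illustration of the UT just described, the following NumPy sketch generates the sigma points, propagates them through a nonlinearity and recovers the transformed mean and covariance. It uses the standard scaled-UT weights and is not the chapter's implementation:

```python
import numpy as np

def unscented_transform(f, x_mean, P, alpha=1e-1, beta=2.0, kappa=0.0):
    """Propagate a Gaussian (x_mean, P) through a nonlinearity f via the UT."""
    n = len(x_mean)
    lam = alpha**2 * (n + kappa) - n
    # 2n+1 sigma points: the mean plus symmetric spreads along sqrt(P) columns.
    S = np.linalg.cholesky((n + lam) * P)
    sigmas = np.column_stack([x_mean]
                             + [x_mean + S[:, i] for i in range(n)]
                             + [x_mean - S[:, i] for i in range(n)])
    wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = wm[0] + 1 - alpha**2 + beta
    # Propagate each point through f and recover mean and covariance.
    Y = np.column_stack([f(sigmas[:, i]) for i in range(2 * n + 1)])
    y_mean = Y @ wm
    dY = Y - y_mean[:, None]
    P_y = dY @ np.diag(wc) @ dY.T
    return y_mean, P_y
```

For a linear function the UT is exact, which makes a convenient sanity check: propagating $(\bar{x}, P)$ through $f(x) = 2x$ must return $(2\bar{x}, 4P)$.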

The nonlinear extensions of the KF can also estimate the unknown parameters of a system. The UKF was extended for joint state and parameter estimation in [24]. In this case, the system state vector $x_k$ was extended by including the unknown parameters $\theta_k$ to obtain a joint state and parameter vector as

$$\tilde{x}_k = \begin{bmatrix} x_k \\ \theta_k \end{bmatrix},\tag{4}$$

with $\theta_k = \theta_{k-1}$ during the process update.

Square-root (SR) filtering increases the numerical precision of the KF when sufficient hardware precision is not available. In [25], an SR version of the UKF was presented, which uses linear algebra techniques such as the QR decomposition and the Cholesky factorization [26] to calculate the SR of the estimation error covariance. The SR form improves the numerical stability of the filter and guarantees positive semi-definiteness of this covariance. Additionally, the computational complexity for state and parameter estimation is reduced [25].
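The rank-one Cholesky update/downdate that SR filters rely on is not part of NumPy. A minimal sketch of the standard algorithm, for an upper-triangular factor $S$ with $S^T S = P$, could look as follows (illustrative, not the chapter's implementation):

```python
import numpy as np

def cholupdate(S, v, sign=+1):
    """Rank-one update (sign=+1) or downdate (sign=-1) of an upper-triangular
    Cholesky factor S, so the result R satisfies R.T @ R = S.T @ S + sign * v v.T."""
    S = S.copy()
    v = v.copy()
    n = len(v)
    for i in range(n):
        r = np.sqrt(S[i, i]**2 + sign * v[i]**2)
        c, s = r / S[i, i], v[i] / S[i, i]
        S[i, i] = r
        if i + 1 < n:
            # Rotate the remainder of the row, then fold it back into v.
            S[i, i+1:] = (S[i, i+1:] + sign * s * v[i+1:]) / c
            v[i+1:] = c * v[i+1:] - s * S[i, i+1:]
    return S
```

An update followed by a downdate with the same vector recovers the original matrix, which is exactly the property the SRUKF exploits to maintain the covariance factor without ever forming the full covariance.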

2.3. Recursive prediction-error method

In this section, the recursive prediction-error method (RPEM) is briefly discussed. This method is extensively analysed in [11, 12] and uses a parameterized predictor that estimates the system outputs at the current time step. The resulting predicted system output is then compared to the actual system measurement, and the predictor parameters are corrected such that the prediction error is minimized.

The quadratic criterion function defined as

$$V_k(\theta_k) = \tfrac{1}{2}\, e_k^{T}(\theta_k)\, \Lambda^{-1} e_k(\theta_k),\tag{5}$$

is minimized using the stochastic Gauss-Newton method in order to obtain the predictor parameters. The prediction error $e_k(\theta_k)$ at discrete time $k$ is described as

$$e_k(\theta_k) = y_k - \hat{y}_k(\theta_k),\tag{6}$$

where $y_k$ corresponds to the actual system measurement, $\hat{y}_k(\theta_k)$ refers to the parameterized predictor output using the parameter set $\theta_k$, and $\Lambda$ is a user-defined weight factor.

The recursive solution that minimizes the quadratic criterion function in Eq. (5) is given by the following scheme:

$$\begin{aligned}
\Delta_k &= \Delta_{k-1} + \tfrac{1}{\lambda}\left(e_k e_k^{T} - \Delta_{k-1}\right),\\
S_k &= \lambda \Delta_k + \frac{d\hat{y}_k}{d\hat{\theta}_{k-1}}\, \Theta_{k-1} \left(\frac{d\hat{y}_k}{d\hat{\theta}_{k-1}}\right)^{T},\\
L_k &= \Theta_{k-1} \left(\frac{d\hat{y}_k}{d\hat{\theta}_{k-1}}\right)^{T} S_k^{-1},\\
\Theta_k &= \left(I_{n_\theta} - L_k \frac{d\hat{y}_k}{d\hat{\theta}_{k-1}}\right) \Theta_{k-1} \left(I_{n_\theta} - L_k \frac{d\hat{y}_k}{d\hat{\theta}_{k-1}}\right)^{T} \big/\, \lambda + L_k \Delta_k L_k^{T},\\
\hat{\theta}_k &= \hat{\theta}_{k-1} + L_k e_k.
\end{aligned}\tag{7}$$

The user-defined parameter $0 < \lambda \le 1$ is often called the forgetting factor. The matrix $\Delta_k$ is calculated using the prediction error. This matrix is used to calculate $S_k$, where the derivative of the output w.r.t. the unknown parameter vector, $\nabla\hat{y}_k = d\hat{y}_k / d\hat{\theta}_{k-1}$, appears. The gain vector $L_k$ is multiplied by the innovation error to then update the parameter estimate. It should be noted that, besides the matrix $\nabla\hat{y}_k$, all parameters, vectors and matrices are defined after an initialization. The matrix $\nabla\hat{y}_k$ can be calculated by modifying a KF derivative.

The selection of the forgetting factor essentially defines which measurements are relevant for the current estimate of the parameter predictor. The most common choice is a constant forgetting factor for systems that change gradually. Other criteria for the selection of this factor and the convergence of the RPEM are discussed extensively in [11, 12].
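The recursion of Eq. (7) maps almost line by line onto code. The following NumPy sketch performs one illustrative RPEM step; the argument `J` stands for the output sensitivity $d\hat{y}_k/d\hat{\theta}_{k-1}$ and is assumed to be supplied by the caller:

```python
import numpy as np

def rpem_step(theta, Theta, Delta, e, J, lam=0.99):
    """One recursive prediction-error (stochastic Gauss-Newton) update, Eq. (7).
    theta: parameter estimate, Theta: parameter covariance,
    Delta: innovation-covariance estimate, e: prediction error y - y_hat,
    J: output sensitivity d y_hat / d theta, shape (ny, ntheta)."""
    ntheta = len(theta)
    Delta = Delta + (np.outer(e, e) - Delta) / lam
    S = lam * Delta + J @ Theta @ J.T
    L = Theta @ J.T @ np.linalg.inv(S)
    A = np.eye(ntheta) - L @ J
    Theta = A @ Theta @ A.T / lam + L @ Delta @ L.T
    theta = theta + L @ e
    return theta, Theta, Delta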

2.4. Sensitivity-based adaptive square-root unscented Kalman filter

This is the main section of this chapter. The earlier sections provide the methods needed here, and the later sections analyse and test the performance of the filter described in this section.

The aSRUKF is discussed in this section. This filter combines the SRUKF and the RPEM. While the KF derivative estimates the system states and measurements, the RPEM calculates the unknown parameters and covariances.

In this filter, the innovation error in Eq. (6) is calculated and minimized using the recursive scheme presented in Eq. (7) in order to estimate the unknown system parameters and covariances. Besides the matrix $\nabla\hat{y}_k$, all parameters, vectors and matrices of the recursive scheme are defined. The derivative of the estimated measurement $\hat{y}_k$ w.r.t. the vector $\hat{\theta}_{k-1}$ containing the unknown values of parameters and covariances needs to be calculated. This matrix is also called the output sensitivity and describes the influence of a variation of a parameter on the system output. The output sensitivity can be obtained using a KF derivative.

The equations of a SRUKF are then extended in order to calculate the output sensitivity. To simplify the reading flow, the following definitions are presented:

$$\begin{aligned}
&w_i^{m} = w_i^{c} = \frac{1}{2(n_x + \lambda_f)},\quad i = 1, \ldots, m = 2 n_x,\qquad \lambda_f = \alpha^2 (n_x + \kappa) - n_x,\\
&w_0^{m} = \frac{\lambda_f}{n_x + \lambda_f},\qquad w_0^{c} = w_0^{m} + 1 - \alpha^2 + \beta,\qquad \eta = \sqrt{n_x + \lambda_f},\\
&m_1 = 1_{1,n_x},\qquad m_2 = 1_{1,l},\qquad l = 2 n_x + 1,\\
&w_c = \begin{bmatrix} w_0^{c} & \cdots & w_m^{c} \end{bmatrix},\qquad w_m = \begin{bmatrix} w_0^{m} & \cdots & w_m^{m} \end{bmatrix},\qquad W_k^{rc} = w_c \otimes m_1^{T},
\end{aligned}\tag{8}$$

where $w_i^{m,c}$ are sets of scalar weights, $\alpha$ determines the spread of the sigma points around the estimated state $\hat{x}_k$, $\beta$ incorporates information about the noise distribution (e.g. $\beta = 2$ assumes that the system is affected by Gaussian noise), and $\kappa$ is a scaling factor, which can be used to reduce the higher-order errors of the mean and covariance approximations [9]. The Kronecker product [27] is denoted by $\otimes$.

The process update step of the aSRUKF is presented in Table 1. After the filter initialization, the sigma points $\mathcal{X}_{k-1}$ that describe the pdf of the state estimate are calculated using the UT. At the same time, the sigma-point derivatives $\Phi_{k-1}$ are also obtained. The sigma points and their derivatives are propagated through the system function and the system derivative function, respectively, to obtain the a priori state estimate $\hat{x}_k^{-}$ and the a priori state sensitivity $\hat{X}_k^{-}$. Considering additive process noise, the SR factor of the estimation error covariance $S_{xx,k}^{-}$ is calculated using the QR decomposition (qr) and the rank-one update to the Cholesky factorization (cholupdate), in which the signum function (sign) is used to determine the sign of the first weight factor. If this weight factor is negative, a Cholesky downdate takes place; otherwise, a Cholesky update occurs. The next step calculates the derivative of the SR factor of the estimation error covariance $\nabla S_{xx,k}^{-}$. In this step, the function treshapeM is used. This function converts a vector of dimension $(n_x(n_x+1)/2)\,n_\theta \times 1$ into an $n_x \times n_x n_\theta$ matrix with $n_\theta$ lower-triangular blocks of size $n_x \times n_x$. Additionally, the operator $(\cdot)^{s}$ is utilized to stack the matrix columns to form a column vector. Further information about this step can be found in [28].

aSRUKF, initialization:
$$\hat{x}_0 = x_{init} \in \mathbb{R}^{n_x},\quad \hat{\theta}_0 = \theta_{init} \in \mathbb{R}^{n_\theta},\quad \Theta_0 = \Theta_{init} \in \mathbb{R}^{n_\theta \times n_\theta},\quad \Delta_0 = 0 \in \mathbb{R}^{n_y \times n_y},$$
$$P_{xx,0} = S_{xx,0} S_{xx,0}^{T} = P_{xx,init} \in \mathbb{R}^{n_x \times n_x},$$
$$\nabla P_{xx,0} = \nabla S_{xx,0} = 0 \in \mathbb{R}^{n_x \times n_x n_\theta},\qquad \nabla P_{yy,0} = \nabla S_{yy,0} = 0 \in \mathbb{R}^{n_y \times n_y n_\theta},$$
with the derivative shorthand $\nabla S_{xx,k} = dS_{xx,k}/d\theta_k$, $\nabla P_{xx,k} = dP_{xx,k}/d\theta_k$, $\nabla S_{yy,k} = dS_{yy,k}/d\theta_k$ and $\nabla P_{yy,k} = dP_{yy,k}/d\theta_k$.

SRUKF, sigma points:
$$\mathcal{X}_{k-1} = \begin{bmatrix} \mathcal{X}_{k-1,1} & \cdots & \mathcal{X}_{k-1,l} \end{bmatrix} = \begin{bmatrix} \hat{x}_{k-1} & \hat{x}_{k-1} \otimes m_1 + \eta\, S_{xx,k-1} & \hat{x}_{k-1} \otimes m_1 - \eta\, S_{xx,k-1} \end{bmatrix}$$

Sigma-point propagation:
$$\mathcal{X}_k^{-} = f(\mathcal{X}_{k-1}, \hat{\theta}_{k-1}, u_{k-1})$$

SM, sigma-point derivatives:
$$\Phi_{k-1,j} = \begin{bmatrix} \Phi_{k-1,1,j} & \cdots & \Phi_{k-1,l,j} \end{bmatrix} = \begin{bmatrix} \hat{X}_{k-1,j} & \hat{X}_{k-1,j} \otimes m_1 + \eta\, \nabla S_{xx,k-1,j} & \hat{X}_{k-1,j} \otimes m_1 - \eta\, \nabla S_{xx,k-1,j} \end{bmatrix},$$
with
$$\hat{X}_{k-1} = \frac{d\hat{x}_{k-1}}{d\hat{\theta}_{k-1}}$$

Sensitivity propagation:
$$\Phi_{k,j,i}^{-} = \frac{\partial f}{\partial x_k}(\mathcal{X}_{k-1}, \hat{\theta}_{k-1})\, \Phi_{k-1,j,i} + \frac{\partial f}{\partial \theta_{k,j}}(\mathcal{X}_{k-1}, \hat{\theta}_{k-1})$$

SRUKF, a priori state estimate:
$$\hat{x}_k^{-} = \mathcal{X}_k^{-}\, w_m^{T}$$

SR estimation error covariance:
$$S_{xx,k}^{-} = \mathrm{qr}\left\{\begin{bmatrix} \sqrt{w_1^{c}}\,\big(\mathcal{X}_{1:2n_x,k}^{-} - \hat{x}_k^{-}\big) & S_{Q,k} \end{bmatrix}\right\},$$
$$S_{xx,k}^{-} = \mathrm{cholupdate}\left\{S_{xx,k}^{-},\ \sqrt{|w_0^{c}|}\,\big(\mathcal{X}_{0,k}^{-} - \hat{x}_k^{-}\big),\ \mathrm{sign}(w_0^{c})\right\}$$

SM, a priori state sensitivity:
$$\hat{X}_k^{-} = \Phi_k^{-}\, \big(I_{n_\theta} \otimes w_m^{T}\big)$$

Derivative of the estimation error covariance:
$$\nabla S_{xx,k}^{-} = \mathrm{treshapeM}\left\{\big(I_{n_\theta} \otimes A(l_s, S_{xx,k}^{-})\big)^{-1}\big(\nabla P_{xx,k}^{-}\big)^{s}\right\},$$
with
$$A(l_s, S_{xx,k}^{-}) = \big(S_{xx,k}^{-} \otimes I_{n_x} + (I_{n_x} \otimes S_{xx,k}^{-})\, N_{n_x}\big)\, E_{n_x}$$
and
$$\nabla P_{xx,k}^{-} = \big(\Phi_k^{-} - \hat{X}_k^{-} \otimes m_2\big)\Big(I_{n_\theta} \otimes \big(W_k^{rc} \circ (\mathcal{X}_k^{-} - \hat{x}_k^{-} \otimes m_2)\big)^{T}\Big) + \big(W_k^{rc} \circ (\mathcal{X}_k^{-} - \hat{x}_k^{-} \otimes m_2)\big)\, T_l\, \Big(I_{n_\theta} \otimes \big(\Phi_k^{-} - \hat{X}_k^{-} \otimes m_2\big)^{T}\Big) + \frac{dQ_k}{d\theta_k}(\hat{\theta}_{k-1}),$$
in which $T_l = \begin{bmatrix} I & 0 \end{bmatrix}^{T} \begin{bmatrix} T^{1} & \cdots & T^{n_\theta} \end{bmatrix}$ and $T = \begin{bmatrix} 0 & 1_{l \times l} \\ 0_{n_\theta l \times l} & I_l \end{bmatrix}$.
Further details on the construction of the matrix $N_{n_x}$ and the elimination matrix $E_{n_x}$ can be found in [28].

Table 1.

aSRUKF: filter initialization, sigma-point calculation and filter process update step. The Kronecker product is denoted by $\otimes$ and the element-wise (Hadamard) product by $\circ$. The $(\cdot)^{s}$ operator stacks the matrix columns to form a column vector.

The correction step is shown in Table 2. A new set of sigma points $\mathcal{X}_k$ and their derivatives $\Phi_k$ can be generated using steps (a) and (b) from Table 2. If the nonlinearity is not severe, these steps can be skipped; this saves computational effort but sacrifices filter performance. The (new) sigma points and their derivatives are then propagated through the measurement function and its derivative, respectively. The resulting points are used to calculate the estimated measurements $\mathcal{Y}_k$ as well as the output sensitivity $\nabla\hat{y}_k$. These are used within the RPEM to estimate the system parameters and covariances.

SRUKF, sigma points:
$$\mathcal{X}_k = \begin{bmatrix} \mathcal{X}_{k,1} & \cdots & \mathcal{X}_{k,l} \end{bmatrix} = \begin{bmatrix} \hat{x}_k^{-} & \hat{x}_k^{-} \otimes m_1 + \eta\, S_{xx,k}^{-} & \hat{x}_k^{-} \otimes m_1 - \eta\, S_{xx,k}^{-} \end{bmatrix}\qquad\text{(a)}$$

Output sigma points:
$$\mathcal{Y}_k = h(\mathcal{X}_k, \hat{\theta}_{k-1}, u_k)$$

Estimated measurement:
$$\hat{y}_k = \mathcal{Y}_k\, w_m^{T}$$

SM, sigma-point derivatives:
$$\Phi_{k,j} = \begin{bmatrix} \Phi_{k,1,j} & \cdots & \Phi_{k,l,j} \end{bmatrix} = \begin{bmatrix} \hat{X}_{k,j}^{-} & \hat{X}_{k,j}^{-} \otimes m_1 + \eta\, \nabla S_{xx,k,j}^{-} & \hat{X}_{k,j}^{-} \otimes m_1 - \eta\, \nabla S_{xx,k,j}^{-} \end{bmatrix}\qquad\text{(b)}$$

Output sigma-point derivatives:
$$\Psi_{k,j,i} = \frac{\partial h}{\partial x_k}(\mathcal{X}_k, \hat{\theta}_{k-1})\, \Phi_{k,j,i} + \frac{\partial h}{\partial \theta_{k,j}}(\mathcal{X}_k, \hat{\theta}_{k-1})$$

Output sensitivity:
$$\nabla\hat{y}_k = \Psi_k\, \big(I_{n_\theta} \otimes w_m^{T}\big),\qquad \Psi_k = \begin{bmatrix} \Psi_{k,1} & \cdots & \Psi_{k,n_\theta} \end{bmatrix}\qquad\text{(c)}$$

SRUKF, SR innovation error covariance:
$$S_{yy,k} = \mathrm{qr}\left\{\begin{bmatrix} \sqrt{w_1^{c}}\,\big(\mathcal{Y}_{1:2n_x,k} - \hat{y}_k\big) & S_{R,k} \end{bmatrix}\right\},$$
$$S_{yy,k} = \mathrm{cholupdate}\left\{S_{yy,k},\ \sqrt{|w_0^{c}|}\,\big(\mathcal{Y}_{0,k} - \hat{y}_k\big),\ \mathrm{sign}(w_0^{c})\right\}$$

Cross-covariance matrix:
$$P_{xy,k} = \big(W_k^{rc} \circ (\mathcal{X}_k - \hat{x}_k^{-} \otimes m_2)\big)\big(\mathcal{Y}_k - \hat{y}_k \otimes m_2\big)^{T}$$

SM, derivative of the innovation error covariance:
$$\nabla S_{yy,k} = \mathrm{treshapeM}\left\{\big(I_{n_\theta} \otimes A(l_s, S_{yy,k})\big)^{-1}\big(\nabla P_{yy,k}\big)^{s}\right\},$$
with
$$A(l_s, S_{yy,k}) = \big(S_{yy,k} \otimes I_{n_y} + (I_{n_y} \otimes S_{yy,k})\, N_{n_y}\big)\, E_{n_y}$$
and
$$\nabla P_{yy,k} = \big(\Psi_k - \nabla\hat{y}_k \otimes m_2\big)\Big(I_{n_\theta} \otimes \big(W_k^{rc} \circ (\mathcal{Y}_k - \hat{y}_k \otimes m_2)\big)^{T}\Big) + \big(W_k^{rc} \circ (\mathcal{Y}_k - \hat{y}_k \otimes m_2)\big)\, T_l\, \Big(I_{n_\theta} \otimes \big(\Psi_k - \nabla\hat{y}_k \otimes m_2\big)^{T}\Big)$$

Derivative of the cross-covariance matrix:
$$\nabla P_{xy,k} = \big(\Phi_k - \hat{X}_k^{-} \otimes m_2\big)\Big(I_{n_\theta} \otimes \big(W_k^{rc} \circ (\mathcal{Y}_k - \hat{y}_k \otimes m_2)\big)^{T}\Big) + \big(W_k^{rc} \circ (\mathcal{X}_k - \hat{x}_k^{-} \otimes m_2)\big)\, T_l\, \Big(I_{n_\theta} \otimes \big(\Psi_k - \nabla\hat{y}_k \otimes m_2\big)^{T}\Big)$$

SRUKF, Kalman gain:
$$K_k = \big(P_{xy,k} / S_{yy,k}^{T}\big) / S_{yy,k}$$

A posteriori state estimate:
$$\hat{x}_k^{+} = \hat{x}_k^{-} + K_k\big(y_k - \hat{y}_k\big)$$

SR estimation error covariance: for $z = 1$ to $n_y$ do
$$S_{xx,k}^{+} = \mathrm{cholupdate}\left\{S_{xx,k}^{-},\ U_{k,:z},\ -\right\},\qquad U_k = K_k S_{yy,k}$$

SM, Kalman gain derivative:
$$\nabla K_k = \nabla P_{xy,k}\big(I_{n_\theta} \otimes P_{yy,k}^{-1}\big) - P_{xy,k} P_{yy,k}^{-1}\, \nabla P_{yy,k}\big(I_{n_\theta} \otimes P_{yy,k}^{-1}\big)$$

A posteriori state sensitivity:
$$\hat{X}_k^{+} = \hat{X}_k^{-} - K_k\, \nabla\hat{y}_k + \nabla K_k\big(I_{n_\theta} \otimes (y_k - \hat{y}_k)\big)\qquad\text{(d)}$$

Derivative of the estimation error covariance update:
$$\nabla S_{xx,k}^{+} = \mathrm{treshapeM}\left\{\big(I_{n_\theta} \otimes A(l_s, S_{xx,k}^{+})\big)^{-1}\big(\nabla P_{xx,k}^{+}\big)^{s}\right\},$$
with
$$\nabla P_{xx,k}^{+} = \nabla P_{xx,k}^{-} - \nabla P_{xy,k}\, T_{n_y}\big(I_{n_\theta} \otimes K_k^{T}\big) - \nabla P_{xy,k}\big(I_{n_\theta} \otimes K_k^{T}\big)$$

RPEM, parameter and covariance estimation:
$$\Delta_k = \Delta_{k-1} + \tfrac{1}{\lambda}\big(e_k e_k^{T} - \Delta_{k-1}\big),\qquad S_k = \lambda\Delta_k + \nabla\hat{y}_k\, \Theta_{k-1}\, \nabla\hat{y}_k^{T},\qquad L_k = \Theta_{k-1}\, \nabla\hat{y}_k^{T} S_k^{-1},$$
$$\Theta_k = \big(I_{n_\theta} - L_k \nabla\hat{y}_k\big)\, \Theta_{k-1}\, \big(I_{n_\theta} - L_k \nabla\hat{y}_k\big)^{T}/\lambda + L_k \Delta_k L_k^{T},\qquad \hat{\theta}_k = \hat{\theta}_{k-1} + L_k e_k.\qquad\text{(e)}$$

Table 2.

aSRUKF: filter correction step and the RPEM for parameter and covariance estimation.

The SR factor of the innovation error covariance $S_{yy,k}$ and the cross-covariance $P_{xy,k}$, together with their derivative matrices $\nabla S_{yy,k}$ and $\nabla P_{xy,k}$, are calculated in order to obtain the Kalman gain matrix $K_k$ and its sensitivity $\nabla K_k$. The aSRUKF also treats the measurement noise as additive. The a posteriori state estimate $\hat{x}_k^{+}$ and the a posteriori state sensitivity $\hat{X}_k^{+}$, together with the SR factor of the estimation error covariance $S_{xx,k}^{+}$ and its sensitivity $\nabla S_{xx,k}^{+}$, close the loop of the aSRUKF.

Local sensitivity analysis can be utilized to determine whether a system input or a system modification excites the system parameters enough to identify them. The a posteriori state sensitivity from Table 2 (d) can be used to determine the influence of the parameters on the system states. This sensitivity results from the correction step of the aSRUKF. As long as the sensitivity $\hat{X}_k^{+}$ remains below a user-defined threshold, the parameter update from Table 2 (e) can be skipped to prevent parameter estimation drifts. A time window $N_w$ can be used to calculate $\max\{\|\hat{X}_{k-N_w}^{+}\|_2, \ldots, \|\hat{X}_k^{+}\|_2\}$ in order to normalize the sensitivity values. A threshold vector $t_v$ is then defined with values between 0 and 1. The update procedure can be described as

    for p = 1 to n_θ do
        s_a = 0
        for n = 1 to n_x do
            s_a = s_a + |d x_{k,n} / d θ_{k,p}|
        end for
        if s_a > t_{v,p} then
            update_values = True
        end if
    end for                                                             (9)

The variable $s_a$ represents the sensitivity sum w.r.t. a system parameter $\theta_{k,p}$ over all system states $(x_{k,1}, \ldots, x_{k,n_x})$. The threshold vector $t_v$ should be selected with caution. Values that are too high prevent parameter estimation drifts but can increase the convergence time of the filter. Moreover, the parameter excitation can be significantly reduced and the resulting estimates can be biased. The performance of the SB-aSRUKF is evaluated in Section 4.
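The gating logic of Eq. (9) reduces to a per-parameter threshold test on the summed, normalized sensitivities. A minimal NumPy sketch (function and variable names are illustrative, not from the chapter):

```python
import numpy as np

def excitation_gate(X_sens, tv):
    """Decide per parameter whether it is sufficiently excited, cf. Eq. (9).
    X_sens: normalized a posteriori state sensitivity, shape (nx, ntheta);
    tv: threshold vector with entries in (0, 1), shape (ntheta,).
    Returns a boolean mask: True where the parameter update may run."""
    sa = np.abs(X_sens).sum(axis=0)  # sensitivity sum over all states, per parameter
    return sa > tv
```

The returned mask can then be used to freeze the RPEM update for the insufficiently excited entries of the parameter vector while letting the others evolve.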

The local state sensitivity can also be calculated as follows (cf. [29]):

$$\frac{dx_k}{d\theta_{k,j}} = \frac{\partial f}{\partial x_k}(\hat{x}_{k-1}, \hat{\theta}_{k-1})\, \frac{dx_{k-1}}{d\theta_{k,j}} + \frac{\partial f}{\partial \theta_{k,j}}(\hat{x}_{k-1}, \hat{\theta}_{k-1}).\tag{10}$$

This sensitivity computation is compared in Section 4 with the a posteriori state sensitivity obtained using the SB-aSRUKF.
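Eq. (10) is a linear recursion in the sensitivity matrix, driven by the two Jacobians of $f$. A generic sketch, where the callables `f_x` and `f_th` are assumed to return $\partial f/\partial x$ and $\partial f/\partial \theta$ at the current estimate:

```python
import numpy as np

def sensitivity_step(dxdth, x, theta, f_x, f_th):
    """Propagate the local state sensitivity dx_k/dtheta, Eq. (10):
    dx_k/dtheta = (df/dx) dx_{k-1}/dtheta + df/dtheta, with the Jacobians
    evaluated at the previous estimate (f_x, f_th are user-supplied callables)."""
    return f_x(x, theta) @ dxdth + f_th(x, theta)
```

For the scalar model $x_k = a\,x_{k-1}$, iterating this recursion reproduces the analytic sensitivity $dx_k/da = k\,a^{k-1} x_0$, which is a convenient correctness check.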

The RPEM can be combined with different KF derivatives to estimate system parameters and covariances. An EKF and a SRCuKF were used to calculate the output sensitivity in [22], which is then used to estimate the unknown values. More information about the aSRUKF can be found in [28, 30].

3. Testbeds

In this section, two testbeds are presented and modelled. These modelled systems are used in Section 4 to test the performance of the SB-aSRUKF. The planar one-link robot system is presented and extended with a second arm to form a pendulum robot (pendubot). The pendubot is a chaotic and inherently unstable system.

3.1. Planar one-link robot system

This section describes the planar one-link robot system shown in Figure 1. It consists of a long rectangular aluminium link driven by a DC motor via a shaft and a single-stage toothed gear.

Figure 1.

Planar one-link robot system: structure and functionality.

The angular position is measured with an incremental rotary encoder and the motor torque is determined by measuring the motor current. To simplify the motor model, it is assumed that the motor current is directly proportional to the armature current and that the motor torque is proportional to this current by a constant factor. Additionally, the link acceleration is measured using a micro-electro-mechanical sensor (MEMS) accelerometer attached to the link. The motor position is controlled by a classical cascade structure with P- and P-feedback controllers for position and speed.

The corresponding continuous state-space representation of the planar one-link robot system, where the link angular position and acceleration are measured, can be written as follows:

$$\dot{x} = A\,x + b(x, u),\qquad y = c(x, u).$$

The system states are the link angular position ($x_1 = q_1$) and the link speed ($x_2 = \dot{q}_1$). The input $u$ corresponds to the motor torque ($u = \tau_m$). The measurements are the link angular position ($y_1 = q_1$) and acceleration ($y_2 = \ddot{q}_1$). The matrix $A$ and the vector-valued functions $b$ and $c$ are then described as

$$A = \begin{bmatrix} 0 & 1 \\ 0 & -\dfrac{\mu_v}{J_{tot}} \end{bmatrix},\qquad
b(x, u) = \begin{bmatrix} 0 \\ -\dfrac{\mu_d}{J_{tot}} \arctan(k x_2) - \dfrac{m_a l_2\, g}{2 J_{tot}} \sin x_1 + \dfrac{\tau_m}{J_{tot}} \end{bmatrix},$$
$$c(x, u) = \begin{bmatrix} x_1 \\ -\dfrac{\mu_v}{J_{tot}} x_2 - \dfrac{\mu_d}{J_{tot}} \arctan(k x_2) - \dfrac{m_a l_2\, g}{2 J_{tot}} \sin x_1 + \dfrac{\tau_m}{J_{tot}} \end{bmatrix},$$

where $J_{tot}$ represents the total inertia, including the motor, shaft and link inertias. The link friction is modelled as dry Coulomb friction ($\mu_d$ and $k$) and viscous friction ($\mu_v$). The parameters $m_a$, $l_2$ and $g$ are the link mass, the link length and the gravitational acceleration, respectively.
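For illustration, the second row of the model above (the link acceleration) can be evaluated numerically as follows. The default parameter values are the offline-identified ones quoted in Section 4, $g = 9.81$ is assumed, and the sign conventions follow the equations above; this is a sketch, not the chapter's implementation:

```python
import numpy as np

def one_link_dxdt(x, tau_m, J_tot=5.59e-2, mu_v=0.05, mu_d=0.27,
                  ma_l2=0.11, k=10.0, g=9.81):
    """Continuous dynamics of the planar one-link robot: x = [q1, dq1]."""
    q1, dq1 = x
    # Acceleration: motor torque minus viscous friction, Coulomb friction
    # (smooth arctan model) and the gravity torque, divided by the inertia.
    ddq1 = (tau_m - mu_v * dq1 - mu_d * np.arctan(k * dq1)
            - 0.5 * ma_l2 * g * np.sin(q1)) / J_tot
    return np.array([dq1, ddq1])
```

At rest with zero torque and $q_1 = 0$ the derivative vanishes, while a deflected link experiences a restoring gravity torque.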

3.2. Pendubot

This section describes the pendulum robot (pendubot) presented in Figure 2. The pendubot is built by adding an under-actuated link to the planar one-link robot system described in Section 3.1. The actuated joint ($q_1$) is located at the shoulder of the first link (arm), and the elbow joint ($q_2$) allows the second link (forearm) to swing freely. This joint contains only a second incremental rotary encoder, which measures the angle between the two links.

Figure 2.

Pendubot: structure and functionality.

The system states result as $x_1 = q_1$, $x_2 = \dot{q}_1$, $x_3 = q_2$ and $x_4 = \dot{q}_2$, where $q_i$ and $\dot{q}_i$ are the position and speed of the $i$-th joint, respectively. The state-space representation of the pendubot can be written as

$$\dot{x} = A\,x + b(x, u),\qquad y = \begin{bmatrix} x_1 & x_3 & \dot{x}_2 \end{bmatrix}^{T},$$

in which

$$A = \begin{bmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \end{bmatrix},\qquad b(x, u) = \begin{bmatrix} 0 & \dot{x}_2 & 0 & \dot{x}_4 \end{bmatrix}^{T},$$

where

$$\begin{bmatrix} \dot{x}_2 \\ \dot{x}_4 \end{bmatrix} = \ddot{q} = \begin{bmatrix} \ddot{q}_1 \\ \ddot{q}_2 \end{bmatrix} = D(q)^{-1} \begin{bmatrix} \tau_m - \mu_{v1} \dot{q}_1 - d(\dot{q}_1) \\ -\mu_{v2} \dot{q}_2 \end{bmatrix} - D(q)^{-1} C(q, \dot{q})\, \dot{q} - D(q)^{-1} g(q).$$

The viscous and Coulomb frictions are described by the parameters $\mu_{v1}$ and $\mu_{v2}$ and the function $d(\dot{q}_1) = \mu_d \arctan(k \dot{q}_1)$. The matrices $D(q)$ and $C(q, \dot{q})$ and the vector $g(q)$ are the inertia matrix, the Coriolis matrix and the gravity vector, respectively. They are defined as follows:

$$D(q) = \begin{bmatrix} \vartheta_1 + \vartheta_2 + 2\vartheta_3 \cos q_2 & \vartheta_2 + \vartheta_3 \cos q_2 \\ \vartheta_2 + \vartheta_3 \cos q_2 & \vartheta_2 \end{bmatrix},\qquad
C(q, \dot{q}) = \vartheta_3 \sin(q_2) \begin{bmatrix} -\dot{q}_2 & -(\dot{q}_1 + \dot{q}_2) \\ \dot{q}_1 & 0 \end{bmatrix},$$
$$g(q) = \begin{bmatrix} \vartheta_4\, g \cos q_1 + \vartheta_5\, g \cos(q_1 + q_2) \\ \vartheta_5\, g \cos(q_1 + q_2) \end{bmatrix},$$

where the $\vartheta_i$ parameters are defined as

$$\vartheta_1 = m_1 l_1^2 + (m_2 + m_3 + m_4)\, l_2^2 + J_1,\qquad \vartheta_2 = m_2 l_3^2 + m_4 l_4^2 + J_2,\qquad \vartheta_3 = (m_2 l_3 + m_4 l_4)\, l_2,$$
$$\vartheta_4 = m_1 l_1 + (m_2 + m_3 + m_4)\, l_2,\qquad \vartheta_5 = m_2 l_3 + m_4 l_4.$$

The parameters $J_1$ and $J_2$ correspond to the moments of inertia of link 1 and link 2 about their centroids; $J_1$ includes the motor, gear and shaft inertias. The $m_i$ and $l_i$ parameters are defined in Figure 2.
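The pendubot forward dynamics assembled from $D(q)$, $C(q, \dot{q})$ and $g(q)$ can be sketched as below. The numeric $\vartheta$ values and friction coefficients are placeholders for illustration, not identified testbed parameters:

```python
import numpy as np

def pendubot_qdd(q, dq, tau_m, th=(0.06, 0.01, 0.01, 0.4, 0.1),
                 mu_v1=0.05, mu_v2=0.01, mu_d=0.27, k=10.0, g=9.81):
    """Joint accelerations of the pendubot; th = (theta_1, ..., theta_5)."""
    t1, t2, t3, t4, t5 = th
    q1, q2 = q
    dq1, dq2 = dq
    c2 = np.cos(q2)
    D = np.array([[t1 + t2 + 2 * t3 * c2, t2 + t3 * c2],
                  [t2 + t3 * c2,          t2]])
    C = t3 * np.sin(q2) * np.array([[-dq2, -(dq1 + dq2)],
                                    [ dq1,  0.0]])
    gvec = np.array([t4 * g * np.cos(q1) + t5 * g * np.cos(q1 + q2),
                     t5 * g * np.cos(q1 + q2)])
    # Generalized torques: motor torque on joint 1 minus friction terms.
    tau = np.array([tau_m - mu_v1 * dq1 - mu_d * np.arctan(k * dq1),
                    -mu_v2 * dq2])
    return np.linalg.solve(D, tau - C @ dq - gvec)
```

A quick plausibility check: with both links hanging straight down ($q_1 = -\pi/2$, $q_2 = 0$), zero speeds and zero torque, the gravity vector vanishes and the accelerations are zero.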

4. Experiments

In this section, the SB-aSRUKF is tested on the planar one-link robot system and on the pendubot. Both testbed models were discretized using first-order explicit Euler integration with a sample time of 0.2 ms. System states, parameters and covariances were estimated online. The SB-aSRUKF is also compared with the joint state and parameter SRUKF in this section. Sensitivity analysis is also discussed.
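The discretization used here is plain explicit Euler. As a sketch, with `f` standing for the continuous dynamics and the chapter's sample time as the default step:

```python
import numpy as np

def euler_step(f, x, u, dt=2e-4):
    """First-order explicit Euler step, x_k = x_{k-1} + dt * f(x_{k-1}, u_{k-1}),
    with the chapter's sample time of 0.2 ms as default."""
    return x + dt * f(x, u)
```

The small step size keeps the discretization error of the first-order scheme acceptable for the fast mechatronic dynamics considered here.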

4.1. State sensitivity analysis and parameter and covariance estimation

Sensitivity analysis (SA) was performed in simulation using the planar one-link robot system. The system parameters were identified offline on the real testbed using the prediction-error method. The parameters, defined as

$$\hat{\theta}_{true} = \begin{bmatrix} \hat{J}_{tot} & \hat{\mu}_v & \hat{\mu}_d & \hat{m}_a \hat{l}_2 & \hat{k} \end{bmatrix}^{T} = \begin{bmatrix} 5.59 \cdot 10^{-2}\,\mathrm{kg\,m^2} & 0.05\,\mathrm{\tfrac{N\,m\,s}{rad}} & 0.27\,\mathrm{N\,m} & 0.11\,\mathrm{kg\,m} & 10\,\mathrm{\tfrac{s}{rad}} \end{bmatrix}^{T},$$

were used for the simulation. Noise distributions with the following covariance matrices

$$Q = \mathrm{diag}\big(10^{-11},\; 5 \cdot 10^{-5}\big),\qquad R = \mathrm{diag}\big(5 \cdot 10^{-7},\; 5\big),$$

were added to the simulation to incorporate model and measurement uncertainties. An s-curve profile was considered as the desired link angular position.

The following system states, parameters and covariances were estimated:

$$x_1 = q_1 \;\text{(link position)},\qquad x_2 = \dot{q}_1 \;\text{(link speed)},\qquad \theta_a = q_{11} \;\text{(process covariance value related to the link angular position)},$$
$$\theta_1 = \mu_v \;\text{(viscous friction coefficient)},\qquad \theta_2 = \mu_d \;\text{(Coulomb friction coefficient)},\qquad \theta_3 = J_{tot}^{-1} \;\text{(inverse inertia)}.$$

The remaining tuning factors for SB-aSRUKF were set as

$$Q = q_{22} = 5 \cdot 10^{-5},\qquad R = \mathrm{diag}\big(5 \cdot 10^{-7},\; 2 \cdot 10^{-1}\big),$$
$$\alpha = 0.08,\qquad \beta = 2,\qquad \kappa = 3 - n_x,\qquad \lambda = 0.999,$$
$$P_{xx,init} = I_2,\qquad \Theta_{init} = \mathrm{diag}\big(5 \cdot 10^{-13},\; 10^{-1},\; 10^{-6},\; 10^{-4},\; 10^{-8}\big).$$

The values of $\Theta_{init}$ tune the parameter and covariance estimation, and the index order is the same as that of the $\theta_i$ values defined above. This means that the first value tunes the estimation of $\theta_a$ (the process covariance value), the second value tunes $\theta_1$ (the viscous friction coefficient), and so on.

The filter's initial system states were set to zero, and the initial system parameters were set to the true values multiplied by a random factor $0 < \theta_{factor,i} \le 10$, as

$$x_{init} = 0,\qquad \theta_{init} = \begin{bmatrix} 2 \cdot 10^{-8}\,\mathrm{rad^2} & 1.5 \cdot (1/\hat{J}_{tot}) & 3\,\hat{\mu}_v & 8\,\hat{\mu}_d & 0.25\,\hat{m}_a \hat{l}_2 \end{bmatrix}^{T}.$$

In order to test the sensitivity-based part of the filter, the link angular position was held at $q_1 = \pi/2$ after ca. 11 s for about 4.5 s. At the same time, the system parameters $\hat{\mu}_v$, $\hat{\mu}_d$ and $\hat{m}_a$ were quadrupled.

Figure 3 compares the a posteriori state sensitivity calculated using the SB-aSRUKF with the state sensitivity obtained using Eq. (10). The first diagram shows the estimated and true link angular position of the planar one-link robot system. The following diagrams present the normalized SA related to the link angular speed ($\hat{\dot{q}}_1$) and corresponding to the inverse inertia, the viscous and Coulomb friction coefficients, and the link mass and length parameter. While the state sensitivity calculated using Eq. (10) was directly affected by the input noise, the a posteriori state sensitivity provided an almost noise-free estimate. While the maxima of the $J_{tot}^{-1}$ sensitivity were related to the acceleration (speed-up and brake phases), the maxima of the $\mu_v$ sensitivity coincided with the maximal link speed. The $m_a l_2$ sensitivity was only zero while the arm was crossing the 0 rad position, and the $\mu_d$ sensitivity was the value most affected by the system noise. This is because the maximal change rate of $\arctan$ occurs when its argument is near zero: when the link speed is zero, the added noise varies around this value, and its effect is amplified by the $\arctan$ function.

Figure 3.

Sensitivity analysis (SA): comparison between the a posteriori state sensitivity obtained using the SB-aSRUKF and the one resulting from Eq. (10). The desired link position was set as an s-curve between −π/2 and π/2. The link position was held at π/2 after ca. 11 s for about 4.5 s. The parameter sensitivities are related to the link angular speed (x̂₂ = q̂̇₁).
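The noise amplification of SA_{μ_d} near zero speed can be reproduced numerically. The following is a minimal sketch, assuming the Coulomb term is smoothed as μ_d·(2/π)·arctan(k·q̇); the exact smoothing expression, the value of k, and the noise level are assumptions for illustration, not restated from the chapter's model:

```python
import numpy as np

# Assumed smoothed Coulomb friction: tau_c = mu_d * (2/pi) * arctan(k * qd)
k = 10.0  # hypothetical smoothing factor

def coulomb_sensitivity(qd):
    """Partial derivative of the smoothed friction torque w.r.t. mu_d."""
    return (2.0 / np.pi) * np.arctan(k * qd)

rng = np.random.default_rng(0)
noise = 1e-2 * rng.standard_normal(1000)  # assumed speed noise level

# Near zero speed, the arctan argument crosses zero with every noise sample,
# so the sensitivity fluctuates strongly ...
s_zero = coulomb_sensitivity(0.0 + noise)
# ... while at higher speed the arctan is saturated and nearly constant.
s_fast = coulomb_sensitivity(1.0 + noise)

print(np.std(s_zero) > 10 * np.std(s_fast))  # noise is amplified near qd = 0
```

This reproduces the observation above: the same speed noise produces a far larger spread in SA_{μ_d} around standstill than during motion.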

Figure 4 shows the state, parameter and covariance estimation of the planar one-link robot system. The aSRUKF was used in two configurations: the SB-aSRUKF utilized SA to monitor the system excitation, while the aSRUKF did not. After the initialization, the two estimators needed almost the same time to converge to the offline-identified values. The parameters estimated using the SB-aSRUKF did not diverge while the link position was held. Because two of the parameters estimated with the aSRUKF diverged, this filter needed more time to converge after the stop phase. Both filters were able to estimate the link mass and length parameter during the stop phase. While the viscous and Coulomb friction coefficients and the inverse inertia estimated with the SB-aSRUKF remained constant during the stop phase, the aSRUKF estimated the Coulomb friction with bias (before μ_v diverged): because of the added noise, this parameter was still excited and could be identified. This can be seen in the fourth diagram of Figure 3. These SA values remained under the threshold used in the SB-aSRUKF and were filtered out, so the parameter estimate stayed constant. It should be noted that the diagram corresponding to the viscous friction coefficient is zoomed in to show the parameter change, and the oscillations of the aSRUKF are cut off; these reached more than 50 times the parameter's true value.

Figure 4.

Planar one-link robot system: parameter and covariance estimation. The SB-aSRUKF uses SA to monitor the system excitation. The initial parameter vector θ_init was randomly selected. The link position was held after ca. 11 s for about 4.5 s, and simultaneously the system parameters μ̂_v, μ̂_d, and m̂_a were quadrupled.

The SB-aSRUKF was able to estimate the parameters and covariance of the proposed testbed. The online estimations converged to the offline identified values without bias. The sensitivity-based approach used as a system excitation monitor prevented parameter estimation drifts and did not modify the convergence time of the filter.
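The excitation monitor can be sketched as a simple gate on the parameter update: a correction is applied only while the magnitude of the corresponding sensitivity exceeds a threshold, otherwise the estimate is frozen. The following is a minimal illustration of this idea; the function name, threshold value, and normalization are assumptions, not the chapter's implementation:

```python
import numpy as np

def gated_update(theta, delta_theta, sensitivities, threshold=0.05):
    """Freeze parameters whose normalized sensitivity magnitude is below the threshold.

    theta         : current parameter estimates, shape (n_p,)
    delta_theta   : correction proposed by the filter update, shape (n_p,)
    sensitivities : current state-sensitivity magnitude per parameter, shape (n_p,)
    """
    active = np.abs(sensitivities) >= threshold  # sufficiently excited parameters
    return theta + np.where(active, delta_theta, 0.0)

theta = np.array([0.1, 0.2, 0.3])
delta = np.array([0.01, -0.02, 0.05])
sens = np.array([0.5, 0.001, 0.2])   # second parameter is not excited

print(gated_update(theta, delta, sens))  # second entry stays at 0.2
```

Freezing the estimate (rather than resetting it) is what keeps the converged values constant during the stop phase in Figure 4.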

4.2. Comparison between SB-aSRUKF and joint state and parameter SRUKF

The SB-aSRUKF and the joint SRUKF were compared on the pendubot for state and parameter estimation. The SB-aSRUKF additionally identified covariances.

The system parameters were identified offline and used for the simulation as

θ̂_true = [ϑ̂₁, ϑ̂₂, ϑ̂₃, ϑ̂₄, ϑ̂₅]ᵀ = [0.339 kg m², 0.0667 kg m², 0.0812 kg m², 0.717 kg m, 0.146 kg m]ᵀ,
[μ̂_v1, μ̂_d, μ̂_v2, k̂] = [0.09 N m s/rad, 0.226 N m, 0.003 N m s/rad, 10 s/rad].

An s-curve profile was selected as the desired position of the first link. The following states and parameters were estimated online:

x₁ = q₁ (link 1 position), x₂ = q̇₁ (link 1 speed), x₃ = q₂ (link 2 position), x₄ = q̇₂ (link 2 speed),
θ_a = q₁₁ (process covariance related to the link 1 angular position), θ_b = q₃₃ (process covariance related to the link 2 angular position),
ϑ₁, …, ϑ₅ (pendubot minimal set of parameters).

The values θ_a and θ_b, which correspond to the process covariance values, were estimated only by the SB-aSRUKF. The viscous and Coulomb friction coefficients were identified offline and remained constant for both filters.

To simulate model and measurement uncertainties, noise distributions with the following covariance matrices were added to the system for the simulation:

Q = diag(2·10⁻¹⁰, 1.5·10⁻⁷, 2·10⁻¹⁰, 1.5·10⁻⁷), R = diag(5·10⁻⁷, 5·10⁻⁷, 1).
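Simulated noise with such diagonal covariances can be generated by scaling standard-normal samples by the square roots of the diagonal entries. A minimal NumPy sketch (the seed and sample count are arbitrary, and the diagonal values are taken as listed here):

```python
import numpy as np

# Diagonal process and measurement noise covariances for the pendubot simulation
Q = np.diag([2e-10, 1.5e-7, 2e-10, 1.5e-7])
R = np.diag([5e-7, 5e-7, 1.0])

rng = np.random.default_rng(42)

def sample_noise(cov, rng, n=1):
    """Draw n zero-mean Gaussian samples with the given diagonal covariance."""
    return rng.standard_normal((n, cov.shape[0])) * np.sqrt(np.diag(cov))

w = sample_noise(Q, rng, n=100_000)  # process noise added to the simulated states
v = sample_noise(R, rng, n=100_000)  # measurement noise added to the outputs

# Empirical variances match the specified diagonals to within a few percent
assert np.allclose(w.var(axis=0), np.diag(Q), rtol=0.05)
assert np.allclose(v.var(axis=0), np.diag(R), rtol=0.05)
```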

The tuning parameters for the joint SRUKF were chosen as

Q₀₁ = diag(2·10⁻¹⁰, 1.5·10⁻⁷, 2·10⁻¹⁰, 1.5·10⁻⁷, 10⁻⁷, 10⁻⁷, 10⁻⁷, 5·10⁻¹⁰, 10⁻¹⁰),
R₀₁ = diag(5·10⁻⁷, 5·10⁻⁷, 10),
α = 0.85, β = 2, κ = 3 − n_x − n_p,
P₀₁ = diag(1, 1, 1, 1, 10⁻³, 10⁻⁵, 10⁻⁵, 5, 1),

and the parameters for the SB-aSRUKF were set as

Q₀₂ = diag(1.5·10⁻⁷, 1.5·10⁻⁷), P_xx,02 = I₄,
R₀₂ = diag(5·10⁻⁷, 5·10⁻⁷, 10),
α = 0.85, β = 2, κ = 3 − n_x, λ = 0.999,
Θ_init = diag(10⁻²⁵, 10⁻²⁵, 10⁻⁷, 10⁻⁷, 10⁻⁷, 5·10⁻¹⁰, 10⁻¹⁰).

The filter initial system states were set to zero, and the initial system parameters were set to the true values multiplied by a random factor 0 < θ_factor,i ≤ 5:

x_init = 0, θ_init,1 = [1.5ϑ̂₁, 1.3ϑ̂₂, 1.5ϑ̂₃, 2ϑ̂₄, 2ϑ̂₅]ᵀ, θ_init,2 = [2·10⁻¹⁰ rad², 2·10⁻¹⁰ rad², θ_init,1ᵀ]ᵀ.

The first four values of P₀₁ tune the initial state estimation, while the last ones tune the parameter estimation. The first two values of Θ_init tune the estimation of the covariance values θ_a and θ_b, while the remaining values follow the index order of the ϑᵢ defined in Section 3. It should be noted that the settings related to the state and parameter estimation were tuned equally for both filters.

The state estimation of the pendubot is presented in Figure 5. The SB-aSRUKF was slightly faster to reach the true system states (cf. diagrams 1 and 4), and after ca. 5 s both filters followed the dynamics of the true system states without any significant bias.

Figure 5.

Pendubot: state estimation using the SB-aSRUKF and the joint SRUKF. Both filters followed the dynamics of the true system states without any significant bias.

Figure 6 shows the parameter estimation of the pendubot for both filters. Using the same tuning parameter set, the SB-aSRUKF estimated the parameters ϑ₁ to ϑ₅ without bias, while the joint SRUKF estimated ϑ₁ to ϑ₃ with a slight bias and was not able to estimate ϑ₄ and ϑ₅; these two parameters did not converge to the true system values. It should be noted that the diagram scales corresponding to parameters ϑ₄ and ϑ₅ differ between the filters. The parameter initialization and the tuning settings for the two filters were the same. The SB-aSRUKF outperformed the joint SRUKF for the parameter estimation of the pendubot.

Figure 6.

Pendubot: parameter and covariance estimation using the SB-aSRUKF and the joint SRUKF. The SB-aSRUKF was configured to estimate the system parameters ϑ₁ to ϑ₅ and the process covariances corresponding to the link positions. The joint SRUKF estimates only the system parameters. It should be noted that the diagram scales for parameters ϑ₄ and ϑ₅ differ between the filters.

5. Conclusions

In this chapter, several approaches for state, parameter and covariance estimation were discussed. Moreover, a sensitivity-based adaptive square-root unscented Kalman filter (SB-aSRUKF) was presented and its performance analysed. This filter estimates system states, parameters and covariances online. Additionally, sensitivity models were presented and used as a system excitation monitor to prevent parameter and covariance estimation drifts while the system excitation was insufficient.

Two testbeds based on an axis serial mechanism were modelled and used for testing the performance of the filter. Two experiments were performed in simulation on these testbeds: a state sensitivity analysis and a comparison between the joint state and parameter square-root unscented Kalman filter (SRUKF) and the SB-aSRUKF. Simulation results show that the SB-aSRUKF outperforms the joint SRUKF with the same tuning configuration: while the joint SRUKF did not estimate two of the five parameters correctly, the SB-aSRUKF identified all of them. Moreover, the estimation of covariances reduced the parameter estimation bias. Configuring the right excitation thresholds for the system excitation monitor in Eq. (9) prevented parameter estimation drifts while the input was not persistently exciting, and in some cases even reduced the convergence time of the filter.

Notes

  • The hat ( ̂ ) over a vector represents the estimate of the vector; for example, x̂ describes the estimate of the state vector x.
  • Matlab does not allow the use of cholupdate in real-time applications; using coder.extrinsic('cholupdate'), it is possible to use the function in Simulink, but the application does not run in real time. The cholupdate line can be replaced with chol((S⁻_xx,k)ᵀ S⁻_xx,k + w₀ᶜ (X*₀,k − x̂⁻_k)(X*₀,k − x̂⁻_k)ᵀ).
  • The function diag(v) transforms the vector v ∈ ℝⁿ into an n × n square matrix with the elements of v on the diagonal of the matrix.
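The cholupdate replacement in the second note can be checked numerically: for an upper-triangular square-root factor S with P = SᵀS, a rank-one update is equivalent to re-factorizing SᵀS + w·vvᵀ. A minimal NumPy sketch of this equivalence (NumPy in place of Matlab; the matrix sizes and weight are arbitrary):

```python
import numpy as np

def chol_rank1_update(S, v, w):
    """Return the upper-triangular factor of S.T @ S + w * v @ v.T.

    For w > 0 this matches Matlab's cholupdate(S, sqrt(w)*v), implemented
    here as the plain re-factorization suggested in the note above.
    """
    P = S.T @ S + w * np.outer(v, v)
    return np.linalg.cholesky(P).T  # numpy returns a lower factor; transpose

# Example: fold the 0-th sigma-point term into the square-root covariance
rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
S = np.linalg.cholesky(A @ A.T + 3 * np.eye(3)).T  # some upper-triangular factor
v = rng.standard_normal(3)
w0 = 0.4                                           # hypothetical sigma-point weight

S_new = chol_rank1_update(S, v, w0)
assert np.allclose(S_new.T @ S_new, S.T @ S + w0 * np.outer(v, v))
```

Re-factorizing costs O(n³) per update instead of cholupdate's O(n²), which is usually acceptable for the small state dimensions used here.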

© 2017 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

How to cite: Mauro Hernán Riva, Mark Wielitzka and Tobias Ortmaier (December 20th 2017). Sensitivity-Based Adaptive SRUKF for State, Parameter, and Covariance Estimation on Mechatronic Systems. In: Ginalber Luiz de Oliveira Serra (Ed.), Kalman Filters - Theory for Advanced Applications. IntechOpen. DOI: 10.5772/intechopen.72470.
