Open access peer-reviewed chapter

Review of Kalman Filter Developments in Analytical Engineering Design

Written By

Yuri V. Kim

Submitted: 09 June 2022 Reviewed: 09 August 2022 Published: 13 October 2022

DOI: 10.5772/intechopen.106991

From the Edited Volume

Kalman Filter - Engineering Applications

Edited by Yuri V. Kim


Abstract

This chapter discusses using the Kalman Filter (KF) for the analytical design of such engineering applications as closed-loop control systems, which can often be considered linear and time-invariant (LTI). The chapter discusses the design of a navigation accelerometer with an electric spring, a typical example of closed-loop, negative-feedback control. Two approaches are used: A, conventional (empirical), and B, analytical with the KF. Considering both of them opens a comprehensive understanding of the system dynamics and its potential. The discussion is based on a suboptimal form of the KF, the Filter with Bounded Growth of Memory (FBGM), proposed by the author.

Keywords

  • dynamic system
  • Kalman Filter
  • covariance
  • standard deviation
  • Riccati equation
  • stochastic process
  • white noise
  • optimal estimation and control
  • analytical design
  • accelerometer

1. Introduction

The appearance of the Kalman Filter in the 1960s [1, 2] started a trend in control system engineering that continues even today: implementing this powerful scientific tool directly in a real system. This tendency is based on the "doctrine" of considering the KF a "magic box" capable of achieving superior performance, sometimes even attempting to estimate unobservable system state variables. However, when directly implementing the KF in a real-world system, a developer may face many unexpected issues, resulting in a negative conclusion about its practical applicability.

Both situations are extreme. A conscious application of the KF, with preliminary engineering analysis and sub-optimization, sacrificing the theoretically achievable maximal accuracy for the sake of filter robustness and simplicity, can lead to applicable results, similar to the conventional solution. Further fine tuning of the control gains in this case allows for better performance. This approach can be seen as a "rule of thumb" (meeting performance requirements rather than achieving theoretical excellence) and was presented in many works of the author [3, 4, 5, 6, 7, 8, 9, 10, 11]. In particular, a simple KF sub-optimization form, the Kalman-Bucy filter (KBF) with bounded growth of memory (FBGM), was proposed in [8, 9].

The FBGM guarantees accurate system performance while maintaining continuity with the conventional solutions. Following this approach is especially appropriate in the case of Linear Time Invariant (LTI) systems. Often, for an LTI system, fast estimation and insertion of the initial conditions are not required, and the KF can be used in its stationary (LTI) form with constant coefficients that can be found by solving the KF Riccati equation in the steady state, when it degenerates into algebraic form. If the main single criterion of system design quality can be formulated as the minimum of the system error standard deviations (STD), then the KF can be used as the filter and the closed-loop feedback controller simultaneously. Namely this case is discussed in the example presented in the chapter below.

For many similar examples the empirical engineering solutions have been well known for years and have become conventional, demonstrating good performance and robustness over a wide range of possible operational conditions. However, using the KF and the analytical design is always useful for checking the available potential and is specifically necessary when there is a lack of experience in similar design. In real life many factors usually play an essential role in the system (device) design. Indeed, some of them can hardly be mathematically formalized and they restrict the "freedom" of the analytical solution. However, often a single criterion for the considered system can be taken as dominant: the minimum of the steady-state error covariance matrix, which is the optimization criterion of the optimal KF. That is why, at the first stage of system design, developers are advised to use this criterion and the KF as a helpful tool for the analytical design.

The basic information about the KF can be found in [1, 2, 12, 13, 14, 15, 16], and about the Analytical Design (AD) in [9, 17, 18, 19, 20]. This AD approach is well compatible with System Engineering Model-Based Design, supported by the MATLAB/Simulink tools provided by The MathWorks, Inc. [21, 22].

It must be added that, due to the essential progress in computerization achieved to date and the implementation of the KF in a broad spectrum of new applications, such as system identification, failure detection, image and data processing, and applied research in physics and mathematics (some of them are presented in this book), direct implementation of the KF algorithm and its further modifications has become a common and justifiable remedy for many developers. The MathWorks company provides powerful and universal Simulink KF algorithms [23] that can be used at different stages of research and implementation.


2. Kalman filter

As mentioned in the introduction, the KF was proposed by Rudolf Kalman in 1960 [1]. From the physical point of view it was just a filter that can separate a deterministic-stochastic signal from the stochastic measurement noise. However, the mathematical form of the presentation as a vector-matrix state-space differential equation (many inputs and outputs) and the provided optimality criterion, the minimum of the estimation error covariance matrix, made this filter a powerful tool for scientific research and engineering development in many areas. A big advantage of the filter is that it was presented as a discrete algorithm allowing for recursive computation. It estimates the state variables at the current step using their predicted estimates, based on the system model prediction (without the measurements). This prediction is corrected with a gain (weight coefficient) proportional to the estimation error covariance matrix and inversely proportional to the measurement noise matrix; the gain multiplies the difference between the measured vector and its estimate. A discrete covariance equation was also provided, allowing for the recursive solution of the non-linear matrix Riccati equation. Thus, the filter equations presented by Kalman for the discrete stochastic process [1] opened a direct way to compute the estimates and implement the filter in the form of a computational algorithm. Later, many authors (for example, [12, 14]) presented these discrete KF equations with a detailed explanation of the filter and the computation process. Pseudo-code of the filter provided by MathWorks (Simulink) for a broad pool of international users is presented in [23]. Following [12], generic pseudo-code of the KF can also be presented as below.

Let the stochastic model of a linear dynamic system be given as

$$x_{i+1} = \Phi_i x_i + G_i w_i, \qquad i = 0, 1, \ldots, N-1 \tag{1}$$

where $x_i$ is the system state vector, $i$ is the computation step number, $\Phi_i$ is the system pulse transition matrix, $G_i$ is the system input impact matrix, and $w_i$ is the system input stochastic disturbance process.

At every step the system state vector is measured as follows

$$z_i = H_i x_i + v_i \tag{2}$$

where $z_i$ is the measurement vector, $H_i$ is the measurement matrix, and $v_i$ is the stochastic measurement error. The processes $w_i$ and $v_i$ are centered, mutually non-correlated white Gaussian noises with covariance matrices $Q_i$ and $R_i$, respectively. At each step the KF provides the optimal estimate $\hat{x}_i$ of the system state vector $x_i$ with the minimum of the covariance matrix $P_i = E[\tilde{x}_i\tilde{x}_i^T]$ of the estimation error $\tilde{x}_i = \hat{x}_i - x_i$.

Then, for the system (1), (2), the KF pseudo-code is as follows.

It was mentioned in the introduction that in many cases direct implementation of the algorithm above may lead to instability of the optimal filter and, at the least, to essential requirements for computational resources. However, a sub-optimal form of the filter may solve the problem within the required accuracy, but with guaranteed stability, robustness and very modest computational requirements (often even analogue devices suffice). One such sub-optimization, developed by the author, is presented below (Figure 1).

Figure 1.

Discrete Kalman Filter algorithm pseudo-code.
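For readers who prefer runnable code to pseudo-code, a minimal Python sketch of the generic discrete predict/update recursion for system (1), (2) is given below. It mirrors the cycle described above; the model matrices and the simulated measurements at the end are illustrative placeholders, not values from the chapter.

```python
import numpy as np

def kalman_step(x_hat, P, z, Phi, G, H, Q, R):
    """One predict/update cycle of the discrete Kalman Filter for system (1), (2)."""
    # Prediction: propagate the estimate and its error covariance through the model
    x_pred = Phi @ x_hat
    P_pred = Phi @ P @ Phi.T + G @ Q @ G.T
    # Update: weight the measurement residual by the Kalman gain
    S = H @ P_pred @ H.T + R                        # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)             # gain, ~ covariance / measurement noise
    x_new = x_pred + K @ (z - H @ x_pred)           # corrected estimate
    P_new = (np.eye(len(x_hat)) - K @ H) @ P_pred   # corrected error covariance
    return x_new, P_new

# Illustrative second-order example; all numbers are placeholders, not values from the chapter
dt = 0.01
Phi = np.array([[1.0, 0.0], [dt, 1.0]])             # discretized double integrator
G = np.array([[dt], [0.0]])
H = np.array([[0.0, 1.0]])
Q = np.array([[0.1]])
R = np.array([[0.01]])
x_hat, P = np.zeros(2), np.eye(2)
rng = np.random.default_rng(0)
for i in range(100):
    z = rng.normal(0.0, 0.1, size=1)                # simulated measurement of a zero true state
    x_hat, P = kalman_step(x_hat, P, z, Phi, G, H, Q, R)
print(x_hat, P)
```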


3. Kalman-Bucy filter sub-optimization

Let us consider the analog (continuous-time) Kalman filter form that was presented by Kalman and Bucy in [2]. Given is a linear, fully observable and controllable stochastic system

$$\dot{x} = Fx + Gw, \qquad z = Hx + v \tag{3}$$

where $x$ is the $n$-vector of the system state, $F$ is the $n\times n$ system dynamics matrix, $w$ is an $n$-vector of external disturbances, $G$ is an $n\times n$ disturbance matrix, $z$ is an $m$-vector of measurements ($m \le n$), $v$ is an $m$-vector of measurement noise, and $H$ is an $m\times n$ measurement matrix.

Let the following information about (3) be given: $F$, $G$, $H$ are known matrix functions of time (in the stationary case they are constant matrices), and

$$\begin{aligned}
&E[x(t_0)] = 0,\quad E[w(t)] = 0,\quad E[v(t)] = 0,\quad E[x(t_0)x^T(t_0)] = P_0,\\
&E[w(t)v^T(\tau)] = E[v(t)w^T(\tau)] = E[w(t)x^T(t_0)] = 0,\\
&E[w(t)w^T(\tau)] = Q(t)\,\delta(t-\tau),\qquad E[v(t)v^T(\tau)] = R(t)\,\delta(t-\tau)
\end{aligned} \tag{4}$$

where $P_0$ is the initial state covariance matrix, $R(t)$ is the covariance (intensity) matrix of the measurement noise, $Q(t)$ is the covariance (intensity) matrix of the disturbance noise, and $\delta(t-\tau)$ is the Dirac delta function. Hence, $w(t)$ and $v(t)$ are Gaussian white noise processes. Usually the matrices $Q$ and $R$ are diagonal, and in the stationary case they are constant. They have the meaning of the band-limited spectral densities of the practically correlated, "colour" noises $w(t)$ and $v(t)$, respectively, which within their band limits can be approximately taken as white

$$Q = S_w = \operatorname{diag}\!\left(\sigma_{w_i}^2\,\Delta t_w\right),\ i = 1, 2, \ldots, n; \qquad R = S_v = \operatorname{diag}\!\left(\sigma_{v_j}^2\,\Delta t_v\right),\ j = 1, 2, \ldots, m \tag{5}$$

where $\sigma_{w_i}$, $\Delta t_w$ and $\sigma_{v_j}$, $\Delta t_v$ are the standard deviations (STD) and the sampling times ($\Delta t$) of the stochastic processes $w$ and $v$, respectively. In KBF theory they are idealized as white noises with zero $\Delta t$ and delta-shaped (infinite at zero lag) covariance functions.
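A small sketch of how (5) can be applied in practice to convert the STDs and sampling times of band-limited ("colour") noises into the equivalent white-noise intensity matrices; the helper name and the numerical values are illustrative assumptions.

```python
import numpy as np

def white_noise_intensity(sigmas, dt):
    """Eq. (5): equivalent spectral density diag(sigma_i^2 * dt) of a band-limited noise."""
    return np.diag(np.asarray(sigmas, dtype=float) ** 2 * dt)

# Illustrative values (not from the chapter): two disturbance channels and one measurement
Q = white_noise_intensity([0.5, 0.2], dt=0.01)    # S_w
R = white_noise_intensity([0.001], dt=0.001)      # S_v
print(Q)
print(R)
```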

The state of this system (state vector $x$) at any time instant $t$ can be estimated (the optimal estimate $\hat{x}$ of $x$ can be found) by minimizing the diagonal of the covariance matrix $P = E[\tilde{x}(t)\tilde{x}^T(t)]$ of the system state estimation error $\tilde{x} = \hat{x} - x$, i.e., $J = \operatorname{diag}P$. In other words, the KBF merit criterion is as follows

$$J_{\min} = \min\,\operatorname{diag}P \tag{6}$$

This criterion is satisfied when the KBF is used for estimation of the state vector of the system (3). The KBF is usually presented by the following matrix equations

$$\begin{aligned}
\dot{\hat{x}} &= F\hat{x} + K\,(z - H\hat{x}), \qquad \hat{x}(0),\\
K &= PH^T R^{-1},\\
\dot{P} &= FP + PF^T - PH^TR^{-1}HP + GQG^T, \qquad P(0)
\end{aligned} \tag{7}$$

where $K = K(t)$ is the KBF weight (gain) matrix and $P = P(t)$ is the KBF estimation error covariance matrix that can be found from the solution of the third matrix equation in (7) (a Riccati-type equation). Eq. (7) presumes that the measurement vector $z$ is continuously available; then (7) presents the KBF in the so-called "filtering mode". However, if at some time instant $t_p$ the measurement process is ended or temporarily interrupted and the measurement vector $z$ is no longer available, the KBF can be transitioned into the "prediction" mode. This mode is obtained from the filtering mode by setting the KBF gain matrix in (7) to zero, $K(t) \equiv 0$ for $t \ge t_p$:

$$\dot{\hat{x}} = F\hat{x}, \quad \hat{x}(0) = \hat{x}_p, \quad K \equiv 0; \qquad \dot{P} = FP + PF^T, \quad P(0) = P_p \tag{8}$$

To implement the KBF in practice, especially in real time on an on-board computer (OBC) with limited computational capabilities, it is always useful to sub-optimize the filter, sacrificing the potentially achievable maximal accuracy for simplicity and robustness (quid pro quo), being satisfied by some tolerable level of (6) instead of the exact minimum at every considered time instant. As the author presented in his past publications [3, 4, 8, 9], the KBF in the time domain can be equivalently decomposed into two filters working in parallel: a non-stationary KBF with a time-variant gain matrix $\tilde{K}(t)$ and a stationary KBF with a time-invariant (constant) gain $K_\infty$. Both can be pre-calculated in advance, before using the KBF in real time. As was shown in the author's works, these parallel filters can be approximately represented as a consecutive one, the Filter with Bounded Growth of Memory (FBGM). The results of application of the optimal KBF and the sub-optimal filter (FBGM), presented below, are close.

Both gains $\tilde{K}$ and $K_\infty$ can be determined by solving modified KBF Riccati matrix equations for the covariance matrices $\tilde{P}(t)$ and $P_\infty$. This KBF modification is as follows

$$\begin{aligned}
&\dot{\tilde{\hat{x}}} = F'\tilde{\hat{x}} + \tilde{K}\,(z - H\tilde{\hat{x}}), \quad \tilde{\hat{x}}(0); \qquad \tilde{K} = \tilde{P}H^T R^{-1};\\
&\dot{\tilde{P}} = F'\tilde{P} + \tilde{P}F'^T - \tilde{P}H^T R^{-1}H\tilde{P}, \quad \tilde{P}(0) = P_0 - P_\infty; \qquad F' = F - K_\infty H;\\
&\dot{\hat{x}}_\infty = F\hat{x}_\infty + K_\infty\,(z - H\hat{x}_\infty), \quad \hat{x}_\infty(0); \qquad K_\infty = P_\infty H^T R^{-1};\\
&FP_\infty + P_\infty F^T - P_\infty H^T R^{-1}HP_\infty + GQG^T = 0;\\
&\hat{x} = \tilde{\hat{x}} + \hat{x}_\infty, \quad \tilde{\hat{x}}(t) \to 0, \quad \hat{x}_\infty(t) \to x;\\
&P = \tilde{P} + P_\infty, \quad \tilde{P}(t) \to 0, \quad P(t) \to P_\infty
\end{aligned} \tag{9}$$

The original Riccati equation for the matrix $P$ is split here into two equivalent equations: the differential equation for the transient state $\tilde{P}$ and the algebraic one for the steady state $P_\infty$ in (9). For many practical applications the following inequality holds

$$\operatorname{diag}P_0 \gg \operatorname{diag}P_\infty \tag{10}$$

In this case, as was shown in [3], the two parallel KBF filters (9) can be approximately substituted by a single suboptimal filter working consecutively in time in two modes: at the beginning in the "initial filtering" (IF) mode for the "quasi-deterministic" system (assuming in (3) that $w = 0$ and $Q = 0$) with the time-variant (variable) gain $\tilde{K}(t)$, and afterwards automatically switched to "steady filtering" (SF) for the substantially stochastic system model with the time-invariant (constant) gain $K_\infty$ (assuming that after some IF period $t^*$, $t \ge t^*$, the transient process has practically completed and SF can start). Unlike the KBF, which is a filter with unboundedly growing memory (it assumes continuous solving of the KBF Riccati equation to determine the KBF gain, taking into account the whole process prehistory), this suboptimal modification was named the "Filter with Bounded Growing Memory" (FBGM). The filter equations are presented by (11) and (12) below

$$\dot{\hat{x}} = F\hat{x} + K\,(z - H\hat{x}), \qquad K = \begin{cases}\tilde{K}(t), & t_0 \le t \le t^*,\\ K_\infty, & t > t^*,\\ 0, & \text{if } z \equiv 0\end{cases} \tag{11}$$

where $t^*$ is the time required for unbiased estimation of all $n$ components of the vector $x$, when the covariance matrix $\tilde{P}$ has decayed to a small matrix, $\tilde{P}(t^*) \approx 0$. The gains $\tilde{K}$ and $K_\infty$ are found with the following formulas

$$\tilde{K} = \tilde{P}H^T R^{-1}, \qquad \dot{\tilde{P}} = F\tilde{P} + \tilde{P}F^T - \tilde{P}H^TR^{-1}H\tilde{P}, \qquad \tilde{P}(0) = P_0,$$
$$K_\infty = P_\infty H^T R^{-1}, \qquad FP_\infty + P_\infty F^T - P_\infty H^TR^{-1}HP_\infty + GQG^T = 0. \tag{12}$$

In other words, $\tilde{K}$ is computed for the system (3) considered as "quasi-deterministic" ($w = 0$, $Q = 0$; only the transient process caused by $x_0$ takes place), and $K_\infty$ is computed considering (3) as a "substantially stochastic" system, where only the steady-state motion caused by the random disturbance $w$ takes place, the transient process having approximately decayed.

Three modes can be considered for the FBGM: IF, when $K = \tilde{K}$; SF, when $K = K_\infty$; and the "prediction" mode (P), when the measurement vector $z$ is not available and $K = 0$.
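The following Python sketch illustrates how the FBGM gains of (11), (12) and the three modes might be organized in code: $\tilde{K}(t)$ is obtained by integrating the transient Riccati equation (with $Q = 0$), $K_\infty$ from the algebraic Riccati equation, and the prediction mode simply sets $K = 0$. The model matrices, noise intensities and the switching time $t^*$ are illustrative assumptions, not values from the chapter.

```python
import numpy as np
from scipy.linalg import solve_continuous_are
from scipy.integrate import solve_ivp

# Illustrative LTI model (3); all numerical values are assumptions
F = np.array([[0.0, 0.0], [1.0, 0.0]])
G = np.array([[1.0], [0.0]])
H = np.array([[0.0, 1.0]])
Q = np.array([[0.1]])
R = np.array([[0.01]])
P0 = np.eye(2)
R_inv = np.linalg.inv(R)

# Steady-filtering gain K_inf from the algebraic Riccati equation of (12)
P_inf = solve_continuous_are(F.T, H.T, G @ Q @ G.T, R)
K_inf = P_inf @ H.T @ R_inv

# Transient covariance P~(t): Riccati ODE of (12) with Q = 0 ("quasi-deterministic" system)
def riccati_rhs(t, p_flat):
    P = p_flat.reshape(2, 2)
    dP = F @ P + P @ F.T - P @ H.T @ R_inv @ H @ P
    return dP.ravel()

sol = solve_ivp(riccati_rhs, (0.0, 2.0), P0.ravel(), dense_output=True)

def fbgm_gain(t, z_available=True, t_star=0.5):
    """Gain selection for the three FBGM modes of (11): IF, SF and prediction (K = 0)."""
    if not z_available:
        return np.zeros_like(K_inf)          # prediction mode
    if t <= t_star:
        P_t = sol.sol(t).reshape(2, 2)       # initial-filtering mode, time-variant gain
        return P_t @ H.T @ R_inv
    return K_inf                             # steady-filtering mode
```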

This filter can easily be restarted periodically after any interruption (outage) of the measuring process. In [4] the "observability index" $\chi_i$ (signal power to noise power ratio) was introduced for a certain $i$-th component of the signal measured from the estimated system (3), considered as quasi-deterministic. At the transition time $t^*$, when the FBGM is switched from the IF mode to the SF mode, all the observability indices $\chi_i$ become much bigger than one ($\chi_i \gg 1$, $i = 1, 2, \ldots, n$). This index is helpful for the FBGM analysis in the IF mode.

For filter analysis in the SF mode, a special "filterability index" $\xi_i = q_j/r_i$, $j = 1, 2, \ldots, m$; $i = 1, 2, \ldots, n$, can also be introduced: the ratio of the spectral density of the signal-exciting noise ($Q$) to the spectral density of the measurement error noise ($R$) for each pair of these noise components ($w_j$ and $v_i$, respectively). This index is similar to the observability index $\chi_i$ and is helpful for the FBGM analysis in the SF mode.

A further development of the FBGM idea (11), (12) is a new, simple KBF suboptimal modification, FBGM-Initial/Steady (FBGM-I/S). The idea is very simple: to replace the time-variable FBGM gain $\tilde{K}(t)$, acting only during the transient period of the estimation process, by a constant gain $K_{IF}$ that would accelerate the transient (compared with applying the steady-state gain $K_\infty$ from the very beginning, without $\tilde{K}$).

The assumption is that during this period the accuracy potentially available with the KBF can be sacrificed to make the period shorter.

This modification is as follows

$$\dot{\hat{x}} = F\hat{x} + K\,(z - H\hat{x}), \qquad K = \begin{cases}K_{IF}, & t_0 \le t \le t^*,\\ K_{SF}, & t > t^*,\\ 0, & \text{if } z \equiv 0\end{cases} \tag{13}$$

where $K_{IF}$ and $K_{SF}$ are constant filter gain matrices chosen for the IF and SF modes, respectively. Eq. (13) assumes that $K_{IF}$ is used only during the filter transient process, which should be terminated as soon as possible (within a short time), after which the steady filtering process starts and lasts as long as the measurement vector $z$ is available. This idea just reflects the conventional engineering approach for linear control system design: to widen the system bandwidth during the transient period and to narrow it during steady-state operation. It is clear that the steady-state gain matrix $K_{SF}$ in (13) can be calculated using the KBF formulas (12) for $K_\infty$ (the third and fourth ones). The question of how to determine the gain $K_{IF}$ can be discussed additionally, but this gain should provide the filter with a wider passband and a faster response than $K_{SF}$. Indeed, this gain matrix can simply be designated using conventional engineering criteria, such as stability margin, overshoot, and decay time, considering the filter characteristic polynomial

$$\Delta(s) = \det\!\left(sI - F + KH\right) = s^n + a_1 s^{n-1} + \ldots + a_{n-1}s + a_n \tag{14}$$

and arranging its coefficients $a_i$ so as to provide (14) with the desired roots. For example, the standard coefficients for a multiple root at $-\lambda$ are

$$\Delta(s) = (s + \lambda)^n \tag{15}$$

where $\lambda$ determines the system cutoff frequency (bandwidth), which is approximately the inverse of its response time and is usually limited by the stability margin and by the static error under the action of constant perturbations. Then the matrix $K_{IF}$ can be determined considering the desired coefficients $a_i$ in (14), or the bandwidth $\lambda$ and the standard coefficients (15).
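As an illustration of this pole-placement route, the sketch below uses a simple second-order model (a double integrator measured through its second state, similar to the one used in the example later in the chapter); for that particular model the observer characteristic polynomial is $s^2 + k_{22}s + k_{12}$, so matching $(s+\lambda)^2$ of (15) gives the gains in closed form. The closed-form expressions hold only for this $F$ and $H$, and the value of $\lambda$ is an assumption.

```python
import numpy as np

# Simple second-order model (a double integrator measured through its second state)
F = np.array([[0.0, 0.0], [1.0, 0.0]])
H = np.array([[0.0, 1.0]])

def k_if_from_bandwidth(lam):
    """Gain K_IF placing both observer poles at the multiple root -lam, per (14), (15).

    For this particular F and H, det(sI - F + K H) = s^2 + k22*s + k12, so matching
    (s + lam)^2 gives k22 = 2*lam and k12 = lam**2 in closed form.
    """
    return np.array([[lam ** 2], [2.0 * lam]])

K_IF = k_if_from_bandwidth(lam=50.0)         # lam ~ desired IF bandwidth in rad/s (assumption)
F_cl = F - K_IF @ H
print(np.linalg.eigvals(F_cl))               # both eigenvalues near -50, as designed
```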

However, it is important to note that reliable information about the matrices $Q$ and $R$ is usually not available at almost any stage of system development and operation. This relates especially to the matrix $Q$. Hence, the question of the reliability of the optimal filter gain $K_{SF}$ also arises and can be discussed. In this case, when the matrix $Q$ is not available, it can be accepted as zero, $Q = 0$, and $P$ is then not optimal but at some satisfactory level, $P = D$. Then $K_{SF}$ can be found from the equation

$$F'D + DF'^T + K_{SF}RK_{SF}^T = 0 \tag{16}$$

where $F' = F - K_{SF}H$.

This is the covariance equation for the filter estimation errors caused by the measurement noise $v(t)$:

$$\dot{\tilde{x}} = F'\tilde{x} + K_{SF}\,v \tag{17}$$

In fact, in practice the real physical processes $w(t)$ and $v(t)$ have a structure more complex than Gaussian stationary white noise and can hardly be expressed by the matrices $Q$ and $R$ assumed in KBF theory. However, this abstraction can be helpful for practical needs if some "appropriate" levels of $Q$ and $R$, resulting in a solution compatible with conventional engineering practice, are taken to tune the FBGM.

Practically, there is often no need to take care of the transient mode of the KF, and it can work permanently with the constant gain determined by the formula below for the steady state [16]

$$K = PH^TR^{-1} \tag{18}$$

The steady-state covariance matrix $P$, the solution of the non-linear algebraic matrix Riccati equation below, should be found beforehand

$$FP + PF^T - PH^TR^{-1}HP + GQG^T = 0 \tag{19}$$

This filter can be used simultaneously as an LTI negative-feedback closed-loop controller. Mathematically, the error dynamics can be derived by subtracting (3) from (7) and are presented by the following equation

$$\dot{\tilde{x}} = (F - KH)\tilde{x} + Kv - Gw \tag{20}$$

Eqs. (18)–(20) are used in the example below to illustrate the KF analytical design approach with a simple LTI dynamical system of the second order.
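A compact sketch of this steady-state design path (18)-(20) is shown below: the algebraic Riccati equation is solved once (here with SciPy's solver), the constant gain is formed, and the stability of the closed-loop error dynamics (20) is checked by its eigenvalues. The model and the noise intensities are illustrative assumptions, not values from the chapter.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Illustrative LTI model (3) and noise intensities (assumed values)
F = np.array([[0.0, 0.0], [1.0, 0.0]])   # system dynamics
G = np.array([[1.0], [0.0]])             # disturbance input
H = np.array([[0.0, 1.0]])               # measurement matrix
Q = np.array([[0.1]])                    # disturbance intensity
R = np.array([[0.01]])                   # measurement noise intensity

# Eq. (19): steady-state covariance P from the algebraic Riccati equation (filtering form)
P = solve_continuous_are(F.T, H.T, G @ Q @ G.T, R)
# Eq. (18): constant filter / controller gain
K = P @ H.T @ np.linalg.inv(R)
# Eq. (20): closed-loop error dynamics matrix; all eigenvalues must have negative real parts
F_err = F - K @ H
print("K =", K.ravel())
print("closed-loop poles:", np.linalg.eigvals(F_err))
```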

3.1 Example

Let us consider a stable LTI dynamic system of the second order

$$J\ddot{\alpha} + b\dot{\alpha} + c\alpha = M \tag{21}$$

where $J$ is the system moment of inertia, $b$ is the system damping coefficient, $c$ is the system rigidity coefficient, $\alpha$ is the system angular position, $\dot{\alpha}$ is the system angular velocity, and $M$ is the external torque applied to the rotating part of the system, which has the inertia $J$.

This simple dynamic Eq. (21) can approximately represent many different real technical systems.

In particular, it can represent the sensitive element of a navigation accelerometer (a floating pendulum) with an electric spring [16]. Such devices were commonly used in electro-mechanical Inertial Navigation Systems (INS) in the 1970s-1980s. This device (system model (21)) is considered further in this example.

The accelerometer measures the external torque applied to its pendulum by the inertia force ($F_i = m_0 a$), $M_i = m_0 l a$, caused by its motion with acceleration $a$. In the steady state this torque is balanced by the accelerometer spring torque $M_s = c\alpha$. Hence, the pendulum deviation $\alpha$ allows determining the acceleration $a$

$$\alpha = \frac{m_0 l}{c}\,a \tag{22}$$

To make the pendulum sensitive we need to eliminate the dry friction torque $M_{fr}$ in its bearings. For this purpose the pendulum is usually placed in a sealed chamber filled with inert gas, floating in a viscous fluid inside the accelerometer case. Then the inequality $M_{fr} \ll M_{i\,\min}$ holds. The fluid also creates a viscous torque, damping the pendulum oscillations.

The following relations describe the pendulum parameters: $J = m_0 l^2$ ($m_0$ is the pendulum mass, $l$ is its length), $b$ is the viscous friction coefficient of the pendulum floating chamber, and $c = m_0 g l$ is the pendulum rigidity coefficient created by the pendulum weight. Substituting these parameters in (21) and dividing it by $J = m_0 l^2$, we can represent it as follows

$$\ddot{\alpha} + 2d_0\omega_0\dot{\alpha} + \omega_0^2\alpha = m \tag{23}$$

where $2d_0\omega_0 = \frac{b}{J} = \frac{b}{m_0 l^2}$, $\omega_0^2 = \frac{c}{J} = \frac{g}{l}$, $m = \frac{M}{J} = \frac{m_0 l}{m_0 l^2}a = \frac{1}{l}a$. Hence,

$\omega_0 = \sqrt{\frac{g}{l}}$ is the system natural frequency and $d_0 = \frac{b}{2m_0 l^2\sqrt{g/l}} = \frac{b}{2m_0\sqrt{gl^3}}$ is the system specific damping coefficient.

If the measured acceleration $a$ is constant ($a = \mathrm{const}$), then in the steady state ($t \to \infty$) it follows from (23) that

$$\alpha = \alpha_\infty = \frac{1}{g}\,a \tag{24}$$

In the further consideration we will assume that the device angle $\alpha$ is measured by a kind of electric sensor providing a constant output voltage as follows

$$U_a = k_a\alpha = a \tag{25}$$

where ka=g is the pendulum scale coefficient.

In our case, (23), written in the canonical form of a second-order differential equation, represents the uncontrolled system (the plant).

The purpose of the further design is to find a control for the pendulum (23) that would provide it (the accelerometer) with better dynamics. Usually the viscous fluid provides the floating pendulum with a big damping coefficient $d_0$; however, the "gravity spring" does not have enough rigidity to give the device the required natural frequency ($\omega_0 \ll \omega$). Therefore, we will develop for the pendulum an additional (to the gravity torque) negative-feedback electric positional torque (an "electric spring") that will increase the original pendulum natural frequency $\omega_0$ to the desired value $\omega$. Schematically such an accelerometer is presented in Figure 2.

Figure 2.

Scheme of the accelerometer with the electric spring.

Two approaches are considered below: A, empirical design (based on engineering experience and continuity with the conventional design), and B, analytical design (based on the suboptimal KF (FBGM) used as a linear controller).

3.2 Empirical approach

Eq. (23) can be presented in Laplace transform form as follows

$$\left(s^2 + 2d_0\omega_0 s + \omega_0^2\right)\alpha(s) = \frac{1}{l}\,a(s) \tag{26}$$

where $s$ is the Laplace variable, and $\alpha(s)$ and $a(s)$ are the Laplace transforms of the output pendulum angle and the input acceleration, respectively.

Let us apply, in the direction opposite to the pendulum deviation, the control torque $M_c$ that would physically be provided by a special electric motor (in the "braking mode") installed on the pendulum rotation shaft and controlled by the voltage from the sensor of the pendulum deviation angle $\alpha$ through an electronic amplifier. This negative feedback is the accelerometer "electric spring" (see Figure 2).

Mathematically, this specific control torque added to (23) can be presented as follows

$$m_c = \frac{M_c}{J} = k_s(\alpha + \Delta\alpha) + \delta(\dot{\alpha} + \Delta\dot{\alpha}) \tag{27}$$

where $m_c$ is the specific control torque applied by the "electric spring" (27), $k_s = \frac{k_\alpha k_{am}k_{Mc}}{J}$ is the electric spring rigidity coefficient, $k_\alpha$ is the angle $\alpha$ sensor scale coefficient, $k_{am}$ is the amplifier scale coefficient, $k_{Mc}$ is the control (torque) motor scale coefficient, and $\Delta\alpha$ is the angular sensor measurement error.

Substituting (27) in (26), we can rewrite it as follows

$$\left(s^2 + (2d\omega + \delta)s + \omega^2\right)\alpha(s) = \omega_0^2\,n(s) - (\delta s + k_s)\,\Delta\alpha(s) \tag{28}$$

where $\omega^2 = \omega_0^2 + k_s = k_s\left(1 + \frac{\omega_0^2}{k_s}\right)$, $d = \frac{b}{2J\omega} = d_0\frac{\omega_0}{\omega}$, $n = \frac{a}{g}$ is the measured input overload, and $\delta$ is a small additional damping coefficient created by the counter-electromotive force in the winding of the torque motor of the electric spring. It is neglected in the further consideration ($\delta \approx 0$).

The output voltage can be determined as in (25), measuring the angle $\alpha$: $U = k_a\alpha$.

The scale coefficient of the accelerometer is determined by the following formula

$$k_a = l\omega^2 = l\left(\omega_0^2 + k_s\right) \tag{29}$$

Usually the ratio $\varepsilon = \frac{k_s}{\omega_0^2}$ is much bigger than one ($\varepsilon \gg 1$, or $\frac{1}{\varepsilon} \ll 1$); then the following approximate formulas hold for the accelerometer dynamic parameters

$$\omega \approx \sqrt{k_s} = \sqrt{\varepsilon}\,\omega_0 \tag{30}$$
$$d \approx \frac{d_0}{\sqrt{\varepsilon}} \tag{31}$$

The coefficient $\sqrt{\varepsilon} = \frac{\omega}{\omega_0}$ is the accelerometer bandwidth increase factor. We can see from (30) and (31) that the electric spring increases the accelerometer natural frequency and in the same ratio decreases its damping coefficient. That is why the original damping coefficient of the device has to be chosen much bigger than is usually required for a second-order dynamic unit ($d \approx 0.707$).

Generalizing the results of the empirical design approach to the electric spring accelerometer presented above, we can conclude, as a rule of thumb, that using negative positional feedback increases the dynamic system bandwidth but decreases its damping coefficient.

Numerical data and simulation results for approach A are presented below.

3.3 Free pendulum

Pendulum parameters

$m_0 = 5\cdot10^{-3}$ kg, $l = 0.02$ m, $J = m_0 l^2 = 2\cdot10^{-6}$ kg·m², $b = 6.295\cdot10^{-4}$ N·m/(rad/s), $c = m_0 g l = 9.8\cdot10^{-4}$ N·m/rad, $k_{a0} = 9.8$ V·s²/m

The pendulum standard dynamics coefficients can be calculated as follows

$\omega_0 = 22.136\ \mathrm{s^{-1}}$, $f_0 = 3.523$ Hz, $T_0 = 0.0452$ s, $d_0 = 7.1$
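These coefficients can be reproduced directly from the listed pendulum parameters; a short sketch (the formulas follow (23) and the definitions of $\omega_0$ and $d_0$ above):

```python
import numpy as np

# Pendulum parameters from the text
m0, l, b, g = 5e-3, 0.02, 6.295e-4, 9.8
J = m0 * l ** 2                  # 2e-6 kg*m^2
c = m0 * g * l                   # 9.8e-4 N*m/rad

omega0 = np.sqrt(g / l)          # natural frequency, ~22.14 1/s
f0 = omega0 / (2 * np.pi)        # ~3.52 Hz
T0 = 1 / omega0                  # ~0.045 s
d0 = b / (2 * J * omega0)        # specific damping coefficient, ~7.1
print(omega0, f0, T0, d0)
```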

The Simulink simulation scheme is presented in Figure 3.

Figure 3.

Simulink block-diagram of the floating pendulum-system model.

Step-response of this pendulum is presented in Figure 4.

Figure 4.

Step response of the pendulum to its input acceleration $a = 1\ \mathrm{m/s^2}$.

The pendulum with the electric spring was simulated with a similar Simulink scheme and methodology. Simulation results are presented below.

3.4 Pendulum with the electric spring-accelerometer

3.4.1 Desired coefficients

Let us assume that we want to increase the device natural frequency, which characterizes its bandwidth, ten-fold, i.e., to have the ratio $\lambda = \frac{f}{f_0} = 10$. Then the accelerometer will have the following dynamics parameters: $\omega = 221.36$ rad/s, $f = 35.24$ Hz, $d = 0.71$. The electric spring specific rigidity coefficient is $k_s = \omega^2 - \omega_0^2 = 4.851\cdot10^4\ \mathrm{rad/s^2}$ ($k_s = K_s/J$), and the scale coefficient is $k_a = 980\ \mathrm{V\cdot s^2/m}$.
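The same numbers follow directly from the free-pendulum coefficients and the chosen bandwidth ratio; a short sketch of this arithmetic:

```python
import numpy as np

g, l = 9.8, 0.02
omega0 = np.sqrt(g / l)          # ~22.14 rad/s, free-pendulum natural frequency
d0 = 7.1                         # free-pendulum specific damping (previous subsection)
lam = 10.0                       # desired bandwidth increase, f/f0 = 10

omega = lam * omega0             # ~221.4 rad/s
f = omega / (2 * np.pi)          # ~35.2 Hz
d = d0 / lam                     # ~0.71, per (31)
k_s = omega ** 2 - omega0 ** 2   # electric-spring specific rigidity, ~4.85e4 rad/s^2
k_a = l * omega ** 2             # accelerometer scale coefficient per (29), ~980 V*s^2/m
print(omega, f, d, k_s, k_a)
```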

The accelerometer Simulink simulation scheme is presented in Figure 5.

Figure 5.

Simulink block-diagram of the Electric Spring Accelerometer.

Measurement random errors $\Delta\alpha$ (electronic noise) were introduced as band-limited Gaussian white noise with $\sigma_{\Delta\alpha} = 0.01^\circ$ and $\Delta t = 0.001$ s; $f_m = \frac{1}{\Delta t} = 1$ kHz (see the White Noise generator block in pink in Figure 5). This measurement noise is presented in Figure 6.

Figure 6.

Measurement random electronic noise: $\sigma_{\Delta\alpha} = 0.01^\circ$, $\Delta t = 0.001$ s.

The step response of the accelerometer deviation angle $\alpha$ is presented in Figure 7.

Figure 7.

Step response $\alpha(t)$ of the accelerometer to its input acceleration $a = 1\ \mathrm{m/s^2}$; $\alpha(t\to\infty) = 0.058^\circ$, $\sigma_\alpha \approx 0.005^\circ$.

The filtering capability of this device can be evaluated by finding the STD $\sigma_\alpha$ of its output angle caused by the electronic noise $N = \sigma_{\Delta\alpha}^2\Delta t$ of the electric spring. This STD can be calculated with the formula following from [24]. Using this equation for the steady state, we find that

$$\sigma_\alpha = \sqrt{\frac{\pi f}{d\,f_N}}\;\sigma_{\Delta\alpha} \tag{32}$$

where $f$ is the accelerometer natural frequency in Hz, $d$ is the accelerometer specific damping coefficient, $f_N = \frac{1}{\Delta t}$ is the frequency range in Hz where the noise can practically be considered "white", and $\Delta t$ is the noise sampling time. Using (32) we can calculate that in our example the ratio $r = \frac{\sigma_\alpha}{\sigma_{\Delta\alpha}} = 0.395$. That is, the filtering capability of the accelerometer reduces its random electronic noise by a factor of approximately 2.5.
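This estimate is easy to reproduce; a one-line check of (32) with the accelerometer numbers from above:

```python
import numpy as np

f, d, f_N = 35.24, 0.71, 1000.0              # bandwidth [Hz], damping, noise band [Hz]
ratio = np.sqrt(np.pi * f / (d * f_N))       # Eq. (32): sigma_alpha / sigma_dalpha
print(ratio)                                 # ~0.395, i.e. the noise is reduced ~2.5 times
```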

As we can see from the above results, the empirical approach A allowed us to obtain quite satisfactory accelerometer dynamics.

3.4.2 Analytical design approach

Let us introduce the simplest second-order unit to use as the system (future accelerometer sensor) model

$$\dot{x}_1 = w, \qquad \dot{x}_2 = x_1 \tag{33}$$

where $w$ is the system disturbance and $x_1$, $x_2$ are the system state vector variables.

Let us assume that the state vector variable $x_2$ is measured:

$$z = x_2 + v \tag{34}$$

where $v$ is an additive measurement error.

Let us also assume that $w$ and $v$ are Gaussian white noises having spectral densities $q$ and $r$, respectively, and find the optimal KF minimizing the STD of the estimated system state vector random errors [14].

Then the KF for (33) and (34) can be written as follows

$$\dot{\hat{x}}_1 = k_{12}\,(z - \hat{x}_2), \qquad \dot{\hat{x}}_2 = \hat{x}_1 + k_{22}\,(z - \hat{x}_2) \tag{35}$$

The control gains in (35) can be calculated with the formulas

$$k_{12} = \frac{P_{12}}{r}, \qquad k_{22} = \frac{P_{22}}{r} \tag{36}$$

where $P_{ij}$ are the elements of the steady-state covariance matrix $P$ that can be found by solving the algebraic Riccati equation [18] for (33)-(35).

The case considered above allows for an analytical solution of the Riccati equation [18], which is as follows [9]

$$P_{12} = \sqrt{qr} = r\sqrt{\xi}, \qquad P_{22} = \sqrt{2rP_{12}} = r\sqrt{2\sqrt{\xi}}, \qquad P_{11} = \frac{1}{r}P_{12}P_{22} = r\sqrt{2\xi\sqrt{\xi}} \tag{37}$$

where $\xi = \frac{q}{r}$ is the ratio of the system exciting noise spectral density to the measurement error noise spectral density and has the meaning of "filterability". Substituting $P_{12}$ and $P_{22}$ from (37) into (36), we can determine the filter coefficients with the formulas

$$k_{12} = \sqrt{\xi}, \qquad k_{22} = \sqrt{2}\,\sqrt[4]{\xi} \tag{38}$$

The KF (35) can be used as a closed-loop negative-feedback controller [9]. This closed-loop system can be obtained if we subtract (33) from (35). It is presented by the following equations

$$\dot{\tilde{x}}_1 = -k_{12}\tilde{x}_2 + k_{12}v - w, \qquad \dot{\tilde{x}}_2 = \tilde{x}_1 - k_{22}\tilde{x}_2 + k_{22}v \tag{39}$$

where $\tilde{x}_1 = \hat{x}_1 - x_1$ and $\tilde{x}_2 = \hat{x}_2 - x_2$ are the errors of control of the closed-loop system (39).

This system can be converted to the following form

$$\ddot{\alpha} + k_{22}\dot{\alpha} + k_{12}\alpha = w + k_{22}\dot{v} + k_{12}v \tag{40}$$

Let us introduce the designations $k_{22} = 2d\omega$, $k_{12} = \omega^2$; then (40) can be rewritten in the following Laplace transform form

$$\left(s^2 + 2d\omega s + \omega^2\right)\alpha(s) = w(s) + \left(2d\omega s + \omega^2\right)v(s) \tag{41}$$

where $\omega = \sqrt{k_{12}} = \sqrt[4]{\xi}$ is the filter natural frequency and $d = \frac{k_{22}}{2\sqrt{k_{12}}} = \frac{\sqrt{2}}{2} = 0.707$ is the filter specific damping coefficient.
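The closed-form result (37), (38) is convenient to verify numerically. The sketch below computes the gains, $\omega$ and $d$ for an illustrative filterability ratio $\xi$ (the particular value is an assumption) and cross-checks them against a numerical solution of the algebraic Riccati equation.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def analytic_gains(xi):
    """Closed-form steady-state gains (37), (38) for the model (33), (34)."""
    return np.sqrt(xi), np.sqrt(2.0) * xi ** 0.25

xi = 1.0e4                                   # illustrative filterability ratio q/r (assumption)
k12, k22 = analytic_gains(xi)
omega = xi ** 0.25                           # filter natural frequency per (41)
d = k22 / (2.0 * np.sqrt(k12))               # specific damping, sqrt(2)/2 ~ 0.707 for any xi
print(k12, k22, omega, d)

# Cross-check against a numerical solution of the algebraic Riccati equation for (33), (34)
F = np.array([[0.0, 0.0], [1.0, 0.0]])
G = np.array([[1.0], [0.0]])
H = np.array([[0.0, 1.0]])
q, r = xi, 1.0                               # with r = 1, xi = q/r = q
P = solve_continuous_are(F.T, H.T, G * q @ G.T, np.array([[r]]))
print((P @ H.T / r).ravel())                 # should reproduce [k12, k22]
```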

Let us put in (40) $w = \frac{1}{l}a = \omega_0^2 n$, where $n = \frac{a}{g}$ is the overload and $\omega_0^2 = \frac{g}{l}$; then (41) can be rewritten as follows

$$\left(s^2 + 2d\omega s + \omega^2\right)\alpha(s) = \omega_0^2\,n(s) + \left(2d\omega s + \omega^2\right)v(s) \tag{42}$$

Now this equation (42) may represent the accelerometer with the electric spring, like (28). However, unlike (28), (42) (and (41)) presumes that all the system damping is created electrically (by the negative feedback compensation torque, the spring). Nevertheless, the similarity of the transfer functions (output/input) of (28) and (42) allows us to note some generic features of the optimal second-order LTI unit. It has already been found that the optimal specific damping coefficient must be 0.707 ($d = 0.707$), and this fact does not depend on the statistical characteristics of $w$ and $v$. Let us determine the ratio $\xi$ for (42)

$$\xi = \frac{q}{r} = \frac{\omega_0^4\,\sigma_n^2\,\Delta t_a}{\sigma_{\Delta\alpha}^2\,\Delta t_{\Delta\alpha}} \tag{43}$$

Then the natural frequency for (42) will be determined by the formula

$$\omega = \sqrt[4]{\xi} = \omega_0\sqrt{\frac{\sigma_n}{\sigma_{\Delta\alpha}}}\;\sqrt[4]{\frac{\Delta t_a}{\Delta t_{\Delta\alpha}}} \tag{44}$$

Using the analytical design approach B, we can hardly count on the availability of reliable statistical characteristics of $w$ and $v$. Rather, we can set them empirically from the standpoint of common sense. For example, suppose we want to get the same system as with approach A. In other words, we want to have the same ratio $\lambda = \frac{\omega}{\omega_0} = 10$ as in example A; then we must assume that $\sqrt{\frac{\sigma_n}{\sigma_{\Delta\alpha}}\sqrt{\frac{\Delta t_a}{\Delta t_{\Delta\alpha}}}} = 10$, or $\frac{\sigma_n}{\sigma_{\Delta\alpha}}\sqrt{\frac{\Delta t_a}{\Delta t_{\Delta\alpha}}} = 10^2$. This ratio can be rewritten as follows

$$\frac{\sigma_n}{\sigma_{\Delta\alpha}} = 10^2\sqrt{\frac{f_{\Delta\alpha}}{f_a}} \tag{45}$$

Let us assume, as in approach A, that $f_{\Delta\alpha} = 10^3$ Hz and accept that $f_a \ge f$, $f = 35.24$ Hz, $f_a = 100$ Hz. Then, as follows from (45), for the equivalency of the two approaches, namely A and B, the following equality should hold: $\sigma_n = 316\,\sigma_{\Delta\alpha}$ (where $\sigma_{\Delta\alpha} = 0.01^\circ = 0.017$ rad). Hence, $\sigma_n = 5.5$, $\sigma_a = 5.5g = 54\ \mathrm{m/s^2}$.

The simulation result of extraction by the KF (39) (the accelerometer) of a noise-like signal from the white measurement noise is presented in Figure 8.

Figure 8.

Extraction by the synthesized KF accelerometer of the noise-like signal from the measured white noise. (a) Input noise: $\sigma_a = 54\ \mathrm{m/s^2}$, $\Delta t_a = 0.01$ s; (b) estimates $\hat{a}$; (c) measurement noise: $\sigma_{\Delta\alpha} = 0.01^\circ$, $\Delta t_{\Delta\alpha} = 0.001$ s.

We can see that the accelerometer synthesized using the KF is quite efficient at measuring not only step-like input accelerations (as was shown with approach A) but white-noise-like accelerations as well. If the numerical data considered above are accepted, then both approaches lead to the same system (accelerometer) dynamics.

The KF results presented above are rather qualitative, assuming a dry accelerometer (without damping by the viscous fill fluid), where the damping is created electrically (in the negative positional feedback). Basically, they can be appropriately converted and applied to the practical design. However, a more accurate KF synthesis for the accelerometer (23) can be performed considering the following model

$$\dot{x}_1 = -2d_0\omega_0 x_1 - \omega_0^2 x_2 + \omega_0^2 w, \qquad \dot{x}_2 = x_1, \qquad z = x_2 + v \tag{46}$$

where $x_1 = \dot{\alpha}$, $x_2 = \alpha$, $v = \Delta\alpha$, $w = \frac{a}{g}$.

$$\dot{\hat{x}}_1 = -2d_0\omega_0\hat{x}_1 - \omega_0^2\hat{x}_2 + k_{12}\,(z - \hat{x}_2), \qquad \dot{\hat{x}}_2 = \hat{x}_1 + k_{22}\,(z - \hat{x}_2) \tag{47}$$

where $k_{12} = \frac{P_{12}}{r}$, $k_{22} = \frac{P_{22}}{r}$.

Using this model, readers can analyze the presented example more precisely. The author leaves this option as an independent exercise for the reader.
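As a possible starting point for that exercise, the sketch below sets up the model (46) with the pendulum parameters used earlier, solves the algebraic Riccati equation numerically, and extracts the gains of (47); the noise intensities q and r are illustrative assumptions, not values fixed by the chapter.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Accelerometer model (46): x1 = angular rate, x2 = angle, w = a/g (overload), z = x2 + v
g, l, d0 = 9.8, 0.02, 7.1
omega0 = np.sqrt(g / l)
F = np.array([[-2.0 * d0 * omega0, -omega0 ** 2],
              [1.0, 0.0]])
G = np.array([[omega0 ** 2], [0.0]])
H = np.array([[0.0, 1.0]])

# Assumed (illustrative) noise intensities for the overload and the angle sensor
q = np.array([[1.0e-2]])
r = np.array([[1.0e-8]])

# Steady-state covariance and the gains k12, k22 of the filter (47)
P = solve_continuous_are(F.T, H.T, G @ q @ G.T, r)
k12, k22 = (P @ H.T / r[0, 0]).ravel()
print(k12, k22)
```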


4. Conclusion

In the example above the KF was used for the analytical design, to find the optimal dynamic characteristics of a navigation accelerometer. Optimal tunings of the device coefficients were found, which can easily be implemented in the accelerometer hardware using analogue elements (a floating pendulum, precision bearings, an electric angle sensor, an electronic amplifier and a torque motor). Comparing the results with the conventional design, we can conclude that the design is as stable and robust as that of the conventionally designed accelerometer. Generalizing, we can conclude that using the KF for analytical design in engineering applications leads to quite realistic results that can be verified against conventional solutions. It can also be noted that, even with the analytical design, the choice of appropriate values for the KF matrices $Q$ and $R$ is based rather on the developer's experience and intuition than on the real statistical characteristics of the processes $w(t)$ and $v(t)$, which are unlikely to be available.


Acknowledgments

The author dedicates this chapter to his mentor, colonel, professor of the Moscow Aviation Institute and Zhukovsky Air Force Engineering Academy, V. P. Selesnev, who introduced him to Kalman filtering applications.

References

  1. Kalman RE. A new approach to linear filtering and prediction problems. ASME Journal of Basic Engineering. 1960;82(1):35-45
  2. Kalman RE, Bucy RS. New results in linear filtering and prediction theory. ASME Journal of Basic Engineering. 1961;83(1):95-108
  3. Kim YV. An approach to suboptimal filtering in applied problems of information processing. USSR Journal of Science Academy. 1990;1(1):109-123
  4. Kim YV, Kobzov PP. Optimal filtering of a polynomial signal. USSR Journal of Science Academy. 1991;2(1):120-133
  5. Kim YV, Nazarov AB. Synthesis of optimal dynamic characteristics for a single axis Inertial Navigation System in a steady state. LITMO. 1975;18:80-85
  6. Kim YV, Goncharenko G. On an approach to observability analysis in INS correction problems. Orientation and Navigation. 1981;11:25-28
  7. Kim YV, Ovseevich AI, Reshetnyak YN. Comparison of stochastic and guaranteed approaches to the estimation of the state of dynamic system. USSR Journal of Science Academy. 1992;2(1):87-94
  8. Kim YV. Kalman filter decomposition in the time domain using observability index. In: International Conference Proceedings. New York: Curran Associates Inc; 2008
  9. Kim UV. Kalman filter and satellite attitude control system analytical design. International Journal of Space Science and Engineering. 2020;6(1):94-95
  10. Zarchan P, Musoff H. Fundamentals of Kalman Filtering: A Practical Approach. Reston, VA: American Institute of Aeronautics and Astronautics, Inc.; 2000
  11. Vukovich G, Kim Y. The Kalman Filter as controller: Application to satellite formation flying problem. International Journal of Space Science and Engineering. 3(2):148-170
  12. Bryson AE, Ho YC. Applied Optimal Control. Levittown, PA: Taylor & Francis; 1975
  13. Chernousko FL, Kolmanovskiy VB. Optimal Control under Random Disturbances. Moscow, Russia: Nauka; 1978
  14. Gelb A. Applied Optimal Estimation. Cambridge, MA: The M.I.T. Press; 1974
  15. Krasovsky A. Automatic Flight Control Systems and Analytical Design. Russia; 1973
  16. Seleznev VP. Navigation Devices. Moscow, Russia: Mashinostroenie; 1974
  17. Bellman R. Dynamic Programming. Princeton, NJ: Princeton University Press; 1957
  18. Kwakernaak H, Sivan R. Linear Optimal Control Systems. Russia: Mir; 1972. pp. 83-88
  19. MathWorks, Inc. Kalman Filtering - MATLAB and Simulink Documentation. The MathWorks, Inc. Available at: https://www.mathworks.com [accessed August 2022]
  20. Pontryagin L, Boltyansky V, Gamkrelidze R, Mishchenko E. Mathematical Theory of Optimal Processes. Moscow, Russia: Nauka; 1976
  21. Aarenstrup R. Managing Model-Based Design. Natick, MA: The MathWorks, Inc; 2015
  22. Letov A. Flight Dynamics and Control. Moscow, Russia: Nauka; 1979
  23. MathWorks, Inc. Training Course: Adopting Model-Based Design. Natick, MA: The MathWorks, Inc; 2005
  24. Kim YV, Ovseevich AI, Reshetnyak YN. Comparison of stochastic and guaranteed approaches to the estimation of the state of dynamic system. USSR Journal of Science Academy (English translation: Scripta Technica Inc., New York). 1992;2:87-94
