Open access peer-reviewed chapter - ONLINE FIRST

Perspective Chapter: Approximate Kalman Filter Using M-Robust Estimate Dynamic Stochastic Approximation with Parallel Adaptation of Unknown Noise Statistics by Huber’s M-Robust Parameter Estimator

Written By

Branko Kovačević, Zoran Banjac and Tomislav Unkašević

Submitted: 14 December 2023 Reviewed: 20 January 2024 Published: 05 April 2024

DOI: 10.5772/intechopen.1004294


From the Edited Volume

Kalman Filters - Theory, Applications, and Optimization [Working Title]

Dr. Asadullah Khalid, Dr. Arif I. Sarwat and Ph.D. Hugo Riggs


Abstract

The problem of designing a feasible adaptive M-robustified Kalman filter for a thick-tailed Gaussian environment, characterized by impulsive noise inducing observation and innovation outliers, and/or errors in the mathematical model inducing structural outliers, is considered. Firstly, a time-varying criterion is used to generate a family of dynamic stochastic approximation algorithms. The M-robust estimate stochastic approximation, whose gain is derived by minimizing an approximate minimum variance criterion, is combined with the one-step minimum mean square error prediction to design an M-robust estimate Kalman filter. Finally, the latter is combined with Huber's moving window M-robust parameter estimator of the unknown noise statistics, operating in parallel with the M-robust state estimation, to design an adaptive M-robust estimate Kalman filter. A simulated maneuvering target tracking scenario revealed that the proposed adaptive M-robust estimate-based Kalman filter significantly improves the target estimation and tracking quality, being effective in suppressing multiple outliers with contamination degrees below thirty percent. The achieved improvement comes at the cost of additional computational effort, which, however, is usually not significant enough to prevent real-time application.

Keywords

  • Kalman filter
  • adaptive filter
  • non-linear filter
  • non-Gaussian noise
  • impulsive noise
  • outliers
  • robust estimation
  • target tracking

1. Introduction

An optimal estimate of a stochastic variable, or a stochastic process, from noisy measurements is the one that maximizes, or minimizes, a suitable performance index, examples of the latter being least squares (LS), maximum a posteriori probability (MAP), maximum likelihood (ML), minimum variance (MV), minimum mean-square error (MMSE), least absolute value (LAV), etc. [1, 2, 3, 4, 5]. Moreover, the optimization criteria usually depend on suppositions about the statistical properties of the system parameters and/or states that have to be estimated. One of the most important contributions to the estimation field, significant from both theoretical and practical perspectives, is the Kalman filter (KF) [1, 2, 3, 4, 5]. However, if the system dynamics, and the associated measurements, are under the influence of severe nonlinear effects that cannot be linearized adequately, and/or the noises are thick-tailed Gaussian distributed, the KF performance deteriorates, and the filter may even cease to work [6, 7]. In such situations, a bounded estimation error, in the statistical sense, may be obtained by utilizing the dynamic stochastic approximation [8]. The dynamic stochastic approximation performs quite well in various practical problems, involving optimization, state and parameter estimation, as well as pattern recognition and classification [9, 10, 11, 12].

Additionally, in numerous practical situations, the real noises are thick-tailed Gaussian distributed, generating rare but high-intensity samples, named observation and innovation outliers, which corrupt the mostly normally distributed measurements. A third type of outliers, named structural outliers, is caused by errors in the mathematical model, such as unmodeled system dynamics, time-varying bias, computational errors, etc. Since the KF is a linear signal processor that linearly transforms observations to calculate the system state updates, it is susceptible to outlying samples, i.e., it is not robust. Therefore, it is of interest to design a robustified modification of the optimal KF that is able to manage a local non-Gaussian noise vicinity, typified by impulsive disturbances inducing unavoidable outliers [13, 14, 15, 16, 17, 18]. In such situations, robust methods of mathematical statistics provide appropriate tools to cope with outliers and reduce their influence on the estimation quality [19, 20, 21, 22, 23, 24, 25, 26]. Moreover, Huber's approximate maximum likelihood, or M-robust, approach is commonly applied by practitioners [21]. The history of research activities in the field of robust system parameter and/or state estimation through the application of different schemes, combining the Huber M-robust estimator and the LS, or KF, techniques, is rather long, and many articles are devoted to these topics [17, 18, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37]. The emphasized solutions can be divided into two classes. The first one contains the batch processing, off-line, or non-recursive estimation procedures, in which the KF state estimation formulation is transformed into an associated parameter regression problem, the latter being resolved by utilizing the Huber M-robust approach [28, 29, 30, 31, 32].
The so-attained robust procedure requires an iterated numerical method, such as the classical Newton method or its appropriate modification, as well as iteratively reweighted LS [24, 25, 29, 30, 31, 32]. Here, the robustness feature is obtained by simultaneous processing of predictions and observations, making the procedure efficient in reducing the effects of outliers, but at the cost of larger numerical effort. The second class contains the recursive, sequential, or online parameter and state estimators, motivated by practical requirements for fast computation in real-time signal processing. These estimators represent a good trade-off between computational complexity and estimation quality [13, 14, 15, 16, 17, 18, 33, 34, 35, 36, 37]. In particular, an approximate KF was proposed in the recent past, where the M-robust estimate dynamic stochastic approximation is used for the measurement update [17, 18]. Unfortunately, the M-robust scheme lacks sufficient adaptability: it has a lower quality upon purely normally distributed observations, and may also degrade its performance otherwise [17, 18, 27, 33, 34, 35, 36, 37]. Therefore, an adaptation to the local stochastic environment is needed. An appropriate approach to solve this problem, utilizing a combination of the dynamic stochastic approximation and a general time-varying M-robust criterion, together with the parallel Huber's moving window M-robust parameter estimator of the first and second-order unknown observation and state noise statistics, is presented in this manuscript. The organization of the manuscript is as follows. An overview of the KF and a brief presentation of the robustness concepts are given in Section 2. Section 3 considers the design of the M-robust estimate KF. Section 4 is devoted to designing an adaptive M-robust estimate KF. The proposed approach combines the M-robust estimate KF and Huber's moving window M-robust parameter estimator of unknown noise statistics.
Simulation results that illustrate the theoretical derivation, and clearly demonstrate the merits of the emphasized adaptive M-robust estimate KF, using a realistic maneuvering target tracking example, are given in Section 5. Concluding remarks are presented in Section 6.


2. Overview of Kalman filtering technique

The optimal KF performance relies on the recursive linear structure and the minimum variance procedure for calculating the gain sequence. Implementation of the optimal KF assumes both the system dynamics and the associated measurement process to be given by the linear, discrete-time, state-space representation

x_{k+1} = F_k x_k + G_k w_k;  y_k = H_k x_k + v_k    (1)

where x_k is the n×1 system state, y_k is the r×1 multidimensional observation, w_k is the m×1 state noise, and v_k is the r×1 observation noise at the discrete time, k. Furthermore, the sequences, v_k and w_k, are zero-mean white stochastic disturbances, mutually uncorrelated, as well as uncorrelated with the random initial state, x_0, which has mean value, m_0, and covariance, P_0, that is

E{ [w_k; v_k] [w_j^T  v_j^T] } = [Q_k δ_{kj}, 0; 0, R_k δ_{kj}];  E{ [w_k; v_k] (x_0 − m_0)^T } = 0    (2)

where E{·} is the mathematical expectation, and δ_{kj} is the Kronecker delta symbol (δ_{kj} = 0 for k ≠ j and δ_{kj} = 1 for k = j). Here, F_k is the n×n state transition matrix, G_k is the n×m state noise transition matrix, and H_k is the r×n observation matrix.

Let x̂_{k/j} = E{x_k | Y_j}, (j = k−1, k), with E{· | ·} being the conditional mathematical expectation, represent the optimal LS estimate of the present state, x_k, given the observation set Y_j = {y_i, i ≤ j}, while P_{k/j} = E{(x_k − x̂_{k/j})(x_k − x̂_{k/j})^T} designates the underlying error covariance. Then, the optimal KF relations are given by [1, 2, 3, 4, 5]:

time update (prediction):

x̂_{k+1/k} = F_k x̂_{k/k};  P_{k+1/k} = F_k P_{k/k} F_k^T + G_k Q_k G_k^T    (3)

measurement update (estimation, filtering):

x̂_{k/k} = x̂_{k/k−1} + K_k ε_k;  ε_k = y_k − H_k x̂_{k/k−1}    (4)
K_k = P_{k/k−1} H_k^T S_k^{−1};  S_k = H_k P_{k/k−1} H_k^T + R_k    (5)
P_{k/k} = (I − K_k H_k) P_{k/k−1};  K_k = P_{k/k} H_k^T R_k^{−1}    (6)

Thus, the filter can be initialized with

x̂_{0/0} = E{x_0} = m_0 = 0;  P_{0/0} = E{(x_0 − m_0)(x_0 − m_0)^T} = P_0    (7)

Here, the measurement residual, or innovation, ε_k in (4), is a zero-mean white stochastic sequence with the variance S_k in (5).
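As a concrete illustration, the recursion (3)-(7) can be sketched in a few lines of Python with NumPy; the constant-velocity model and all numerical values below are illustrative assumptions, not taken from the chapter:

```python
import numpy as np

def kf_step(x_hat, P, y, F, G, H, Q, R):
    """One cycle of the optimal KF, Eqs. (3)-(7): time update
    (prediction) followed by measurement update."""
    # time update (prediction), Eq. (3)
    x_pred = F @ x_hat
    M = F @ P @ F.T + G @ Q @ G.T
    # measurement update, Eqs. (4)-(6)
    eps = y - H @ x_pred                      # innovation, Eq. (4)
    S = H @ M @ H.T + R                       # innovation variance, Eq. (5)
    K = M @ H.T @ np.linalg.inv(S)            # Kalman gain, Eq. (5)
    x_new = x_pred + K @ eps                  # state estimate, Eq. (4)
    P_new = (np.eye(len(x_hat)) - K @ H) @ M  # error covariance, Eq. (6)
    return x_new, P_new

# toy constant-velocity model (illustrative values only)
F = np.array([[1.0, 1.0], [0.0, 1.0]])
G = np.array([[0.5], [1.0]])
H = np.array([[1.0, 0.0]])
Q = np.array([[0.01]])
R = np.array([[1.0]])
x, P = np.zeros(2), np.eye(2)                 # initialization, Eq. (7)
x, P = kf_step(x, P, np.array([1.2]), F, G, H, Q, R)
print(x, P)
```

The posterior covariance stays symmetric and positive definite, as the theory requires.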

However, since the optimal KF linearly transforms observations, the state updates are not bounded, unless the system states are independent of the observations, in which case the state updates are identically zero. In this sense, the linear optimal KF is susceptible to outliers, i.e., it is not robust.

Thus, in the case of Gaussian distributed observations contaminated by outliers, two estimators are frequently used by practitioners [14, 29, 30]. In this sense, if the observation noise pdf is given by the δ-contaminated Gaussian pdf

CN(δ, σ_n², σ_0²) = (1 − δ) N(0, σ_n²) + δ N(0, σ_0²);  0 ≤ δ < 1,  σ_0² ≫ σ_n²    (8)

then there are two possible procedures to handle outliers:

  1. the standard KF treats the zero-mean Gaussian distributed observations with the increased known variance

    R = (1 − δ) σ_n² + δ σ_0²    (9)

  2. the nominal KF ignores the abnormality, or outliers, and simply treats the Gaussian distributed observations with the given nominal variance, R = σ_n².

In many cases of non-Gaussian observations, the standard KF works fairly well, while the nominal one produces bad results and may even diverge [6, 7]. However, for complicated observation pdfs, and particularly if the non-Gaussianness is of a thick-tailed variety, as in (8), giving rise to observation and innovation outliers, the standard KF performance may also be quite poor [13, 14, 15, 16, 17, 18, 28, 29, 30, 31, 32, 33].
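The contamination model (8) and the standard-KF variance (9) are easy to verify numerically. The sketch below, with assumed values δ = 0.1, σ_n = 1, and σ_0 = 10, draws samples from (8) and compares their empirical variance with (9):

```python
import numpy as np

rng = np.random.default_rng(0)

def contaminated_normal(n, delta, sigma_n, sigma_0):
    """Draw n samples from the delta-contaminated pdf (8): with
    probability delta from N(0, sigma_0^2), else from N(0, sigma_n^2)."""
    u = rng.uniform(size=n)
    scale = np.where(u <= delta, sigma_0, sigma_n)
    return rng.normal(0.0, 1.0, size=n) * scale

delta, sn, s0 = 0.1, 1.0, 10.0                      # assumed setting
R_standard = (1 - delta) * sn**2 + delta * s0**2    # Eq. (9)
v = contaminated_normal(200_000, delta, sn, s0)
print(R_standard, v.var())                          # the two values nearly agree
```

The empirical variance converges to the mixture variance (9), which is what the "standard KF" of item 1 assumes as known.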

Formal mathematical definitions of robustness are given in mathematical statistics [19, 20, 21, 22, 23], and four such definitions exist. Two of them are ad hoc and data-oriented, named resistant and efficiency robustness, and are preferred by practitioners. An estimate is resistantly robust if it is insensitive both to a single outlying data point and to patchy, or grouped, ones. Additionally, an estimator is efficiently robust if it produces a high performance upon both normally distributed measurements and normal ones under contamination, involving observation, innovation, and structural outliers [19, 20, 21]. Therefore, practical robustness involves both the efficiency and resistance properties. The other two definitions, named min-max and qualitative robustness, have strong theoretical foundations [21, 23]. The qualitative robustness utilizes Hampel's influence function, measuring a robust estimator's ability to cope with outliers [23]. On the other hand, the min-max robustness rests on minimization of the estimator's worst-case performance within a given distribution family to which the real observation noise distribution is confined [21]. Such an optimization task is complex, and its exact solution may be derived only for a time-invariant model, when the Fisher information is minimized [21, 27]. In this case, the robust min-max estimator reduces to the optimal ML one, whose likelihood function is defined by the worst-case pdf. On the contrary, the Huber M-robust estimator is not quite the optimal ML one, but it approximates the latter to achieve both the efficiency and resistance robustness properties.

A simple and efficient concept to robustify the optimal KF can be developed using the Huber M-robust procedure. A version of this approach, based on both the one-step MMSE prediction and M-robust estimate dynamic stochastic approximation, is presented in the sequel.


3. M-robust estimate Kalman filtering

Although the dynamic stochastic approximation method is primarily designed to solve parameter regression problems, it may be extended to robust estimation of the dynamic system states, represented by (1), assuming scalar measurements. Vectorial observations can be handled by considering the components of the observation vector as scalar observations, used one at a time. Here it is assumed that the observation vector components in (1) are mutually uncorrelated scalar random variables, yielding a diagonal covariance, R in (2). A stable numerical algorithm to diagonalize a symmetric covariance matrix is the Cholesky factorization, or the UD decomposition [2]. In addition, for simplicity, the shortened notation x̂_{k/k−1} = x̄_k, x̂_{k/k} = x̂_k, P_{k/k−1} = M_k, and P_{k/k} = P_k is used. In this sense, one can define the time-varying functional of the scaled innovations, representing the M-robust optimization criterion

J_k(x̄_k) = E{ ρ( ε_k(x̄_k) / s_k(x̄_k) ) | x̄_k, Y^k }    (10)

Here, E{· | x̄_k, Y^k} is the conditional mathematical expectation given the current system state prediction, x̄_k, and the measurement set, Y^k = {y_1, …, y_k}, where ρ is Huber's robust score function

ρ(z) = { z²/2, for |z| ≤ Δ;  Δ|z| − Δ²/2, for |z| > Δ };  ψ(z) = ρ′(z) = min(|z|, Δ) sgn(z)    (11)

which has to cut off the outliers, while ψ is the influence function, which determines the estimator's ability to reduce the effects of different outlier types, including the observation, innovation, and structural outliers [17, 18, 19, 22, 23, 24, 25, 26]. Here, the saturation threshold, Δ, is a tuning parameter, providing the required efficiency upon the purely Gaussian distributed observations, i.e., in (8) with δ = 0.
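A minimal sketch of the score and influence functions (11), with the saturation threshold Δ = 1.5 chosen here only for illustration:

```python
import numpy as np

DELTA = 1.5  # saturation threshold, an assumed tuning value

def rho(z, d=DELTA):
    """Huber score (loss) function, Eq. (11): quadratic for |z| <= d,
    linear beyond, so large residuals are penalized only linearly."""
    z = np.asarray(z, dtype=float)
    return np.where(np.abs(z) <= d, 0.5 * z**2, d * np.abs(z) - 0.5 * d**2)

def psi(z, d=DELTA):
    """Influence function, Eq. (11): psi = rho' = min(|z|, d) sgn(z)."""
    z = np.asarray(z, dtype=float)
    return np.minimum(np.abs(z), d) * np.sign(z)

# the influence of a scaled innovation is clipped at DELTA
print(float(psi(0.5)), float(psi(10.0)), float(psi(-10.0)))
```

Note that ψ saturates: an arbitrarily large scaled innovation contributes at most ±Δ, which is the mechanism that bounds the effect of an outlier.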

Using step-by-step minimization of the M-robust criterion (10), one generates a family of the recursive gradient-type algorithms

x̂_k = x̄_k − Γ_k g_k(x̄_k);  g_k(x̄_k) = ∇_{x̄} J_k(x̄_k)    (12)

where ∇_{x̄} J_k is the gradient vector of the optimization criterion in (10), while the innovation, or measurement residual, ε_k, is given by

ε_k = ε(x̄_k) = y_k − H_k x̄_k;  S_k = E{ε_k ε_k^T} = H_k M_k H_k^T + R_k;  s_k = S_k^{1/2}    (13)

with S_k being the residual variance and s_k the corresponding standard deviation. Taking into account (10) and (11), one obtains the gradient vector expression

∇_{x̄} J_k(x̄_k) = ∂J_k(x̄_k)/∂x̄_k = −(1/s_k) E{ ψ(ε_k/s_k) | x̄_k, Y^k } H_k^T    (14)

However, the expectation in (14) is not computable and may be approximated by the current sample, producing the stochastic gradient relation

m_k = −(1/s_k) ψ(ε_k/s_k) H_k^T    (15)

Thus, by substituting (15) into (12), one gets

x̂_k = x̄_k − Γ_k m_k = x̄_k + (1/s_k) Γ_k H_k^T ψ(ε_k/s_k)    (16)

with sk being the normalizing, or scaling, factor, in (13).

The gain sequence, Γ_k in (16), has to control the convergence rate. Therefore, to link the M-robust dynamic stochastic approximation method (16) and the optimal KF in (3)-(6), an additional approximate MV optimization criterion is adopted

J_1(Γ_k) = Trace(P_k);  P_k = E{ x̃_k^+ (x̃_k^+)^T };  x̃_k^+ = x_k − x̂_k    (17)

with P_k being the underlying error covariance, while the trace of a matrix is the sum of its diagonal components. The posed optimization problem in (17) is a complex one that cannot be solved exactly, and a suboptimal solution has to be derived. Thus, the relation (16) can be rewritten in a recursive weighted LS form, formally equal to the KF measurement update (4), that is

x̂_k = x̄_k + K_k ε_k;  K_k = s_k^{−2} ω_k Γ_k H_k^T    (18)

with the robust weighting, normalizing, or penalty, term

ω_k = { ψ(ε_k/s_k) / (ε_k/s_k), for ε_k ≠ 0 and s_k ≠ 0;  1, for ε_k = 0 and/or s_k = 0 }    (19)
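The weighting term (19) can be sketched as follows; the function names and the test values are illustrative assumptions:

```python
import numpy as np

DELTA = 1.5  # assumed saturation threshold

def psi(z, d=DELTA):
    """Influence function from Eq. (11)."""
    return float(np.minimum(abs(z), d) * np.sign(z))

def omega(eps, s, d=DELTA):
    """Robust weighting term, Eq. (19): psi(eps/s)/(eps/s), with the
    degenerate cases eps = 0 or s = 0 mapped to unity."""
    if eps == 0.0 or s == 0.0:
        return 1.0
    z = eps / s
    return psi(z, d) / z

# unit weight inside the linear region, strong down-weighting for outliers
print(omega(1.0, 1.0), omega(10.0, 1.0), omega(0.0, 0.0))
```

For a scaled innovation inside the linear region of ψ the weight is exactly 1, while a gross outlier receives a weight of roughly Δ divided by its magnitude.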

The prediction error, x̃_k^−, is given by (1) and (3), from which it follows

x̃_k^− = x_k − x̄_k = F_{k−1} x̃_{k−1}^+ + G_{k−1} w_{k−1}    (20)

where, due to (18), (13), and (1), the estimation error, x˜k+, is determined by

x̃_k^+ = x_k − x̂_k = x̃_k^− − K_k H_k x̃_k^− − K_k v_k    (21)

with w_k and v_k being the state and observation noise sequences of zero-mean uncorrelated random variables, in (1) and (2). Under the initial guess (7), one concludes, by mathematical induction, that the errors (20) and (21) are unbiased. Furthermore, under the assumption (2), the estimation errors and the noises are mutually uncorrelated, yielding

E{x̃_k^−} = 0;  E{x̃_k^+} = 0;  E{x̃_k^+ w_k^T} = 0;  E{x̃_k^− v_k^T} = 0    (22)

Taking into account (20) and (22), one concludes that the covariance, Mk, is defined by the KF Eq. (3), yielding

M_k = E{x̃_k^− (x̃_k^−)^T} = F_{k−1} P_{k−1} F_{k−1}^T + G_{k−1} Q_{k−1} G_{k−1}^T    (23)

Moreover, it follows from (2), (21), and (22) that the error covariance, Pk in (17), is given by

P_k = E{x̃_k^+ (x̃_k^+)^T} = M_k − K_k H_k M_k − M_k H_k^T K_k^T + R_k K_k K_k^T    (24)

By substituting (18) into (24), the relation (17) reduces to

J_1(Γ_k) = Trace(M_k) − 2 s_k^{−2} ω_k Trace(Γ_k H_k^T H_k M_k) + s_k^{−4} ω_k² R_k Trace(Γ_k H_k^T H_k Γ_k^T)    (25)

After applying partial differentiation, based on the rules for the partial derivative of the trace of a matrix product [1, 4],

∂/∂A Trace(BAC) = B^T C^T;  ∂/∂A Trace(A B A^T) = 2AB, if B = B^T

one obtains from (25)

∂J_1(Γ_k)/∂Γ_k = −2 s_k^{−2} ω_k M_k H_k^T H_k + 2 s_k^{−4} ω_k² R_k Γ_k H_k^T H_k = 0    (26)

Additionally, due to (11), the term ω_k in (19) can be approximated by only the zero and unity values, yielding ω_k² ≈ ω_k, from which it follows

∂J_1(Γ_k)/∂Γ_k = −2 s_k^{−2} ω_k (M_k − s_k^{−2} R_k Γ_k) H_k^T H_k = 0    (27)

Moreover, from (27) and (13), one gets

Γ_k = s_k² R_k^{−1} M_k = (R_k^{−1} H_k M_k H_k^T + 1) M_k ≈ M_k;  s_k² R_k^{−1} ≈ 1    (28)

The approximation of Γ_k by M_k is based on the following reasoning: in the saturation region of the ψ nonlinearity (11), indicating the presence of outliers, the observation noise variance, R_k in (9), has a huge value. Conversely, in the linear segment of the ψ nonlinearity, indicating the absence of outliers, the M-robust filter (18) reduces to the optimal KF, (3)-(6), yielding a low prediction error covariance, M_k. Applying (28) and (18) to (24), one gets

P_k = M_k − 2 s_k^{−2} ω_k M_k H_k^T H_k M_k + R_k s_k^{−4} ω_k² M_k H_k^T H_k M_k ≈ M_k − s_k^{−2} ω_k M_k H_k^T H_k M_k    (29)

Taking into account (18), the expression (29) reduces to the optimal KF Eq. (6), that is

P_k = (I − K_k H_k) M_k;  Γ_k = M_k;  K_k = s_k^{−2} ω_k M_k H_k^T;  K_k = ω_k P_k H_k^T R_k^{−1}    (30)

Additionally, derivation of the fourth equation for the gain Kk, in (30), is based on the matrix inversion lemma [1, 4]. Thus, by substituting Sk from (13) into the third equation in (30), one obtains

K_k = ω_k M_k H_k^T (H_k M_k H_k^T + R_k)^{−1}    (31)

Furthermore, by substituting (31) into the first equation in (30), and taking into account the approximation ω_k² ≈ ω_k, one obtains

ω_k P_k = ω_k M_k − ω_k M_k H_k^T (H_k ω_k M_k H_k^T + ω_k R_k)^{−1} H_k ω_k M_k    (32)

After application of the matrix inversion lemma, stating that the relation

P_2 = P_1 − P_1 H^T (H P_1 H^T + R)^{−1} H P_1

can be represented in the alternative form

P_2^{−1} = P_1^{−1} + H^T R^{−1} H

the relation (32) reduces to

P_k^{−1} = M_k^{−1} + H_k^T R_k^{−1} H_k;  P_k (M_k^{−1} + H_k^T R_k^{−1} H_k) = I    (33)

where I is an identity matrix. Using (33), the relation (31) can be rewritten as

K_k = P_k (M_k^{−1} + H_k^T R_k^{−1} H_k) ω_k M_k H_k^T (H_k M_k H_k^T + R_k)^{−1} = ω_k P_k H_k^T R_k^{−1}    (34)

being identical to the fourth relation in (30). In accordance with the optimal KF, the derived M-robust estimate KF has the following recursive form:

time update (prediction), (3), (23):

x̄_{k+1} = F_k x̂_k + G_k w̄_k;  M_{k+1} = F_k P_k F_k^T + G_k Q_k G_k^T;  w̄_k = E{w_k}    (35)

measurement update (estimation, filtering), (11), (13), (18), (19), (30):

x̂_k = x̄_k + K_k ε_k;  ε_k = y_k − H_k x̄_k − v̄_k;  v̄_k = E{v_k}    (36)
S_k = s_k² = E{ε_k ε_k^T} = H_k M_k H_k^T + R_k;  ω_k = { ψ(ε_k/s_k)/(ε_k/s_k), for ε_k ≠ 0 and s_k ≠ 0;  1, for ε_k = 0 and/or s_k = 0 }    (37)
K_k = s_k^{−2} ω_k M_k H_k^T;  P_k = (I − K_k H_k) M_k;  K_k = ω_k P_k H_k^T R_k^{−1}    (38)

where, due to the assumption (2), w¯k=v¯k=0.

The same ad hoc logic appears in the M-robustified KF gain calculation, in (34) or (38), and in the calculation of the KF gain in (6). Thus, taking into account that the weighting factor, ω, in (19) and (37), approximates the slope, or first derivative, of the ψ nonlinearity (11), one concludes that ω(z) ≈ 1 if |z| ≤ Δ, while ω(z) ≈ 0 if |z| > Δ. In this sense, for those measurements confined to the linear region of the ψ nonlinearity, indicating Gaussian distributed observations, the value of the factor ω is close to unity, and the expression (34) properly approximates the optimal KF gain in (6). Furthermore, in the saturation region of the ψ nonlinearity, indicating the existence of outliers, the factor ω is close to zero, resulting in a low robust gain, K, in (34), which in turn produces a low value of the state update in (18) and (28). This reduces the effects of multiple outliers.
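The behavior just described can be sketched as a minimal implementation of the measurement update (36)-(38) for a scalar observation; the model matrices, the threshold Δ = 1.5, and the test measurements are assumed for illustration only:

```python
import numpy as np

DELTA = 1.5  # assumed saturation threshold

def psi(z, d=DELTA):
    """Influence function from Eq. (11)."""
    return min(abs(z), d) * (1.0 if z >= 0 else -1.0)

def m_robust_update(x_pred, M, y, H, R):
    """M-robust measurement update (36)-(38), scalar observation:
    the optimal-KF gain is scaled by the robust weight omega_k."""
    eps = float(y - H @ x_pred)                 # innovation, Eq. (36)
    S = float(H @ M @ H.T) + R                  # innovation variance, Eq. (37)
    s = np.sqrt(S)
    if eps == 0.0 or s == 0.0:
        omega = 1.0                             # degenerate case, Eq. (37)
    else:
        z = eps / s
        omega = psi(z) / z                      # robust weight, Eq. (37)
    K = (omega / S) * (M @ H.T).ravel()         # robust gain, Eq. (38)
    x_new = x_pred + K * eps                    # state update, Eq. (36)
    P_new = (np.eye(len(x_pred)) - np.outer(K, H.ravel())) @ M  # Eq. (38)
    return x_new, P_new, omega

M = np.eye(2)
H = np.array([[1.0, 0.0]])
R = 1.0
x = np.zeros(2)
# a nominal measurement keeps full weight; a gross outlier is down-weighted
_, _, w_nominal = m_robust_update(x, M, 1.0, H, R)
_, _, w_outlier = m_robust_update(x, M, 20.0, H, R)
print(w_nominal, w_outlier)
```

With ω = 1 the update coincides with the optimal KF step; for the outlying measurement the gain, and hence the state update, shrinks by roughly an order of magnitude.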

The application of the KF requires the state and observation noise statistics, in (1) and (2), to be known in advance. However, these requirements may not be fulfilled in practice, due to modeling errors, inducing structural outliers, as well as the presence of innovation and observation outliers [17, 18, 28, 29, 30]. A suitable approach to designing an adaptive modification of the emphasized M-robust estimate KF, in (35)-(38), based on the Huber moving window M-robust parameter estimator of the unknown disturbance statistics, is considered in the next section.


4. Adaptive M-Robustifying Kalman filtering with unknown noise statistics

In various practical examples, components of the vector-valued measurements may be processed one at a time, so that the observations, in (1), become scalar random variables. Since the true system state is unknown, the observation noise sample, vk, cannot be estimated by (1). However, an intuitive approximation of the observation noise sample, vk, is given by the scalar random variable

r_k = y_k − H_k x̂_k    (39)

where x̂k is the M-robust state estimate, in (35)(38). Taking into account (1), (2), and (22), one gets further

r_k = H_k (x_k − x̂_k) + v_k = H_k x̃_k + v_k;  r̄_k = E{r_k} = E{v_k} = v̄_k    (40)

where x˜k is the robust estimation error, in (17). In addition, taking into account (2), (22), and (40), the variance of the real scalar random variable, rk, is given by

V(r_k) = E{(r_k − r̄_k)(r_k − r̄_k)^T} = H_k E{x̃_k x̃_k^T} H_k^T + E{(v_k − v̄_k)²} = H_k P_k H_k^T + R_k    (41)

where P_k is the error covariance matrix in (17). Therefore, if the first and second-order moments, r̄_k and V(r_k), are somehow estimated, say by r̂_k and V̂(r_k), one can further estimate v̄_k and R_k from (40) and (41), respectively. In this sense, one obtains the following estimates of the observation noise statistics (mean value and variance)

v̂_k = r̂_k;  R̂_k = | V̂(r_k) − H_k P_k H_k^T |    (42)

where P_k is generated by the M-robustified KF, (35)-(38). Here, |·| denotes the absolute value, used to prevent negative variance estimates due to possible numerical errors in applications.
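A small sketch of the estimate (42); the residual statistics and the matrices H and P below are hypothetical numbers chosen only to exercise the formula:

```python
import numpy as np

def observation_noise_stats(r_mean, r_var, H, P):
    """Estimates (42): the observation noise mean equals the residual
    mean (40); R_k follows from (41), with the absolute value guarding
    against a negative variance estimate."""
    v_hat = r_mean
    R_hat = abs(r_var - float(H @ P @ H.T))
    return v_hat, R_hat

# hypothetical window statistics: residual variance 3.2, H P H^T = 0.7
H = np.array([[1.0, 0.0]])
P = np.diag([0.7, 0.1])
v_hat, R_hat = observation_noise_stats(0.0, 3.2, H, P)
print(v_hat, R_hat)
```

The filter's own uncertainty, H_k P_k H_k^T, is subtracted from the residual variance, so only the measurement noise contribution remains.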

Let us assume further, for simplicity and clarity, that the state noise sample, w_k in (1), is a scalar random variable, as is frequently fulfilled in tracking applications [38, 39, 40]. In addition, if the system under consideration is single-input single-output (SISO), and the corresponding state-space model (1) is presented in the controllable canonical form, G in (1) is the n×1 column vector with zero-valued components, except the last one, which is unity valued [4]. Thus, the random input, or state noise sample, w_k, is a real scalar random variable. Moreover, the state noise sample, w_k, can be estimated from the state Eq. (1), when the unknown random states, x_{k+1} and x_k, are replaced by the two successive robust estimates, x̂_{k+1} and x̂_k, in (35)-(38). The so-obtained state noise sample estimate represents a scalar random variable

q_k = T_k (x̂_{k+1} − F_k x̂_k);  T_k = (G_k^T G_k)^{−1} G_k^T;  G_k^T G_k = 1    (43)

The relation (43) can be rewritten in an alternative form

q_k = T_k (x̂_{k+1} − F_k x̂_k − x_{k+1} + x_{k+1})    (44)

which, after substituting (1) into (44), reduces to

q_k = T_k (F_k x̃_k − x̃_{k+1} + G_k w_k);  x̃_k = x_k − x̂_k    (45)

Taking into account (1), (2), and (22), further follows that the unknown mean value, w¯k, of the state noise, wk, is equal to the mean value, q¯k, of the random variable, qk, that is

q̄_k = E{q_k} = T_k G_k E{w_k} = w̄_k;  T_k G_k = 1    (46)

Moreover, variance of the random variable, qk in (43), is defined by

V(q_k) = E{(q_k − q̄_k)²} = E{(q_k − q̄_k)(q_k − q̄_k)^T}    (47)

Bearing in mind (1), (2), and (22), and by substituting (45) and (46) into (47), one gets

V(q_k) = T_k [ P_{k+1} − F_k E{x̃_k x̃_{k+1}^T} − G_k E{(w_k − w̄_k) x̃_{k+1}^T} − E{x̃_{k+1} x̃_k^T} F_k^T + F_k P_k F_k^T − E{x̃_{k+1} (w_k − w̄_k)} G_k^T + G_k E{(w_k − w̄_k)²} G_k^T ] T_k^T    (48)

Furthermore, by substituting the prediction error, in (20), for the estimation error, in (21), one obtains the estimation error recursion

x̃_{k+1} = (I − K_{k+1} H_{k+1}) F_k x̃_k + (I − K_{k+1} H_{k+1}) G_k (w_k − w̄_k) − K_{k+1} (v_{k+1} − v̄_{k+1})    (49)

from which it follows

F_k E{x̃_k x̃_{k+1}^T} = F_k P_k F_k^T (I − K_{k+1} H_{k+1})^T;  E{x̃_{k+1} (w_k − w̄_k)} = (I − K_{k+1} H_{k+1}) G_k Q_k    (50)

Furthermore, by substituting (50) into (48), together with using the expressions for the error covariances, Mk in (23), and, Pk in (30), one gets

V(q_k) = T_k (P_{k+1} + F_k P_k F_k^T) T_k^T + Q_k    (51)

Therefore, if q̂k and V̂qk denote some estimates of the unknown mean value, q¯k in (46), and the variance, Vqk in (51), respectively, one can estimate the unknown first and second-order state noise statistics, w¯k and Qk, by the following equations

ŵ_k = q̂_k;  Q̂_k = | V̂(q_k) − T_k (P_{k+1} + F_k P_k F_k^T) T_k^T |    (52)

Relations (42) and (52) remain valid for the KF in (3)-(6), since the expressions for the state prediction, x̄_k, prediction error covariance, M_k, state estimation update, x̂_k, and estimation error covariance, P_k, are given by equations identical to those of the M-robust estimate KF in (35)-(38). Thus, the derived relations, (42) and (52), can be applied to designing not only an adaptive M-robustified KF but also an adaptive linear optimal KF.
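The state noise sample (43) and the variance estimate (52) can be sketched as follows; the SISO model in controllable canonical form and all numerical values are illustrative assumptions:

```python
import numpy as np

def state_noise_sample(x_next, x_hat, F, G):
    """State noise sample estimate, Eq. (43): q_k = T_k (x_{k+1} - F_k x_k),
    with T_k the left pseudo-inverse of G_k."""
    T = np.linalg.inv(G.T @ G) @ G.T
    return float(T @ (x_next - F @ x_hat)), T

def state_noise_variance(Vq_hat, T, F, P, P_next):
    """State noise variance estimate, Eq. (52): subtract the filter
    covariance term from the moving-window variance of q_k; the absolute
    value prevents a negative result."""
    correction = float(T @ (P_next + F @ P @ F.T) @ T.T)
    return abs(Vq_hat - correction)

# hypothetical SISO model in controllable canonical form: G = [0, 1]^T
F = np.array([[1.0, 1.0], [0.0, 1.0]])
G = np.array([[0.0], [1.0]])
q, T = state_noise_sample(np.array([1.5, 1.1]), np.array([0.4, 1.0]), F, G)
Q_hat = state_noise_variance(0.9, T, F, 0.1 * np.eye(2), 0.1 * np.eye(2))
print(q, Q_hat)
```

Since G^T G = 1 here, T_k simply picks out the last state component, as the controllable canonical form suggests.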

Next, to calculate the unknown observation noise statistics in (42), as well as the unknown state noise statistics in (52), a scalar constant parameter estimation problem has to be considered. Thus, let us consider a real scalar random variable generating the independent and identically distributed (iid) samples, r_i, i = k−L+1, …, k, in (39), stored in a one-step moving window of the size L at the present stage, k. Using the empirical samples, the mean value, or location parameter, r̄_k = m, and the variance, V(r_k) = V, of this random variable have to be calculated at each stage, k. Unfortunately, analogously to the linear optimal KF, the commonly used sample mean and sample variance are not robust [19, 20, 21, 22]. Therefore, the Huber moving window M-robust parameter estimator is proposed as a robust alternative to the sample mean and sample variance estimates. The Huber M-robust estimate, r̂_k, of the unknown mean value, r̄_k, minimizes the robust performance index given by the empirical sum

J_{ME}(r̂_k) = (1/L) Σ_{i=k−L+1}^{k} ρ( (r_i − r̂_k) / d̂_k )    (53)

where ri is the given sequence of observations, in (39), covering the last L samples related to the current stage, k, and ρ is the Huber’s loss function, in (11), [21]. Thus, the M-robust estimate, r̂k in (53), represents a solution of the nonlinear algebraic equation, obtained from the optimality condition

dJ_{ME}(r)/dr |_{r=r̂_k} = Σ_{i=k−L+1}^{k} ψ( (r_i − r̂_k) / d̂_k ) = 0;  ψ(x) = dρ(x)/dx    (54)

where ψ is the saturation nonlinearity in (11). The scaling factor, d̂_k, provides the scale-invariant characteristic of the M-robust location parameter estimator in (53) and (54). Although ad hoc, a popular robust estimate, d̂_k, is the median of the absolute deviations from the median (MAD)

d̂_k = median{ |r_i − median{r_i}| } / 0.6745;  i = k−l+1, …, k    (55)

being defined on a one-step moving window of the size l [19, 21, 22, 23, 24, 25]. The divisor 0.6745 is adopted since, for a large enough sample size, l, and samples belonging to the Gaussian pdf, d̂_k is approximately equal to the true standard deviation, σ = V^{1/2} [21]. Furthermore, the proposed procedure for choosing d̂_k indicates a suitable choice of the saturation threshold, Δ, in (11). Since d̂_k ≈ σ, the free parameter, Δ, is commonly chosen to be a number of about 1.5. Such an estimate is named the Huber 1.5-M-robust parameter estimator [21]. The Eq. (54) is nonlinear, and an iterated numerical algorithm is required [21, 24, 25]. However, an approximate weighted LS approach results in the one-step suboptimal solution

Σ_{i=k−L+1}^{k} ω_i^0 (r_i − r̂_k) ≈ 0;  r̂_k = r̂_L = (Σ_{i=k−L+1}^{k} ω_i^0 r_i) / (Σ_{i=k−L+1}^{k} ω_i^0);  ω_i^0 = { ψ((r_i − r̂_0)/d̂_k) / ((r_i − r̂_0)/d̂_k), for d̂_k ≠ 0 and r_i ≠ r̂_0;  1, for d̂_k = 0 and/or r_i = r̂_0 }    (56)

with r̂_0 being an arbitrary initial guess [25, 29, 30]. Particularly, when d̂_k = V^{1/2}, the asymptotic variance of the M-robust location parameter estimator (53), (54) is given by [21]

V_a = lim_{L→∞} E{ L (r̂_L − r̄)² } = V · E{ ψ²((r_i − r̄)/V^{1/2}) } / E²{ ψ′((r_i − r̄)/V^{1/2}) };  r̄ = m;  V(r) = V    (57)

A natural estimate of Va, in (57), is obtained when the undeterminable expectation is approximated by the average sum, together with the MAD estimator (55) for the scaling factor calculation at each stage, k. Thus, the unknown variance, Vrk in (42), may be estimated by the Huber’s moving window M-robust parameter estimator

V̂(r_k) = d̂_k² · [ (1/L) Σ_{i=k−L+1}^{k} ψ²( (r_i − r̂_k)/d̂_k ) ] / [ (1/L) Σ_{i=k−L+1}^{k} ψ′( (r_i − r̂_k)/d̂_k ) ]²    (58)

Moreover, if ψ is the linear function, the variance estimate (58) reduces to the sample variance.
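As an illustration of (55), (56), and (58), the sketch below forms the MAD scale, the one-step weighted-LS location estimate, and the robust variance estimate on a synthetic window of residuals; the window contents, the seed, and the outlier pattern are illustrative assumptions:

```python
import numpy as np

DELTA = 1.5  # the common Huber 1.5 tuning

def psi(z, d=DELTA):
    """Influence function, Eq. (11)."""
    return np.minimum(np.abs(z), d) * np.sign(z)

def mad_scale(r):
    """MAD scale estimate, Eq. (55); for Gaussian data it approaches sigma."""
    return np.median(np.abs(r - np.median(r))) / 0.6745

def m_location(r, r0, d_hat):
    """One-step weighted-LS solution, Eq. (56): a weighted mean with
    Huber weights evaluated at the initial guess r0."""
    z = (r - r0) / d_hat
    w = np.where(z == 0.0, 1.0, psi(z) / np.where(z == 0.0, 1.0, z))
    return float(np.sum(w * r) / np.sum(w))

def m_variance(r, r_hat, d_hat):
    """Robust variance estimate, Eq. (58); psi' is 1 inside
    [-DELTA, DELTA] and 0 outside."""
    z = (r - r_hat) / d_hat
    return d_hat**2 * np.mean(psi(z) ** 2) / np.mean(np.abs(z) <= DELTA) ** 2

rng = np.random.default_rng(2)
r = rng.normal(5.0, 1.0, size=2_000)   # residuals in the moving window
r[:40] = 50.0                          # 2% gross (patchy) outliers
d = mad_scale(r)
loc = m_location(r, float(np.median(r)), d)
var = m_variance(r, loc, d)
print(loc, var)
```

Despite the patchy outliers at 50, the location stays near the true mean 5 and the variance near the nominal 1, whereas the sample mean and sample variance would both be pulled far upward.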

The M-robust parameter estimator in (55), (56), and (58) is used to calculate the nuisance estimates, r̂_k and V̂(r_k), which are then used to calculate the requested observation noise mean value, v̂_k, and variance, R̂_k, from (42). The state noise sample estimates, q_k in (43), have to be managed simultaneously with the observation noise samples, r_k in (39). Thus, an additional one-step sliding frame of the size L has to be used, covering the last L samples, q_i, i = k−L+1, …, k, at each stage, k. Again, the Huber moving window M-robust parameter estimator in (55), (56), and (58) is used to estimate the nuisance parameters, q̂_k and V̂(q_k), which are then applied to calculating the required state disturbance statistics, ŵ_k and Q̂_k, from (52). The proposed adaptive estimation scheme needs additional memory and shifting operations upon the sample estimates, r_i and q_i, within the given sliding data frames of the length L. Thus, the samples r_{k−L} and q_{k−L} are dropped when the current samples, r_k and q_k, are stored in the current sliding data frame.

It should be noted that the robust MAD estimator in (55) may also be used to estimate the variances, V̂(r_k) and V̂(q_k), but the robust estimator (58) represents a better solution at the expense of increased computational requirements. Alternative estimators to calculate the variance robustly may be found in the statistical literature [19, 20, 21, 22, 23, 24, 25, 26]. These solutions primarily use an influence function that is non-decreasing near the origin, but decreases toward zero away from the origin, named a redescending influence function. However, the underlying M-robust estimators may not have a unique solution, requiring a careful choice of the initial conditions, generated, for example, by the Huber robust estimator. Furthermore, during the initial filter steps, the disturbance sample estimates, (39) and (43), may represent bad indicators of the stochastic surroundings. Therefore, a fading memory method, weighting the successive disturbance sample estimates, r_k or q_k, by the growing factor

g_k = (k−L)(k−L−1) ⋯ (k−L−γ+1) / k^γ;  k = L, L+1, …;  lim_{k→∞} g_k = 1    (59)

can be applied to delay the individual noise sample estimates, r_k and q_k, over the first γ steps [15, 30]. Moreover, during the filter initialization, the first L noise samples, r_k and q_k, may be generated using a Gaussian random number generator, with zero mean and the corresponding variances, R̂_1 and Q̂_0, the values of the latter being used to define the fixed scaling factors on the initial data frames. It should be noted that in most applications, the mean values of the observation and state noises are zero-valued, and there is no need to estimate these noise statistics. In such a case, the mean value estimates, r̂_k and q̂_k, in (55) and (56), are used only to calculate the variance estimates, V̂(r_k) and V̂(q_k), by (58), which are then used to calculate the required observation and state noise variances, using (42) and (52), respectively.

A theoretical convergence analysis of the emphasized M-robustified KF algorithm, either adaptive or not, is technically complex, owing to both the nonlinear scaled innovation processing and the time-varying multivariable dynamic system model. Moreover, drawing appropriate conclusions about the sensitivity to multiple outliers, involving the observation, innovation, and structural ones, as well as about the choice of the initial conditions and the form of the nonlinear influence function, is exceedingly challenging. Therefore, comprehensive Monte Carlo simulations are needed to refine the characteristics of the proposed adaptive M-robust estimate-based KF and to draw appropriate conclusions.


5. Simulation results

A requirement for simple filters in real-time applications implies the desirability of uncoupled filtering, where independent tracking is performed in each coordinate (x, y, z) of the Cartesian Coordinate System (CCS). Particularly, three independent KFs in the CCS coordinates are utilized, each of them based on the three-dimensional state (position, velocity, and acceleration) representation in (1). The first step in the simulations is to generate the nominal target trajectory. The kinematic equations of motion, with respect to an inertial reference system, produce the true state variables in (1) when a desirable acceleration profile, piece-wise constant over the sampling intervals, is adopted [40]. The so-obtained target trajectory is used in the presented simulations, and Figure 1 depicts the actual target trajectory projections in each of the CCS planes xOy, yOz, and xOz, respectively.

Figure 1.

The actual target trajectory in the CCS planes (x, 0, y), (x, 0, z), (y, 0, z).

Because of the similar behavior of the actual components of the three-dimensional state vector in each of the CCS coordinates, only the z-coordinate is represented in Figure 2, where zk, żk, and z̈k are the position, velocity, and acceleration of the target at the k-th scan. Proceeding from the thick-tailed Gaussian behavior of the position measurement probability distribution function, associated with target glint noise spikes inducing innovation and observation outliers, the observation noise sequence, vk, is generated as samples from the δ-contaminated normal pdf in (8). The random noise sample, vk, is produced by first taking a uniformly distributed sample, u, in the [0, 1] interval. If u ≤ δ, the sample vk is drawn from the contaminating normal pdf, N(0, σ0²); otherwise, vk is drawn from the nominal normal pdf, N(0, σn²). A typical heavy-tailed glint noise record is given in Figure 3.
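The two-step generation of the δ-contaminated noise described above can be sketched as follows; the function name and the fixed seed are illustrative choices, not part of the chapter.

```python
import random

def glint_noise(n, delta=0.05, sigma_n=1.0, sigma_0=90.0, seed=0):
    """Samples from the delta-contaminated normal pdf in (8):
    draw u ~ U[0, 1]; if u <= delta, the sample comes from the
    contaminating pdf N(0, sigma_0^2) (a glint spike), otherwise
    from the nominal pdf N(0, sigma_n^2)."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        u = rng.random()
        sigma = sigma_0 if u <= delta else sigma_n
        samples.append(rng.gauss(0.0, sigma))
    return samples
```

With δ = 0.05 and σ0 = 90, roughly one sample in twenty is a high-intensity spike, reproducing the record of Figure 3.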

Figure 2.

The actual components of the three-dimensional state (position, velocity, and acceleration) in the z-CCS coordinate.

Figure 3.

Typical thick-tailed Gaussian glint noise record (δ = 0.05, σn = 1, σ0 = 90).

The position measurements, in (1), are generated at each scan, k, by adding the previously generated observation noise sample, vk, to the true target projection on the CCS coordinate of interest. For example, the noise sample, vk, has to be added to the true zk position, in Figure 2, assuming that the z-CCS coordinate is concerned.

The filter world model includes only the linear discrete-time state-space representation with the position measurement alone, in (1). Thus, the corresponding system matrices take the form

$$F = F(T) = \begin{bmatrix} 1 & T & T^2/2 \\ 0 & 1 & T \\ 0 & 0 & 1 \end{bmatrix}; \qquad G = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}; \qquad H = \begin{bmatrix} 1 & 0 & 0 \end{bmatrix} \tag{60}$$

Here, T represents the constant sampling period, and T = 4 s is used.
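The constant-acceleration matrices in (60) can be built directly from the sampling period; this is a minimal sketch with illustrative naming.

```python
def ca_model(T):
    """Constant-acceleration (CA) state-space matrices from (60) for the
    three-dimensional state (position, velocity, acceleration)."""
    F = [[1.0, T, T * T / 2.0],   # position propagates with velocity and acceleration
         [0.0, 1.0, T],           # velocity propagates with acceleration
         [0.0, 0.0, 1.0]]         # acceleration is nominally constant
    G = [[0.0], [0.0], [1.0]]     # state noise enters through the acceleration
    H = [[1.0, 0.0, 0.0]]         # only the position is measured
    return F, G, H
```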

Radar observations commonly involve the range, r, elevation, β, and azimuth, α, angles, because the measurement errors are uncoupled in the Spherical Coordinate System (SCS). However, these measurements, when transformed to the CCS, are no longer uncoupled, so the use of three independent and uncoupled three-dimensional-state KFs, based on the model (1) and (60), in each of the CCS coordinates may lead to less accurate filtering than would occur if a coupled nine-dimensional-state KF were used. Nevertheless, the emphasized procedure is a good balance between tracking ability and computational requirements, providing also a meaningful test of tracking quality in the vicinity of impulsive noise inducing innovation and observation outliers, as well as of significant dynamic system model errors, due to a target maneuver, inducing structural outliers. The state-space model, (1) and (60), is named the constant acceleration (CA) model [40].

The implementation of the KF, robustified or not, requires filter initialization and a priori knowledge of the state and observation noise statistics. The optimal KF initialization, in (7), is based on the optimal estimates of the zero-mean states, in (1), when the observations are not available. Thus, the adopted initial guesses, m0 = 0 and P0 in (7), are the steady-state values. However, such an initialization approach cannot be used for a tracking problem, because the state vector does not reach a steady-state covariance matrix, P0. Namely, the cumulative effects of many acceleration perturbations cause the state vector component variances to grow with the time index, k, about their nominal values [39, 40]. Thus, a suboptimal and reasonable ad-hoc initialization can be developed using the first two available position sensor measurements, in (1), yielding

$$\hat{x}_{0|0} = \hat{x}_0 = \begin{bmatrix} y_2 & \dfrac{y_2 - y_1}{T} & 0 \end{bmatrix}^{T}; \qquad P_{0|0} = P_0 = \operatorname{diag}\left(\sigma_1^2, \sigma_2^2, \sigma_3^2\right) \tag{61}$$

Here, uncertainty in the initial three-dimensional state vector, in (1), is reflected in the initial values of the covariance matrix, P0 in (61), where σ1, σ2, and σ3 denote the standard deviations of position, velocity, and acceleration in the three-dimensional state-space model, (1) and (60). A reasonable choice is σ1 = 100 m, σ2 = 50 m/s, and σ3 = 10 m/s² [40].
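The two-point initialization in (61) then reads, as a sketch with the default standard deviations stated above:

```python
def two_point_init(y1, y2, T, sig1=100.0, sig2=50.0, sig3=10.0):
    """Ad-hoc KF initialization from (61): position from the latest
    measurement, velocity from a finite difference of the first two
    measurements, zero acceleration; diagonal initial covariance."""
    x0 = [y2, (y2 - y1) / T, 0.0]
    P0 = [[sig1 ** 2, 0.0, 0.0],
          [0.0, sig2 ** 2, 0.0],
          [0.0, 0.0, sig3 ** 2]]
    return x0, P0
```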

The first-order statistics of the state and observation noises are commonly zero, that is, v̄ = w̄ = 0 in (35) and (36). The state noise variance, Q, represents a random motion, maneuver, or acceleration entering the system between the sampling intervals. Starting from the state-space model, in (1) and (60), the variance Q can be calculated by adopting a first-order Gauss-Markov state noise process, with an exponential correlation coefficient, assuming that T ≪ τm, where τm is the target maneuver time constant. Applying further Singer's approach, the random target acceleration can be generated from a mixed pdf, representing a combination of the uniform pdf, fu, on the interval [−Am, Am], with Am being the maximal target acceleration in either direction, and the three Dirac impulses Pm δ(u ± Am) and P0 δ(u). Here, Pm is the probability of the maximal accelerations, ±Am, while P0 is the probability of zero acceleration. Starting from such a mixed pdf, one can calculate the random maneuver variance, σm², from which the state noise variance follows [40].

$$Q = E\{w_k^2\} = 2T\sigma_m^2/\tau_m; \qquad \sigma_m^2 = \frac{A_m^2}{3}\left(1 + 4P_m - P_0\right) \tag{62}$$

Particularly, by assuming Pm = 0.1, P0 = 0.3, Am = 2g/3, T = 1 s, and τm = 10 s, one gets σm ≈ 0.4g and Q = 0.45g, where the gravity constant g = 9.81 m/s².
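The Singer-model calculation in (62) is straightforward to reproduce; the helper below is an illustrative sketch, and with the numerical values above it gives σm ≈ 0.4g.

```python
def singer_q(T, tau_m, A_m, P_m, P_0):
    """State noise variance from the Singer model, (62):
    sigma_m^2 = A_m^2 (1 + 4 P_m - P_0) / 3, then Q = 2 T sigma_m^2 / tau_m
    (valid for T much smaller than tau_m)."""
    sigma_m2 = A_m ** 2 * (1.0 + 4.0 * P_m - P_0) / 3.0
    return 2.0 * T * sigma_m2 / tau_m
```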

For the limiting case, T ≫ τm, a suitable simplification to a two-dimensional-state KF becomes valid, since the KF gains associated with the acceleration estimates take low values [29, 40]. However, the state noise must be retained to replace the omitted acceleration term, and such a model is called the constant velocity (CV) model. Additionally, the limiting case, T ≫ τm, is not the nominal one, and usually T < τm [40]. Thus, the variance, Q, may be chosen in an ad-hoc manner, as an average Monte Carlo value on the tracking interval of interest [40]. In these simulations, the value Q = 0.8 is used, obtained as a Monte Carlo average over 100 runs on a tracking interval of 100 s.

If a radar system measures in the SCS coordinates (r, α, β) and tracks in the CCS coordinates, a reasonable choice for the range sensor noise standard deviation, σR, is one percent of its output at the nominal range, R. Moreover, the commonly used standard deviations, σα and σβ, of the angular noises in the azimuth and elevation sensors, respectively, are of the order of 1 deg, or 0.017 rad [40]. By using the transformation from the SCS to the CCS, one obtains the following observation noise covariance in the CCS [40]

$$R_s = \operatorname{diag}\left(\sigma_R^2, \sigma_\alpha^2, \sigma_\beta^2\right); \qquad R_c = T R_s T^{T}; \qquad T = \begin{bmatrix} \cos\alpha\cos\beta & -r\sin\alpha\cos\beta & -r\cos\alpha\sin\beta \\ \sin\alpha\cos\beta & r\cos\alpha\cos\beta & -r\sin\alpha\sin\beta \\ \sin\beta & 0 & r\cos\beta \end{bmatrix} \tag{63}$$

Thus, the position measurement noise variance, in (2), in each of the CCS coordinates is given by the corresponding diagonal component of the covariance matrix, Rc, in (63). However, in these simulations, an alternative ad-hoc approach, based on the measurement disturbance representation in (8) and (9), is used. Assuming the standard deviations σn = 1 and σ0 ≤ 100σn, as well as that the observations contain from one to twenty-five percent outliers, corresponding to contamination degrees from δ = 0.01 to δ = 0.25, one obtains the observation noise variance value Rk = 400, as an average over the different values of the free parameters δ and σ0² in (9).
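The transformation (63) can be evaluated numerically as below; the function is an illustrative sketch, with T the Jacobian of the spherical-to-Cartesian mapping x = r cos β cos α, y = r cos β sin α, z = r sin β.

```python
import math

def scs_to_ccs_cov(r, alpha, beta, sig_r, sig_a, sig_b):
    """Observation noise covariance in CCS, (63): Rc = T Rs T^T, with T the
    Jacobian of the SCS-to-CCS transformation evaluated at (r, alpha, beta)."""
    ca, sa = math.cos(alpha), math.sin(alpha)
    cb, sb = math.cos(beta), math.sin(beta)
    T = [[ca * cb, -r * sa * cb, -r * ca * sb],
         [sa * cb,  r * ca * cb, -r * sa * sb],
         [sb,       0.0,          r * cb]]
    Rs = [sig_r ** 2, sig_a ** 2, sig_b ** 2]  # diag(sigma_R^2, sigma_alpha^2, sigma_beta^2)
    # Rc[i][j] = sum_k T[i][k] * Rs[k] * T[j][k]
    return [[sum(T[i][k] * Rs[k] * T[j][k] for k in range(3))
             for j in range(3)] for i in range(3)]
```

At α = β = 0 the covariance reduces to diag(σR², r²σα², r²σβ²), showing how the angular noise inflates the cross-range position variance with range.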

The performances of the following algorithms are compared: the optimal KF, (3)–(6), designated A1; the adaptive KF, combining the A1 filter with the Huber moving-window M-robust algorithm for estimating the noise moments, in (39), (42), (43), (52), (55), (56), (58), and (59), denoted A2; the M-robust estimate KF, in (35)–(38), denoted A3; and the adaptive M-robust estimate KF, combining the filter A3 and the Huber M-robust parameter estimator of the unknown noise statistics in A2, denoted A4. Filter adaptation to the unknown noise statistics requires adopting the lengths of the corresponding sliding frames. Reasonable ad-hoc choices are L ∈ [50, 500], l ∈ [3, 10], and the forgetting factor γ ≤ 10 in (59). Particularly, the presented simulation results are generated by using L = 250, l = 5, and γ = 9. The initial guesses of the noise statistics are chosen as r̂1 = 0, R̂1 = 400, q̂0 = 0, and Q̂0 = 0.8. These constant values of the noise statistics are also used in the non-adaptive filters A1 and A3.
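To make the Huber moving-window idea concrete, the sketch below computes an M-robust location estimate on a sliding frame, using the MAD as a robust scale and the Huber clipping nonlinearity. It is a generic illustration of the estimator type used for (55) and (56), under assumed tuning constants, not the authors' exact implementation.

```python
import statistics

def huber_mean(window, k=1.5, iters=20):
    """Moving-window Huber M-estimate of a noise mean: iteratively
    reweighted location estimate, with the MAD (scaled by 1/0.6745 for
    Gaussian consistency) as a robust scale. Outliers in the window are
    clipped by the Huber psi function, so they barely shift the estimate."""
    med = statistics.median(window)
    s = statistics.median(abs(x - med) for x in window) / 0.6745 or 1.0
    mu = med
    for _ in range(iters):
        # Huber psi: clip the scaled residuals at +/- k
        psi = [max(-k, min(k, (x - mu) / s)) for x in window]
        mu_new = mu + s * sum(psi) / len(window)
        if abs(mu_new - mu) < 1e-12:
            break
        mu = mu_new
    return mu
```

A single large spike in the frame moves the sample mean by roughly (spike)/L, but moves the Huber estimate by at most k·s/L, which is the practical reason the adaptive filters A2 and A4 keep sensible noise statistics under contamination.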

Simulation results, for each of the CCS coordinates, are analyzed using the cumulative estimation error (CEE) factor of goodness, together with the estimated observation disturbance moments (mean value and variance), where the criterion is

$$\mathrm{CEE}_k = \frac{1}{k}\sum_{i=1}^{k} \frac{\left\|\hat{x}_i - x_i\right\|}{\left\|x_i\right\|}; \qquad k = L, L+1, \ldots \tag{64}$$

with ‖·‖ being the Euclidean norm. Since the obtained results are similar in each of the CCS coordinates, only the results for the z-coordinate are presented in Figures 4 and 5, while the estimated observation noise mean value and variance are plotted in Figure 6. The commonly used sample variance and mean estimates, generated on a moving frame of length L = 250 from the Gaussian observation noise, are compared with the observation disturbance moments estimated robustly by algorithm A4 from normally distributed observations corrupted by different types of outliers, involving the innovation, observation, and structural ones (Figure 6). It should be noted that the adaptive algorithms A2 and A4 provide slightly better results than algorithm A1, and obviously better results than algorithm A3, under purely normally distributed observations in the presence of a target maneuver (Figure 4). Moreover, the robust algorithms, A3 and A4, provide better results than the linear non-robust ones, A1 and A2, in the presence of multiple outliers (Figure 5).
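The CEE criterion in (64) is a running average of relative estimation errors and can be sketched as:

```python
import math

def cee(x_hat, x_true):
    """Cumulative estimation error (64): running average over the scans
    processed so far of the relative Euclidean error ||x_hat - x|| / ||x||."""
    total, out = 0.0, []
    for xh, xt in zip(x_hat, x_true):
        err = math.sqrt(sum((a - b) ** 2 for a, b in zip(xh, xt)))
        nrm = math.sqrt(sum(b * b for b in xt))
        total += err / nrm
        out.append(total / (len(out) + 1))
    return out
```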

Figure 4.

CEE criterion comparison of different estimators in Gaussian observation noise (δ = 0), in the presence of a target maneuver.

Figure 5.

CEE criterion comparison of different estimators in the contaminated Gaussian noise (δ=0.1), involving innovation, observation, and structural outliers.

Figure 6.

Estimated measurement noise statistics comparison of sample mean and variance estimates upon Gaussian noise, and the algorithm A4 estimates upon thick-tailed Gaussian noise (δ=0.1): (a) mean value; (b) standard deviation.

Finally, the adaptive robustified algorithm A4 provides the best results, representing an appropriate trade-off between tracking ability and robustness, efficiently suppressing outliers with a contamination degree of less than thirty percent (Figures 4 and 5). Also, the noise statistics estimated by algorithm A4 in the presence of outliers are comparable to the sample mean and variance estimates in the absence of outliers (Figure 6).


6. Conclusion

Since the Kalman filter has a computationally attractive recursive structure for real-time applications, it is widely used to solve practical problems involving system parameter identification, dynamic system state estimation, optimal and adaptive control, statistical signal processing, etc. In addition, numerous scientific and industrial measurements have been analyzed in the statistical sense, and the obtained results indicate that observations contain from five to ten percent outliers, on average, due to incomplete measurements, meter and communication errors, errors in the mathematical model, etc. In this sense, a thick-tailed Gaussian noise distribution is accompanied by two types of impulsive noise, inducing innovation and observation outliers. The third type of outliers, named structural outliers, appears due to modeling errors, such as unmodeled system dynamics, time-varying model bias, computational errors, etc. Particularly, thick-tailed normally distributed observation noise is present in maneuvering target tracking, due to the target glint inducing the innovation and observation outliers. Two characteristics of such data can be stressed. Firstly, there exists a high-intensity spiky character of the observations and, secondly, there is a clear low-frequency component, named the bright spot wander, produced by a slow drift in the target gravity center due to changes in the target aspect. The low-frequency component can be efficiently suppressed in the tracking loop, but the glint spikes, representing the observation and innovation outliers, pass through to the tracking loop. Moreover, a linear dynamic state-space model is not adequate during a target maneuver, and such modeling errors induce structural outliers. However, since the Kalman filter is a linear signal processor, its performance can be very poor in the presence of outliers.

The basic objective of this manuscript is to design an appropriate Kalman filter modification that performs fairly well in the presence of multiple outliers. It has been demonstrated, by a real-life maneuvering target tracking example, that the proposed adaptive M-robust estimate-based Kalman filter improves significantly the target estimation and tracking quality, being effective in suppressing multiple outliers with a contamination degree of less than thirty percent. Furthermore, the achieved improvement comes at the expense of additional computational efforts, but these are usually not demanding enough to preclude real-time application. On the contrary, the emphasized approximate Kalman filter may be more efficient than the least squares, or the optimal Kalman filter. Therefore, it may be used to solve the numerous engineering problems in which the Kalman filter, or least squares, is required. Furthermore, it may also be used as an alternative to other adaptive techniques, involving discrete noise levels, variable state dimension, and interacting multiple model approaches, as well as observation distribution estimation. Further investigations may be directed to the propagation of the error covariances under a nonlinear form of the state updates, as well as to the convergence, observability, and consistency analysis of the proposed approximate filter.

References

  1. 1. Gelb A, Kasper J, Nash R, Price C, Sutherland A, editors. Applied Optimal Estimation, Analytic Sciences Corporation. Cambridge, MA: MIT Press; 1974. ISBN 9780262570480
  2. 2. Grewal MS, Andrews AP. Kalman Filtering: Theory and Practice Using Matlab. Hoboken, NJ: Wiley; 2015
  3. 3. Heijden F, van der Lei B, Xu G, Ming F, Zou Y, de Ridder D, et al. Classification, Parameter Estimation, and State Estimation: An Engineering Approach Using Matlab. Hoboken, NJ, USA: John Wiley & Sons, Inc; 2017
  4. 4. Kovačević B, Đurović Ž. Fundamentals of Stochastic Signals, Systems and Estimation Theory with Worked Examples. Berlin: Springer; 2011
  5. 5. Verhaegen M, Verdult V. Filtering and System Identification: A Least Squares Approach. Cambridge: Cambridge University Press; 2012
  6. 6. Price C. An analysis of the divergence problem in the Kalman filter. IEEE Transactions on Automatic Control. 1968;13(6):699-702. DOI: 10.1109/tac.1968.1099031
  7. 7. Hanlon P, Maybeck P. Characterization of Kalman filter residuals in the presence of mismodeling. IEEE Transactions on Aerospace and Electronic Systems. 2000;36(1):114-131. DOI: 10.1109/7.826316
  8. 8. Albert AE, Gardner LA. Stochastic Approximation and Nonlinear Regression. Cambridge, MA: The MIT Press; 1967
  9. 9. Saridis G, Nikolic Z, Fu K. Stochastic approximation algorithms for system identification, estimation, and decomposition of mixtures. IEEE Transactions on Systems Science and Cybernetics. 1969;5(1):8-15. DOI: 10.1109/tssc.1969.300238
  10. 10. Stanković S, Kovačević B. Analysis of robust stochastic approximation algorithms for process identification. Automatica. 1986;22(4):483-488. DOI: 10.1016/0005-1098(86)90053-1
  11. 11. Tsypkin YZ. Adaptation and Learning in Automatic Systems. New York: Academic Press; 1971
  12. 12. Mendel JM. Adaptive, Learning and Pattern Recognition Systems: Theory and Applications. New York: Academic Press; 2012
  13. 13. Masreliez C. Approximate non-Gaussian filtering with linear state and observation relations. IEEE Transactions on Automatic Control. 1975;20(1):107-110. DOI: 10.1109/TAC.1975.1100882
  14. 14. Masreliez C, Martin R. Robust Bayesian estimation for the linear model and robustifying the Kalman filter. IEEE Transactions on Automatic Control. 1977;22(3):361-371. DOI: 10.1109/TAC.1977.1101538
  15. 15. Myers K, Tapley B. Adaptive sequential estimation with unknown noise statistics. IEEE Transactions on Automatic Control. 1976;21(4):520-523. DOI: 10.1109/TAC.1976.1101260
  16. 16. Tsai C, Kurz L. An adaptive robustizing approach to Kalman filtering. Automatica. 1983;19(3):279-288. DOI: 10.1016/0005-1098(83)90104-8
  17. 17. Banjac Z, Kovačević B. Robustified Kalman filtering using both dynamic stochastic approximation and M-robust performance index. Tehnicki Vjesnik - Technical Gazette. 2022;29(3):907-914. DOI: 10.17559/tv-20200929143934
  18. 18. Pavlović M, Banjac Z, Kovačević B. Approximate Kalman filtering by both M-robustified dynamic stochastic approximation and statistical linearization methods. EURASIP Journal on Advances in Signal Processing. 2023;2023(1):69. DOI: 10.1186/s13634-023-01030-1
  19. 19. Barnett V, Lewis T. Outliers in Statistical Data. Chichester: Wiley; 2000
  20. 20. Venables WN, Ripley BD. Modern Applied Statistics with S. New York: Springer; 2011
  21. 21. Huber PJ, Ronchetti EM. Robust Statistics. Hoboken, NJ: Wiley; 2009
  22. 22. Wilcox RR. Introduction to Robust Estimation and Hypothesis Testing. Amsterdam: Academic Press, an imprint of Elsevier; 2017
  23. 23. Hampel FR, Ronchetti EN, Rousseeuw PJ, Stahel WA. Robust Statistics: The Approach Based on Influence Functions. Hoboken, NJ: Wiley; 1986
  24. 24. Dutter R. Numerical solution of robust regression problems: Computational aspects, a comparison. Journal of Statistical Computation and Simulation. 1977;5(3):207-238. DOI: 10.1080/00949657708810152
  25. 25. Hogg RV. Statistical robustness: One view of its use in applications today. The American Statistician. 1979;33(3):108. DOI: 10.2307/2683810
  26. 26. de Menezes D, Prata D, Secchi A, Pinto J. A review on robust M-estimators for regression analysis. Computers & Chemical Engineering. 2021;147:107254. DOI: 10.1016/j.compchemeng.2021.107254
  27. 27. Kovačević B, Milosavljević M, Veinović M, Marković M. Robust Digital Processing of Speech Signals. Berlin Heidelberg: Springer; 2017
  28. 28. Boncelet C, Dickinson B. An approach to robust Kalman filtering. In: The 22nd IEEE Conference on Decision and Control. San Antonio, TX, USA; 1983. pp. 304-305. DOI: 10.1109/cdc.1983.269847
  29. 29. Kovačević B, Đurović Ž, Glavaški S. On robust Kalman filtering. International Journal of Control. 1992;56(3):547-562. DOI: 10.1080/00207179208934328
  30. 30. Đurović Ž, Kovačević B. Robust estimation with unknown noise statistics. IEEE Transactions on Automatic Control. 1999;44(6):1292-1296. DOI: 10.1109/9.769393
  31. 31. Chang G, Liu M. M-estimator-based robust Kalman filter for systems with process modelling errors and rank deficient measurement models. Nonlinear Dynamics. 2015;80(3):1431-1449. DOI: 10.1007/s11071-015-1953-0
  32. 32. Gandhi MA, Mili L. Robust Kalman filter based on a generalized maximum-likelihood-type estimator. IEEE Transactions on Signal Processing. 2010;58(5):2509-2520. DOI: 10.1109/tsp.2009.2039731
  33. 33. Zou Y, Chan S, Ng T. Robust M-estimate adaptive filtering. IEE Proceedings - Vision, Image, and Signal Processing. 2001;148(4):289. DOI: 10.1049/ip-vis:20010316
  34. 34. Banjac ZD, Kovacevic BD, Milosavljevic MM, Veinovic MD. Local echo canceler with optimal input for true full-duplex speech scrambling system. IEEE Transactions on Signal Processing. 2002;50(8):1877-1882. DOI: 10.1109/tsp.2002.800415
  35. 35. Banjac Z, Kovačević B, Veinović M, Milosavljević M. Robust least mean square adaptive FIR filter algorithm. IEE Proceedings - Vision, Image, and Signal Processing. 2001;148(5):332-336. DOI: 10.1049/ip-vis:20010594
  36. 36. Kovačević B, Banjac Z, Milosavljević M. Adaptive Digital Filters. Berlin: Springer-Verlag; 2013
  37. 37. Kovačević B, Banjac Z, Kovačević IK. Robust adaptive filtering using recursive weighted least squares with combined scale and variable forgetting factors. EURASIP Journal on Advances in Signal Processing. 2016;2016(1):37. DOI: 10.1186/s13634-016-0341-3
  38. 38. Hewer G, Martin R, Zeh J. Robust preprocessing for Kalman filtering of glint noise. IEEE Transactions on Aerospace and Electronic Systems. 1987;AES-23(1):120-128. DOI: 10.1109/TAES.1987.313340
  39. 39. Woolcock SC. Target Characteristics. London: Technical Editing and Reproduction, Ltd; 1973
  40. 40. Blackman SS, Popoli R. Design and Analysis of Modern Tracking Systems. Norwood, MA: Artech House; 1999
