Designing an adaptive algorithm of the Least Mean Square (LMS) family involves solving the well-known trade-off between the initial convergence speed and the steady-state mean-square error according to the requirements of the application at hand. The trade-off is controlled by the step-size parameter of the algorithm. A large step size leads to fast initial convergence, but the algorithm also exhibits a large mean-square error in the steady state; on the contrary, a small step size slows down convergence but results in a small steady-state error [9,17]. In several applications it is, however, desirable to have both, and hence it would be very attractive to design algorithms that overcome this trade-off.
Variable step size adaptive schemes offer a potential solution, allowing one to achieve both fast initial convergence and low steady-state misadjustment [1, 8, 12, 15, 18]. How successful these schemes are depends on how well the algorithm is able to estimate the distance of the adaptive filter weights from the optimal solution. The variable step size algorithms use different criteria for calculating the proper step size at any given time instant. For example, the algorithm proposed in  changes the time-varying convergence parameters in such a way that the change is proportional to the negative of the gradient of the squared estimation error with respect to the convergence parameter. Squared instantaneous errors have been used in  and the squared autocorrelation of errors at adjacent time instances in  to modify the step size. In reference  the norm of the projected weight error vector is used as a criterion to determine how close the adaptive filter is to its optimum performance.
More recently there has been interest in a combination scheme that is able to optimize the trade-off between convergence speed and steady-state error . The scheme consists of two adaptive filters that are simultaneously applied to the same inputs, as depicted in Figure 1. One of the filters has a large step size, allowing fast convergence, and the other one has a small step size for a small steady-state error. The outputs of the filters are combined through a mixing parameter . The performance of this scheme has been studied for some parameter update schemes [2, 6, 19]. The reference  uses a convex combination, i.e. the mixing parameter is constrained to lie between 0 and 1. The reference  presents a transient analysis of a slightly modified version of this scheme. In those papers the parameter is found using an LMS-type adaptive scheme and computing the sigmoidal function of the result. The reference  takes another approach, computing the mixing parameter using an affine combination. That paper uses the ratio of time averages of the instantaneous errors of the filters; the error function of the ratio is then computed to obtain .
In  a convex combination of two adaptive filters with different adaptation schemes has been investigated with the aim of improving the steady-state characteristics. One of the adaptive filters in that paper uses the LMS algorithm and the other one the Generalized Normalized Gradient Descent algorithm. The combination parameter is computed using stochastic gradient adaptation. In  the convex combination of two adaptive filters is applied in a variable filter length scheme to gain improvements in low SNR conditions. In  the combination has been used to join two affine projection filters with different regularization parameters. The work  uses the combination on parallel binary structured LMS algorithms. These three works use the LMS-like scheme of  to compute .
It should be noted that schemes involving two filters have been proposed earlier [3, 16]. However, in those early schemes only one of the filters has been adaptive, while the other one has used fixed filter weights. Updating of the fixed filter has been accomplished by copying all the coefficients from the adaptive filter whenever the adaptive filter has been performing better than the fixed one.
In this Chapter we compute the mixing parameter from the output signals of the individual filters. This way of calculating the mixing parameter is optimal in the sense that it results from minimization of the mean-squared error of the combined filter. The scheme was independently proposed in  and . In  the output signal based combination was used in an adaptive line enhancer, and in  it was used in the system identification application.
We will investigate three applications of the combination: system identification, adaptive beamforming and the adaptive line enhancer. We describe each of the applications in detail and present the corresponding analysis.
We will assume throughout the Chapter that the signals are complex-valued and that the combination scheme uses two LMS adaptive filters. Italic, bold face lower case and bold face upper case letters will be used for scalars, column vectors and matrices, respectively. The superscript
2. Combination of Two Adaptive Filters
Let us consider two adaptive filters, as shown in Figure 1, each of them updated using the LMS adaptation rule
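Since the equations themselves are not reproduced in this excerpt, the update can be sketched as the standard complex-valued LMS recursion; the notation below is assumed, not taken from the original:

\begin{align}
e_i(n) &= d(n) - \mathbf{w}_i^H(n)\,\mathbf{x}(n),\\
\mathbf{w}_i(n+1) &= \mathbf{w}_i(n) + \mu_i\,\mathbf{x}(n)\,e_i^*(n), \qquad i = 1, 2,
\end{align}

where $\mathbf{w}_i(n)$ is the weight vector of filter $i$, $\mathbf{x}(n)$ is the common input regressor, $d(n)$ is the desired signal and $\mu_i$ is the step size, with $\mu_1 \gg \mu_2$ for a fast and a slow filter.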
In the above
The desired signal in 1 can be expressed as
where the vector
The outputs of the two adaptive filters are combined according to
We define the
Let us now find
Setting the derivative to zero results in
where we have replaced the Wiener filter output signal
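The missing derivation can be sketched as follows, assuming the combined output is $y(n) = \lambda(n)\,y_1(n) + (1-\lambda(n))\,y_2(n)$ with a real-valued mixing parameter $\lambda(n)$. Minimizing the mean-square error of the combination,

\begin{align}
J(\lambda) = E\!\left[\,\bigl|d(n) - y_2(n) - \lambda\,(y_1(n) - y_2(n))\bigr|^2\,\right],
\end{align}

and setting the derivative with respect to $\lambda$ to zero gives

\begin{align}
\lambda_{\mathrm{opt}}(n) = \frac{\operatorname{Re}\,E\!\left[(d(n) - y_2(n))\,(y_1(n) - y_2(n))^*\right]}{E\!\left[\,|y_1(n) - y_2(n)|^2\,\right]},
\end{align}

which depends only on the desired signal and the output signals of the two component filters.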
3. System Identification
In several areas it is essential to build a mathematical model of some phenomenon or system. In this class of applications the adaptive filter can be used to find the best fit of a linear model to an unknown plant. The plant and the adaptive filter are driven by the same known input signal, and the plant output provides the desired signal of the adaptive filter. The plant can be dynamic, in which case we have a time-varying model. The system identification configuration is depicted in Figure 2. As before
The same basic configuration is also used to solve echo and noise cancellation problems. In echo cancellation the unknown plant is the echo path, either electrical or acoustical, and the input signal
In noise cancellation problems the signal
Here we use the combination of two adaptive filters described in the previous Section to solve the system identification problem.
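Before turning to the analysis, the scheme can be sketched in code. The following is a minimal real-valued simulation under assumed parameters (the filter length, step sizes and smoothing constant are illustrative; Section 6 uses 64 taps and other values). The mixing parameter is computed from the filter output signals as the MSE-minimizing ratio, with the expectations replaced by exponential averages, since the chapter's exact expressions in (7) are not reproduced in this excerpt:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical plant and signals (illustrative values)
L = 16
w_true = rng.standard_normal(L) / np.sqrt(L)      # unknown FIR plant
N = 20000
x = rng.standard_normal(N)                        # known input signal
v = 0.01 * rng.standard_normal(N)                 # measurement noise

mu_fast, mu_slow = 0.05, 0.005                    # step sizes of the two filters
gamma = 0.01                                      # smoothing constant
w1 = np.zeros(L)                                  # fast adapting filter
w2 = np.zeros(L)                                  # slowly adapting filter
num, den = 0.0, 1e-8                              # running averages for the mixing parameter
mse = []

for n in range(L - 1, N):
    u = x[n - L + 1:n + 1][::-1]                  # input regressor
    d = w_true @ u + v[n]                         # desired = plant output + noise
    y1, y2 = w1 @ u, w2 @ u
    w1 += mu_fast * (d - y1) * u                  # LMS update, fast filter
    w2 += mu_slow * (d - y2) * u                  # LMS update, slow filter
    # Output-signal-based mixing parameter: expectations of the
    # MSE-optimal ratio replaced by exponential averages
    num = (1 - gamma) * num + gamma * (d - y2) * (y1 - y2)
    den = (1 - gamma) * den + gamma * (y1 - y2) ** 2
    lam = num / den
    mse.append((d - (lam * y1 + (1 - lam) * y2)) ** 2)
```

Run for long enough, the combined error first follows the fast filter and later drops to the error floor of the slow filter, which is the behaviour analysed below.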
3.2 Excess Mean Square Error
In this Section we are interested in finding expressions that characterize the transient performance of the combined algorithm, i.e. we intend to derive formulae that predict the entire course of adaptation of the algorithm. Before we can proceed we need, however, to introduce some notation.
First let us denote the weight error vector of
Then the equivalent weight error vector of the combined adaptive filter will be
The mean square deviation of the combined filter
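Because the defining equations are not reproduced in this excerpt, we sketch the quantities involved under assumed notation. With $\boldsymbol{\varepsilon}_i(n) = \mathbf{w}_o - \mathbf{w}_i(n)$ denoting the weight error vector of filter $i$ relative to the optimal (Wiener) solution $\mathbf{w}_o$, the equivalent weight error of the combination and its mean square deviation take the form

\begin{align}
\bar{\boldsymbol{\varepsilon}}(n) &= \lambda(n)\,\boldsymbol{\varepsilon}_1(n) + (1-\lambda(n))\,\boldsymbol{\varepsilon}_2(n),\\
\mathrm{MSD}(n) &= E\!\left[\,\|\bar{\boldsymbol{\varepsilon}}(n)\|^2\,\right],
\end{align}

so that the MSD splits into auto-terms of the individual filters and a cross-term between them.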
It follows from (5) that we can express the
In what follows we often drop the explicit time index
We thus need to investigate the evolution of the individual terms of the type
Reformulating relation (1) as
and subtracting (2) from
We next approximate the outer product of input signal vectors by its correlation matrix
This means in fact that we apply the small step size theory  even if the assumption of a small step size is not really true for the fast adapting filter. In our simulation study we will see, however, that the assumption works rather well in practice.
Let us now define the eigendecomposition of the correlation matrix as
and the transformed last term of equation (18) as
Then we can rewrite the equation (18) after multiplying both sides by
We note that the mean of
We now invoke the Gaussian moment factoring theorem to write
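The theorem in question (the Isserlis theorem for circularly symmetric, zero-mean, jointly Gaussian variables) factors a fourth-order moment into second-order moments:

\begin{align}
E[z_1 z_2^* z_3 z_4^*] = E[z_1 z_2^*]\,E[z_3 z_4^*] + E[z_1 z_4^*]\,E[z_3 z_2^*],
\end{align}

the terms involving $E[z_i z_j]$ without conjugation vanishing under the circularity assumption.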
The first term in the above is zero due to the principle of orthogonality and the second term equals
As the matrices
We immediately see that the mean value of
as the vector
To proceed with our development for the combination of two LMS filters we note that we can express the MSD and its individual components in (10) through the transformed weight error vectors as
so we also need to find the auto- and cross correlations of
Let us concentrate on the
We now note that most likely the two component filters are initialized to the same value
We then have for the
The sum over
After substitution of the above into (31) and simplification we are left with
which is our result for a single entry of the MSD cross-term vector. It is easy to see that for the terms involving a single filter we get expressions that coincide with the ones available in the literature .
Let us now focus on the cross term
appearing in the EMSE equation (14). Due to the independence assumption we can rewrite this using the properties of the trace operator as
Let us now recall that according to (20) for any of the filters
The EMSE of the combined filter can now be computed as
where the components of type
4. Adaptive Sensor Array
In this Section we describe how to use the combination of two adaptive filters in an adaptive beamformer. The beamformer we employ here is often termed the Generalized Sidelobe Canceller .
where is the wavelength of the incident wave and
Suppose that the signal impinging on the array of
are called the steering vectors of the respective sources. We assume that the source of interest is located at the electrical angle
The block diagram of the Generalized Sidelobe Canceller is shown in Figure 3. The structure consists of two branches. The upper branch is the steering branch, which directs its beam toward the desired source. The lower branch is the blocking branch, which blocks the signals impinging on the array from the direction of the desired source and includes an adaptive algorithm that minimizes the mean square error between the output signals of the two branches.
The weights in steering branch
i.e. we require the response in the direction of the source of interest
The signal at the output of the upper branch is given by
In the lower branch we have a blocking matrix, that will block any signal coming from the direction
The vector valued signal
The output of the algorithm is
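As a concrete illustration, a minimal Generalized Sidelobe Canceller simulation might look as follows. The array model, signal parameters and the SVD-based construction of the blocking matrix are assumptions for this sketch, not taken from the original:

```python
import numpy as np

rng = np.random.default_rng(1)
M = 8                          # number of array elements
theta0, theta1 = 0.0, 0.15     # arrival angles (rad): desired source, interferer

def steer(theta):
    # Steering vector of a half-wavelength-spaced uniform linear array
    return np.exp(1j * np.pi * np.arange(M) * np.sin(theta))

a0, a1 = steer(theta0), steer(theta1)

# Blocking matrix: orthonormal basis of the subspace orthogonal to a0
U, _, _ = np.linalg.svd(a0.reshape(-1, 1))
B = U[:, 1:]                   # M x (M-1) matrix with B^H a0 = 0

w_q = a0 / M                   # steering branch weights (unit gain toward a0)
w = np.zeros(M - 1, dtype=complex)
mu = 0.005                     # LMS step size

for n in range(4000):
    s = rng.standard_normal()                                    # desired signal
    i = 2.0 * (rng.standard_normal() + 1j * rng.standard_normal()) / np.sqrt(2)
    v = 0.1 * (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)
    snap = s * a0 + i * a1 + v                                   # array snapshot
    d = np.vdot(w_q, snap)             # steering branch output
    u = B.conj().T @ snap              # blocking branch signals
    e = d - np.vdot(w, u)              # beamformer output (also the LMS error)
    w += mu * u * np.conj(e)           # complex LMS update

# Overall array response after adaptation
resp = lambda a: np.vdot(w_q, a) - np.vdot(w, B.conj().T @ a)
```

The blocking matrix guarantees a distortionless response toward the desired source, while the adaptive branch drives a null onto the interferer direction.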
4.1 Signal to Interference and Noise Ratio
The EMSE of the adaptive algorithm can be analysed as in Section 3.1. In this application we are also interested in the signal to interference and noise ratio (SINR) at the array output. To evaluate this we first note that the power that the signal of interest generates at the array output is according to (40)
To find the interference and noise power we first define the reduced signal vector
The correlation matrix of interference and noise in the signal
It follows from the standard Wiener filtering theory that the minimum interference and noise power at the array output is given by
where the desired signal variance excluding the signal from the source of interest is
and the crosscorrelation vector between the adaptive filter input signal and desired signal excluding the signal from source of interest is
We can now find the eigendecomposition of
and the signal to noise ratio is thus given by
5. Adaptive Line Enhancer
The adaptive line enhancer is a device that is able to separate its input into two components. One of them consists mostly of the narrow-band signals present at the input, and the other one consists mostly of the broadband noise. In the context of this Chapter a signal is considered to be narrow-band if its bandwidth is small compared to the sampling frequency of the system.
We assume that the broadband noise is zero mean, white and Gaussian and that the narrow-band component is centred. One is usually interested in the narrow-band components, and the device is often used to clean narrow-band signals from noise before any further processing. The line enhancer is shown in Figure 4. Note that the input signal to the adaptive filter of the line enhancer is delayed by
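A minimal line enhancer sketch, assuming a single sinusoid in white noise and a one-sample decorrelation delay (all parameter values here are illustrative, and a single LMS filter is used rather than the combination):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 20000
n = np.arange(N)
s = np.sin(2 * np.pi * 0.1 * n)       # narrow-band component
v = rng.standard_normal(N)            # broadband white noise
x = s + v                             # line enhancer input

L, delay, mu = 32, 1, 0.001           # filter length, decorrelation delay, step size
w = np.zeros(L)
y = np.zeros(N)                       # narrow-band estimate (filter output)

for k in range(L + delay, N):
    # Regressor built from the DELAYED input: the delay decorrelates the
    # white noise while the sinusoid remains predictable
    u = x[k - delay - L + 1:k - delay + 1][::-1]
    y[k] = w @ u
    w += mu * (x[k] - y[k]) * u       # LMS: predict the current input sample
```

Because the white noise is uncorrelated across the delay, the filter can only predict the narrow-band part, so y approaches the sinusoid and the prediction error carries mostly the noise.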
Let us now find the autocorrelation function of the enhancer output signal
The input signal
We can decompose the impulse response of the adaptive filter into two components. One of them is the optimal Wiener filter for the problem
and the other one,
The output signal can hence be expressed as
Substituting (53) and (55) into (52) and noticing that the cross-correlation between the Wiener filter output and that of the filter defined by weight errors is
because of the adopted independence assumption and because
Developing and grouping terms in the above equation results in
Using the fact that
We now invoke the independence assumption saying that the weight vector
To proceed we need to find the matrix
5.1 Weight error correlation matrix
In this Section we investigate the combination of two adaptive filters and derive the expressions for the crosscorrelation matrix between the output signals of the individual filters
For the problem at hand we can rewrite equation (18) noting that we have introduced a
For the weight error correlation matrix we then have
The second and third terms of the above equal zero because we have made the usual independence theory assumptions, which state that the weight errors
We now assume that the signal to noise ratio is low so that the input signal is dominated by the white noise process
In steady state, when
Solving the above for
5.2 Second order statistics of line enhancer output signal
As we see from the previous discussion, the correlation matrix of the weight error vector is diagonal. We therefore have that the matrix
As the noise
From (4) we see that the autocorrelation lags of the combination output signal
The autocorrelation matrix of
Thus far we have evaluated the terms
Due to the independence assumption we can rewrite (70) using the properties of the trace operator as
We are now ready to find
The power spectrum of the output process
6. Simulation Results
In this Section we present the results of our simulation study.
In order to obtain a practical algorithm, the expectation operators in both the numerator and the denominator of (7) have been replaced by exponential averaging of the type
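A sketch of such an estimator, with the smoothing constant gamma as an assumed tuning parameter:

```python
import numpy as np

def exp_avg(prev, sample, gamma=0.01):
    # One step of exponential averaging, a practical stand-in for an expectation:
    # P_hat(n) = (1 - gamma) * P_hat(n - 1) + gamma * q(n)
    return (1.0 - gamma) * prev + gamma * sample

# Usage: track the mean power of a white noise signal sample by sample
rng = np.random.default_rng(0)
p_hat = 0.0
for q in rng.standard_normal(20000) ** 2:
    p_hat = exp_avg(p_hat, q, gamma=0.005)
```

Smaller gamma gives a smoother but slower estimate; the effective averaging window is roughly 1/gamma samples.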
6.1 System Identification
We have selected the sample echo path model number one from , shown in Figure 5, to be the unknown system to identify, and combined two 64 tap long adaptive filters.
In the Figures below the noisy blue line represents the simulation result and the smooth red line is the theoretical result. The curves are averaged over 100 independent trials.
In the system identification example we use Gaussian white noise with unity variance as the input signal. The measurement noise is another white Gaussian noise with variance . The step sizes are  for the fast adapting filter and  for the slowly adapting filter. Figure 6 depicts the evolution of the EMSE in time. One can see that the system converges fast in the beginning. The fast convergence is followed by a stabilization period between sample times 1000 and 7000, followed by another convergence to a lower EMSE level between sample times 8000 and 12000. The second convergence occurs when the mean squared error of the filter with the small step size surpasses the performance of the filter with the large step size. One can observe that there is good agreement between the theoretical and the simulated curves, so that they are difficult to distinguish from each other.
The combination parameter is shown in Figure 7. At the beginning, when the fast converging filter gives a smaller EMSE than the slowly converging one, the parameter is close to unity. When the slow filter catches up with the fast one, the parameter starts to decrease, obtaining a small negative value at the end of the simulation example. The theoretical and simulated curves fit well.
Figure 8 shows the time evolution of the mean square deviation of the combination in the same test case. Again one can see that the theoretical and simulation curves fit well.
6.2 Adaptive beamforming
In the beamforming example we have used an 8-element linear array with half-wavelength spacing. The noise power is
The steady state antenna pattern is shown in Figure 9. One can see that the algorithm has formed deep nulls in the directions of the interferers, while the response in the direction of the useful signal is equal to the number of antennas, i.e. 8.
The evolution of the EMSE in this simulation example is depicted in Figure 10. One can see a rapid convergence at the beginning of the simulation example. Then the EMSE value stabilizes at a certain level, and after a while a second convergence occurs. The dashed red line is the theoretical result and the solid blue line is the simulation result. One can see that the two overlap and are indistinguishable in black and white print.
The time evolution of the combination parameter for this simulation example is shown in Figure 11. At the beginning the parameter is close to one, forcing the output of the combination to follow the output signal of the fast adapting filter. Eventually the slow filter catches up with the fast one and the parameter starts to decrease, obtaining at the end of the simulation example a small negative value, so that the output signal is dominated by the output signal of the slowly converging filter. One can see that the simulation and theoretical curves are close to each other.
The signal to interference and noise ratio evolution is shown in Figure 12. One can see a fast improvement of the SINR at the beginning of the simulation example, followed by a stabilization region. After a while a new region of SINR improvement occurs, and finally the SINR stabilizes at an improved level. Again the theoretical result matches the simulation curve well, making the curves indistinguishable in black and white print.
6.3 Adaptive Line Enhancer
In order to illustrate the adaptive line enhancer application we have used length
The input signal consists of three sine waves and additive noise with unity variance. The sine waves with normalized frequencies 0.1 and 0.4 have amplitudes equal to one, and the third sine wave with normalized frequency 0.25 has an amplitude equal to 0.5. The spectra of the input signal
In Figure 14 we show the correlation functions of input and output signals in the second simulation example. We can see that the theoretical correlation matches the correlation computed from simulations well.
The evolution of the excess mean square error of the combination, together with that of the individual filters, is shown in Figure 15. We see the fast initial convergence, which is due to the fast adapting filter. After the initial convergence there is a period of stabilization, followed by a second convergence between sample times 500 and 1500, when the error power of the slowly adapting filter drops below that of the fast one.
In our final simulation example (Figure 16) we use three unity amplitude sinusoids with frequencies 0.1, 0.2 and 0.4. We have increased the noise variance to 10 so that the noise power is 20 times the power of each of the individual sinusoids. The adaptive filter is
In order to make an LMS-type adaptive algorithm work properly one has to select a suitable step size . The step size has to be smaller than
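The bound elided above can be sketched from the standard small-step-size theory (this is the textbook condition, stated here as an assumption since the original equation is missing): a common sufficient condition for convergence in the mean is

\begin{align}
0 < \mu < \frac{2}{\lambda_{\max}},
\end{align}

where $\lambda_{\max}$ is the largest eigenvalue of the input correlation matrix $\mathbf{R}$. Since $\lambda_{\max} \le \operatorname{tr}(\mathbf{R}) = L\,\sigma_x^2$ for a length-$L$ filter with input power $\sigma_x^2$, the conservative choice $\mu < 2/(L\sigma_x^2)$ is often used in practice.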
Aboulnasr, T. & Mayyas, K. (1997). A robust variable step-size LMS-type algorithm: Analysis and simulations, 45, 631–639.
Arenas-Garcia, J., Figueiras-Vidal, A. R. & Sayed, A. H. (2006). Mean-square performance of a convex combination of two adaptive filters, 54, 1078–1090.
Armbruster, W. (1992). Wideband Acoustic Echo Canceller with Two Filter Structure, Signal Processing VI, Theories and Applications, Vanderwalle, J., Boite, R., Moonen, M. & Oosterlinck, A. (eds.), Elsevier Science Publishers B.V.
Azpicueta-Ruiz, L. A., Figueiras-Vidal, A. R. & Arenas-Garcia, J. (2008a). A new least squares adaptation scheme for the affine combination of two adaptive filters, Cancun, Mexico, 327–332.
Azpicueta-Ruiz, L. A., Figueiras-Vidal, A. R. & Arenas-Garcia, J. (2008b). A normalized adaptation scheme for the convex combination of two adaptive filters, Las Vegas, Nevada, 3301–3304.
Bershad, N. J., Bermudez, J. C. & Tourneret, J. H. (2008). An affine combination of two LMS adaptive filters - transient mean-square analysis, 56, 1853–1864.
Fathiyan, A. & Eshghi, M. (2009). Combining several PBS-LMS filters as a general form of convex combination of two filters, 9, 759–764.
Harris, R. W., Chabries, D. M. & Bishop, F. A. (1986). Variable step (VS) adaptive filter algorithm, 34, 309–316.
Haykin, S. (2002). Adaptive Filter Theory, Fourth Edition, Prentice Hall.
ITU-T Recommendation G.168 (2009). Digital Network Echo Cancellers.
Kim, K., Choi, Y., Kim, S. & Song, W. (2008). Convex combination of affine projection filters with individual regularization, Shimonoseki, Japan, 901–904.
Kwong, R. H. & Johnston, E. W. (1992). A variable step size LMS algorithm, 40, 1633–1642.
Mandic, D., Vayanos, P., Boukis, C., Jelfs, B., Goh, S. I., Gautama, T. & Rutkowski, T. (2007). Collaborative adaptive learning using hybrid filters, Honolulu, Hawaii, 901–924.
Martinez-Ramon, M., Arenas-Garcia, J., Navia-Vazquez, A. & Figueiras-Vidal, A. R. (2002). An adaptive combination of adaptive filters for plant identification, Santorini, Greece, 1195–1198.
Mathews, V. J. & Xie, Z. (1993). A stochastic gradient adaptive filter with gradient adaptive step size, 41, 2075–2087.
Ochiai, K. (1977). Echo canceller with two echo path models, 25, 589–594.
Sayed, A. H. (2008). Adaptive Filters, John Wiley and Sons.
Shin, H. C. & Sayed, A. H. (2004). Variable step-size NLMS and affine projection algorithms, 11, 132–135.
Silva, M. T. M., Nascimento, V. H. & Arenas-Garcia, J. (2010). A transient analysis for the convex combination of two adaptive filters with transfer of coefficients, Dallas, TX, USA, 3842–3845.
Stoica, P. & Moses, R. (2005). Spectral Analysis of Signals, Prentice Hall.
Trump, T. (2009). An output signal based combination of two NLMS adaptive algorithms, Santorini, Greece.
Trump, T. (2011a). Output signal based combination of two NLMS adaptive filters - transient analysis, 60(4), 258–268.
Trump, T. (2011b). Output statistics of a line enhancer based on a combination of two adaptive filters, 1, 244–252.
Zhang, Y. & Chambers, J. A. (2006). Convex combination of adaptive filters for a variable tap-length LMS algorithm, 10, 628–631.