Open access

Convergence Evaluation of a Random Step-Size NLMS Adaptive Algorithm in System Identification and Channel Equalization

Written By

Shihab Jimaa

Submitted: 07 October 2010 Published: 06 September 2011

DOI: 10.5772/16262

From the Edited Volume

Adaptive Filtering

Edited by Lino Garcia


1. Introduction

Adaptive filtering algorithms have been widely applied to solve many problems in digital communication systems [1-3]. So far, the Least Mean Square (LMS) algorithm and its normalized version (NLMS) have been the most commonly adopted approaches, owing to the clarity of the mean-square-error cost function in statistical terms and the simplicity of the computation. The NLMS algorithm is known to give better convergence characteristics than the LMS because it effectively uses a variable step-size parameter, obtained by dividing the fixed step size by the input power at each iteration. However, a critical issue associated with both algorithms is the choice of the step-size parameter, which sets the trade-off between the steady-state misadjustment and the speed of adaptation. Recent studies have therefore presented variable step-size NLMS algorithms to address this issue [4-7], and many other adaptive algorithms [8, 9] have been defined and studied to improve the adaptation performance. In this work, the proposed approach of randomizing the NLMS algorithm's step-size has been introduced into the adaptation process of both channel equalization and system identification, and tested over defined communication channels. The proposed random step-size approach yields an algorithm with a good convergence rate and steady-state stability.

The objective of this chapter is to analyze and compare the proposed random step-size NLMS and the standard NLMS algorithms, implemented in the adaptation process of two fundamental applications of adaptive filters, namely adaptive channel equalization and adaptive system identification. In particular, we focus our attention on the behavior of the Mean Square Error (MSE) of the proposed and the standard NLMS algorithms in these two applications. From the MSE performance we can determine the speed of convergence and the steady-state noise floor level. The key idea in this chapter is a new and simple approach to adjusting the step-size (μ) of the standard NLMS adaptive algorithm, which has been implemented and tested. The value of μ is controlled entirely by a Pseudo-Random Number (PRN) generator with a uniform distribution over the interval from 0 to 1. Randomizing the step-size eliminates much of the trade-off between residual error and convergence speed associated with a fixed step-size. In this case, the adaptive filter changes its coefficients according to the NLMS algorithm, whose step-size is controlled by the PRN so as to pseudo-randomize it. This chapter also covers the most popular advances in adaptive filtering, including adaptive algorithms, adaptive channel equalization, and adaptive system identification.

In this chapter, the concept of using a random step-size in the adaptation process of the NLMS adaptive algorithm is introduced and investigated. The investigation includes calculating and plotting the MSE performance of the proposed algorithm in system identification and channel equalization, and comparing the computer simulation results with those of the standard NLMS algorithm.

The organization of this chapter is as follows. Section 2 gives an overview of adaptive filters and their applications. Section 3 describes the standard NLMS and the proposed random step-size NLMS algorithms. Section 4 presents the performance analysis of adaptive channel equalization and adaptive system identification. Finally, the conclusion and the list of references are given in Sections 5 and 6, respectively.


2. Overview of adaptive filters and applications

An adaptive filter generally consists of two distinct parts: a filter, whose structure is designed to perform a desired processing function, and an adaptive algorithm for adjusting the coefficients of that filter. The ability of an adaptive filter to operate satisfactorily in an unknown environment and to track time variations of the input statistics makes it a powerful device for signal processing and control applications [1].

Adaptive filters are self-learning. As the input signal flows into the filter, the adaptive filter coefficients adjust themselves to achieve the desired result, such as identifying an unknown filter or cancelling noise in the input signal. Figure 1 represents the adaptive filter, comprising a transversal filter and an adaptive weight-control mechanism. An adaptive Finite Impulse Response (FIR) or Infinite Impulse Response (IIR) filter designs itself based on the characteristics of the input signal and of a signal that represents the desired behavior of the filter on its input. Designing the filter does not require any other frequency-response information or specification. To define the self-learning process the filter uses, one selects the adaptive algorithm used to reduce the error between the output signal y(k) and the desired signal d(k). When the least-mean-square criterion for e(k) has reached its minimum value through the iterations of the adaptive algorithm, the adaptation is finished and the coefficients have converged to a solution. The output of the adaptive filter then closely matches the desired signal d(k). When the characteristics of the input data change, sometimes called the filter environment, the filter adapts to the new environment by generating a new set of coefficients for the new data. Note that when e(k) goes to zero and remains there, perfect adaptation has been achieved; this is the ideal result, but it is unlikely in practice.

The ability of an adaptive filter to operate satisfactorily in an unknown environment and to track time variations of the input statistics makes it a powerful device for signal processing and control applications [12]. In fact, adaptive filters have been successfully applied in such diverse fields as communications, radar, sonar, seismology, and biomedical engineering. Although these applications are quite different in nature, they share one basic aspect: an input vector and a desired response are used to compute an estimation error, which in turn is used to control the values of a set of adjustable filter coefficients. The fundamental difference between the various applications of adaptive filtering lies in the way in which the desired response is extracted.

In many applications requiring filtering, the necessary frequency response may not be known beforehand, or it may vary with time (for example, suppression of engine harmonics in a car stereo). In such applications, an adaptive filter that can automatically design itself and track system variations in time is extremely useful. Adaptive filters are used extensively in a wide variety of applications, particularly in telecommunications. Although adaptive filters have been successfully applied in many communications and signal processing fields, including adaptive system identification, adaptive channel equalization, adaptive interference (noise) cancellation, and adaptive echo cancellation, the focus here is on their applications in adaptive channel equalization and adaptive system identification.


3. Adaptive algorithms

Adaptive filter algorithms have been used in many signal processing applications [1]. One of these is the normalized least mean square (NLMS) algorithm, which is the most popular because it is simple yet robust. The NLMS performs better than the LMS because its effective step size adjusts automatically with the input power, whereas that of the LMS does not [2]. A critical issue associated with all such algorithms is the choice of the step-size parameter, which sets the trade-off between the steady-state misadjustment and the speed of adaptation. A recent study has presented a variable step-size LMS algorithm to remedy this issue [4]. Nevertheless, many other adaptive algorithms based on non-mean-square cost functions can also be defined to improve the adaptation performance. For example, the use of the fourth power of the error has been investigated [8], resulting in the Least-Mean-Fourth (LMF) adaptive algorithm. The use of a switching algorithm in adaptive channel equalization has also been studied [9].

Two general performance measures of an adaptive filter are its rate of convergence and its misadjustment. A fast rate of convergence allows the algorithm to adapt rapidly to a stationary environment of unknown statistics. However, the misadjustment, a quantitative measure of how far the final mean-square error (MSE), averaged over an ensemble of adaptive filters, deviates from the minimum MSE, becomes more severe as the rate of convergence becomes faster, so a trade-off exists between the two.

3.1. NLMS algorithm

The least mean square (LMS) algorithm has been widely used for adaptive filters due to its simplicity and numerical robustness. The NLMS algorithm, on the other hand, is known to give better convergence characteristics than the LMS, because the NLMS uses a variable step-size parameter in which, at each iteration, a fixed step-size parameter is divided by the input power. Depending on the value of the fixed step-size parameter, however, the LMS and NLMS algorithms exhibit a trade-off between the convergence speed and the mean square error (MSE) after convergence [5].

3.1.1. Adaptive filter

A general form of the adaptive filter is shown in Figure 1, where an input signal u(n) produces an output signal y(n); the output signal y(n) is then subtracted from the desired response d(n) to produce an error signal e(n). The input signal u(n) and error signal e(n) are fed together into an adaptive weight-control mechanism, which applies a weight adjustment to the transversal filter so as to minimize the MSE [11]. This process is repeated for a number of iterations until the filter reaches a steady state. In summary, the purpose of the adaptive system is to filter the input signal u(n) so that its output resembles the desired signal d(n). The filter could be of any type, but the most widely used is the N-tap FIR filter because of its simplicity and stability.

Figure 1.

Block diagram of adaptive transversal filter
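As a concrete illustration of the structure in Figure 1, the short sketch below (in Python/NumPy; all names are illustrative and not part of the chapter) iterates a transversal filter and leaves the weight-update rule abstract, so that the NLMS rule of Section 3.1.2 or the random step-size rule of Section 3.2 can be plugged in:

```python
import numpy as np

def adapt(u_sig, d_sig, M, update):
    """Generic transversal adaptive filter loop (cf. Figure 1).

    u_sig, d_sig : input and desired-response sequences of equal length
    M            : number of FIR taps
    update       : weight-update rule, e.g. the NLMS rule of Section 3.1.2
    Returns the final weight vector and the error sequence e(n)."""
    w = np.zeros(M)
    e = np.zeros(len(u_sig))
    for n in range(len(u_sig)):
        u = np.zeros(M)                 # tap-input vector [u(n), ..., u(n-M+1)]
        k = min(M, n + 1)
        u[:k] = u_sig[n::-1][:k]
        y = np.dot(w, u)                # filter output y(n)
        e[n] = d_sig[n] - y             # error e(n) = d(n) - y(n)
        w = update(w, u, e[n])          # adaptive weight-control mechanism
    return w, e
```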

3.1.2. Algorithm’s operation

The weight vector of an adaptive filter should be changed in a minimal manner, subject to a constraint imposed on the updated filter's output. The NLMS adaptive filter is thus a manifestation of the principle of minimal disturbance from one iteration to the next [10]. To express this as an equation, let w(n) be the old weight vector of the filter at iteration n and w(n+1) its updated weight vector at iteration n+1. The criterion for designing the NLMS filter can then be formulated as a constrained optimization: given the input vector u(n) and desired response d(n), determine the updated tap-weight vector w(n+1) so as to minimize the squared Euclidean norm of the change:

$\delta w(n+1) = w(n+1) - w(n)$   (1)

Subject to the constraint

$d(n) = w^{H}(n+1)\, u(n)$   (2)

The method of the lagrange multiplier is used to solve this problem as:

$w(n+1) = w(n) + \tfrac{1}{2}\lambda\, u(n)$   (3)

The unknown multiplier, λ, can be obtained by substituting (3) into (2):

$\lambda = \dfrac{2\, e(n)}{\|u(n)\|^{2}}$   (4)

where e(n) is the error signal, given by:

$e(n) = d(n) - w^{H}(n)\, u(n)$   (5)

Then, combining (3) and (4) to formulate the optimal value of the incremental change, δw(n+1), we obtain:

$w(n+1) - w(n) = \dfrac{\mu}{\|u(n)\|^{2}}\, u(n)\, e(n)$   (6)

Equation (6) can be written as:

$w(n+1) = w(n) + \dfrac{\mu}{\alpha + \|u(n)\|^{2}}\, u(n)\, e(n)$   (7)

where the constant α is added to the denominator so that w(n+1) remains bounded when the tap-input vector u(n) is very small.
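For real-valued data, the update in (7) reduces to a single line of code. The following minimal sketch (Python/NumPy; the function name and the default values of μ and α are illustrative assumptions) could serve as the weight-update rule in the loop sketched in Section 3.1.1:

```python
import numpy as np

def nlms_update(w, u, e, mu=0.5, alpha=1e-6):
    """NLMS weight update of Eq. (7) for real-valued data.

    w     : current tap-weight vector w(n)
    u     : tap-input vector u(n)
    e     : error signal e(n) = d(n) - w(n)^T u(n), Eq. (5)
    mu    : fixed step-size parameter
    alpha : small constant preventing division by a near-zero input norm"""
    return w + (mu / (alpha + np.dot(u, u))) * u * e
```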

3.1.3. Step-size

The stability of the NLMS algorithm depends on the value of its step-size, and thus a criterion for choosing it should be found [12]. The desired response is set as follows:

$d(n) = w^{H} u(n) + v(n)$   (8)

where v(n) is an unknown disturbance. The tap-weight vector w(n) provides an estimate of the unknown parameter vector w. The weight-error vector is then:

$\varepsilon(n) = w - w(n)$   (9)

Substituting (9) into (7) yields:

$\varepsilon(n+1) = \varepsilon(n) - \dfrac{\mu}{\|u(n)\|^{2}}\, u(n)\, e(n)$   (10)

To study the stability performance of adaptive filters, the mean-square deviation may be considered [11]:

$\xi(n) = E\!\left[\|\varepsilon(n)\|^{2}\right]$   (11)

where E denotes expectation. Substituting (10) into (11) yields:

$\xi(n+1) - \xi(n) = \mu^{2}\, E\!\left[\dfrac{|e(n)|^{2}}{\|u(n)\|^{2}}\right] - 2\mu\, E\!\left\{\mathrm{Re}\!\left[\dfrac{e_{u}(n)\, e(n)}{\|u(n)\|^{2}}\right]\right\}$   (12)

where $e_{u}(n)$ is the undisturbed error signal, defined by:

$e_{u}(n) = \varepsilon^{H}(n)\, u(n)$   (13)

The bounded range of the normalized step-size parameter can be found from (12) as:

$0 < \mu < \dfrac{2\,\mathrm{Re}\!\left\{E\!\left[e_{u}(n)\, e(n)\,/\,\|u(n)\|^{2}\right]\right\}}{E\!\left[|e(n)|^{2}\,/\,\|u(n)\|^{2}\right]}$   (14)

For the case of real-valued data, the following equation can be used:

$E\!\left[|e_{u}(n)|^{2}\right] = E\!\left[\|u(n)\|^{2}\right]\,\xi(n)$   (15)

Substituting (15) into (14) yields:

$0 < \mu < \dfrac{2\, E\!\left[\|u(n)\|^{2}\right]\,\xi(n)}{E\!\left[|e(n)|^{2}\right]}$   (16)

or

$0 < \mu < 2\mu_{\mathrm{opt}}$   (17)

where $E[|e(n)|^{2}]$ is the estimate of the error-signal power, $E[\|u(n)\|^{2}]$ is the estimate of the input-signal power, $\xi(n)$ is the estimate of the mean-square deviation, and $\mu_{\mathrm{opt}}$ is the optimal step-size parameter.

3.2. The random step-size algorithm

Using the proposed idea of randomizing the step size, the step-size of the NLMS algorithm becomes variable: at each iteration the fixed step size is multiplied by PN(n), a pseudo-random number drawn from a uniform distribution over [0, 1]. The resulting pseudo-random step-size NLMS algorithm is:

$w(n+1) = w(n) + \dfrac{\mu\, PN(n)}{\alpha + \|u(n)\|^{2}}\, u(n)\, e(n)$   (18)

where w(n) is the previous weight vector of the filter and w(n+1) is the updated weight vector.

The step-size μ directly affects how quickly the adaptive filter converges toward the unknown system. If μ is very small, the coefficients change only by a small amount at each update, and the filter converges slowly. With a larger step-size, more gradient information is included in each update, and the filter converges more quickly; however, when the step-size is too large, the coefficients may change too quickly and the filter will diverge. (It is possible in some cases to determine analytically the largest value of μ ensuring convergence.) In summary, within the margin given in (17), the larger μ is, the faster the convergence rate but the less stable the algorithm is around the minimum value; the smaller μ is, the slower the convergence rate but the more stable it is around the optimum value.
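The only change with respect to the standard NLMS update is the multiplication of μ by a fresh uniform sample at every iteration, as in (18). A minimal sketch under the same assumptions as the NLMS snippet in Section 3.1.2:

```python
import numpy as np

rng = np.random.default_rng()

def random_step_nlms_update(w, u, e, mu=0.5, alpha=1e-6):
    """Random step-size NLMS update of Eq. (18).

    Identical to the standard NLMS rule except that the fixed step-size mu
    is scaled by PN(n), a uniform [0, 1) pseudo-random number drawn anew
    at each iteration."""
    pn = rng.uniform(0.0, 1.0)      # PN(n)
    return w + (mu * pn / (alpha + np.dot(u, u))) * u * e
```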


4. Performance analysis

4.1. Adaptive channel equalization

Adaptive channel equalization in digital communication systems is perhaps the most heavily exploited area of application for adaptive filtering algorithms. This is because, firstly, the adaptation of the tap weights of an equalizer is necessary to perform channel equalization tasks successfully and, secondly, the development of an adaptive algorithm that allows fast weight modification, while improving the estimation performance of the equalizer, will enhance the capability of such equalization systems in real applications.

4.1.1. System model

The block diagram of the considered system is shown in Figure 2 below:

Figure 2.

The block diagram of the considered system

The input signal is binary phase shift keying (BPSK), which has two phases (0 and π), so the signal takes the values 1 and -1. The general equation for the impulse response of a telephone channel, expressed as a sum of weighted time-delayed terms, can be written as

$H(z) = h_{0} + h_{1}z^{-1} + h_{2}z^{-2} + \cdots + h_{n}z^{-n}$   (19)

Two types of channels are considered here, the minimum phase (CH-1) and the non-minimum phase (CH-2) channels, which are given, respectively, below:

$H(z) = 1.0 + 0.5\,z^{-1}$   (20)

$H(z) = 0.5 + z^{-1}$   (21)

The discrete-time model for the adaptive channel equalization considered in this chapter is depicted in Figure 3.

Figure 3.

The block diagram of the adaptive channel equalization

In a transversal adaptive filter, the input vector Un and the weight vector Wn at the nth iteration are defined as follows:

$U_{n} = [u_{n},\; u_{n-1},\; \ldots,\; u_{n-M+1}]^{T}$   (22)

$W_{n} = [w_{0},\; w_{1},\; \ldots,\; w_{M-1}]^{T}$   (23)

where un is the filter input and wi (i = 0, 1, …, M-1) are the weights of a filter of length M. The filter output is obtained as follows:

$y_{n} = W_{n}^{T} U_{n}$   (24)

The error signal e(k), involved in the adaptive process, is defined by

$e(k) = d(k) - y(k)$   (25)

$\phantom{e(k)} = d(k) - w^{H}(k)\, u(k)$   (26)

where w(k) is the tap-weight vector of the adaptive filter, assumed to have a transversal structure.
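The chapter does not reproduce its simulation code, but a minimal equalizer training loop consistent with the model of Figure 3 might look as follows (Python/NumPy; the decision delay, the random seed, and the choice of CH-1 parameters for this snippet are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

M, n_bits, mu, alpha = 12, 2500, 0.05, 1e-6   # equalizer length and NLMS parameters (Sec. 4.3.1)
delay = M // 2                                # training delay (assumed; not stated in the chapter)
h = np.array([1.0, 0.5])                      # minimum-phase channel CH-1, Eq. (20)

s = rng.choice([-1.0, 1.0], size=n_bits)      # BPSK symbols
x = np.convolve(s, h)[:n_bits]                # channel output
snr_db = 30.0
noise_var = np.mean(x**2) / 10**(snr_db / 10)
x = x + np.sqrt(noise_var) * rng.standard_normal(n_bits)   # add AWGN

w = np.zeros(M)
sq_err = np.zeros(n_bits)
for n in range(n_bits):
    u = np.zeros(M)                           # equalizer input vector [x(n), ..., x(n-M+1)]
    k = min(M, n + 1)
    u[:k] = x[n::-1][:k]
    d = s[n - delay] if n >= delay else 0.0   # delayed training symbol as desired response
    e = d - np.dot(w, u)
    w += (mu / (alpha + np.dot(u, u))) * u * e   # NLMS update; scale mu by a uniform
                                                 # PN(n) sample for the random variant
    sq_err[n] = e**2
```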

4.2. Adaptive system identification

This section introduces adaptive filters through the application of system identification, using the NLMS and the random step-size NLMS algorithms. The adaptive filter adjusts its coefficients to minimize the mean-square error between its output and that of an unknown system. The objective is to change (adapt) the coefficients of an FIR filter so that it matches as closely as possible the response of an unknown system.

4.2.1. System model

Consider the system identification problem illustrated in Figure 4, which shows the discrete-time model for FIR system identification. One common application of adaptive filters is to identify an unknown system, such as the response of an unknown communication channel. An unknown FIR system with an N-point impulse response vector w(n) has an input sequence {u(n)} and an output sequence {d(n)}. The input signal is binary phase shift keying (BPSK), which has two phases (0 and π), so the signal takes the values 1 and -1. The general formula for the impulse response of the channel, expressed as a sum of weighted time-delayed terms, can be written as:

$H(z) = h_{0} + h_{1}z^{-1} + h_{2}z^{-2} + \cdots + h_{n}z^{-n}$   (27)

The desired response d(n), providing a frame of reference for the adaptive filter, is defined by:

$d(n) = W^{H}(n)\, U(n) + v(n)$   (28)

where U(n) is the input vector, which is common to both the unknown system and the adaptive filter, and v(n) is Additive White Gaussian Noise (AWGN) with zero mean and variance σ². First the error signal e(n) is computed, which measures the difference between the output of the adaptive filter and the output of the unknown system. On the basis of this measure, the adaptive filter changes its coefficients in an attempt to reduce the error.

Hence the error signal e(n), involved in the adaptive process, is defined by:

$e(n) = d(n) - y(n)$   (29)

$\phantom{e(n)} = W^{H}(n)\, U(n) + v(n) - \hat{W}^{H}(n)\, U(n)$   (30)

where $\hat{W}(n)$ is the tap-weight vector of the adaptive filter, assumed to have a transversal structure.

Figure 4.

System identification model using linear transversal adaptive filter

Clearly, when e(n) is very small, the adaptive filter response is close to the response of the unknown system. Applying the Wiener filter to this problem involves constructing an estimate y(n) of the observed output d(n) by passing the observed input sequence u(n) through a system-modeling filter whose impulse response vector is chosen to minimize the MSE. It is assumed that the adaptive filter has the same number of taps as the unknown system represented by w(n).
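As a sketch of how the model of Figure 4 can be simulated, the loop below identifies the unknown system of Section 4.3.2 with the random step-size NLMS (Python/NumPy; the number of iterations and the random seed are assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

w_true = np.array([0.1, 0.3, 0.5, 0.3, 0.1])   # unknown system, Eq. (33)
M, n_iter, mu, alpha = len(w_true), 2000, 0.9, 1e-6
noise_var = 0.01                                # variance of the disturbance v(n)

u_sig = rng.choice([-1.0, 1.0], size=n_iter)    # random bipolar input sequence
w_hat = np.zeros(M)                             # adaptive filter weights (same length as w_true)
sq_err = np.zeros(n_iter)
for n in range(M - 1, n_iter):
    u = u_sig[n - M + 1:n + 1][::-1]            # input vector U(n)
    d = np.dot(w_true, u) + np.sqrt(noise_var) * rng.standard_normal()   # Eq. (28)
    e = d - np.dot(w_hat, u)                    # error, Eq. (29)
    pn = rng.uniform()                          # PN(n); set pn = 1.0 for the fixed-step NLMS
    w_hat += (mu * pn / (alpha + np.dot(u, u))) * u * e
    sq_err[n] = e**2
```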

4.3. Computer simulation results

4.3.1. Adaptive channel equalization

This section presents the computer simulation results for the performance of the non-linear transversal equalizer adapted by the NLMS algorithm and by the proposed pseudo-randomized NLMS algorithm [13]. The system was tested over both minimum-phase and non-minimum-phase channels, defined respectively as follows:

$H_{1}(z) = 1.0 + 0.5\,z^{-1}$   (31)

$H_{2}(z) = 0.5 + z^{-1}$   (32)

The digital signal transmitted through the channel was bipolar BPSK with values of ±1, and the channel was corrupted by AWGN with SNR = 30 dB. The order of the filter was set to 12 for both the minimum-phase, H1(z), and non-minimum-phase, H2(z), channels. The step size for the NLMS algorithm was chosen from 0.01 to 0.1, and the number of transmitted bits was 2500. The comparison between the two algorithms, the standard NLMS and the pseudo-random step-size NLMS, was made by first choosing the best step size, that is, the one giving the fastest convergence, and then using this step size for the comparison.

Figure 5 shows that a step size of 0.05 gives the fastest convergence rate over the minimum-phase channel (CH-1), while over the non-minimum-phase channel (CH-2) a step size of 0.01 gives the fastest convergence rate. The comparison also considers the effect of decreasing the SNR from 25 dB to 5 dB. Figures 6-8 show the mean square error against the number of iterations for various signal-to-noise ratios, using the non-linear transversal equalizer over the minimum-phase channel. From these figures it is clear that both algorithms have approximately the same convergence rate, but the noise floor level decreases as the SNR decreases when the random step-size NLMS algorithm is used. The same behavior was observed for the non-minimum-phase channel defined in (32) above. This is because the step-size parameter of the proposed random NLMS algorithm is based on a uniform random distribution, which drives the error sequence, and hence the noise floor, to a much lower level than that of the standard NLMS algorithm with a fixed step-size. Results for CH-2 are not included here due to space limitations.

Figure 5.

MSE performances of the NLMS for various step-sizes over CH-1

Figure 6.

MSE performances of the two algorithms for SNR=15dB over CH-1

Figure 7.

MSE performances of the two algorithms for SNR=10 dB over CH-1

Figure 8.

MSE performances of the two algorithms for SNR=5 dB over CH-1

4.3.2. Adaptive system identification

This section presents the computer simulation results for the performance of the fixed and random step-size NLMS algorithms used in system identification [14]. The following two equations give, respectively, the coefficient vector of the unknown system (to be identified by the adaptive filter) and its transfer function.

$W^{*T} = [0.1,\; 0.3,\; 0.5,\; 0.3,\; 0.1]$   (33)

$H(z) = 0.1 + 0.3\,z^{-1} + 0.5\,z^{-2} + 0.3\,z^{-3} + 0.1\,z^{-4}$   (34)

The output of the unknown system is disturbed by an uncorrelated zero-mean white Gaussian noise. The digital message applied to the channel is a random bipolar sequence with values of ±1. The channel output is corrupted by white Gaussian noise with zero mean and a variance of 0.01. The performance is determined by averaging 100 independent simulation runs to obtain the mean-square error (MSE) performance of the proposed algorithm. The signal-to-noise ratio is set to 20 dB. The simulated systems were implemented in Matlab. Figures 9 and 10 show the MSE performance of the NLMS algorithm for various fixed step-sizes.
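The averaged learning curves of Figures 9-13 can be obtained by repeating a run such as the one sketched in Section 4.2.1 and averaging the squared-error sequences. A minimal sketch of this ensemble averaging (Python/NumPy; `run_simulation` is a hypothetical wrapper, not a function from the chapter):

```python
import numpy as np

def average_mse(run_simulation, n_runs=100, n_iter=2000):
    """Ensemble-averaged MSE learning curve.

    run_simulation : callable returning one squared-error curve of length n_iter,
                     e.g. a wrapper around the identification loop of Section 4.2.1
    n_runs         : number of independent runs (100 in Section 4.3.2)"""
    acc = np.zeros(n_iter)
    for _ in range(n_runs):
        acc += run_simulation(n_iter)
    return acc / n_runs            # typically plotted in dB: 10 * log10(mse)
```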

Figures 11-13, on the other hand, compare the MSE performance of the NLMS algorithm with fixed step-sizes of 0.5, 0.9, and 1.3 against that of the random step-size NLMS algorithm. They clearly show that the NLMS algorithm with the random step-size approach outperforms the fixed step-size algorithm, especially for large step-sizes.

Figure 9.

MSE performance of the NLMS algorithm for various step-sizes (mu)

Figure 10.

MSE performance of the NLMS algorithm for various step-sizes (mu)

Figure 11.

MSE performances of fixed and random step-size NLMS algorithms

Figure 12.

MSE performances for fixed and random step-size NLMS algorithms

Figure 13.

MSE performances for fixed and random step-size NLMS algorithms


5. Conclusion

The proposed idea of using a random step-size in the adaptation of the NLMS algorithm has been implemented and tested in the adaptation process of both channel equalization and system identification. The tests measure the MSE performance of both the standard NLMS and the random step-size NLMS algorithms in these applications. An extensive investigation to determine the best fixed step size for the NLMS algorithm was carried out over the defined channels, and a comparison between the NLMS with a fixed step-size and with a pseudo-random step-size was performed, which shows the trade-off between the convergence speed and the noise floor level.

In the case of adaptive channel equalization, it can be concluded that the random step-size outperforms the fixed step-size by achieving a much lower noise floor, especially at low signal-to-noise ratios, while maintaining a similar convergence rate.

In the case of adaptive system identification, the random step-size NLMS outperforms the fixed step-size NLMS adaptive algorithm and achieves a lower noise floor, especially at larger step-sizes.

The values of the step-size parameter of the proposed random NLMS algorithm are drawn from a uniform random distribution, which drives the error sequence, and hence the MSE noise floor, to a much lower level than that of the standard NLMS algorithm with a fixed step-size.

Advertisement

Acknowledgments

The author would like to acknowledge the contributions of Mr. N. Al-Saeedi [13] towards getting some results related to the channel equalization and Dr. T. Shimamura [14] for his contribution towards discussing the idea of using the random step-size approach in system identification.

References

1. Haykin, S. (2002). Adaptive Filter Theory. Prentice Hall.
2. Cowan, C. F. N., & Grant, P. M. (1985). Adaptive Filters. Prentice Hall, Englewood Cliffs.
3. Widrow, B., & Stearns, S. D. (1988). Adaptive Signal Processing. Prentice Hall, Englewood Cliffs, NJ.
4. Sulyman, A. I., & Zerguine, A. (2003). Convergence and steady-state analysis of a variable step-size NLMS algorithm. Elsevier Signal Processing, 83, 1255-1273.
5. Takekawa, H., Shimamura, T., & Jimaa, S. A. (2008). An efficient and effective variable step size NLMS algorithm. 42nd Asilomar Conference on Signals, Systems and Computers, CA, USA, October 26-29.
6. Quin, L., & Bellanger, M. G. (1996). Convergence analysis of a variable step-size normalized adaptive filter algorithm. Proc. EUSIPCO, Adaptive Systems, PAS.4.
7. Wang, Y., Zhang, C., & Wang, Z. (2003). A new variable step-size LMS algorithm with application to active noise control. Proc. IEEE ICASSP, 5, 573-575.
8. Walach, E., & Widrow, B. (1984). The Least Mean Fourth (LMF) Adaptive Algorithm and its Family. IEEE Transactions on Information Theory, 30(2), 275-283.
9. Jimaa, S. A., Cowan, C. F. N., & Holt, M. J. J. (1996). Adaptive Channel Equalisation Using Least Mean Switched Error Norm Algorithm. IEE 16th Saraga Colloquium on Digital and Analogue Filters and Filtering Systems, London, 9 Dec. 1996.
10. Wang, Y., Zhang, C., & Wang, Z. (2003). A new variable step-size LMS algorithm with application to active noise control. Proc. IEEE ICASSP, 573-575.
11. Arnantapunpong, T., Shimamura, T., & Jimaa, S. A. (2010). A New Variable Step Size for Normalized LMS Algorithm. NCSP'10, RISP International Workshop on Nonlinear Circuits, Communications and Signal Processing, Honolulu, Hawaii, USA, March 3-5.
12. Haykin, S. (1996). Adaptive Filter Theory, 3rd Edition. Prentice Hall.
13. Jimaa, S. A., Al Saeedi, N., Al Araji, S., & Shubair, R. M. (2008). Performance Evaluation of Random Step-Size NLMS in Adaptive Channel Equalization. IFIP Wireless Days Conference, Dubai, UAE, November 2008.
14. Jimaa, S. A., & Shimamura, T. (2010). Convergence Evaluation of a Random Step-Size NLMS Adaptive Algorithm in System Identification. The 10th IEEE International Conference on Signal Processing (ICSP 2010), Beijing, China, October 24-28.
