Statistical-Based Approaches for Noise Removal

Image restoration methods are used to improve the appearance of an image by the application of a restoration process based on a mathematical model to explain the way the image was distorted by noise. Examples of types of degradation include blurring caused by motion or atmospheric disturbance, geometric distortion caused by imperfect lenses, superimposed interference patterns caused by mechanical systems, and noise induced by electronic sources.


Introduction
Usually, it is assumed that the degradation model is either known or can be estimated from data. The general idea is to model the degradation process and then apply the inverse process to restore the original image. When the available knowledge does not allow a reasonable model of the degradation mechanism to be adopted, it becomes necessary to extract information about the noise directly from the data and then use this information for restoration purposes. Knowledge about the particular generation process of the image is application specific. For example, it proves helpful to know how a specific lens distorts an image or how mechanical vibration from a satellite affects an image. This information can be gathered from the analysis of the image acquisition process and by applying image analysis techniques to samples of degraded images.
Restoration can be viewed as a process that attempts to reconstruct or recover a degraded image using some available knowledge about the degradation mechanism. Typically, the noise can be modeled with a Gaussian, uniform, or salt-and-pepper distribution. Restoration techniques are usually oriented toward modeling the type of degradation in order to infer the inverse process for recovering the given image. This approach usually involves the choice of a criterion to numerically evaluate the quality of the resulting image, and consequently the restoration process can be expressed as an optimization problem.
Filtering techniques of mean type prove particularly useful in reducing the normal/uniform noise component when the mean parameter is close to 0. In other words, the effect of applying a mean filter is merely to decrease the local variance corresponding to each processed window, and consequently to inhibit the variance component of the noise. The AMVR (Adaptive Mean Variance Removal) algorithm allows the removal of normal/uniform noise whatever the mean of the noise is (Cocianu, State, & Vlamos, 2002). As with the MMSE (Minimum Mean Square Error) filtering technique (Umbaugh, 1998), the application of the AMVR algorithm requires that the noise parameters and some additional features be known.
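The variance-reduction effect of mean filtering on zero-mean noise can be illustrated with a minimal numpy sketch (a hypothetical example on synthetic data, not one of the chapter's experiments):

```python
import numpy as np

def mean_filter(img, w=3):
    """Replace each pixel by the average of its w x w neighborhood (edge padding)."""
    pad = w // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img, dtype=float)
    for r in range(img.shape[0]):
        for c in range(img.shape[1]):
            out[r, c] = padded[r:r + w, c:c + w].mean()
    return out

rng = np.random.default_rng(0)
clean = np.full((64, 64), 128.0)                     # flat image
noisy = clean + rng.normal(0.0, 20.0, clean.shape)   # zero-mean Gaussian noise
smoothed = mean_filter(noisy)
# Averaging w*w roughly independent values divides the noise variance by about w*w.
print(noisy.var(), smoothed.var())
```

On a flat image the residual variance after a 3x3 mean filter is close to one ninth of the input noise variance, which is precisely the "inhibition of the variance component" described above.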
The multiresolution support set is a data structure suitable for developing noise removal algorithms (Bacchelli & Papi, 2006; Balster et al., 2003). Multiresolution algorithms perform the restoration task by combining, at each resolution level and according to a certain rule, the pixels of a binary support image. Other approaches use a selective wavelet shrinkage algorithm for digital image denoising in order to improve performance. For instance, Balster (Balster, Zheng & Ewing, 2003) proposes an attempt of this sort together with a computation scheme; the denoising methodology incorporated in this algorithm involves a two-threshold validation process for real-time selection of wavelet coefficients.
A solution of the denoising problem based on the description length of the noiseless data in the subspace of the basis is proposed in (Beheshti & Dahleh, 2003), where the description length is estimated for each subspace and the subspace corresponding to the minimum length is selected.
In (Bacchelli & Papi, 2006), a method for removing Gaussian noise from digital images based on the combination of the wavelet packet transform and PCA is proposed. The method leads to tailored filters by applying the Karhunen-Loève transform in the wavelet packet domain and acts with a suitable shrinkage function on the new coefficients, allowing noise removal without blurring the edges and other important characteristics of the image.
Wavelet thresholding methods that modify the noisy coefficients were proposed by several authors (Buades, Coll & Morel, 2005; Stark, Murtagh & Bijaoui, 1995). These attempts are based on the idea that images are represented by large wavelet coefficients, which have to be preserved, whereas the noise is distributed across the set of small coefficients, which have to be canceled. Since edges lead to a considerable number of wavelet coefficients of lower value than the threshold, the cancellation of these coefficients may cause small oscillations near the edges, resulting in spurious wavelets in the restored image.

Principal Component Analysis (PCA) and Independent Component Analysis (ICA)
We assume that the signal is represented by an n-dimensional real-valued random vector $X$ of zero mean and covariance matrix $\Sigma$. The principal directions of the repartition of $X$ are the directions corresponding to maximum variability, where variability is expressed in terms of the variance. If $\phi_1$ denotes the first principal direction, the value $\phi_1^T X$ is referred to as the first principal component of $X$. For $k > 1$, the $k$-th principal direction $\phi_k$ is the direction of maximum variability within the linear subspace orthogonal to the subspace generated by the first $k-1$ principal directions $\phi_1, \ldots, \phi_{k-1}$, and the value $\phi_k^T X$ is referred to as the $k$-th principal component of the signal $X$.

Note that the principal directions $\phi_1, \ldots, \phi_n$ of any signal form an orthogonal basis of $\mathbb{R}^n$. The fundamental result is given by the celebrated Karhunen-Loève theorem:

Theorem. Let $X$ be an n-dimensional real-valued random vector such that $EX = 0$ and $\Sigma = \mathrm{Cov}(X, X)$. If we denote by $\lambda_1 \ge \lambda_2 \ge \ldots \ge \lambda_n$ the eigenvalues of $\Sigma$, then, for any $k$, $1 \le k \le n$, the $k$-th principal direction $\phi_k$ is a unit eigenvector of $\Sigma$ associated with $\lambda_k$.
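The theorem translates directly into a numerical recipe: estimate the covariance matrix and take its leading eigenvectors as principal directions. A minimal numpy sketch on synthetic two-dimensional data (an illustration, not part of the chapter's experiments):

```python
import numpy as np

rng = np.random.default_rng(1)
# Zero-mean sample with variance ~9 along the first axis and ~1 along the second.
A = rng.normal(size=(5000, 2)) * np.array([3.0, 1.0])
X = A - A.mean(axis=0)                  # center the data

Sigma = X.T @ X / len(X)                # sample covariance matrix
vals, vecs = np.linalg.eigh(Sigma)      # eigenvalues in ascending order
order = np.argsort(vals)[::-1]          # reorder: largest eigenvalue first
vals, vecs = vals[order], vecs[:, order]

pc1 = X @ vecs[:, 0]                    # first principal component phi_1^T X
print(vals)                             # approximately [9, 1]
```

The variance of the first principal component equals the largest eigenvalue, exactly as the theorem predicts.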
A series of approaches are based on the assumption that the signal results as a mixture of a finite number of hidden independent sources and noise. Attempts of this sort are usually referred to as techniques of Independent Component Analysis (ICA) type. The simplest model is the linear one, given by $X = AS + \eta$, where $A$ is an unknown mixing matrix, $S$ is the n-dimensional random vector whose components are independent, and $\eta = (\eta_1, \ldots, \eta_n)^T$ is a random vector representing the noise. The problem is to recover the hidden sources given only the signal $X$, without knowing the mixing matrix $A$.

For simplicity's sake, the noise model is taken of Gaussian type, that is, $\eta \sim N(0, \Sigma_\eta)$. If we denote $V = AS$, then, for any vector $w \in \mathbb{R}^n$, $w^T X = w^T V + w^T \eta$. Consequently, the non-Gaussianity of $w^T V$ can be maximized on the basis of $w^T X$ if we use an expression that cancels the component $w^T \eta$.
The kurtosis (fourth-order cumulant) of a zero-mean real-valued random variable $Y$ is defined as $\mathrm{kurt}(Y) = E[Y^4] - 3\,(E[Y^2])^2$. Non-Gaussianity can also be measured using the Shannon negentropy. Given the computational difficulty of evaluating the exact expression of the negentropy, an approximation is usually used instead, for instance the one proposed in (Hyvarinen, Karhunen & Oja, 2001), $J(Y) \approx c\,[E\{G(Y)\} - E\{G(\nu)\}]^2$, where $G$ is a non-polynomial function and $\nu \sim N(0, 1)$.
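The kurtosis-based measure of non-Gaussianity can be sketched in a few lines (synthetic distributions, chosen only to show the sign behavior):

```python
import numpy as np

def kurt(y):
    """Fourth-order cumulant of a (centered) variable: E[y^4] - 3 (E[y^2])^2."""
    y = y - y.mean()
    return np.mean(y**4) - 3.0 * np.mean(y**2) ** 2

rng = np.random.default_rng(2)
gauss = rng.normal(size=200_000)        # Gaussian: kurtosis close to 0
laplace = rng.laplace(size=200_000)     # super-Gaussian: kurtosis > 0
uniform = rng.uniform(-1, 1, 200_000)   # sub-Gaussian: kurtosis < 0
print(kurt(gauss), kurt(laplace), kurt(uniform))
```

A Gaussian variable has zero kurtosis, so maximizing |kurt| over projection directions pushes the projection away from Gaussianity, which is the working principle of the ICA approaches described above.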
Usually, the maximization of non-Gaussianity is performed on a preprocessed version $\tilde{X}$ of the signal, obtained by whitening the observed data.
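The whitening preprocessing can be sketched as follows, assuming the covariance matrix is estimated from the data (a standard eigen-decomposition whitening, shown here as an illustration):

```python
import numpy as np

rng = np.random.default_rng(7)
# Correlated synthetic data: a linear mixture of independent Gaussians.
X = rng.normal(size=(10_000, 2)) @ np.array([[2.0, 0.0], [1.0, 1.0]])
X = X - X.mean(axis=0)                          # center

Sigma = np.cov(X.T, bias=True)                  # sample covariance
vals, vecs = np.linalg.eigh(Sigma)
W = vecs @ np.diag(vals ** -0.5) @ vecs.T       # whitening matrix Sigma^(-1/2)
X_tilde = X @ W.T                               # whitened signal

print(np.cov(X_tilde.T, bias=True))             # close to the identity matrix
```

After whitening, the covariance of $\tilde{X}$ is the identity, which is what allows the demixing matrix $B$ to be sought among orthogonal matrices.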

The covariance matrix $\Sigma = \mathrm{Cov}(X, X)$ corresponding to the observed signal $X$ is assumed to be estimated from data. Writing the whitened signal as $\tilde{X} = B S$, the components of $S$ are independent and the covariance matrix of $\tilde{X}$ is the identity. Usually, for simplicity's sake, the matrix $B$ is assumed to be orthogonal.

The use of concepts and tools of multiresolution analysis for noise removal and image restoration purposes
Multiresolution-based algorithms perform the restoration task by combining, at each resolution level and according to a certain rule, the pixels of a binary support image. The values of the support image pixels are either 1 or 0, depending on their degree of significance. At each resolution level, the contiguous areas of the support image corresponding to 1-valued pixels are taken as possible objects of the image. The multiresolution support is the set of all support images, and it can be computed using the statistically significant wavelet coefficients.
Let $j$ be a certain multiresolution level. Then, for each pixel $(x, y)$ of the input image $I$, the multiresolution support at level $j$ is

$M(I; j, x, y) = 1$ if $I$ contains significant information at level $j$ about the pixel $(x, y)$, and $M(I; j, x, y) = 0$ otherwise.

If we denote by $\psi$ the mother wavelet function, then the generic evaluation of the multiresolution support set results by computing the wavelet transform of the input image using $\psi$, followed by the computation of $M(I; j, x, y)$ on the basis of the statistically significant wavelet coefficients, for each resolution level $j$ and each pixel $(x, y)$.
The computation of the wavelet transform of a one-dimensional signal can be performed using the "à trous" algorithm (Stark, Murtagh & Bijaoui, 1995). The algorithm can be extended to two-dimensional signals such as images. Using the resolution levels $1, 2, \ldots, p$, where $p$ is a selected level, the "à trous" algorithm computes, at each level $j$, a smoothed version $c_j$ of the signal and the wavelet coefficients $w_j = c_{j-1} - c_j$ (Stark, Murtagh & Bijaoui, 1995).
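A minimal one-dimensional "à trous" sketch follows, using the usual B3-spline filter and a periodic boundary for simplicity (an illustrative implementation, not the chapter's own code):

```python
import numpy as np

def a_trous(signal, levels, h=(1/16, 1/4, 3/8, 1/4, 1/16)):
    """1-D 'a trous' transform: returns (wavelet planes w_1..w_p, residual c_p).
    At level j the filter taps are spaced 2**j apart ('with holes')."""
    c = np.asarray(signal, dtype=float)
    planes = []
    for j in range(levels):
        step = 2 ** j
        smooth = np.zeros_like(c)
        for k, hk in enumerate(h):
            offset = (k - len(h) // 2) * step
            smooth += hk * np.roll(c, offset)   # periodic boundary handling
        planes.append(c - smooth)               # w_{j+1} = c_j - c_{j+1}
        c = smooth
    return planes, c

x = np.sin(np.linspace(0, 4 * np.pi, 256)) + np.linspace(0, 1, 256)
planes, residual = a_trous(x, levels=3)
recon = residual + sum(planes)                  # c_0 = c_p + sum_j w_j (exact)
print(np.max(np.abs(recon - x)))
```

Because each wavelet plane is defined as a difference of successive smoothings, the sum of the residual and all planes reconstructs the input exactly, which is the property the restoration step relies on.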


Accordingly, the significance of a wavelet coefficient is given by the following rule: $w_j(x, y)$ is statistically significant if $|w_j(x, y)| \ge k\sigma_j$, where $\sigma_j$ is the standard deviation of the noise at level $j$ and $k$ is a threshold parameter. Using this significance rule, the statistically significant coefficients are labeled 1 and the non-significant ones 0. The restored image is $\tilde{I} = c_p + \sum_{j=1}^{p} g(w_j)$, where $g$ is defined by $g(w_j(x, y)) = w_j(x, y)$ if $M(I; j, x, y) = 1$ and $g(w_j(x, y)) = 0$ otherwise.
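The significance test and support extraction at a single level can be sketched as follows (synthetic coefficients with a hypothetical "object" patch, purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
sigma_j = 1.0        # assumed noise standard deviation at resolution level j
k = 3.0              # confidence-width parameter, typically set around 3

# Hypothetical wavelet plane: pure noise except for a strong object in the middle.
w = rng.normal(0.0, sigma_j, (32, 32))
w[14:18, 14:18] += 10.0

support = (np.abs(w) >= k * sigma_j).astype(int)   # M(I; j, x, y) at this level
filtered = support * w                             # keep only significant coefficients
print(support.sum())
```

With $k = 3$, only about 0.3% of pure-noise coefficients survive the test, while the object patch is retained almost entirely; repeating this at every level yields the multiresolution support.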

Information-based approaches in image restoration
The basics of the information-based method for image restoration are given by the following theoretical results (State, Cocianu & Vlamos, 2001).
Lemma 1. Let $X$ be a continuous n-dimensional random vector with density function $f$, where $h(X) = -\int f(x) \ln f(x)\,dx$ is the differential (Shannon) entropy of $X$.

Lemma 2. Let $X$ be a continuous n-dimensional normally distributed random vector, where the statement is expressed in terms of the regression function of $X$.

The image restoration method based on scatter matrices and on bounds on the probability of error
In statistical discriminant analysis, within-class, between-class and mixture scatter matrices are used to formulate criteria of class separability.
In the case of $m$ classes $\omega_i$, $1 \le i \le m$, with a priori probabilities $P_i$, expected vectors $\mu_i$, and covariance matrices $\Sigma_i$, the within-class scatter matrix shows the scatter of the samples around their class expected vectors and is typically given by

$S_w = \sum_{i=1}^{m} P_i\, E[(X - \mu_i)(X - \mu_i)^T \mid \omega_i] = \sum_{i=1}^{m} P_i\, \Sigma_i$.

Very often, the a priori probabilities are taken as $P_i = 1/m$, and each prototype is computed as the weighted mean of the patterns belonging to the respective class.

The between-class scatter matrix is the scatter of the expected vectors around the mixture mean,

$S_b = \sum_{i=1}^{m} P_i (\mu_i - \mu_0)(\mu_i - \mu_0)^T$,

where $\mu_0$ represents the expected vector of the mixture distribution, usually $\mu_0 = \sum_{i=1}^{m} P_i \mu_i$. The mixture scatter matrix is the covariance matrix of all samples regardless of their class assignments, and $S_m = S_w + S_b$. Note that all these scatter matrices are invariant under coordinate shifts.

In order to formulate criteria for class separability, these matrices have to be converted into a number. This number should be larger when the between-class scatter is larger or the within-class scatter is smaller. Typical criteria are $J_1 = \mathrm{tr}(S_2^{-1} S_1)$ and $J_2 = \ln|S_2^{-1} S_1|$, where $(S_1, S_2)$ is a pair chosen among $S_w$, $S_b$, and $S_m$; each $J_k$ is a measure of overall class separability as well as of the amount of information discriminating between the classes. In other words, $J_k$, $k = 1, 2$, can be taken as measuring the effect of a noise-removing filter, expressing the quantity of information lost due to the use of the particular filter (Cocianu, State & Vlamos, 2004).
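The scatter matrices and the identity $S_m = S_w + S_b$ can be checked numerically on a synthetic two-class sample (equal priors assumed; an illustration, not one of the chapter's experiments):

```python
import numpy as np

rng = np.random.default_rng(4)
# Two classes with equal priors and well-separated means.
X1 = rng.normal([0.0, 0.0], 1.0, (500, 2))
X2 = rng.normal([4.0, 1.0], 1.0, (500, 2))
X = np.vstack([X1, X2])
P = (0.5, 0.5)

mu1, mu2 = X1.mean(axis=0), X2.mean(axis=0)
mu0 = P[0] * mu1 + P[1] * mu2                        # mixture mean

Sw = P[0] * np.cov(X1.T, bias=True) + P[1] * np.cov(X2.T, bias=True)
Sb = sum(p * np.outer(m - mu0, m - mu0) for p, m in zip(P, (mu1, mu2)))
Sm = Sw + Sb                                         # mixture scatter matrix

J1 = np.trace(np.linalg.inv(Sw) @ Sb)                # a typical separability criterion
print(J1)
```

For equal class sizes, $S_w + S_b$ coincides exactly with the covariance matrix of the pooled sample, and $J_1$ grows with the distance between the class means, as a separability criterion should.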
The probability of error is the most effective measure of the usefulness of a classification decision rule, but its evaluation involves integration over complicated regions in high-dimensional spaces. When a closed-form expression for the error probability cannot be obtained, we may seek either approximate expressions or upper/lower bounds for the error probability.
Assume that the design of a Bayes classifier is intended to discriminate between two pattern classes and that the available information is represented by the mean vectors $\mu_i$, $i = 1, 2$, and the covariance matrices $\Sigma_i$, $i = 1, 2$, corresponding to the repartitions of the two classes. The Chernoff upper bound of the Bayesian error is (Fukunaga, 1990)

$\varepsilon \le P_1^s P_2^{1-s} \int p_1^s(x)\, p_2^{1-s}(x)\, dx$, $0 \le s \le 1$.

For normally distributed classes, the integration can be carried out to obtain a closed-form expression. The upper bound corresponding to $s = 1/2$ yields the Bhattacharyya distance, frequently used as a measure of the separability between two repartitions. Using straightforward computations, the Bhattacharyya distance can be written as

$\mu(1/2) = \frac{1}{8}(\mu_2 - \mu_1)^T \left[\frac{\Sigma_1 + \Sigma_2}{2}\right]^{-1}(\mu_2 - \mu_1) + \frac{1}{2}\ln\frac{\left|\frac{\Sigma_1 + \Sigma_2}{2}\right|}{\sqrt{|\Sigma_1|\,|\Sigma_2|}}$,

where the first term expresses the class separability due to the mean difference while the second gives the class separability due to the covariance difference.
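The two-term structure of the Bhattacharyya distance is easy to verify numerically (a direct transcription of the closed-form expression above, on hypothetical Gaussian parameters):

```python
import numpy as np

def bhattacharyya(mu1, S1, mu2, S2):
    """Bhattacharyya distance between two Gaussian densities N(mu1,S1), N(mu2,S2)."""
    mu1, mu2 = np.asarray(mu1, float), np.asarray(mu2, float)
    S1, S2 = np.asarray(S1, float), np.asarray(S2, float)
    S = (S1 + S2) / 2.0
    d = mu2 - mu1
    term_mean = d @ np.linalg.inv(S) @ d / 8.0            # mean-difference term
    term_cov = 0.5 * np.log(np.linalg.det(S) /
                            np.sqrt(np.linalg.det(S1) * np.linalg.det(S2)))
    return term_mean + term_cov

# Equal covariances: only the mean-difference term contributes -> (1/8)*4 = 0.5.
b = bhattacharyya([0, 0], np.eye(2), [2, 0], np.eye(2))
print(b)
```

When the two means coincide and only the covariances differ, the first term vanishes and the second is strictly positive, mirroring the decomposition described in the text; the resulting value bounds the Bayes error through $\varepsilon \le \sqrt{P_1 P_2}\, e^{-\mu(1/2)}$.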
The Bhattacharyya distance can also be used as a criterion function to express the quality of a linear feature extractor given by a matrix $A \in \mathbb{R}^{n \times m}$.
When $\mu_1 = \mu_2$ or $\Sigma_1 = \Sigma_2$, the value of the Bhattacharyya distance in the transformed space $Y = A^T X$ reduces to a single term, and the critical points of the criterion $J(A)$ are the solutions of the corresponding stationarity equation; suboptimal solutions can be identified as the solutions of a simpler system. The criterion function $J$ is invariant with respect to non-singular linear transforms. Since there is no known procedure to optimize the criterion $J$ when both $\mu_1 \ne \mu_2$ and $\Sigma_1 \ne \Sigma_2$, a series of attempts to find suboptimal feature extractors have been proposed instead (Fukunaga, 1990).

Noise removal algorithms

Minimum mean-square error filtering (MMSE), and the adaptive mean-variance removal algorithm (AMVR)
The minimum mean-square error (MMSE) filter is adaptive in the sense that its basic behavior changes as the image is processed; an adaptive filter may therefore act differently on different segments of an image. In particular, the MMSE filter may act as a mean filter on some windows of the image and as a median filter on others.
The MMSE filter allows the removal of normal/uniform additive noise, and its output at pixel $(r, c)$ is computed as

$\hat{X}(r, c) = Y(r, c) - \dfrac{\sigma_n^2}{\sigma_l^2(r, c)}\big(Y(r, c) - m_l(r, c)\big)$,

where $\sigma_n^2$ is the noise variance and $\sigma_l^2(r, c)$, $m_l(r, c)$ are the local variance and local mean computed in a window centered at $(r, c)$; the factor $\sigma_n^2 / \sigma_l^2$ is the ratio of the noise variance to the local variance. As the value of this ratio increases, implying primarily noise in the window, the filter returns primarily the value of the local average. As the ratio decreases, implying high local detail, the filter returns more of the original unfiltered image. Consequently, the MMSE filter adapts itself to the local image properties, preserving image detail while removing noise (Umbaugh, 1998).
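The adaptive behavior can be sketched directly from the formula above (a minimal illustrative implementation with a synthetic edge image; the clipping of the variance ratio at 1 is a common practical safeguard, assumed here rather than taken from the chapter):

```python
import numpy as np

def mmse_filter(img, noise_var, w=3):
    """Adaptive MMSE filter: blends the local mean and the raw pixel
    according to the ratio of noise variance to local variance."""
    pad = w // 2
    p = np.pad(img.astype(float), pad, mode="edge")
    out = np.empty_like(img, dtype=float)
    for r in range(img.shape[0]):
        for c in range(img.shape[1]):
            win = p[r:r + w, c:c + w]
            m, v = win.mean(), win.var()
            ratio = min(noise_var / v, 1.0) if v > 0 else 1.0
            # ratio -> 1 (flat area): output is the local mean (mean filter).
            # ratio -> 0 (high local detail): output is the unfiltered pixel.
            out[r, c] = img[r, c] - ratio * (img[r, c] - m)
    return out

rng = np.random.default_rng(5)
clean = np.full((32, 32), 100.0)
clean[:, 16:] = 200.0                                # a vertical edge
noisy = clean + rng.normal(0, 10, clean.shape)
restored = mmse_filter(noisy, noise_var=100.0)
print(np.abs(restored - clean).mean(), np.abs(noisy - clean).mean())
```

In the flat regions the filter smooths aggressively, while near the edge the large local variance makes the ratio small and the pixel is left nearly untouched, so the edge is preserved.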
Filtering techniques of mean type prove particularly useful in reducing the normal/uniform noise component when the mean parameter is close to 0. In other words, the effect of applying a mean filter is merely to decrease the local variance of each processed window, and consequently to remove the variance component of the noise.
The AMVR algorithm removes normal/uniform noise whatever the mean of the noise is. As with the MMSE filtering technique, the application of the AMVR algorithm assumes that the noise parameters and features are known. Basically, the AMVR algorithm works in two stages, namely the removal of the mean component of the noise (Steps 1 and 2) and the decrease of the noise variance using the adaptive MMSE filter. The description of the AMVR algorithm is (Cocianu, State & Vlamos, 2002):
Step 2. Compute $\bar{X}$, the sample mean estimate of the initial image $X$, by averaging the resulting versions.
Step 3. Compute the estimate $\hat{X}$ of $X$ using the adaptive MMSE filter.

Output: the image $\hat{X}$.

Information-based algorithms for noise removal
Let us consider the following information transmission/processing system. The signal $X$ representing a certain image is transmitted through a channel and its noise-corrupted version $\tilde{X}$ is received. Next, a noise-removing binomial filter is applied, and the filtered output is submitted to a restoration process producing $\hat{X}$, an approximation of the initial signal $X$. In our attempt (Cocianu, State & Vlamos, 2004) we assumed that there is no information available about the initial signal, and we let $\Sigma_{11}$, $\Sigma_{22}$ be the covariance matrices of the received and the filtered signal, respectively. We adopt the working assumption that the $2rc$-dimensional vector obtained by stacking the two signals is normally distributed, which allows the use of the corresponding regression function. It is well known (Anderson, 1958) that the regression function minimizes the variance and maximizes the correlation. As a particular case, using the conclusions established by Lemmas 1 and 2 (§2.3), the correction term can be derived explicitly. Since the aim is to restore as much as possible of the initial image $X$, we have to find ways to improve the quality of the restored image while at the same time preventing the restoration of additional noise. Step 2. For each row $1 \le i \le r$, compute the corresponding row estimates.
Step 3. For each row $1 \le i \le r$, compute the corresponding correction term. Step 4. Compute the rows of the restored image, where $\lambda$ is a noise-preventing constant conveniently determined to prevent the restoration of the initial noise. By experimental arguments, the recommended range of $\lambda$ is $[1.5, 5.5]$.
Note that, since the regression function can be written explicitly, the correction term used at Step 4 is precisely the sample mean estimate of the corresponding conditional expectation. The idea of our attempt is to use the most informative features discriminating between the noisy and the filtered images. The heuristic scatter matrices-based algorithms (HSBA) for image restoration (Cocianu, State & Vlamos, 2004) follow the same scheme: the sample statistics are computed first and then, row by row, a correction term derived from the most informative features is applied with a noise-preventing constant $\lambda$, $0 < \lambda < 1$, conveniently determined to prevent the restoration of the initial noise. The variant of the HSBA obtained when $S_2 = S_w$ and $S_1 = S_m$ (Cocianu, State & Vlamos, 2004) is similar to variant (b); in this approach, for each row $1 \le i \le r$ of the processed images, the most informative features used in obtaining the correction term are determined from the corresponding scatter matrices. Our image restoration algorithm based on the Bhattacharyya distance can be described as follows.
The HBA for image restoration (Cocianu, State & Vlamos, 2004). Input: the sample of noise-corrupted versions of the $r \times c$-dimensional image $X$ and the number $k$ of desired features.
Step 1 computes the sample statistics; the final step computes the restored image using a noise-preventing constant $\lambda$, $0 < \lambda < 1$, conveniently determined to prevent the restoration of the initial noise.

Wavelet-based denoising
The multiresolution support provides a suitable framework for noise filtering and image restoration by noise removal. Briefly, the idea is to determine a set of statistically significant wavelet coefficients from which the multiresolution support is extracted; the procedure is thus based on an underlying statistical image model governing the whole process. The multiresolution support is the basis of the subsequent filtering process.
We extend the MNR algorithm to the GMNR algorithm to allow noise removal in the more general case when the noise mean can be any real number, and compare the performance of the resulting method against the most frequently used restoration algorithms (MMSE and AMVR). Briefly, the MNR algorithm is described as follows (Stark, Murtagh & Bijaoui, 1995); the parameter $k$ used in Step 2 controls the width of the confidence interval, its value typically being set around 3.
Input: the image $X_0$, the number of resolution levels $p$, and the heuristic threshold $k$.
Step 1. Compute the sequence of image variants $(X_j)_{1 \le j \le p}$ using a discrete low-pass filter $h$, and obtain the wavelet coefficients by applying the "à trous" algorithm.
Step 2. Select the statistically significant coefficients.
Step 3. Apply the filter $g$, which retains only the significant coefficients.
Output: the restored image $\tilde{X}$.
In the following, the GMNR algorithm is an extension of the MNR algorithm aiming to obtain the multiresolution support set in the case of arbitrary noise mean, and to use this support set for noise removal (Cocianu, State, Stefanescu & Vlamos, 2004). Let us denote by $X$ the original "clean" image and by $\eta \sim N(m, \sigma^2)$ the additively superimposed noise, so that the image to be processed is $Y = X + \eta$. Using the two-dimensional filter $\phi$, the sampled variants of $X$, $Y$, and $\eta$ result by convolving each of them with $\phi$. The wavelet coefficients of $c_0$ are computed by the "à trous" algorithm. The noise mean can be inhibited by applying the following "white-wall" type technique.
Step 1 generates the set of image versions; Step 4 computes an approximation of the original image $I_0$ using the multiresolution filtering based on the statistically significant wavelet coefficients.

A combined noise removal method based on PCA and shrinkage functions
In the following, the data $X$ is a collection of image representations modeled as a sample $X_0$ coming from a multivariate wide-sense stationary stochastic process of mean $\mu$ and covariance matrix $\Sigma$, each instance being affected by additively superimposed random noise. In general, the parameters $\mu$ and $\Sigma$ cannot be assumed to be known, and they are estimated from data. The most frequently used model for the noise component $\eta$ is also a wide-sense stationary multivariate stochastic process of Gaussian type. Denoting by $n$ the dimensionality of the image representations, the simplest noise model is the "white" model, $\eta_t \sim N(0, \sigma^2 I_n)$ for any $t \ge 0$. Consequently, the mathematical model for the noisy image versions is $X_t = X_0 + \eta_t$. The aim is to process the data $X$ using estimates of $\mu$, $\Sigma$, and $\sigma^2$ to derive accurate approximations of $X_0$.
The data are preprocessed to obtain normalized and centered representations; this step is needed to support the working hypotheses. In the following, we combine the above-described estimation process with a compression/decompression scheme, in order to remove the noise in a feature space of lower dimensionality. The model-free version of CSPCA is a learning-from-data method that computes estimates of the first- and second-order statistics on the basis of a series of n-dimensional noisy images $X_1, X_2, \ldots, X_N, \ldots$ (State, Cocianu, Sararu & Vlamos, 2009). The estimates of the eigenvalues and eigenvectors of the sample covariance matrix are obtained using first-order approximations derived in terms of perturbation theory. The first- and second-order statistics are computed in the classical way and, using straightforward computation, recursive equations can be derived for the sample mean and sample covariance matrix. In case the eigenvalues of $\Sigma_N$ are pairwise distinct, using arguments of perturbation-theory type, recursive equations for the eigenvalues and eigenvectors can also be derived (State, Cocianu, Vlamos & Stefanescu, 2006). Let us assume that the $L$ gray levels of the initial image $X$ are affected by noise of Gaussian type $\eta \sim N(0, \Sigma_\eta)$, and denote by $\Phi$ an orthogonal $n \times n$ matrix whose columns are unit eigenvectors of $\Sigma_\eta$, where $n$ is the dimension of the input space. If $\Sigma_\eta$ is known, the matrix $\Phi$ can be computed by classical techniques; when $\Sigma_\eta$ is not known, the columns of $\Phi$ can be learned adaptively by PCA networks (Rosca, State, Cocianu, 2008).
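The recursive first- and second-order estimates mentioned above can be sketched with the standard mean/covariance recursion (the perturbation-theory updates for the eigen-structure are omitted; this is an illustrative sketch, not the published algorithm):

```python
import numpy as np

def update_stats(mu, Sigma, n, x):
    """One recursive update of the sample mean and (biased) sample covariance
    after observing the n-th vector x (start with n=1, mu=x_1, Sigma=0)."""
    mu_new = mu + (x - mu) / n
    d_old = mu - mu_new                      # shift of the running mean
    d_new = x - mu_new                       # deviation of the new observation
    Sigma_new = ((n - 1) / n) * (Sigma + np.outer(d_old, d_old)) \
        + np.outer(d_new, d_new) / n
    return mu_new, Sigma_new

rng = np.random.default_rng(6)
data = rng.normal(size=(200, 3))             # a stream of 3-dimensional observations

mu, Sigma = data[0].copy(), np.zeros((3, 3))
for n, x in enumerate(data[1:], start=2):
    mu, Sigma = update_stats(mu, Sigma, n, x)

# The recursion reproduces the batch estimates exactly.
print(np.allclose(mu, data.mean(axis=0)), np.allclose(Sigma, np.cov(data.T, bias=True)))
```

Updating the statistics one image at a time avoids storing the whole sample, which is the point of the learning-from-data formulation of the model-free CSPCA.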
We denote by $Y$ the resulting image, $Y = X + \eta$. The images are represented by $R \times C$ matrices and are processed in blocks of size $R \times C_1$, where $C = pC_1$. In the preprocessing step, using the matrix $\Phi$, the noise is removed by applying the MNR algorithm to the decorrelated blocks of $Y' = \Phi^T Y$.
The restoration process of the image $Y$ using the learned features is performed as follows (State, Cocianu, 2007).
Step 1. Compute the image $Y'$ from the initial image by decorrelating the noise component, $Y' = \Phi^T Y$, where $\Phi$ is a matrix of unit eigenvectors of the noise covariance matrix. Then $Y' = \Phi^T X + \eta'$, where the components of $\eta'$ have variances $\lambda_1, \lambda_2, \ldots, \lambda_n$, the eigenvalues of $\Sigma_\eta$.
Step 2. Remove the noise $\eta'$ from the image $Y'$ using its multiresolution support. The image $Y''$ results by labeling each wavelet coefficient of each pixel.

Step 3. Compute an approximation $\hat{X}$ of the initial image $X$ by applying the linear transform of matrix $\Phi$ to $Y''$: $\hat{X} = \Phi Y''$.

Conclusions and experimental comparative analysis on the performances of some noise removal and restoration algorithms
In order to evaluate the performance of the proposed noise removal algorithms, a series of experiments was performed on different 256-gray-level images. We compare the performance of our algorithm NFPCA against MMSE, AMVR, and GMNR. The implementation of the GMNR algorithm used two masks. The aims of the comparative analysis were to establish quantitative indicators expressing both the quality and the efficiency of each algorithm. The noise variances used for the images processed by NFPCA represent the maximum of the per-pixel variances resulting from the decorrelation process. We denote by U(a, b) the uniform distribution on the interval [a, b]. The AMVR algorithm appears to perform better in terms of mean error per pixel for uniformly distributed noise as well as for Gaussian noise. Also, at least for zero-mean Gaussian noise, the mask $h_2$ yields a smaller mean error per pixel when the restoration is performed by the MNR algorithm.
Several tests were performed to investigate the potential of the proposed CSPCA. The tests were performed on data represented by linearized monochrome images decomposed into blocks of size 8x8. The preprocessing step was included in order to obtain normalized, centered representations. Most of the tests were performed on samples of volume 20, the images of each sample sharing the same statistical properties. The proposed method showed good performance in cleaning noisy images while keeping the computational complexity at a reasonable level. An example of a noisy image and its cleaned version is presented in Figure 1. The tests performed on new samples of images pointed out the good generalization capacity and robustness of CSPCA. The computational complexity of the CSPCA method is less than that of the ICA code shrinkage method.
A synthesis of the comparative analysis of the quality and efficiency of the restoration algorithms presented in Section 3.2 is supplied in Table 3.
So far, the tests were performed on monochrome images only. Some efforts still in progress aim to adapt and extend the proposed methodology to color images. Although the extension is not straightforward and some major modifications have to be made, the results obtained so far encourage the hope that efficient variants of these algorithms can be obtained for noise removal in color images too.
The tests on the proposed algorithms were performed on images of size 256x256 pixels, by splitting the images into blocks of smaller size, depending on the particular algorithm. For instance, in the case of the MNR and GMNR algorithms, the images are processed pixel by pixel, and the computation of the wavelet coefficients by the "à trous" algorithm is carried out using 3x3 and 5x5 masks. The tests performed on NFPCA, CSPCA, and the model-free version of CSPCA processed blocks of 8x8 pixels. The comparison of the proposed algorithm NFPCA and the currently used approaches MMSE and AMVR points out the better results of NFPCA in terms of mean error per pixel. Some of the conclusions are summarized in Table 1 and Table 2, where the noise was modeled using the uniform and normal distributions. As shown in Table 1, with the AMVR algorithm the mean error per pixel is slightly smaller than with NFPCA, but AMVR induces some blur in the image, while NFPCA seems to assure reasonably small errors without inducing any annoying side effects.
The authors aim to extend the work from both methodological and practical points of view. From the methodological point of view, refinements of the proposed procedures are in progress, and their performance is going to be evaluated on standard large-size image databases. From the practical point of view, the procedures are going to be extended to solving specific GIS tasks.

$X$ results from the linear transform of matrix $A$ applied to the sources $S$; the sources $S$ are determined by maximizing the non-Gaussianity of $\tilde{X} = BS$. Both criteria $J_1$ and $J_2$ can be taken as measures of overall class separability; they are invariant under linear non-singular transforms and are currently used for feature extraction purposes. When the linear feature extraction problem is solved on the basis of either $J_1$ or $J_2$, their values are taken as numerical indicators of the loss of information implied by the reduction of dimensionality and the implied deterioration of class separability. Consequently, the best linear feature extraction is formulated as an optimization problem in which the classes are represented by the noisy image $\tilde{X}$ and the filtered image. Note that one of the first two terms in (4) vanishes when $\Sigma_1 = \Sigma_2$ or $\mu_1 = \mu_2$, respectively.

In the MMSE filter, $m_l$ is the local mean (the average over the window $W_{l,c}$). Since a background region of the image is an area of fairly constant value in the original uncorrupted image, there the noise variance is almost equal to the local variance, and consequently the MMSE filter performs as a mean filter. In image areas where the local variance is much larger than the noise variance, the filter computes a value close to the pixel value of the unfiltered image; the magnitudes of the original value and the local mean used to modify the initial image are weighted accordingly. The input is the image $Y$ of dimensions $R \times C$, representing a normal/uniform disturbed version of the initial image $X$. In the HSBA variants, for each row $1 \le i \le r$, the inner steps are carried out and the row $\hat{X}(i)$ of the restored image is computed by correcting the filtered image, where $\lambda$ is a noise-preventing constant, $0 < \lambda < 1$. According to the algorithm of simultaneous diagonalization (Duda & Hart, 1973), the optimal linear feature extractor is given in terms of the eigenvalues of $S_m$ and the matrix having as columns the corresponding unit eigenvectors; the selected features correspond to the first $m$ columns and the first $m$ diagonal entries, and the noise removal process is applied in the resulting m-dimensional feature space.

Fig. 1. The performance of the model-free version of CSPCA.

According to our regression-based algorithm, the rows of the restored image $\hat{X}$ are computed sequentially on the basis of the samples; a part of the correction is responsible for the initially existing noise $\eta$ and another component is responsible for the quality degradation. The image $\hat{I}$ is obtained by averaging the resulting versions.
Some of the conclusions experimentally derived from the comparative analysis of the restoration algorithms presented in the paper against some similar techniques are presented in the tables below.

Table 1 .
Comparative analysis on the performance of the proposed algorithms

Table 2 .
Comparative analysis on MNR

Table 3 .
Comparative analysis on the performance of the proposed algorithms