Open access peer-reviewed chapter

Pan-sharpening Using Spatial-frequency Method

Written By

Upendra Kumar

Submitted: 23 April 2018 Reviewed: 01 August 2018 Published: 29 May 2019

DOI: 10.5772/intechopen.80637

From the Edited Volume

Satellite Information Classification and Interpretation

Edited by Rustam B. Rustamov



Over the years, researchers have formulated various pan-sharpening techniques that attempt to minimize spectral distortion, i.e., retain the maximum spectral fidelity of the MS images. On the other hand, if the pan-sharpened image is used only to produce maps for better visual interpretation, then spectral distortion is of little concern, as the goal is to produce images with high contrast. To solve the color distortion problem, methods based on the spatial frequency domain have been introduced; they have demonstrated superior performance over spatial-scale methods in producing pan-sharpened images with high spectral fidelity.


  • pan sharpening
  • spatial scale
  • spatial frequency analysis
  • discrete wavelet transform
  • non–subsampled contourlet transform
  • pseudo-Wigner distribution
  • urban planning

1. Introduction

Earth resource satellites provide data covering different parts of the electromagnetic spectrum at different spatial, spectral, and temporal resolutions. To utilize these different types of image data effectively, a number of pan-sharpening techniques have been developed [1].

Further, in order to benchmark different image fusion techniques, image quality metrics have been used. There are two types of metrics used to evaluate image quality, namely, subjective (qualitative) and objective (quantitative). The objective of this chapter is to discuss the methodology of some of the prevalent existing techniques, as well as the mathematical representation of some of the standard existing evaluation indicators.


2. Pan-sharpening techniques

Pan sharpening is also known as image fusion, image integration, and multisensor data fusion. Over the years, a large number of pan-sharpening techniques have been developed and placed into different categories. In this study, multiscale transform (MST)-based techniques are discussed.

2.1 Multiscale transform-based pan-sharpening techniques

In recent years, multiscale transform (MST)-based pan-sharpening techniques have received a lot of attention, since they preserve the spectral fidelity of the pan-sharpened images. Further, they are more suitable for information representation, interpretation, and analysis [2, 3].

Many variations of the multiscale transform-based techniques exist, such as discrete wavelet transform (DWT), stationary wavelet transform (SWT), curvelet transform (CVT), contourlet transform (CT), and Non–subsampled contourlet transform (NSCT) [4]. The next subsections give a descriptive overview and methodology of MST-based pan-sharpening techniques which are selected for this study.

2.1.1 Discrete wavelet transform (DWT)

Before discussing the discrete wavelet transform, it is appropriate to first review the Fourier transform (FT).

The Fourier transform (FT) was introduced by the French mathematician and physicist Jean Baptiste Joseph Fourier in 1822. Fourier stated that any periodic function can be represented as a sum of sines and cosines of different frequencies, each multiplied by a different coefficient [5, 6]. The Fourier transform converts a signal from the time-amplitude domain to the frequency-amplitude domain. Images are considered 2-D discrete functions, so the discrete Fourier transform (DFT) is used to analyze them. The FT is a reversible transform, which means the original signal can be recovered through the inverse discrete Fourier transform (IDFT) [7, 8].
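The reversibility of the DFT can be illustrated with NumPy (a minimal sketch; the toy image is arbitrary):

```python
import numpy as np

# A 2-D discrete function (image) analyzed with the DFT:
# forward transform to the frequency domain, inverse transform back.
image = np.arange(16.0).reshape(4, 4)

spectrum = np.fft.fft2(image)            # DFT: spatial -> frequency domain
recovered = np.fft.ifft2(spectrum).real  # IDFT: frequency -> spatial domain

# The DFT is reversible: the original image is recovered
# (up to floating-point precision).
print(np.allclose(image, recovered))     # True
```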

However, the FT has a drawback: it does not provide information about the time at which a particular frequency exists in the signal. The Fourier transform only captures the different frequencies in a signal and cannot detect when those frequencies occur. To overcome this drawback, the wavelet transform (WT) was introduced. The WT can be more useful than the Fourier transform, since it is based on functions that are localized in both space and frequency/scale [9]. The wavelet transform provides a multiresolution framework in which the signal can be decomposed into components that collect the information at a specified scale, i.e., different frequencies are analyzed with different resolutions [2, 3, 4, 5, 6]. The WT has numerous applications in remote sensing, such as image registration, spatial and spectral fusion, feature extraction, speckle reduction, texture classification, and crop phenology detection [7].

The wavelet transform can be broadly classified into two main groups: the continuous wavelet transform (CWT) and the discrete wavelet transform (DWT). Since the CWT is continuous, it has an infinite number of scale and translation parameters, which leads to an infinite number of possible wavelet functions. To overcome this shortcoming of the CWT, the DWT was introduced.

In the DWT algorithm, an image is analyzed by passing it through an analysis filter bank followed by a decimation operation. The analysis filter bank consists of a low pass and a high pass filter at each decomposition stage. When a signal passes through these filters, it splits into two signals. The low pass filter, which corresponds to an averaging operation, extracts the coarse (average) information of the signal. The high pass filter, which corresponds to a differencing operation, extracts the detail information of the signal, such as edges, points, and lines. The output of the filtering operation is then decimated by two. The 2-D transform is accomplished by performing two separate one-dimensional transforms [9, 10, 11, 12]: first, the image is filtered along the rows and decimated by two; the resulting subimage is then filtered along the columns and decimated by two.

This operation splits the image into four bands: one approximation band, which contains coarse information, and three detail bands (horizontal, vertical, and diagonal), which contain information about the salient features of the image, such as edges, points, and lines [5, 8]. A J-level decomposition can be performed, resulting in 3J + 1 different frequency bands. At each level of decomposition, the image is split into high- and low-frequency components; the low-frequency components can be further decomposed until the desired resolution is reached [13, 14, 15]. The procedure for the pan sharpening of panchromatic (PAN) and multispectral (MS) images using DWT is explained in Section 3.1 (Figure 1).
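The row/column filter-and-decimate procedure can be sketched with a one-level 2-D Haar DWT in NumPy (a minimal illustration only, not the chapter's implementation; the function name `haar_dwt2` is mine):

```python
import numpy as np

def haar_dwt2(img):
    """One level of a separable 2-D Haar DWT: filter along the rows,
    decimate by two, then repeat along the columns."""
    lo = np.array([1.0, 1.0]) / np.sqrt(2)   # low-pass (averaging) filter
    hi = np.array([1.0, -1.0]) / np.sqrt(2)  # high-pass (differencing) filter

    def analyze(x, f):
        # filter a 1-D signal, then decimate by two
        return np.convolve(x, f[::-1])[1::2][: len(x) // 2]

    # rows first ...
    L = np.apply_along_axis(analyze, 1, img, lo)
    H = np.apply_along_axis(analyze, 1, img, hi)
    # ... then columns, yielding the four subbands
    LL = np.apply_along_axis(analyze, 0, L, lo)  # approximation band
    LH = np.apply_along_axis(analyze, 0, L, hi)  # horizontal detail
    HL = np.apply_along_axis(analyze, 0, H, lo)  # vertical detail
    HH = np.apply_along_axis(analyze, 0, H, hi)  # diagonal detail
    return LL, LH, HL, HH
```

Because the Haar filters form an orthonormal pair, the total energy of the image is preserved across the four subbands, and a constant image produces all-zero detail bands.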

Figure 1.

Decomposition of an image using DWT.

2.1.2 Stationary wavelet transform (SWT)

It is observed that the discrete wavelet transform (DWT) is not a shift-invariant transform. To overcome this limitation, the stationary wavelet transform (SWT)-based fusion technique, an extension of the DWT scheme also known as the "à trous" algorithm, has been introduced [10, 11]. In the "à trous" algorithm, the downsampling step is suppressed, and instead the filter is upsampled by inserting zeros between the filter coefficients (Figure 2).

Figure 2.

Structure of “à trous” filters.

The SWT algorithm uses a two-dimensional filter derived from the scaling function. This produces two images, of which one is an approximation image while the other is a detail image called the wavelet plane. A wavelet plane represents the horizontal, vertical, and diagonal detail between resolutions 2^j and 2^(j−1) and is computed as the difference between two consecutive approximation levels, I_(l−1) and I_l. All the approximation images obtained by applying this decomposition have the same number of columns and rows as the original image, since the filters at each level are upsampled by inserting zeros between the filter coefficients, which keeps the size of the image unchanged [16, 17, 18, 19].

This is a consequence of the fact that the “à trous” algorithm is a nonorthogonal, redundant oversampled transform [19, 20, 21]. The “à trous” decomposition process is shown in Figure 2.
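A single smoothing step of the "à trous" scheme can be sketched in NumPy as follows (an illustrative sketch; the B3-spline kernel is a common choice in the literature and is my assumption here, as is the function name):

```python
import numpy as np

def atrous_level(img, level):
    """One smoothing step of the 'a trous' algorithm: the low-pass filter
    is upsampled by inserting 2**level - 1 zeros ('holes') between its
    taps, and no decimation is performed, so the output keeps the size
    of the input. (B3-spline kernel, a common choice, assumed here.)"""
    base = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0
    step = 2 ** level
    # insert zeros between the filter coefficients
    kernel = np.zeros((len(base) - 1) * step + 1)
    kernel[::step] = base

    pad = len(kernel) // 2
    def filt(x):
        xp = np.pad(x, pad, mode="reflect")
        return np.convolve(xp, kernel, mode="valid")

    # separable filtering: rows, then columns; image size is unchanged
    out = np.apply_along_axis(filt, 1, img)
    return np.apply_along_axis(filt, 0, out)
```

A wavelet plane is then the difference of two consecutive approximations, e.g. `w1 = img - atrous_level(img, 0)`, which has the same size as `img`, consistent with the redundant, oversampled nature of the transform.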

The procedure for the pan sharpening of PAN and MS images using SWT can be summarized as follows (Figure 3):

  1. Generate a new panchromatic image by matching the histogram of the PAN image to that of the corresponding MS image.

  2. Perform a two-level wavelet transform on the modified PAN image only.

  3. Add the resulting wavelet planes of the PAN image directly to each MS band.
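The three steps above can be sketched as follows (an illustrative NumPy simplification: matching the mean and standard deviation stands in for full histogram matching, and a box filter stands in for the "à trous" low-pass filter; all function names are mine):

```python
import numpy as np

def match_stats(pan, ms):
    """Stand-in for histogram matching: match the mean and standard
    deviation of the PAN band to the MS band (a simplification)."""
    return (pan - pan.mean()) / (pan.std() + 1e-12) * ms.std() + ms.mean()

def smooth(img, size):
    """Separable box filter used here as the low-pass step
    (a stand-in for the 'a trous' B3-spline filter)."""
    k = np.ones(size) / size
    pad = size // 2
    def filt(x):
        xp = np.pad(x, pad, mode="reflect")
        return np.convolve(xp, k, mode="valid")
    out = np.apply_along_axis(filt, 1, img)
    return np.apply_along_axis(filt, 0, out)

def swt_pansharpen(pan, ms):
    # step 1: match the PAN histogram (here: statistics) to the MS band
    p = match_stats(pan, ms)
    # step 2: two-level decomposition of the modified PAN image only
    a1 = smooth(p, 3)
    a2 = smooth(a1, 5)
    w1, w2 = p - a1, a1 - a2      # wavelet (detail) planes
    # step 3: add the PAN wavelet planes directly to the MS band
    return ms + w1 + w2
```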

Figure 3.

Methodology adopted for SWT-based pan-sharpening.

The SWT eliminates the shift sensitivity problem at the cost of an overcomplete signal representation. However, it does not resolve the problem of feature orientation. In addition, neither the discrete wavelet transform (DWT) nor the stationary wavelet transform (SWT) can capture the curves and edges of images well. Wavelets perform well only at representing point singularities and, at best, linear edges, since they ignore the geometric properties of structures and do not exploit the regularity of edges.

For curved edges, the accuracy of edge localization in the wavelet transform is low. There is thus a need for an alternative approach capable of detecting, representing, and processing such high-dimensional data. To solve this problem, multiscale geometric analysis has been investigated further; as a result, Candès and Donoho [22] proposed the curvelet transform (CVT).

Further, to address the problem that the curvelet transform is first developed in the continuous domain and then discretized for the images or signals of interest, Yang et al. [23] and Do and Vetterli [24] presented a flexible multiresolution, local, and directional image expansion using contour segments, named the contourlet transform. However, due to its downsampling and upsampling, the CT lacks shift invariance and thus produces ringing artifacts [16]. To overcome the weaknesses of wavelets, curvelets, and contourlets, Cunha et al. [25] proposed the non–subsampled contourlet transform (NSCT), based on non–subsampled pyramid decomposition (NSPD) and non–subsampled filter banks (NSFBs).

2.1.3 Non–subsampled contourlet transform (NSCT) technique

To reduce the frequency aliasing of contourlets and enhance directional selectivity and shift invariance, Cunha et al. [25] proposed the non–subsampled contourlet transform. It is based on the non–subsampled pyramid filter bank (NSPFB) and the non–subsampled directional filter bank (NSDFB) structure. The former provides multiscale decomposition using two-channel non–subsampled 2-D filter banks, while the latter provides directional decomposition, i.e., it is used to split the band pass subbands at each scale into different directions [25, 26].

As a result, the NSCT is shift invariant and has better frequency selectivity and regularity than the CT [25, 26, 27, 28]. The scheme of the NSCT structure is shown in Figure 4(a). The NSCT partitions the two-dimensional frequency domain into wedge-shaped directional subbands, as shown in Figure 4(b).

Figure 4.

Two level NSCT decomposition. (a) NSFB structure that implements the NSCT and (b) the corresponding frequency partition.

To provide a more practical and flexible solution to the problems stated above, an improved or new fusion technique is needed that is superior to the existing pan-sharpening techniques. Such a technique should ideally possess shift invariance, directionality, low computational complexity, and low computation time; be applicable as a real-time image processing tool; and be efficient in capturing the intrinsic geometrical structures of natural images along smooth contours [27, 28]. Moreover, it should perform efficiently on all categories of datasets, such as very high, high, and medium resolution satellite data. Thus, to resolve these problems, a pan-sharpening method based on the joint spatial frequency domain, the pseudo-Wigner distribution, has been introduced.


3. Spatial-frequency based pan-sharpening technique

The analysis of non-stationary 2-D signals (images) is a challenging task, as their spectral properties vary across the signal. Such signals cannot be analyzed well by pure spatial domain or pure frequency domain representations. Joint spatial frequency domain image analysis methods, such as the Wigner Ville distribution (WVD) and the pseudo-Wigner distribution (PWD), have proven to be powerful tools for analyzing, understanding, and detecting the spatial frequency characteristics of non-stationary images in a more comprehensive manner.

The use of the Wigner Ville distribution for image processing was first suggested in [18]. It was shown that the WVD is a very efficient and powerful tool for capturing essential non-stationary image structures [29], and it appeared as a promising method for characterizing the local spectral properties of images. The Wigner Ville distribution has many interesting properties related to translation, modulation, scaling, convolution, and localization in spatial frequency space; it is a real-valued function and contains phase information, which motivates its use in image analysis applications. However, since the WVD suffers from a serious interference (cross-term) problem that makes interpretation impossible, the pseudo-Wigner distribution (PWD) was introduced to resolve this limitation.

3.1 Pseudo-Wigner Distribution (PWD) technique

The spatial frequency information of a non-stationary image can be effectively extracted with a well-known spatial frequency technique, the pseudo-Wigner distribution (PWD). The PWD is ideally suited for representing a non-stationary image in the spatial frequency domain and is computed by adapting the fast Fourier transform (FFT) algorithm. The significant properties of the PWD motivate its use in image processing, especially for the fusion of satellite images [30, 31]. These properties are as follows:

  1. The PWD provides a pixel-wise analysis; it is an efficient and powerful tool for capturing essential non-stationary image structures, as well as for characterizing the local spectral properties of images, which is indispensable for image fusion.

  2. The PWD is a shift-invariant technique. Shift invariance is necessary for high-quality and effective image fusion; in its absence, artifacts such as aliasing and loss of linear continuity in spatial features become prevalent in the resulting fused image (Figure 5).

  3. The PWD is multidirectional, i.e., the window can be tilted in any direction to obtain a directional distribution.

  4. Computation time for PWD is generally small.

Figure 5.

Concept of shift variance and shift invariance.

With reference to Table 1, the pseudo-Wigner distribution (PWD) overcomes the shortcomings of the traditional Fourier-based methods, the discrete wavelet transform (DWT), the stationary wavelet transform (SWT), the curvelet transform (CVT), the contourlet transform (CT), and the non–subsampled contourlet transform (NSCT). Notably, it is not based on a multiscale decomposition procedure as wavelets and contourlets are. Further, one of the most challenging tasks faced by remote sensing experts is to fuse MS and PAN images, collected from the same or different satellite sensors, into a pan-sharpened image without introducing artifacts or inconsistencies that may damage the quality of the fused image.

Technique   Shortcomings
DWT   Poor directionality, lack of shift invariance
SWT   Limited directional selectivity
NSCT   Time-consuming, blocking artifacts

Table 1.

Shortcomings of existing pan-sharpening methods.

Thus, the goal of pan sharpening is to produce pan-sharpened images with the highest spectral fidelity possible, given the importance of such images in various applications ranging from land use/land cover classification to road extraction. Preserving the spectral information of the original MS images in the pan-sharpened images is therefore of great importance [31, 32, 33]. Accordingly, an attempt to utilize the concept of the pseudo-Wigner distribution (PWD) for the pan sharpening of a high-resolution PAN image with a low-resolution MS image has been introduced.

3.1.1 Mathematical background of pseudo-Wigner distribution

Let us consider an arbitrary 1-D discrete function v(n). The PWD of a given array v(n) of N pixels is given by Eq. (1).

W_n(m) = 2 Σ_{k = −N/2}^{N/2 − 1} v(n + k) · v*(n − k) · e^{−2i(2πm/N)k}        (1)
where n and m represent the discrete spatial and frequency variables, respectively, and k is a shifting parameter. Eq. (1) can be interpreted as the discrete Fourier transform (DFT) of the product v(n + k) · v*(n − k), where v* denotes the complex conjugate of the 1-D sequence v. W is a matrix in which row n holds the pixel-wise PWD of the pixel at position n. Further, v(n) is a 1-D sequence of data from the image, containing the gray values of N pixels aligned in the desired direction. By scanning the image with a 1-D window of N pixels, i.e., shifting the window to all possible positions over the full image, the full pixel-wise PWD of the image is produced. The window can be tilted in any direction to obtain a directional distribution [34, 35]. The reasons for selecting a short 1-D window for the PWD analysis are as follows:

  1. It greatly decreases the computational cost.

  2. It yields a pixel-wise spectral analysis of the data.
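A direct, unoptimized NumPy transcription of Eq. (1) for a single window position might look like this (the function name and the interior-window handling are mine; a practical implementation would use the FFT and scan all positions):

```python
import numpy as np

def pwd_window(v, n, N=8):
    """Pixel-wise pseudo-Wigner distribution of Eq. (1) for the pixel at
    position n, computed over a 1-D window of N pixels by the direct sum.
    Requires n to lie far enough from the borders of v."""
    ks = np.arange(-N // 2, N // 2)            # k = -N/2, ..., N/2 - 1
    m_vals = np.arange(N).reshape(-1, 1)       # frequency index m
    r = v[n + ks] * np.conj(v[n - ks])         # product v(n+k) v*(n-k)
    # DFT of the product sequence, with the doubled frequency argument
    return 2.0 * np.sum(r * np.exp(-2j * (2 * np.pi * m_vals / N) * ks),
                        axis=1)
```

Because the product sequence is symmetric in k for a real-valued input, the resulting distribution is real-valued, one of the properties noted above.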

The general pan-sharpening procedure adopted for the pan sharpening of PAN and MS images using DWT, NSCT, and PWD [35] techniques can be summarized as follows (Figure 6):

  1. Coregister both source images and resample the multispectral image to make its pixel size equal to that of the PAN image, in order to avoid misregistration problems.

  2. Apply the DWT/NSCT/PWD to each coregistered input image, one by one, to obtain its coefficients according to the mathematical decomposition procedure of the respective technique, along with upsampling and histogram matching.

  3. Combine the coefficients obtained in step 2 from the different input images, i.e., the MS and the histogram-matched PAN image, according to the defined fusion rules, to get the fused coefficients.

  4. Subject the fused coefficients to the inverse DWT/NSCT/PWD to construct the fused image. As a result, a new MS image with higher spatial resolution is obtained.
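Steps 2–4 can be sketched generically (an illustrative NumPy stand-in: a simple box-filter approximation/detail split plays the role of the DWT/NSCT/PWD decomposition, and the fusion rule keeps the MS approximation together with the larger-magnitude detail coefficients; all names are mine):

```python
import numpy as np

def decompose(img):
    """One-level stand-in for the DWT/NSCT/PWD decomposition step:
    a box-filter approximation plus a detail residual."""
    k = np.ones(3) / 3.0
    def filt(x):
        return np.convolve(np.pad(x, 1, mode="reflect"), k, mode="valid")
    approx = np.apply_along_axis(filt, 0, np.apply_along_axis(filt, 1, img))
    return approx, img - approx

def reconstruct(approx, detail):
    return approx + detail  # exact inverse of the decomposition above

def pansharpen(pan, ms):
    # step 2: decompose both coregistered inputs
    a_ms, d_ms = decompose(ms)
    a_pan, d_pan = decompose(pan)
    # step 3: fusion rule - keep the MS approximation (spectral content),
    # take the larger-magnitude detail coefficients (spatial content)
    d_f = np.where(np.abs(d_pan) >= np.abs(d_ms), d_pan, d_ms)
    # step 4: inverse transform gives the fused band
    return reconstruct(a_ms, d_f)
```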

Figure 6.

General methodology adopted for DWT-, NSCT-, and PWD-based pan-sharpening.

As a result, a new multispectral image with higher spatial resolution is obtained. This process is repeated for each individual MS and PAN band pair. Finally, all the new fused bands are concatenated to form a new fused multispectral image.

It may be noted that each technique (DWT, NSCT, and PWD) has unique mathematical properties, which lead to a different decomposition procedure for an image.

3.2 Comparative assessment of various pan-sharpening techniques

Pan-sharpening techniques belonging to the color, statistical, and multiscale transform-based categories have been evaluated in terms of parameters such as spectral distortion, shift invariance, directionality, and computational complexity. A comparative assessment of various pan-sharpening techniques is shown in Table 2.

Grade   Absolute measure   Relative measure
1   Excellent   The best in group
2   Good   Better than the average in group
3   Average   Average level in group
4   Poor   Lower than the average level
5   Very poor   The lowest in the group

Table 2.

Assessment of image quality by qualitative method.


4. Fusion rules

There are various fusion rules for combining the coefficients. Let W_A^PAN(x, y) and W_B^MS(x, y) denote the coefficients of the higher spatial resolution PAN image and the lower spatial resolution MS image, respectively, and let W_F(x, y) denote the coefficient of the fused image. Using this notation, the fusion rules can be summarized as follows:

  1. Average fusion rule

    The average fusion rule takes the average of the PAN coefficients, W_A^PAN(x, y), and the MS coefficients, W_B^MS(x, y), as given by Eq. (2).

W_F(x, y) = [W_A^PAN(x, y) + W_B^MS(x, y)] / 2        (2)
  2. Maximum fusion rule

    The maximum fusion rule compares the coefficients of the PAN image, W_A^PAN(x, y), and the MS image, W_B^MS(x, y), and picks the one with the larger magnitude as the fused coefficient, as given by Eq. (3).

W_F(x, y) = W_A^PAN(x, y) if |W_A^PAN(x, y)| ≥ |W_B^MS(x, y)|; otherwise W_B^MS(x, y)        (3)
Both fusion rules, explained by Eqs. (2) and (3), are used as the basic fusion rules throughout this study.
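Both rules are straightforward element-wise operations on the coefficient arrays; a minimal NumPy sketch (function names are mine):

```python
import numpy as np

def average_rule(w_pan, w_ms):
    """Eq. (2): element-wise average of the PAN and MS coefficients."""
    return (w_pan + w_ms) / 2.0

def maximum_rule(w_pan, w_ms):
    """Eq. (3): keep the coefficient with the larger magnitude."""
    return np.where(np.abs(w_pan) >= np.abs(w_ms), w_pan, w_ms)
```

For example, `maximum_rule(np.array([3.0, -5.0]), np.array([-4.0, 2.0]))` yields `[-4.0, -5.0]`, since magnitudes rather than signed values are compared.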


5. Assessment of accuracy for pan-sharpening techniques

Pan-sharpening algorithms are designed to produce good-quality pan-sharpened images. A fused image would be considered of perfect quality if the spatial detail missing in the MS image were transferred from the panchromatic image without distorting the spectral content of the multispectral image [26]. Unfortunately, this is not possible: there is a trade-off between the enhancement of spatial detail and spectral distortion. A fully spatially enhanced fused image would be the panchromatic (PAN) image itself, while an image free of spectral distortion would be the original multispectral (MS) image [36].

The diversity of datasets has contributed to the development of different types of techniques and procedures for the implementation of image fusion. In order to benchmark different pan-sharpening techniques, image quality metrics have been used, i.e., quality metrics are required to evaluate the quality of the fused images [37, 38]. There are two types of metrics used to evaluate image quality:

  1. Subjective (qualitative)

  2. Objective (quantitative)

5.1 Qualitative evaluation

Qualitative analysis deals with the visual comparison of the original PAN and MS images with that of the fused image, in terms of spectral and spatial distortion. The evaluation results vary depending on the intensity, sharpness, existence of noisy areas, missing spatial detail, and distortions in the geometry of the objects and display conditions of the image. A number of viewers will be shown the images and asked to judge the image quality. These may also vary from observer to observer, i.e., interpretation of image quality may be influenced or varied by personal preference [39, 40]. Therefore, an exact decision cannot be given. Further, these methods are time-consuming, inconvenient, and expensive.

On the basis of expert/observer personal preference, quality of fused image has been ranked in terms of “Grade,” “Absolute Measure,” and “Relative Measure” [41], as shown in Table 2.

5.2 Quantitative evaluation metrics

In most cases, there are only slight differences among fusion results, and quantitative evaluation methods sometimes produce results that cannot be sustained by visual inspection. Moreover, there is no universally accepted metric for objectively evaluating image fusion results. The generated pan-sharpened images are compared from the diverse perspectives of image visualization, coherence, structural similarity, and spectral information content.

The well-known full-reference objective metrics are the correlation coefficient (CC), the root mean square error (RMSE), and the peak signal-to-noise ratio (PSNR) [41]. These evaluation indicators were selected because they measure the statistical similarity, structural similarity, and spectral distortion introduced by the pan-sharpening process. The quantitative metrics used in this study, together with their mathematical representations, are discussed below.

5.2.1 Root mean square error

The root mean square error (RMSE) is a frequently used measure of the difference between the fused and original images and is a good measure of accuracy [41]. A smaller RMSE value represents greater accuracy. It is given by Eq. (4).

RMSE = sqrt( (1 / (m × n)) Σ_{i=1}^{m} Σ_{j=1}^{n} [F(i, j) − R_o(i, j)]^2 )        (4)
where m × n indicates the size of the image, and F(i, j) and R_o(i, j) indicate the fused image and the original image, respectively.
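Eq. (4) translates directly into a few lines of NumPy (function name is mine):

```python
import numpy as np

def rmse(fused, original):
    """Eq. (4): root mean square error between the fused and original
    images; 0 means the two images are identical."""
    diff = fused.astype(float) - original.astype(float)
    return np.sqrt(np.mean(diff ** 2))
```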

5.2.2 Peak signal-to-noise ratio

The peak signal-to-noise ratio (PSNR) index measures the radiometric distortion of the fused image with respect to the original image and can reflect the quality of reconstruction. A larger PSNR value indicates less image distortion [41]. It is given by Eq. (5).

PSNR = 20 · log10( L / RMSE )        (5)
where L is the peak value determined by the radiometric resolution of the sensor; for example, L is 255 for an 8-bit sensor and 2047 for an 11-bit sensor.
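A sketch of the PSNR in NumPy, using the standard dB form 10·log10(L²/MSE), which is equivalent to 20·log10(L/RMSE) (function name is mine):

```python
import numpy as np

def psnr(fused, original, L=255.0):
    """Eq. (5): peak signal-to-noise ratio in dB; L is the peak value
    given by the sensor's radiometric resolution (255 for 8 bits)."""
    mse = np.mean((fused.astype(float) - original.astype(float)) ** 2)
    if mse == 0:
        return np.inf  # identical images: no distortion at all
    return 10.0 * np.log10(L ** 2 / mse)
```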

5.2.3 Correlation coefficient

The correlation coefficient (CC) of two images indicates their degree of correlation. If the correlation coefficient of the two images approaches one, the fused image and the original image match perfectly [40, 41]. A high correlation value shows that the spectral characteristics of the multispectral image have been preserved well. The correlation coefficient is given by Eq. (6),

CC = Σ_{i,j} (x_ij − x̄)(y_ij − ȳ) / sqrt( Σ_{i,j} (x_ij − x̄)^2 · Σ_{i,j} (y_ij − ȳ)^2 )        (6)
where x_ij and y_ij are the elements of the images x and y, respectively, and x̄ and ȳ stand for their mean values.
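Eq. (6) in NumPy (function name is mine):

```python
import numpy as np

def correlation_coefficient(x, y):
    """Eq. (6): correlation coefficient between two images;
    +1 means a perfect match, -1 a perfect inverse relationship."""
    xd = x - x.mean()
    yd = y - y.mean()
    return np.sum(xd * yd) / np.sqrt(np.sum(xd ** 2) * np.sum(yd ** 2))
```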

5.2.4 Spatial correlation coefficient

To assess the spatial quality of the fused image quantitatively, the procedure proposed by Zhou et al. [42] has been adopted. This approach measures the amount of edge information transferred from the PAN image into the fused images. The high spatial resolution information missing in the MS image is present in the high frequencies of the PAN image, and the pan-sharpening process inserts these higher frequencies into the MS image. Therefore, the CC between the high pass filtered PAN image and the high pass filtered fused image indicates how much spatial information from the PAN image has been incorporated into the MS image; a higher correlation between the two high pass filtered images implies that the spatial information has been retained faithfully. This CC is called the spatial correlation coefficient (SCC). To extract the spatial detail of the images to be compared, the following Laplacian filter, given by Eq. (7), has been used.

           | −1  −1  −1 |
Mask  =    | −1   8  −1 |        (7)
           | −1  −1  −1 |
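The SCC can be sketched in NumPy as follows (an illustrative version; the zero padding at the image borders is my assumption, and the function names are mine):

```python
import numpy as np

LAPLACIAN = np.array([[-1.0, -1.0, -1.0],
                      [-1.0,  8.0, -1.0],
                      [-1.0, -1.0, -1.0]])  # Eq. (7): high pass filter

def highpass(img):
    """3x3 Laplacian filtering (zero padding) to extract spatial detail."""
    p = np.pad(img.astype(float), 1)
    out = np.zeros(img.shape, dtype=float)
    for di in range(3):
        for dj in range(3):
            out += LAPLACIAN[di, dj] * p[di:di + img.shape[0],
                                         dj:dj + img.shape[1]]
    return out

def scc(pan, fused):
    """Spatial CC: correlation between the high pass filtered images."""
    a, b = highpass(pan), highpass(fused)
    a, b = a - a.mean(), b - b.mean()
    return np.sum(a * b) / np.sqrt(np.sum(a ** 2) * np.sum(b ** 2))
```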
The pan-sharpened image that best preserves the spectral and structural information of the original low resolution MS image is the one that satisfies the conditions in Table 3.

Metric   Ideal value   Error value
Root mean square error (RMSE)   0   > 0
Peak signal-to-noise ratio (PSNR)   NA   > 1
Correlation coefficient (CC)   1   > 1 and < 1
Spatial correlation coefficient (SCC)   1   > 1 and < 1

Table 3.

The ideal and error value of different quantitative indicators.


6. Summary

This chapter has presented the methodology of the proposed approaches for the pan sharpening of satellite images, along with a discussion of some prevalent existing multisensor pan-sharpening techniques and well-known evaluation indicators.


References

1. Abdikan S, Sanli FB, Sunar F, Ehlers M. A comparative data-fusion analysis of multi-sensor satellite images. International Journal of Digital Earth. 2012;7(8):671-687
2. Miao Z, Shi W, Samat A, Lisini G, Gamba P. Information fusion for urban road extraction from VHR optical satellite images. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing. 2016;9:1817-1829
3. Alparone L, Baronti S, Aiazzi B, Garzelli A. Spatial methods for multispectral pansharpening: Multiresolution analysis demystified. IEEE Transactions on Geoscience and Remote Sensing. 2016;54(5):2563-2576
4. Khademi G, Ghassemian H. Incorporating an adaptive image prior model into Bayesian fusion of multispectral and panchromatic images. IEEE Geoscience and Remote Sensing Letters. 2018;15(6):917-921
5. Garguet-Duport B, Girel J, Chassery JM, Pautou G. The use of multiresolution analysis and wavelet transform for merging SPOT panchromatic and multispectral image data. Photogrammetric Engineering and Remote Sensing. 1996;62(9):1057-1066
6. Gonzalez RC, Woods RE, Eddins SL. Digital Image Processing Using MATLAB. Upper Saddle River, New Jersey: Pearson Prentice Hall; 2004
7. Gungor O. Multi-sensor multi-resolution image fusion [Ph.D. thesis]. West Lafayette, Indiana: Purdue University; 2008
8. Vivone G, Alparone L, Chanussot J, Dalla Mura M, Garzelli A, Licciardi GA, et al. A critical comparison among pansharpening algorithms. IEEE Transactions on Geoscience and Remote Sensing. 2015;53(5):2565-2586
9. Dogra A, Goyal B, Agrawal S. From multi-scale decomposition to non-multi-scale decomposition methods: A comprehensive survey of image fusion techniques and its applications. IEEE Access. 2017;5:16040-16067
10. Mallat S. A theory for multiresolution signal decomposition: The wavelet representation. IEEE Transactions on Pattern Analysis and Machine Intelligence. 1989;11(7):674-693
11. Polikar R. The Wavelet Tutorial: Theory and Applications of Wavelets. 2008
12. Ranchin T, Wald L. The wavelet transform for the analysis of remotely sensed images. International Journal of Remote Sensing. 1993;14(3):615-619
13. Roy S, Howlader T, Mahbubur Rahman SM. Image fusion technique using multivariate statistical model for wavelet coefficients. Signal, Image and Video Processing. 2013;7(2):355-365
14. Vidakovic B, Mueller P. Wavelets for Kids: A Tutorial Introduction. Durham, NC: Institute of Statistics and Decision Sciences, Duke University; 2007
15. Pradhan B, Jebur MN, Shafri HZM, Tehrany MS. Data fusion technique using wavelet transform and Taguchi methods for automatic landslide detection from airborne laser scanning data and QuickBird satellite imagery. IEEE Transactions on Geoscience and Remote Sensing. 2016;54(3):1610-1622
16. Starck JL, Murtagh F. Image restoration with noise suppression using the wavelet transform. Astronomy and Astrophysics. 1994;288:342-348
17. Holschneider M, Tchamitchian P. Régularité locale de la fonction "non-différentiable" de Riemann. In: Lemarié PG, editor. Les Ondelettes en 1989. Berlin, Heidelberg: Springer; 1990. pp. 102-124
18. González-Audícana M, Otazu X, Fors O. A comparison between Mallat's and the 'à trous' discrete wavelet transform based algorithms for the fusion of multispectral and panchromatic images. International Journal of Remote Sensing. 2005;26(3):595-614
19. Jiang Q, Jin X, Lee SJ, Yao S. A novel multi-focus image fusion method based on stationary wavelet transform and local features of fuzzy sets. IEEE Access. 2017;5:20286-20302
20. Medina J, Carrillo I, Upegui E. Spectral and spatial assessment of the TDW wavelet transform decimated and not decimated for the fusion of OrbView-2 satellite images. In: 2018 13th Iberian Conference on Information Systems and Technologies (CISTI). IEEE; 2018
21. Vetterli M, Kovačević J. Wavelets and Subband Coding. Englewood Cliffs, NJ: Prentice Hall; 1995
22. Candès EJ, Donoho DL. Curvelets, multiresolution representation, and scaling laws. In: Proceedings of Eighth SPIE Wavelet Applications in Signal and Image Processing. 2000. p. 4119
23. Yang Y, Tong S, Huang S, Lin P, Fang Y. A hybrid method for multi-focus image fusion based on fast discrete curvelet transform. IEEE Access. 2017;5:14898-14913
24. Do MN, Vetterli M. The contourlet transform: An efficient directional multiresolution image representation. IEEE Transactions on Image Processing. 2005;14(12):2091-2106
25. Cunha AL, Zhou J, Do MN. Nonsubsampled contourlet transform: Filter design and applications in denoising. In: IEEE International Conference on Image Processing. Vol. 1. 2005. pp. 749-752
26. Yang XH, Jiao LC. Fusion algorithm for remote sensing images based on nonsubsampled contourlet transform. Acta Automatica Sinica. 2008;34(3):274-281
27. Ding S, Zhao X, Xu H, Zhu Q, Xue Y. NSCT-PCNN image fusion based on image gradient motivation. IET Computer Vision. 2017;12(4):377-383
28. Shabanzade F, Ghassemian H. Combination of wavelet and contourlet transforms for PET and MRI image fusion. In: Artificial Intelligence and Signal Processing Conference (AISP). IEEE; 2017. pp. 178-183
29. Claasen TACM, Mecklenbräuker WFG. The Wigner distribution—A tool for time–frequency analysis. Philips Journal of Research. 1980;35(3):217-250
30. Gabarda S, Cristóbal G. On the use of a joint spatial-frequency representation for the fusion of multi-focus images. Pattern Recognition Letters. 2005;26(16):2572-2578
31. Rajput UK, Ghosh SK, Kumar A. Multi-sensor fusion of satellite images for urban information extraction using pseudo-Wigner distribution. Journal of Applied Remote Sensing. 2014;8(1):083668
32. Gabarda S, Cristóbal G. Blind image quality assessment through anisotropy. Journal of the Optical Society of America A. 2007;24(12):B42-B51
33. Redondo R, Fischer S, Šroubek F, Cristóbal G. 2D Wigner distribution based multi-size windows technique for image fusion. Journal of Visual Communication and Image Representation. 2008;19(1):12-19
34. Redondo R, Šroubek F, Fischer S, Cristóbal G. Multi-focus image fusion using the log-Gabor transform and a multi-size windows technique. Information Fusion. 2009;10(2):163-171
35. Rajput UK, Ghosh SK, Kumar A. Comparison of fusion techniques for very high resolution data for extraction of urban land-cover. Journal of the Indian Society of Remote Sensing. 2015;45(4):709-724
36. Vijayaraj V, Younan N, O'Hara C. Quantitative analysis of pansharpened images. Optical Engineering. 2006;45(4):046202
37. Wald L, Ranchin T, Mangolini M. Fusion of satellite images of different spatial resolutions: Assessing the quality of resulting images. Photogrammetric Engineering and Remote Sensing. 1997;63(6):691-699
38. Chen H, Varshney PK. A human perception inspired quality metric for image fusion based on regional information. Information Fusion. 2007;8(2):193-207
39. Dhore AD, Veena CS. Evaluation of various pan-sharpening methods using image quality metrics. In: 2015 2nd International Conference on Electronics and Communication Systems (ICECS). IEEE; 2015. pp. 871-877
40. Shi W, Zhu CQ, Tian Y, Nichol J. Wavelet-based image fusion and quality assessment. International Journal of Applied Earth Observation and Geoinformation. 2005;6(3):241-251
41. Karathanassi V, Kolokousis P, Ioannidou S. A comparison study on fusion methods using evaluation indicators. International Journal of Remote Sensing. 2007;28(10):2309-2341
42. Zhou J, Civco DL, Silander JA. A wavelet transform method to merge Landsat TM and SPOT panchromatic data. International Journal of Remote Sensing. 1998;19(4):743-757
