Open access

Image Fusion for Remote Sensing Applications

Written By

Leila Fonseca, Laercio Namikawa, Emiliano Castejon, Lino Carvalho, Carolina Pinho and Aylton Pagamisse

Submitted: 23 November 2010 Published: 24 June 2011

DOI: 10.5772/22899

From the Edited Volume

Image Fusion and Its Applications

Edited by Yufeng Zheng


1. Introduction

Remote sensing systems, particularly those deployed on satellites, provide a repetitive and consistent view of the Earth (Schowengerdt, 2007). To meet the needs of different remote sensing applications, these systems offer a wide range of spatial, spectral, radiometric and temporal resolutions. Satellites usually acquire several images in frequency bands of the visible and non-visible range. Each monochrome image is referred to as a band, and a collection of several bands of the same scene acquired by a sensor is called a multispectral image (MS). A combination of three bands displayed in the RGB (Red, Green, Blue) color system produces a color image.

Combining spectral bands into a color composite increases the information content of a remote sensing image at a given spatial resolution, which benefits many remote sensing applications. In a single band, by contrast, different targets may appear similar, which makes it difficult to distinguish them. Different bands can be acquired by a single multispectral sensor or by multiple sensors operating at different frequencies. Complementary information about the same scene is available in the following cases (Simone et al., 2002):

  1. Data recorded by different sensors;

  2. Data recorded by the same sensor operating in different spectral bands;

  3. Data recorded by the same sensor at different polarization;

  4. Data recorded by the same sensor located on platforms flying at different heights.

In general, sensors with high spectral resolution, characterized by capturing the radiance from different land covers in a large number of bands of the electromagnetic spectrum, do not have an optimal spatial resolution, which may be inadequate for a specific identification task despite their good spectral resolution (González-Audícana, 2004). On a high spatial resolution panchromatic image (PAN), detailed geometric features can easily be recognized, while multispectral images contain richer spectral information. The capabilities of the images can be enhanced if the advantages of both high spatial and spectral resolution are integrated into one single image. The detailed features of such an integrated image can be easily recognized and will benefit many applications, such as urban and environmental studies (Shi et al., 2005).

With appropriate algorithms it is possible to combine multispectral and panchromatic bands and produce a synthetic image with their best characteristics. This process is known as multisensor merging, fusion, or sharpening (Pohl & Genderen, 1998; Zhang, 2004; Wald, 2002). It aims to integrate the spatial detail of a high-resolution panchromatic image (PAN) and the color information of a low-resolution multispectral (MS) image to produce a high-resolution MS image (hybrid product). The result of image fusion is a new image which is more suitable for human and machine perception or for further image-processing tasks such as segmentation, feature extraction and object recognition.

The hybrid product should offer the highest possible spatial information content while still preserving good spectral information quality. It is known that the detailed spatial information of the PAN image is mostly carried by its high-frequency components, while the spectral information of the MS image is mostly carried by its low-frequency components. If the high-frequency components of the MS image are simply substituted by the high-frequency components of the panchromatic image, the spatial resolution is improved, but the spectral information carried by the high-frequency components of the MS image is lost (Guo et al., 2010; Li et al., 2002; Zhou et al., 1998).

To produce hybrid images with good quality, some aspects should be considered during the fusion process (Schowengerdt, 2007; Fonseca et al., 2008):

  1. The PAN and MS images should be acquired at nearby dates. Several changes may occur during the interval of acquisition time: variations in the vegetation depending on the season of the year, different lighting conditions, construction of buildings, or changes caused by natural catastrophes (e.g. earthquakes, floods and volcanic eruptions);

  2. The spectral range of the PAN image should cover the spectral range of all multispectral bands involved in the fusion process to preserve the image color. This condition helps avoid color distortion in the fused image;

  3. The spectral band of the high resolution image should be as similar as possible to that of the replaced low resolution component in the fusion process;

  4. The high resolution image should be globally contrast matched to the replaced component to reduce residual radiometric artifacts;

  5. The PAN and MS images must be registered with a precision of less than 0.5 pixel, avoiding artifacts in the fused image.

Some of these factors are less important when the fused images are from regions of the spectrum with different remote sensing phenomenologies. For example, there is no reason to assume radiometric correlation between the images in the fusion of low-resolution thermal or radar images with multispectral visible imagery (Schowengerdt, 2007).

The merging process becomes more difficult in those cases where the ratio between the spatial resolutions of both images is greater than 4, due to the registration and resampling processes. Ling et al. (2008) showed that a spatial resolution ratio of 1:10 or higher is desired for optimal multisensor image fusion, provided the input panchromatic image is not downsampled to a coarser resolution. Due to the synthetic pixels generated by resampling, the quality of the fused image decreases as the spatial resolution ratio decreases (e.g. from 1:10 to 1:30). In cases where the spatial resolution ratio is too small (e.g. 1:30), one may downsample the input high-resolution panchromatic image to a slightly lower resolution before fusing it with the multispectral image in order to obtain better spectral integrity of the fused image.

Most image processing systems, such as the Environment for Visualizing Images - ENVI (Research Systems, 2011), SPRING (SPRING, 2011; Câmara et al., 1996) and ERDAS (ERDAS, 2011), have an image fusion module. Also, some image fusion algorithms have been implemented using open-source software such as TerraLib, a library of Geographic Information System (GIS) classes and functions available on the Internet as open source, which allows a collaborative environment and its use in the development of multiple GIS tools (TerraLib, 2011).

Based on the problems mentioned above, we present a brief review of fusion techniques and fusion evaluation methods, as well as a discussion of the use of image fusion techniques in three remote sensing applications, illustrated through case studies. Each case study presents results applied to real data and problems in remote sensing, such as inland water analysis, disaster assessment and urban studies. Two of them use hybrid images generated from CBERS-2B images that are freely available on the Internet (INPE, 2011).

The chapter is organized in five sections: Section 2 briefly describes the most traditional fusion methods, Section 3 describes some techniques for fused image quality assessment, Section 4 presents three case studies that illustrate the application of image fusion in the remote sensing area, and Section 5 concludes the work.


2. Fusion methods

Ideally, image fusion techniques should allow the combination of images with different spectral and spatial resolutions while keeping the radiometric information (Pohl & Genderen, 1998). A huge effort has been put into developing fusion methods that preserve the spectral information and increase the detail information in the hybrid product produced by the fusion process.

Methods based on the IHS transform (Choi, 2006; Schetselaar, 1998; Silva et al., 2008; Tu et al., 2001a, 2001b; Tu et al., 2004; Tu et al., 2007) and Principal Components Analysis (PCA) (Chavez, 1989) are probably the most popular approaches used to enhance the spatial resolution of multispectral images with panchromatic images. However, both methods suffer from the problem that the radiometry of the spectral channels is modified after fusion. This is because the high-resolution panchromatic image usually has spectral characteristics different from both the intensity and the first principal component (Li et al., 2002). More recently, new techniques have been proposed, such as those that combine the wavelet transform with the IHS model and the PCA transform to manage the distortion of color and detail information in the fused image (Cao et al., 2003; González-Audícana et al., 2004; Simone et al., 2002).

Below, we present the basic theory of the fusion methods based on IHS, PCA, arithmetic operators, and the Wavelet Transform (WT), which are the most traditional techniques used in remote sensing applications.


2.1. IHS color model

The IHS method consists of transforming the R, G and B bands of the multispectral image into IHS components, replacing the intensity component by the panchromatic image, and performing the inverse transformation to obtain a high spatial resolution multispectral image (Schowengerdt, 2007; Carper et al., 1990).

The three multispectral bands, R, G and B, of a low resolution image are first transformed to the IHS color space as (Carper et al., 1990):

\[
\begin{bmatrix} I \\ V_1 \\ V_2 \end{bmatrix} =
\begin{bmatrix}
1/3 & 1/3 & 1/3 \\
1/\sqrt{6} & 1/\sqrt{6} & -2/\sqrt{6} \\
1/\sqrt{2} & -1/\sqrt{2} & 0
\end{bmatrix}
\begin{bmatrix} R \\ G \\ B \end{bmatrix}
\tag{1}
\]

\[
H = \tan^{-1}\left(\frac{V_2}{V_1}\right) \tag{2}
\]

\[
S = \sqrt{V_1^2 + V_2^2} \tag{3}
\]

where I, H and S are the intensity, hue and saturation components, and V1 and V2 are intermediate variables. Fusion proceeds by replacing component I with the panchromatic high-resolution image, after matching its radiometric information with that of component I (Figure 1). The fused image, which has both rich spectral information and high spatial resolution, is then obtained by performing the inverse transformation from IHS back to the original RGB space as

\[
\begin{bmatrix} R \\ G \\ B \end{bmatrix} =
\begin{bmatrix}
1 & 1/\sqrt{6} & 1/\sqrt{2} \\
1 & 1/\sqrt{6} & -1/\sqrt{2} \\
1 & -2/\sqrt{6} & 0
\end{bmatrix}
\begin{bmatrix} I \\ V_1 \\ V_2 \end{bmatrix}
\tag{4}
\]

Although the IHS method has been widely used, it cannot decompose an image into different frequency components, such as higher or lower frequencies, in frequency space. Hence the IHS method cannot be used to enhance certain image characteristics (Shi et al., 2005). Besides, the color distortion of the IHS technique is often significant. To reduce the color distortion, the PAN image is matched to the intensity component before the replacement, or the hue and saturation components are stretched before the reverse transform. Ling et al. (2007) also propose a method that combines a standard IHS transform with FFT filtering of both the panchromatic image and the intensity component of the original multispectral image to reduce color distortion in the fused image.

Figure 1.

Block scheme of the IHS fusion method.
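For illustration, a minimal sketch of this intensity-substitution scheme, using Equations (1) and (4), is given below in Python/NumPy. The function name, the per-band interface and the global mean/standard-deviation matching of the PAN image to the intensity component are assumptions, not details taken from the original chapter.

```python
import numpy as np

def ihs_fusion(r, g, b, pan):
    """IHS fusion sketch: r, g, b are MS bands already resampled to the PAN grid."""
    # Forward transform, Eq. (1)
    i  = (r + g + b) / 3.0
    v1 = (r + g - 2.0 * b) / np.sqrt(6.0)
    v2 = (r - g) / np.sqrt(2.0)
    # Radiometric matching of PAN to the intensity component (global mean/std)
    pan_m = (pan - pan.mean()) * (i.std() / pan.std()) + i.mean()
    # Inverse transform, Eq. (4), with I replaced by the matched PAN image
    r_f = pan_m + v1 / np.sqrt(6.0) + v2 / np.sqrt(2.0)
    g_f = pan_m + v1 / np.sqrt(6.0) - v2 / np.sqrt(2.0)
    b_f = pan_m - 2.0 * v1 / np.sqrt(6.0)
    return r_f, g_f, b_f
```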

2.2. Principal Components Analysis (PCA)

The fusion method based on PCA is very simple (Chavez & Kwakteng, 1989; Schowengerdt, 2007; Zhang, 1999). PCA is a general statistical technique that transforms multivariate data with correlated variables into data with uncorrelated variables. These new variables are obtained as linear combinations of the original variables. PCA has been widely used in image encoding, image data compression, image enhancement and image fusion. In the fusion process, the PCA method generates uncorrelated images (PC1, PC2, ..., PCn, where n is the number of input multispectral bands). The first principal component (PC1) is replaced with the panchromatic band, which has higher spatial resolution than the multispectral images. Afterwards, the inverse PCA transformation is applied to obtain the image in the RGB color model, as shown in Figure 2.

In PCA image fusion, dominant spatial information and weak color information is often a problem (Zhang, 2002). The first principal component, which contains the maximum variance, is replaced by the PAN image. Such replacement maximizes the effect of the panchromatic image in the fused product. One solution could be stretching the principal components to give a spherical distribution. Besides, the PCA approach is sensitive to the choice of the area to be fused. Another problem is related to the fact that the first principal component can also be significantly different from the PAN image. If the grey values of the PAN image are adjusted to grey values similar to those of the PC1 component before the replacement, the color distortion is significantly reduced.

Figure 2.

Block scheme of the PCA fusion method.
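A hedged sketch of this component-substitution scheme (Figure 2) follows; the eigen-decomposition of the band covariance matrix and the mean/standard-deviation matching of the PAN image to PC1 are implementation assumptions that the chapter does not spell out.

```python
import numpy as np

def pca_fusion(ms, pan):
    """PCA fusion sketch: ms is an (n_bands, H, W) stack resampled to the PAN grid."""
    n, h, w = ms.shape
    x = ms.reshape(n, -1).astype(float)
    mean = x.mean(axis=1, keepdims=True)
    xc = x - mean
    # Principal components from the band covariance matrix (PC1 first)
    _, vecs = np.linalg.eigh(np.cov(xc))
    vecs = vecs[:, ::-1]
    pcs = vecs.T @ xc
    # Replace PC1 with the PAN image matched to PC1's mean and standard deviation
    p = pan.reshape(-1).astype(float)
    pcs[0] = (p - p.mean()) * (pcs[0].std() / p.std()) + pcs[0].mean()
    # Inverse PCA back to the original band space
    fused = vecs @ pcs + mean
    return fused.reshape(n, h, w)
```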

2.3. Arithmetic combination

According to Zhang (2002), different arithmetic combinations, such as the Brovey Transform, Synthetic Variable Ratio (SVR) and Ratio Enhancement (RE) techniques, have been employed for fusing multispectral and panchromatic images (Rahman & Csaplovics, 2007).

In the Brovey method, given the multispectral bands MSi (i = 1, 2, 3) and the PAN image, the fused band FUSi is obtained, for each band, as

\[
FUS_i = \frac{MS_i}{\sum_{i=1}^{n} MS_i} \times PAN \tag{5}
\]

The Brovey Transform was developed to provide contrast in features such as shadows, water and high reflectance areas. Consequently, the Brovey Transform should not be used if preserving the original scene radiometry is important. However, it is good for producing RGB images with a higher degree of contrast and visually appealing images (ERDAS, 2011).
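A minimal sketch of Equation (5) is shown below; the small epsilon added to avoid division by zero is an assumption.

```python
import numpy as np

def brovey_fusion(ms_bands, pan, eps=1e-6):
    """Brovey sketch: ms_bands is a list of bands resampled to the PAN grid."""
    total = np.sum(ms_bands, axis=0) + eps   # sum of the MS bands
    return [band / total * pan for band in ms_bands]
```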

Other arithmetic methods such as SVR and RE are similar and involve more computations for the simulated image (Chavez et al., 1991).

2.4. Wavelet Transform (WT)

In the fusion methods based on the wavelet transform (Mallat, 1989), the images are decomposed into a pyramid domain, in which coefficients are selected to be fused (Garguet-Duport et al., 1996). The two source images are first decomposed using the wavelet transform. Wavelet coefficients from the MS approximation subband and the PAN detail subbands are then combined, and the fused image is reconstructed by performing the inverse wavelet transform (Figure 3). Since the distribution of coefficients in the detail subbands has zero mean, the fusion result does not change the radiometry of the original multispectral image (Li et al., 2002). The simplest method is based on the selection of the higher-value coefficients, but various other methods have been proposed in the literature (Amolins et al., 2007; Chen et al., 2005; Chibani & Houacine, 2000, 2003; Choi et al., 2005; Garzelli & Nencini, 2005; Ioannidou & Karathanassi, 2007; Li et al., 2002; Li et al., 2005; Lillo-Saavedra et al., 2005; Pajares & de la Cruz, 2004; Shi et al., 2005; Zhou et al., 1998).
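A hedged sketch of this basic substitutive scheme, using the PyWavelets package, is given below; the wavelet ('db2'), the number of decomposition levels and the single-band interface are assumptions.

```python
import pywt

def wavelet_fusion(ms_band, pan, wavelet="db2", level=2):
    """Keep the MS approximation, take the detail subbands from PAN, then invert."""
    ms_coeffs  = pywt.wavedec2(ms_band.astype(float), wavelet, level=level)
    pan_coeffs = pywt.wavedec2(pan.astype(float), wavelet, level=level)
    # Approximation (low frequencies) from MS, details (high frequencies) from PAN
    fused_coeffs = [ms_coeffs[0]] + pan_coeffs[1:]
    return pywt.waverec2(fused_coeffs, wavelet)
```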

The schemes used to decompose the images are based on decimated (Mallat, 1989) and undecimated algorithms (Lang et al., 1995; González-Audícana et al., 2005). In the decimated algorithm, the signal is down-sampled after each level of transformation. In the case of a two-dimensional image, down-sampling is performed by keeping one out of every two rows and columns, making the transformed image one quarter of the original size and half the original resolution (Amolins et al., 2007). At each level of decomposition, four images are produced: one approximation image and three detail images. The decimated algorithm is not shift-invariant, which means that it is sensitive to shifts of the input image. The decimation process also has a negative impact on the linear continuity of spatial features that do not have a horizontal or vertical orientation. These two factors tend to introduce artifacts when the algorithm is used in applications such as image fusion (Amolins et al., 2007).

On the other hand, the undecimated algorithm addresses the issue of shift-invariance. It does so by suppressing the down-sampling step of the decimated algorithm and instead up-sampling the filters by inserting zeros between the filter coefficients. The undecimated algorithm is redundant, meaning some detail information may be retained in adjacent levels of transformation. It also requires more space to store the results of each level of transformation and, although it is shift-invariant, it does not resolve the problem of feature orientation (González-Audícana et al., 2005; Garzelli & Nencini, 2005).

Most methods based on the wavelet transform exploit the context dependency by thresholding the local correlation coefficient between the images to be merged, to avoid injection of spatial details that are not likely to occur in the high spatial resolution image (Choi et al., 2005; Li et al., 2005; Lillo-Saavedra & Gonzalo, 2006; Song et al., 2007; Ventura et al., 2002; Yang et al., 2007). These techniques seem to reduce the color distortion problem and to keep the statistical parameters invariable.

Zhou et al. (1998) compared a fusion method based on the wavelet transform with the IHS, PCA and Brovey transforms for merging Landsat TM and SPOT panchromatic images. They concluded that with the wavelet merging method it is easy to control the trade-off between the spectral information from a low spatial-high spectral resolution sensor and the spatial structure from a high spatial-low spectral resolution sensor. They also showed that the best spectral and spatial quality can only be achieved simultaneously with wavelet transform methods, compared with the other approaches. The main drawback is the selection of the coefficients to be merged.

According to Zhang (2002), although the color distortion is reduced in the WT fusion methods, the colors do not seem to be smoothly integrated into the spatial features. Besides, some researchers have reported the loss of spectral content of small objects.

Pajares & de la Cruz (2004) conclude that when the images are smooth, without abrupt intensity changes, the wavelets work appropriately, improving the results of the classical methods. This has been verified with smooth remote sensing images and also with medical images, where no significant changes are present. In this case, the type of image (remote sensing, medical) is irrelevant.

Other researchers have proposed alternative methods, which present some improvements, especially in preserving spectral, texture, and contour information (Chai et al., 2010; Guo et al., 2010; Jing & Cheng, 2010; Miao et al., 2011; Yang & Jiao, 2008). Miao et al. (2011) stated that detail information can be easily captured when the images are decomposed by the shearlet transform at any scale and in any direction. Guo et al. (2010) proposed an approach based on Expectation Maximization (EM) and Covariance Intersection (CI) models for image fusion. The ideal MS and PAN images are estimated by EM along with the covariance matrices of the estimation error. Then, CI is applied to combine the two images and provide a consistent estimate of the high-resolution MS image. Compared with the WT and PCA methods, the proposed EM-CI method preserves more significant spectral information at the cost of a slightly lower improvement in spatial quality.

Figure 3.

Block scheme of the WT fusion method.


3. Methods for fused image quality assessment

Some researchers have evaluated different image fusion methods using different image quality measures (Alparone et al., 2004; Alparone et al., 2007; Amolins et al., 2007; Chavez et al., 1991; González-Audícana et al., 2005; Guo et al., 2010; Laporterie-Dejean et al., 2005; Marcelino et al., 2003; Nikolakopoulos, 2005; Wald, 2000; Wang & Bovik, 2002). Generally, the goodness of an image-fusion method can be evaluated by comparing the resulting merged image with a reference image, which is assumed to be ideal. This comparison can be based on spectral and spatial characteristics, and can be done both visually and quantitatively. Unfortunately, the reference image is not always available in practice; thus, it is necessary to simulate it or to perform a quantitative, blind evaluation of the fused images.

For assessing the quality of an image after fusion, some aspects must be defined. These include, for instance, spatial and spectral resolution, quantity of information, visibility, contrast, and details of features of interest (Shi et al., 2005). Quality assessment is application dependent, so different applications may require different aspects of image quality.

Generally, image assessment methods can be divided into two classes: qualitative (or subjective) and quantitative (or objective) methods. Qualitative methods involve visual comparison between a reference image and the fused image, whereas quantitative analysis involves quality indicators that measure the spectral and spatial similarity between the multispectral and fused images. Some of them are briefly described below.

3.1. Qualitative analysis

This section is based on Shi et al. (2005). According to prior assessment criteria or individual experience, personal judgments or even grades can be given to the quality of an image. The interpreter analyzes the tone, contrast, saturation, sharpness, and texture of the fused images. A final overall quality judgment can be obtained by, for example, a weighted mean based on the individual grades. This is the so-called mean opinion score (MOS) method (Wei et al., 1999). The qualitative method mainly includes absolute and relative measures (Table 1). This method depends on the specialist's experience or bias, and some uncertainty is involved. Qualitative measures cannot be represented by rigorous mathematical models, and their technique is mainly visual (Shi et al., 2005).

Grade | Absolute measure | Relative measure
1 | Excellent | The best in the group
2 | Good | Better than the average level in the group
3 | Fair | Average level in the group
4 | Poor | Lower than the average level
5 | Very poor | The lowest in the group

Table 1.

Qualitative (subjective) method for image quality assessment (Shi et al., 2005)

3.2. Quantitative analysis

Some quality indicators include: (a) the average grey value, representing the intensity of an image; (b) the standard deviation, information entropy and profile intensity curve, for assessing details of the fused images; and (c) the bias and correlation coefficient, for measuring the spectral distortion between the original and fused images.

Let Fi and Ri (i = 1, ..., N) be the N bands of the fused and reference images, respectively. The following indicators are used to determine the difference in spectral information between each band of the merged and reference images (González-Audícana, 2004; Guo et al., 2010):
  1. Correlation Coefficient (CC) between the reference and the merged image, which should be as close to 1 as possible;

  2. Difference between the means of the reference and the merged images (DM), in radiance, as well as its value relative to the mean of the original. The smaller these differences are, the better the spectral quality of the merged image. Thus, the difference should be as close to 0 as possible;

  3. Standard deviation of the difference image (SSD), relative to the mean of the reference image, expressed as a percentage. The lower its value, the better the spectral quality of the merged image;

  4. Universal Image Quality Index - UIQI (Wang & Bovik, 2002):

\[
UIQI = \frac{4\,\sigma_{F_i R_i}\,\mu_{F_i}\,\mu_{R_i}}{\left(\sigma_{F_i}^2 + \sigma_{R_i}^2\right)\left(\mu_{F_i}^2 + \mu_{R_i}^2\right)} \tag{6}
\]

where σFiRi is the covariance between the band of the reference image and the band of the fused image, and μ and σ are the mean and standard deviation of the images. The higher the UIQI index, the better the spectral quality of the merged image. Wang & Bovik (2002) suggest the use of moving windows of different sizes to avoid errors due to the spatial dependence of the index.
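The sketch below computes the per-band indicators listed above (CC, DM, SSD and the UIQI of Equation (6)) over whole images; note that Wang & Bovik (2002) recommend averaging UIQI over moving windows, which is omitted here for brevity.

```python
import numpy as np

def spectral_metrics(fused_band, reference_band):
    """Per-band spectral quality indicators; both inputs are 2-D arrays of equal size."""
    f = fused_band.astype(float).ravel()
    r = reference_band.astype(float).ravel()
    cc  = np.corrcoef(f, r)[0, 1]                 # correlation coefficient (CC)
    dm  = r.mean() - f.mean()                     # difference between the means (DM)
    ssd = 100.0 * (r - f).std() / r.mean()        # std of the difference image, % of ref. mean (SSD)
    cov = np.cov(f, r)[0, 1]
    uiqi = 4 * cov * f.mean() * r.mean() / ((f.var() + r.var()) * (f.mean()**2 + r.mean()**2))
    return cc, dm, ssd, uiqi
```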

To estimate the global spectral quality of the merged image, one can use the following parameters:

  1. The relative average spectral error index (RASE), which characterizes the average performance of the method for all bands;

  2. Relative global dimensional synthesis error (ERGAS) (Wald, 2000):

\[
ERGAS = 100\,\frac{h}{l}\sqrt{\frac{1}{N}\sum_{i=1}^{N}\frac{DM^2(R_i) + SSD^2(R_i)}{\mu_i^2}} \tag{8}
\]

where h and l are the spatial resolutions of the high and low resolution images, respectively, and μi is the mean radiance of each spectral band involved in the fusion process. DM and SSD are defined above. The lower the values of the RASE and ERGAS indexes, the higher the spectral quality of the merged images.
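A hedged sketch of Equation (8), built on the same per-band DM and SSD definitions, follows; for the CBERS-2B case study presented later, h/l would be 2.5/20.

```python
import numpy as np

def ergas(fused_stack, reference_stack, h, l):
    """fused_stack, reference_stack: (N, H, W) arrays; h, l: high/low resolution pixel sizes."""
    terms = []
    for f, r in zip(fused_stack.astype(float), reference_stack.astype(float)):
        dm  = r.mean() - f.mean()                 # per-band difference of means
        ssd = (r - f).std()                       # per-band std of the difference image
        terms.append((dm**2 + ssd**2) / r.mean()**2)
    return 100.0 * (h / l) * np.sqrt(np.mean(terms))
```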

A good fusion method must allow the addition of a high degree of the spatial detail of the PAN image into the MS image. This detail information can be observed visually. However, the spatial quality of the merged images can also be measured using the procedure proposed by Zhou et al. (1998), sketched in code after the two steps below:

  1. The PAN and merged images are filtered using a Laplacian filter;

  2. The correlation between the filtered merged image and the filtered PAN image is calculated. A high correlation value indicates that the spatial information of the PAN image has been injected into the MS image during the fusion process.
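A minimal sketch of this two-step procedure; SciPy's Laplacian filter is used here as one possible high-pass filter.

```python
import numpy as np
from scipy import ndimage

def spatial_quality(pan, fused_band):
    """Correlation between Laplacian-filtered PAN and fused band (Zhou et al., 1998)."""
    pan_hp = ndimage.laplace(pan.astype(float))
    fus_hp = ndimage.laplace(fused_band.astype(float))
    return np.corrcoef(pan_hp.ravel(), fus_hp.ravel())[0, 1]
```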

Guo et al. (2010) use the average gradient index (AG) for spatial quality evaluation. AG describes the variation of image texture and the detail information. Larger values of the AG index correspond to higher spatial resolution. The AG index of the fused image at each band can be computed by

\[
AG = \frac{1}{K L}\sum_{m=1}^{K}\sum_{n=1}^{L}\sqrt{\frac{\left(\frac{\partial F(m,n)}{\partial m}\right)^2 + \left(\frac{\partial F(m,n)}{\partial n}\right)^2}{2}} \tag{10}
\]

where K and L are the number of lines and columns of the fused image F.
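A hedged sketch of Equation (10) is given below, approximating the partial derivatives with finite differences (numpy.gradient).

```python
import numpy as np

def average_gradient(fused_band):
    """Average gradient (AG) of a single fused band."""
    gy, gx = np.gradient(fused_band.astype(float))   # derivatives along lines and columns
    return np.mean(np.sqrt((gx**2 + gy**2) / 2.0))
```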

Other methods for assessing fusion quality have been proposed (Liu et al., 2008; Chen & Varshney, 2007; Zheng & Qin, 2009; Zheng et al., 2008; Chen & Blum, 2009; Wang et al., 2008). Liu et al. (2008) proposed two metrics, one based on a modified structural similarity measure (FSSIM) scheme and one based on the local cross-correlation between the feature maps of the fused and input images. A similarity map with the fused image is generated for each input image; then, the larger value at each location is retained for the overall assessment. The second metric is implemented by computing the local cross-correlation between the phase congruency maps of the fused and input images. The index value is obtained by averaging the similarity or cross-correlation values in each predefined region. These metrics provide an objective quality measure in the absence of a reference image.

Chen & Varshney (2007) proposed a new quality metric for image fusion that does not require a reference image. It is based on local information given by a set of localized windows and by the difference in the frequency domain filtered by a contrast sensitivity function. The calculation is very simple and it is also applicable to different input modalities. The proposed metric was used to evaluate different fusion algorithms based on wavelets, averaging and the Laplacian pyramid. The fusion performance was tested under several circumstances, including the absence of noise, different window sizes, the presence of additive Gaussian noise, and six sets of test images. In all tests, the fusion method based on the wavelet transform outperformed the others.

Zheng & Qin (2009) developed a structural similarity quality metric for image fusion which treats complementary and redundant regions in the original images. This objective quality evaluation also takes into account the amount of important information in the source images that can be transferred into the fused image. Comparisons with other standard objective quality metrics show that the proposed metric correlates well with the subjective quality evaluation of the fused images, especially for input images where the complementary information and the redundant information can be well distinguished. They evaluated four image fusion methods based on arithmetic, PCA, and multi-resolution (MR) techniques using standard objective metrics. The results show that the proposed structural similarity quality metric agrees with the subjective evaluation and with three of the other standard structural metrics.

Chen & Blum (2009) propose a new perceptual image fusion quality assessment method motivated by human vision modeling. Generally, it is not possible to obtain an ideal image to be taken as a reference for fusion evaluation. Therefore, they measure the information present in the input images that is transferred to the fused image to improve the fused image quality. For this, they filter the input images using a specified contrast sensitivity function, compute the local contrast, calculate the contrast preservation, generate a saliency map, and calculate the global quality measure.

Zhang (2008) evaluated seven fusion quality metrics, and the results showed inconsistencies between the visual and quantitative image fusion quality analyses. Alparone et al. (2004) obtained similar results. This inconsistency shows that not all metrics produce reliable measurements for image fusion evaluation.


4. Applications for remote sensing imagery fusion

The availability of high spectral and spatial resolution images is desirable when undertaking identification studies in areas with a complex morphological structure, such as urban areas, heterogeneous forested areas or agricultural areas with a high degree of plot subdivision (González-Audícana et al., 2004). When this kind of image is not available, one can produce images with higher spatial resolution using image fusion techniques.

Therefore, in this section we present three case studies in remote sensing applications to illustrate the use of fusion techniques for monitoring remaining forest, identifying landslide scars, and classifying intra-urban land cover. The first two applications use images acquired by CBERS-2B (CBERS, 2011) that are freely distributed on the Internet (INPE, 2011).

4.1. Monitoring of remaining forest using CBERS-2B images

An application that is still underused by the remote sensing community is the monitoring of remaining forest, which has an important role in the ecological balance. However, traditional images of low and medium spatial resolution are not adequate for mapping forest fragments that occur along drainage channels and their boundaries.

Within this context, this study aims to evaluate a hybrid CBERS-2B image for mapping the remaining forest vegetation at Ibitinga, Brazil. This scene presents phytoplankton blooms in water areas and land use changes due to sugar cane plantations. CBERS-2B, launched in September 2007, carries a high resolution panchromatic camera (HRC - High Resolution Camera) with a spatial resolution of 2.7 m, a multispectral camera (CCD) with 20 m spatial resolution, and a Wide Field Imager (WFI) with 260 m spatial resolution (CBERS, 2011).

To identify forest fragments we generated a hybrid product with 2.5 m spatial resolution from CCD and HRC images acquired on 08/22/2008. The input images are shown in Figure 4. To evaluate the results from the fused CBERS-2B images we used a Quickbird (QB) image of 09/01/2008, resampled to 2.5 m spatial resolution. Table 2 presents the characteristics of the HRC, CCD and QB sensors.

The CBERS-2B images were pre-processed using restoration (Fonseca et al., 1993), noise filtering and orthorectification procedures. Afterwards, the images were fused and classified for mapping the remaining forest around the Ibitinga Reservoir. Figure 5 illustrates the hybrid CBERS-2B and QB images for comparison.

Characteristics | HRC (CBERS-2B) | CCD (CBERS-2B) | Quickbird
Spectral bands (µm) | 0.50 - 0.80 (Pan) | 0.51 - 0.73 (Pan); 0.45 - 0.52 (Blue); 0.52 - 0.59 (Green); 0.63 - 0.69 (Red); 0.77 - 0.89 (IR) | 0.45 - 0.90 (Pan); 0.45 - 0.52 (Blue); 0.52 - 0.60 (Green); 0.63 - 0.69 (Red); 0.76 - 0.90 (IR)
Spatial resolution | 2.7 x 2.7 m | 20 x 20 m | 0.61 m (Pan, nadir); 2.44 m (MS, nadir)
Swath width | 27 km (nadir) | 113 km (nadir) | 16.5 km (nadir); 20.8 km (off-nadir)
Quantization | 8 bits | 8 bits | 11 bits

Table 2.

Data characteristics.

Figure 4.

CBERS-2B images: (a) filtered CCD image to reduce striping effects; (b) high resolution HRC image.

The hybrid CBERS-2B product and the QB image were classified using the maximum-likelihood method (SPRING, 2011). A total of 67 samples were selected: 33 for "forest", 12 for "bare soil", 11 for "vegetation 1" and 11 for "vegetation 2". These classes were grouped to produce only two classes of interest ("forest" and "non-forest"), and the water body area was excluded from the thematic maps. In the thematic maps (Figure 6), green and beige colors represent forest and non-forest areas, respectively.

Figure 5.

Quickbird image (a) and hybrid image produced by merging CBERS-2B CCD and HRC images (b), with 2.5 m spatial resolution.

Table 3 shows the overall accuracy and Kappa values for both classifications. The visual and quantitative analyses show that the results are quite similar. However, we observed that in some regions the forest area was underestimated in the map produced from the CBERS-2B product. The classification results differ mainly in the linear features and in the target contours. Besides, the map obtained from the QB image shows isolated spots, particularly in areas of "high vegetation" (Figure 6a), which are not present in the map produced from CBERS-2B (Figure 6b).

Thematic map | Overall accuracy | Kappa value
Hybrid CBERS-2B | 0.93 | 0.83
QB | 0.93 | 0.84

Table 3.

Thematic map assessment.

Finally, the evaluation of the hybrid CBERS-2B products for mapping fragments of tree patches indicated that CBERS-2B images, after the pre-processing and fusion steps, have potential for applications in which QB images have been used.

Figure 6.

Thematic maps produced from (a) the QB image and (b) the CBERS-2B hybrid image. Forest and non-forest are represented by green and beige colours, respectively.

4.2. Image fusion techniques to identify landslide scars

A landslide is a fast mass movement responsible for shaping mountainous landscapes. These mass movements include a wide range of ground movements, such as rock falls, deep slope failures and shallow debris flows. Although the action of gravity is the primary cause of this phenomenon, other factors contribute to triggering landslides, such as lithology and structure, slope gradient and slope morphology, slope aspect, and land-use type (Dai & Lee, 2002).

Landslide mapping consists of the identification of erosion scars (loss of vegetation cover and soil horizons) on hillslopes, using aerial photographs and satellite images (Temesgen et al., 2001; Marcelino et al., 2003). Remote sensing is a fundamental tool to detect, classify and monitor landslides because it allows one to obtain historical data series at a relatively low cost. Besides, various image processing techniques can be used to enhance the features, thus facilitating their identification.

Considering this, we analyze two fusion methods for improving the interpretability of CBERS-2B images to identify the scars of a landslide that occurred in January 2010, after heavy rains, and killed more than 20 people (BBC, 2010). The region covers an area of Ilha Grande Island, Brazil (Figure 7). Hybrid images produced by image fusion techniques can be used to measure the extent of the landslide scar automatically or by a human interpreter.

The CCD and HRC images used in the methodology were acquired on February 21, 2010. The original CCD (RGB color composition: band 3 in red, band 4 in green and band 2 in blue) and HRC images are presented in Figure 8. Ilha Grande Island can be observed in the center of the image, marked with a rectangle. The CCD images cover an area between longitudes 44°38′ W and 43°47′ W, and latitudes 22°42′ S and 23°50′ S; the HRC image covers an area between longitudes 44°15′ W and 44°2′ W, and latitudes 22°57′ S and 23°14′ S.

Figure 7.

Landslide in Ilha Grande Brazil (BBC, 2010).

As the spatial resolution difference between CCD and HRC is large, we first resampled the CCD images to 10 m spatial resolution by applying the restoration procedure (Fonseca et al., 1993). The restoration filter takes into account the spatial response of each sensor to resample and restore the image in a single processing step. Afterwards, the restored image (10 m resolution) was resampled to 2.5 m by bilinear interpolation in order to match the pixel size of the HRC image.
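The sketch below covers only the final resampling step (the restoration filter of Fonseca et al., 1993 is sensor-specific and not reproduced here): the 10 m restored CCD band is brought to the 2.5 m HRC grid by bilinear interpolation. The zoom factor of 4 is the assumption for this particular resolution pair.

```python
from scipy import ndimage

def resample_to_hrc(ccd_band_10m, factor=4.0):
    # order=1 selects bilinear interpolation; 10 m / 2.5 m gives the factor of 4
    return ndimage.zoom(ccd_band_10m.astype(float), zoom=factor, order=1)
```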

Figure 8.

CBERS-2B images acquired on February 21, 2010: (a) Color composition of CBERS-2B CCD images (b) HRC image.

The resampled CCD images and the HRC images were registered using control points and an affine geometric transformation. Figure 9 presents a portion of the registered images, with the HRC image in gray levels and a strip of the corresponding region of the resampled CCD image superimposed, in order to demonstrate the quality of the registration procedure.
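A minimal sketch of estimating such an affine transformation from control-point pairs by least squares follows; the point format and the absence of outlier rejection are assumptions, not details from the original processing chain.

```python
import numpy as np

def fit_affine(src_pts, dst_pts):
    """src_pts, dst_pts: (n, 2) arrays of corresponding (column, row) control points."""
    a = np.hstack([src_pts, np.ones((src_pts.shape[0], 1))])     # [x, y, 1] design matrix
    coef_x, *_ = np.linalg.lstsq(a, dst_pts[:, 0], rcond=None)   # x' = a1*x + a2*y + a3
    coef_y, *_ = np.linalg.lstsq(a, dst_pts[:, 1], rcond=None)   # y' = b1*x + b2*y + b3
    return coef_x, coef_y
```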

Figure 9.

CBERS-2B image registration: strip of CCD color image (R4G3B2) superimposed on HRC image.

Next, the registered CBERS-2B images were merged using the IHS and PCA methods. A small portion around the landslide area of each original and fused image was used in the fusion evaluation procedure. The original and fused images are displayed in Figure 10, with the landslide area shown in the right-hand images.

To evaluate the detail information injected into the hybrid image, we calculated the correlation between the original PAN image and the luminance component of the fused images. The fused images were converted from RGB to the YIQ color space, in which the Y luminance is calculated as a linear combination of the red, green, and blue components (Foley et al., 1993). Figure 11 shows the HRC image and the luminance images of the fused images.
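A minimal sketch of this evaluation step is shown below; the NTSC/YIQ luminance weights are standard, while the array layout is an assumption.

```python
import numpy as np

def luminance_correlation(fused_rgb, pan):
    """fused_rgb: (H, W, 3) array; pan: (H, W) array on the same grid."""
    r, g, b = fused_rgb[..., 0], fused_rgb[..., 1], fused_rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b     # Y component of the YIQ model
    return np.corrcoef(y.ravel(), pan.astype(float).ravel())[0, 1]
```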

The correlation values obtained for the IHS and PCA fusion methods were 0.9982 and 0.9167, respectively. This indicates that the fused image produced by the IHS method is more similar to the PAN image with respect to the detail information. By visual analysis (Figure 11), we observe that the appearance of the IHS luminance image is quite similar to that of the PAN image.

To quantitatively evaluate the fusion results, the UIQI metric (Wang & Bovik, 2002) was calculated for each band; the values are presented in Table 4. The values indicate that the mean UIQI is almost the same for both the PCA and IHS methods. Band 2 showed a better result for PCA, while the UIQI values for Band 3 and Band 4 were higher for IHS than for PCA. Despite these results, we visually observed significant color distortion in the landslide scar area in the IHS hybrid image. This indicates that the PCA hybrid image is more adequate for analyzing the landslide in this case.

Figure 10.

Fused images: (a) original CCD image after restoration and resampling to 2.5 m pixel size; (b) IHS fusion; and (c) PCA fusion.

Fusion | UIQI (Band 2) | UIQI (Band 3) | UIQI (Band 4) | Mean UIQI
IHS | 0.59 | 0.89 | 0.85 | 0.77
PCA | 0.77 | 0.84 | 0.80 | 0.78

Table 4.

UIQI index obtained for the fused images.

Figure 11.

HRC image (a) and luminance images obtained from the IHS (b) and PCA (c) fused images.

4.3. Intra-urban land cover classification from high-resolution images

Intra-urban land cover classification of high spatial resolution images provides a useful set of information for urban management and planning (Meinel et al., 2001). With this type of data, it is possible to generate information for many applications, such as analysis of the urban micro-climate and urban greening maps, amongst others. The use of automatic methods to classify high spatial resolution images faces the challenge of processing images with wide intra- and inter-class spectral variability.

This section presents a case study of intra-urban land cover classification of Quickbird imagery for the city of São José dos Campos - SP, in southeastern Brazil, based on the research of Almeida et al. (2007) and Pinho et al. (2008). The total and urban areas of the São José dos Campos municipality cover about 1,099.60 and 298.99 square kilometers, respectively. The selected region is in the southern part of the urban area and contains a great variety of intra-urban land cover classes.

The QB images (Ortho-ready Standard 2A) used in this experiment consist of a panchromatic image (0.6 m) and a multispectral image (2.4 m) with 4 bands (blue, green, red, and infrared) (Table 2). The images, acquired on May 17, 2004, have an off-nadir incidence angle of 7.0° and a radiometric resolution of 11 bits. Figure 12 shows the panchromatic and multispectral images.

The hybrid images were segmented before the classification process. The selected segmentation approach is based on region growing and a multi-resolution procedure, in which the similarity measure depends on scale, since the segmentation parameters are weighted by the object size (Baatz, 2000). The user defined four segmentation parameters: scale, weight for each spectral band, weight for color and shape, and weight for smoothness and compactness. Figure 13 shows the segmentation results for three different processing scales.

The fusion method used here is based on PCA, since it has shown good results in urban analysis with high resolution images (Novack et al., 2008). The processing resulted in four images with spectral information similar to that of the original bands (blue, green, red and infrared) and a spatial resolution equal to that of the panchromatic image (0.6 m). Figure 14 shows a small region of the panchromatic, multispectral, and fused images.

The classification phase was carried out using the decision tree method. The following attributes were selected in the training phase: brightness, hue channel mean, band means, membership in the super-object Block, maximum value in band 1, NDVI (Normalized Difference Vegetation Index), ratio between bands 3 and 1, ratio between band 2 and the mean of all others, and mean difference for band 1. Figure 15 shows the original multispectral image and the resulting classification.

Figure 12.

Quickbird satellite scene acquired on 05/17/2004: (a) panchromatic image (0.6 m), and (b) multispectral image (2.4 m), (band 3 in red, band 2 in green and band 1 in blue).

Figure 13.

Segmentation results for three different scales of processing.

Figure 14.

A small region of the (a) panchromatic image (0.6 m), (b) multispectral image (2.4 m), and (c) fused image (0.6 m).

Figure 15.

Intra-urban classification: (a) original color image, and (b) thematic map.

The visual analysis of the classification indicates confusion between the Ceramic Roof and Bare Soil classes, while the other classes are fairly well separated. Figure 16 illustrates the confusion between these classes in a small region. Quantitative classification accuracy assessment using the error matrix indicates a good classification, with a Kappa value of 0.57. The conditional producer's Kappa indicates lower values for the Ceramic Roof and Bare Soil classes, as expected from the visual analysis.

Figure 16.

Portion of the (a) true color image, and (b) thematic map showing the confusion between Ceramic Roof and Bare Soil classes.


5. Conclusion

Due to advances in satellite technology, a great amount of image data has become available and has been widely used in different remote sensing applications. Thus, image data fusion has become a valuable tool in remote sensing for integrating the best characteristics of each sensor dataset involved in the processing.

To provide guidelines about the use of fusion techniques, we presented a brief review of image fusion techniques and fusion assessment methods, illustrated with three case studies in remote sensing applications. Since there are many fusion methods proposed in the literature, only a few examples, mainly those applied to merging satellite images, were discussed in this work.

Indeed, there is no single method that is adequate for every dataset and application. The fusion quality often depends upon the user's experience, the fusion method, and the dataset being fused. The objective of a fusion process is to generate a hybrid image with the highest possible spatial information content while still preserving good spectral information quality. Unfortunately, this task is not easy. One solution proposed in the literature is to combine different fusion methods in a single framework.

Despite the great number of fusion possibilities, the most traditional methods, such as PCA and IHS, are still widely used in remote sensing applications. This can be explained by the fact that most image processing systems have them implemented, and in many applications they have provided good results. Therefore, even with many fusion options available, it may be worth testing and evaluating some of them for the application of interest. Besides, the assistance of an interpreter in the fusion process is fundamental to guarantee the good quality of the final product.


Acknowledgments

The authors would like to thank Imagem Soluções Inteligência Geográfica (www.img.com.br) for providing Quickbird images, and INPE for supporting our work.

References

  1. Almeida, C. M., Souza, I. M. E., Alves, C. D., Pinho, C. M. D., Pereira, M. N. & Feitosa, R. Q. (2007). Multilevel Object-Oriented Classification of Quickbird Images for Urban Population Estimates. In: 15th ACM International Symposium on Advances in Geographic Information Systems (ACM GIS 2007), Seattle, 2007.
  2. Alparone, L., Aiazzi, B., Baronti, A., Garzelli, A. & Nencini, P. (2004). A Global Quality Measurement of Pan-Sharpened Multispectral Imagery. IEEE Geoscience and Remote Sensing Letters, 1(4), (October 2004), 313-317.
  3. Alparone, L., Wald, L., Chanussot, J., Thomas, C., Gamba, P. & Bruce, L. M. (2007). Comparison of Pansharpening Algorithms: Outcome of the 2006 GRS-S Data-Fusion Contest. IEEE Transactions on Geoscience and Remote Sensing, 45(10), (October 2007), 3012-3021.
  4. Amolins, K., Zhang, Y. & Dare, P. (2007). Wavelet based image fusion techniques - An introduction, review and comparison. ISPRS Journal of Photogrammetry and Remote Sensing, 62(4), (September 2007), 249-263.
  5. Baatz, A. (2000). Multiresolution Segmentation - an optimization approach for high quality multi-scale image segmentation. Angewandte Geographische Informationsverarbeitung XII, Wichmann-Verlag, Heidelberg, 12-23.
  6. BBC (2010, January 2). Deaths from Brazil Ilha Grande resort mudslide reach 26. In: BBC News. Accessed February 25, 2011, available from: news.bbc.co.uk/2/hi/8438096.stm
  7. Cao, W., Li, B. & Zhang, Y. (2003). A remote sensing image fusion method based on PCA transform and wavelet packet transform. Proceedings of the 2003 International Conference on Neural Networks and Signal Processing, 2, 976-981, Toulouse, France, 2003.
  8. Carper, W., Lillesand, T. & Kiefer, R. (1990). The use of intensity-hue-saturation transformations for merging SPOT panchromatic and multispectral image data. Photogrammetric Engineering and Remote Sensing, 56(4), (April 1990), 459-467.
  9. CBERS (2011). CBERS - China-Brazil Earth Resources Satellite. Accessed April 5, 2011, available from: http://www.cbers.inpe.br
  10. Chai, Y., Li, H. F. & Qu, J. F. (2010). Image fusion scheme using a novel dual-channel PCNN in lifting stationary wavelet domain. Optics Communications, 283(19), 3591-3602.
  11. Chavez, P. S. & Kwakteng, A. Y. (1989). Extracting spectral contrast in Landsat Thematic Mapper image data using selective principal component analysis. Photogrammetric Engineering and Remote Sensing, 55(3), (March 1989), 339-348.
  12. Chavez, P. S., Sides, S. C. & Anderson, J. A. (1991). Comparison of three different methods to merge multiresolution and multispectral data: Landsat TM and SPOT panchromatic. Photogrammetric Engineering and Remote Sensing, 57(3), (March 1991), 295-303.
  13. Chen, T., Zhang, J. & Zhang, Y. (2005). Remote sensing image fusion based on ridgelet transform. Proceedings of the IEEE International Geoscience and Remote Sensing Symposium, 2, 1150-1153, 2005.
  14. Chen, H. & Varshney, P. K. (2007). A human perception inspired quality metric for image fusion based on regional information. Information Fusion, 8(2), (April 2007), 193-207.
  15. Chen, Y. & Blum, R. S. (2009). A new automated quality assessment algorithm for image fusion. Image and Vision Computing, 27(10), 1421-1432.
  16. Chibani, Y. & Houacine, A. (2000). On the use of the redundant wavelet transform for multisensor image fusion. Proceedings of the 7th IEEE International Conference on Electronics, Circuits and Systems, 1, 442-445.
  17. Chibani, Y. & Houacine, A. (2003). Redundant versus orthogonal wavelet decomposition for multisensor image fusion. Pattern Recognition, 36(4), 879-887.
  18. Choi, M., Kim, R. Y., Nam, M. R. & Kim, H. O. (2005). Fusion of multispectral and panchromatic satellite images using the curvelet transform. IEEE Geoscience and Remote Sensing Letters, 2(2), (April 2005), 136-140.
  19. Choi, M. (2006). A new intensity-hue-saturation fusion approach to image fusion with a tradeoff parameter. IEEE Transactions on Geoscience and Remote Sensing, 44(6), (June 2006), 1672-1682.
  20. Câmara, G., Souza, R. C. M., Freitas, U. M. & Garrido, J. (1996). SPRING: Integrating remote sensing and GIS by object-oriented data modeling. Computers & Graphics, 20(3), 395-403.
  21. Dai, F. C. & Lee, C. F. (2002). Landslide characteristics and slope instability modeling using GIS, Lantau Island, Hong Kong. Geomorphology, 42(3-4), 213-228.
  22. ERDAS Inc. (2011). ERDAS - The Earth to Business Company. Available from: http://www.erdas.com/Resources/ERDASFieldGuide.aspx
  23. Foley, J. D., van Dam, A., Feiner, S. K., Hughes, J. F. & Phillips, R. L. (1993). Introduction to Computer Graphics. Addison-Wesley, USA.
  24. Fonseca, L. M. G., Prasad, G. S. S. D. & Mascarenhas, N. D. A. (1993). Combined interpolation-restoration of Landsat images through FIR filter design techniques. International Journal of Remote Sensing, 14(13), 2547-2561.
  25. Fonseca, L. M. G., Costa, M. H. M., Korting, T. S., Castejon, E. & Silva, F. C. (2008). Multitemporal image registration based on multiresolution decomposition. Revista Brasileira de Cartografia, 60(3), (October 2008), 271-286.
  26. Garguet-Duport, B., Girel, J., Chassery, J. M. & Pautou, G. (1996). The use of multiresolution analysis and wavelets transform for merging SPOT panchromatic and multispectral image data. Photogrammetric Engineering and Remote Sensing, 62(9), (September 1996), 1057-1066.
  27. Garzelli, A. & Nencini, F. (2005). Interband structure modeling for pan-sharpening of very high-resolution multispectral images. Information Fusion, 6(3), (September 2005), 213-224.
  28. González-Audícana, M., Saleta, J., Catalan, R. & Garcia, R. (2004). Fusion of multispectral and panchromatic images using improved IHS and PCA mergers based on wavelet decomposition. IEEE Transactions on Geoscience and Remote Sensing, 42(6), (June 2004), 1291-1299.
  29. González-Audícana, M., Otazu, X., Fors, O. & Seco, A. (2005). Comparison between Mallat's and the 'à trous' discrete wavelet transform based algorithms for the fusion of multispectral and panchromatic images. International Journal of Remote Sensing, 26(3), 595-614.
  30. Guo, Q., Chen, S., Leung, H. & Liu, S. (2010). Covariance intersection based image fusion technique with application to pansharpening in remote sensing. Information Sciences, 180(18), 3434-3443.
  31. INPE (2011). INPE Image Catalog. Accessed February 25, 2011, available from: http://www.dgi.inpe.br/CDSR/
  32. Ioannidou, S. & Karathanassi, V. (2007). Investigation of the Dual-Tree Complex and Shift-Invariant Discrete Wavelet Transforms on Quickbird Image Fusion. IEEE Geoscience and Remote Sensing Letters, 4(4), (January 2007), 166-170.
  33. Jing, L. & Cheng, Q. (2009). Two improvement schemes of PAN modulation fusion methods for spectral distortion minimization. International Journal of Remote Sensing, 30(8), 2119-2131.
  34. Laporterie-Dejean, F., de Boissezon, H., Flouzat, G. & Lefevre-Fonollosa, M. J. (2005). Thematic and statistical evaluations of five panchromatic/multispectral fusion methods on simulated PLEIADES-HR images. Information Fusion, 6(3), (September 2005), 193-212.
  35. Li, S., Kwok, J. T. & Wang, Y. (2002). Using the discrete wavelet frame transform to merge Landsat TM and SPOT panchromatic images. Information Fusion, 3(1), (March 2002), 17-23.
  36. Li, Z., Jing, Z., Yang, X. & Sun, S. (2005). Color transfer based remote sensing image fusion using non-separable wavelet frame transform. Pattern Recognition Letters, 26(13), 2006-2014.
  37. Lillo-Saavedra, M., Gonzalo, C., Arquero, A. & Martinez, E. (2005). Fusion of multispectral and panchromatic satellite sensor imagery based on tailored filtering in the Fourier domain. International Journal of Remote Sensing, 26(6), 1263-1268.
  38. Lillo-Saavedra, M. & Gonzalo, C. (2006). Spectral or spatial quality for fused satellite imagery: a trade-off solution using the wavelet à trous algorithm. International Journal of Remote Sensing, 27(7), 1453-1464.
  39. Ling, Y., Ehlers, M., Usery, E. L. & Madden, M. (2007). FFT-enhanced IHS transform method for fusing high-resolution satellite images. ISPRS Journal of Photogrammetry and Remote Sensing, 61(6), (February 2007), 381-392.
  40. Ling, Y., Ehlers, M., Usery, E. L. & Madden, M. (2008). Effects of spatial resolution ratio in image fusion. International Journal of Remote Sensing, 29(7), 2157-2167.
  41. Liu, Z., Forsyth, D. S. & Laganiere, R. (2008). A feature-based metric for the quantitative evaluation of pixel-level image fusion. Computer Vision and Image Understanding, 109(1), (January 2008), 56-68.
  42. Marcelino, E. V., Ventura, F. N., Formaggio, A. R., Fonseca, L. M. G. & Rosa, A. N. C. S. (2003). Evaluation of image fusion techniques for the identification of landslide scars using satellite data. Geografia, 28(3), 431-445.
  43. Meinel, G., Neubert, M. & Reder, J. (2001). The potential use of very high resolution satellite data for urban areas - First experiences with IKONOS data, their classification and application in urban planning and environmental monitoring. In: Jürgens, C. (ed.): Remote sensing of urban areas. Regensburger Geographische Schriften 35, 196-205.
  44. Miao, Q., Shi, C., Xu, P., Yang, M. & Shi, Y. (2011). A novel algorithm of image fusion using shearlets. Optics Communications, 284(6), 1540-1547.
  45. Nikolakopoulos, K. G. (2005). Comparison of six fusion techniques for SPOT5 data. Proceedings of the IEEE International Geoscience and Remote Sensing Symposium, 4, 2811-2814.
  46. Novack, T., Fonseca, L. M. G. & Kux, H. J. (2008). Quantitative comparison of segmentation results from IKONOS images sharpened by different fusion and interpolation techniques. In: GEOBIA - Geo-Object Based Image Analysis Conference, Calgary, 2008.
  47. Pajares, G. & de la Cruz, J. M. (2004). A wavelet-based image fusion tutorial. Pattern Recognition, 37(9), 1855-1872.
  48. Pinho, C. M. D., Silva, F. C., Fonseca, L. M. G. & Monteiro, A. M. V. (2008). Urban Land Cover Classification from High-Resolution Images Using the C4.5 Algorithm. In: XXI Congress of the International Society for Photogrammetry and Remote Sensing, Beijing, Vol. XXXVII, Part B7, 695-699.
  49. Pohl, C. & Genderen, J. L. V. (1998). Multisensor image fusion in remote sensing: concepts, methods and applications. International Journal of Remote Sensing, 19(5), 823-854.
  50. Rahman, M. M. & Csaplovics, E. (2007). Examination of image fusion using synthetic variable ratio (SVR) technique. International Journal of Remote Sensing, 28(15), 3413-3424.
  51. Research Systems, Inc. (2011). ENVI - Environment for Visualizing Images. Accessed February 25, 2011, available from: www.ittvis.com/ENVI
  52. Schetselaar, E. M. (1998). Fusion by the IHS transform: should we use cylindrical or spherical coordinates? International Journal of Remote Sensing, 19(4), 759-765.
  53. Schowengerdt, R. A. (2007). Remote Sensing: Models and Methods for Image Processing (3rd edition). Academic Press, San Diego, USA.
  54. Shi, W., Zhu, C., Tian, Y. & Nichol, J. (2005). Wavelet-based image fusion and quality assessment. International Journal of Applied Earth Observation and Geoinformation, 6, 241-251.
  55. Silva, F. C., Dutra, L. V., Fonseca, L. M. G. & Korting, T. S. (2007). Urban Remote Sensing Image Enhancement Using a Generalized IHS Fusion Technique. Proceedings of the Symposium on Radio Wave Propagation and Remote Sensing, Rio de Janeiro, Brazil, 2007.
  56. Simone, G., Farina, A., Morabito, F. C., Serpico, S. B. & Bruzzone, L. (2002). Image fusion techniques for remote sensing applications. Information Fusion, 3(1), 3-15.
  57. Song, H., Yu, S., Song, L. & Yang, X. (2007). Fusion of multispectral and panchromatic satellite images based on contourlet transform and local average gradient. Optical Engineering, 46(2), (February 2007), 020502. doi:10.1117/1.2437125
  58. SPRING (2011). Georeferencing Information Processing System (SPRING). Accessed March 25, 2011, available from: www.dpi.inpe.br/spring/english/index.html
  59. Temesgen, B., Mohammed, M. U. & Korme, T. (2001). Natural hazard assessment using GIS and remote sensing methods, with particular reference to the landslide in the Wondogenet area, Ethiopia. Physics and Chemistry of the Earth, Part C, 26(9), 665-675.
  60. TerraLib (2011). GIS Classes and Functions Libraries (TerraLib). Accessed February 25, 2011, available from: www.dpi.inpe.br/terralib
  61. Tu, T. M., Su, S. C., Shyu, H. C. & Huang, P. S. (2001a). A new look at IHS-like image fusion methods. Information Fusion, 2(3), 177-186.
  62. Tu, T. M., Su, S. C., Shyu, H. C. & Huang, P. S. (2001b). Efficient intensity-hue-saturation-based image fusion with saturation compensation. Optical Engineering, 40(5), 720. doi:10.1117/1.1355956
  63. Tu, T. M., Huang, P. S., Hung, C. L. & Chang, C. P. (2004). A fast intensity-hue-saturation fusion technique with spectral adjustment for IKONOS imagery. IEEE Geoscience and Remote Sensing Letters, 1(4), (October 2004), 309-312.
  64. Tu, T. M., Cheng, W. C., Chang, C. P., Huang, P. S. & Chang, J. C. (2007). Best tradeoff for high-resolution image fusion to preserve spatial details and minimize color distortion. IEEE Geoscience and Remote Sensing Letters, 4(2), (April 2007), 302-306.
  65. Ventura, F. N., Fonseca, L. M. G. & Santa Rosa, A. N. C. (2002). Remotely sensed image fusion using the wavelet transform. Proceedings of the International Symposium on Remote Sensing of Environment (ISRSE), Buenos Aires, 8-12 April 2002, 4 p.
  66. Wald, L. (2000). Quality of high resolution synthesized images: is there a simple criterion? Proceedings of the International Conference on Fusion of Earth Data, 26-28.
  67. Wald, L. (2002). Data Fusion: Definitions and Architectures - Fusion of Images of Different Spatial Resolutions. Ecole des Mines de Paris, ISBN 2-911762-38-X, Paris, France.
  68. Wang, Q., Shen, Y. & Jin, J. (2008). Performance evaluation of image fusion techniques. In: Image Fusion, T. Stathaki (Ed.), Academic Press, Oxford, UK.
  69. Wang, Z. & Bovik, A. C. (2002). A universal image quality index. IEEE Signal Processing Letters, 9(3), (March 2002), 81-84.
  70. Wei, Z. G., Yuan, J. H. & Cai, Y. L. (1999). A picture quality evaluation method based on human perception. Acta Electronica Sinica, 27(4), 79-82.
  71. Yang, X. H., Jing, J. L., Liu, G., Hua, L. Z. & Ma, D. W. (2007). Fusion of multi-spectral and panchromatic images using fuzzy rule. Communications in Nonlinear Science and Numerical Simulation, 12(7), 1334-1350.
  72. Yang, X. H. & Jiao, L. C. (2008). Fusion Algorithm for Remote Sensing Images Based on Nonsubsampled Contourlet Transform. Acta Automatica Sinica, 34(3), 274-282.
  73. Zhang, Y. (1999). A new merging method and its spectral and spatial effects. International Journal of Remote Sensing, 20(10), 2003-2014.
  74. Zhang, Y. (2002). Problems in the fusion of commercial high-resolution satellite images, Landsat 7 images, and initial solutions. Proceedings of the Symposium on Geospatial Theory, Processing and Applications, Vol. 34, Part 4, Ottawa, Canada, 2002.
  75. Zhang, Y. (2004). Understanding image fusion. Photogrammetric Engineering and Remote Sensing, 70(6), (June 2004), 657-661.
  76. Zhang, Y. (2008). Methods for image fusion quality assessment - a review, comparison and analysis. The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Vol. XXXVII, Part B7, Beijing, 1101-1109.
  77. Zheng, Y. & Qin, Z. (2009). Objective Image Fusion Quality Evaluation Using Structural Similarity. Tsinghua Science & Technology, 14(6), (December 2009), 703-709.
  78. Zhou, J., Civco, D. L. & Silander, J. A. (1998). A wavelet transform method to merge Landsat TM and SPOT panchromatic data. International Journal of Remote Sensing, 19(4), 743-757.
