
High-Resolution and Hyperspectral Data Fusion for Classification

Written By

Hina Pande and Poonam S. Tiwari

Submitted: 24 February 2012 Published: 20 November 2013

DOI: 10.5772/56944

From the Edited Volume

New Advances in Image Fusion

Edited by Qiguang Miao


1. Introduction

Resolution can be defined as the fineness with which an instrument can distinguish between different values of some measured attribute. In the context of remotely sensed data, reference is made to four types of resolution: spatial, spectral, radiometric and temporal. Spatial resolution refers to the area of the smallest resolvable element (e.g. pixel), while spectral resolution refers to the smallest wavelength interval that can be detected in the spectral measurement (Lillesand and Kiefer, 2000). Technically these two types of resolution are inter-related, so that one can be improved at the expense of the other. The information content of an image is based on the spatial and spectral resolution of the imaging system. To exploit the benefits of enhanced spatial and spectral capability together, fusion techniques were developed to merge complementary information. Fusion of multispectral and panchromatic images has been performed many times in the past by researchers for different purposes, e.g. feature extraction and 3D modelling (building extraction etc.). “Image fusion is the combination of two or more images to form a new image by using a certain algorithm” (Pohl and Van Genderen, 1998).

“The “hyper” in hyperspectral means “over”, as in “too many”, and refers to the large number of measured wavelength bands” (Shippert, 2008). Hyperspectral imaging in remote sensing was a major breakthrough that opened avenues of research in various fields, such as mineralogy mapping for oil exploration, environmental geology, vegetation sciences, hydrology, tsunami relief and biomass estimation, owing to the ample spectral information contained in hundreds of co-registered bands.

The fusion of a hyperspectral image with a multispectral image results in a new image that has the spatial resolution of the high-resolution image while preserving the spectral characteristics of the hyperspectral image. Several algorithms are used specifically to fuse and classify hyperspectral data with multispectral data, including transformation-based methods (e.g. Intensity, Hue, Saturation), wavelet decomposition, neural networks, knowledge-based image fusion, the Colour Normalised Transform (CNT), the Principal Component Transform (PCT) and the Gram-Schmidt Transform (Ali Darvishi et al., 2005). Combining hyperspectral and multispectral images can enhance the information content of the image and thus aid geospatial data extraction. Fusion of multi-sensor image data has become a widely used procedure for complementing and enhancing information content. The present work primarily focuses on the qualitative assessment of the fused image in terms of spatial and spectral improvement.

The main objective of the present work is the analysis of high-resolution and hyperspectral data fusion using three different approaches (Gram-Schmidt, Principal Component and Colour Normalised Transform), analysing the spectral variation due to fusion and its effect on classification and feature extraction.

1.1. Theoretical concepts: Different fusion algorithms

1.1.1. IHS (Intensity Hue Saturation)

According to Chen et al., 2003, in IHS transformation image fusion the Intensity (I), the spatial component, and the Hue (H) and Saturation (S), the spectral components, are generated from the RGB image. The Intensity (I) component is then substituted by the high-resolution panchromatic image to render a new image in RGB, which is referred to as the fused image.
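As a minimal sketch, this substitution can be written with numpy as below. It uses the additive "fast IHS" formulation, in which replacing the intensity (the band mean) with a statistics-matched pan band is equivalent to adding the pan-intensity difference to every band; the array names and the matching step are illustrative assumptions, not the exact procedure of Chen et al.

```python
# Minimal IHS-substitution sketch (assumed arrays: `rgb` is a rows x cols x 3
# float image resampled to the pan grid, `pan` the co-registered pan band).
import numpy as np

def ihs_fusion(rgb: np.ndarray, pan: np.ndarray) -> np.ndarray:
    """Replace the intensity component of an RGB image with the pan band."""
    intensity = rgb.mean(axis=2)                  # I of the linear IHS model
    # Match pan to the intensity statistics to limit spectral distortion.
    pan_matched = (pan - pan.mean()) / pan.std() * intensity.std() + intensity.mean()
    # Fast IHS: substituting I is equivalent to adding (pan - I) to each band.
    return rgb + (pan_matched - intensity)[..., np.newaxis]
```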

1.1.2. Colour Normalised Transform

The Colour Normalised Transform is another fusion technique that uses a mathematical combination of the colour image and a high-resolution image. It is also known as the Energy Subdivision Transform, which employs a high-resolution image to sharpen a low-resolution image, and is also called the Brovey Transform. The Brovey Transform uses a formula that normalises the multispectral bands used for an RGB (Red Green Blue) display and multiplies the result by the high-resolution data to add the intensity or brightness component to the image. The Brovey Transform is used to increase the contrast and intensity in the low and high ends of the histogram and to produce visually appealing images (Sanjeevi, 2006).

The Brovey Transform works as:

DNf = A (w1 · DNa + w2 · DNb) + B

DNf = A · DNa · DNb + B

where A and B are scaling and additive factors respectively, and w1 and w2 are weighting parameters. DNf, DNa and DNb refer to the digital numbers of the final fused image and the input images a and b respectively.
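A minimal numpy sketch of the multiplicative (Brovey) form is shown below; the array names and the small epsilon guarding against division by zero are assumptions for illustration.

```python
# Minimal Brovey / colour-normalised sketch (assumed arrays: `ms` is a
# rows x cols x bands float image resampled to the pan grid, `pan` the
# co-registered high-resolution band).
import numpy as np

def brovey_fusion(ms: np.ndarray, pan: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """Normalise each band by the band sum, then modulate by the pan band."""
    band_sum = ms.sum(axis=2) + eps               # eps avoids division by zero
    ratio = ms / band_sum[..., np.newaxis]        # per-pixel band proportions
    return ratio * pan[..., np.newaxis]           # inject high-resolution intensity
```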

1.1.3. Wavelets-Transform image fusion

According to Gomez et al., 2001, the wavelet concept can be used to fuse two spectral levels of a hyperspectral image with one band of a multispectral image. Wavelet-based image fusion involves two processing steps. The first step extracts the details or structures: the extracted structures are decomposed into three wavelet coefficients according to direction, i.e. vertical, horizontal and diagonal. In combining the high-resolution image with a low-resolution image, the high-resolution image is first reference-stretched three times, each time to match one of the low-resolution band histograms. The second step introduces these structures/details into each low-resolution image band through the inverse wavelet transform. The spectral content of the low-resolution band image is thus preserved, because only the scale structures between the two different resolution images are added (Sanjeevi, 2008).
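A minimal single-level sketch of this detail-injection idea, using the PyWavelets library, is given below. It simplifies the procedure described above: one decomposition level instead of three, and a mean/standard-deviation match in place of the reference stretch; all names are illustrative.

```python
# Minimal wavelet detail-injection sketch for one band (assumed arrays:
# `ms_band` and `pan` are co-registered floats of identical even-sized shape).
import numpy as np
import pywt

def wavelet_fusion(ms_band: np.ndarray, pan: np.ndarray, wavelet: str = "db2") -> np.ndarray:
    # Match pan to the band statistics so the injected details are consistent.
    pan_m = (pan - pan.mean()) / pan.std() * ms_band.std() + ms_band.mean()
    approx, _ = pywt.dwt2(ms_band, wavelet)       # keep low-frequency content
    _, details = pywt.dwt2(pan_m, wavelet)        # (horizontal, vertical, diagonal)
    # Inverse transform: band approximation + pan structures.
    return pywt.idwt2((approx, details), wavelet)
```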

1.1.4. Gram-Schmidt Transform

Aiazzi et al., 2006 described the Gram-Schmidt Transform (GST) as another fusion algorithm used to fuse a multispectral image with a panchromatic image. The Gram-Schmidt Transform was invented by Laben and Brower in 1998 and patented by Eastman Kodak. The algorithm works in two modes: "mode 1" and "mode 2". "Mode 1" takes the pixel average of the multispectral (MS) bands as the simulated low-resolution panchromatic band. The spatial quality in "mode 1" is better, but the result suffers from spectral distortions due to the radiometric difference between the average of the MS bands and the panchromatic image. In "mode 2" the spectral distortions are absent, but the result suffers from poor enhancement and low sharpness.
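A compact numpy sketch of "mode 1" sharpening is given below: the simulated pan is the band average, the bands are orthogonalised in Gram-Schmidt fashion, the first component is swapped for the statistics-matched pan, and the transform is inverted. This is an illustrative reading of the algorithm, not the patented Kodak implementation.

```python
# Minimal mode-1 Gram-Schmidt sharpening sketch (assumed array: `ms` is
# bands x rows x cols, resampled to the pan grid; `pan` the pan band).
import numpy as np

def gram_schmidt_fusion(ms: np.ndarray, pan: np.ndarray) -> np.ndarray:
    bands, rows, cols = ms.shape
    X = ms.reshape(bands, -1).astype(float)
    sim_pan = X.mean(axis=0)                       # mode 1: band-average pan
    vectors = [sim_pan] + [X[k] for k in range(bands)]
    gs, coeffs, means = [], [], []
    for v in vectors:                              # forward Gram-Schmidt
        mu, resid, cs = v.mean(), v - v.mean(), []
        for g in gs:
            c = resid.dot(g) / (g.dot(g) + 1e-12)  # projection coefficient
            resid, cs = resid - c * g, cs + [c]
        gs.append(resid); coeffs.append(cs); means.append(mu)
    # Substitute GS1 with the real pan, matched to GS1 statistics.
    p = pan.reshape(-1).astype(float)
    gs[0] = (p - p.mean()) / p.std() * gs[0].std()
    out = np.empty_like(X)
    for k in range(bands):                         # inverse transform per band
        v = gs[k + 1] + means[k + 1]
        for c, g in zip(coeffs[k + 1], gs[:k + 1]):
            v = v + c * g
        out[k] = v
    return out.reshape(bands, rows, cols)
```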

1.1.5. Principal Component Transform

The Principal Component Transform (PCT) is used to enhance a low-resolution image using high-resolution data. PC band 1 is replaced with the high-resolution band, which is scaled to match PC band 1. Hence, there is almost no distortion of the spectral information in the fused output image.

The mathematical operation applies a linear transformation based on an image-specific matrix, as follows:

PC = Wpc * DN

where Wpc = transformation matrix

PC = transformed data (uncorrelated)

DN = original data
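A minimal sketch of this PC substitution with numpy is given below; Wpc is taken as the eigenvector matrix of the band covariance, and the array names are illustrative.

```python
# Minimal principal-component substitution sketch (assumed array: `ms` is
# bands x rows x cols, resampled to the pan grid; `pan` the pan band).
import numpy as np

def pc_fusion(ms: np.ndarray, pan: np.ndarray) -> np.ndarray:
    bands, rows, cols = ms.shape
    X = ms.reshape(bands, -1).astype(float)
    mu = X.mean(axis=1, keepdims=True)
    # Image-specific transformation matrix Wpc: eigenvectors of the covariance.
    eigvals, eigvecs = np.linalg.eigh(np.cov(X))
    W = eigvecs[:, np.argsort(eigvals)[::-1]].T    # rows of W: PC1 first
    pcs = W @ (X - mu)                             # PC = Wpc * DN
    # Scale pan to PC1 statistics, then substitute it for PC1.
    p = pan.reshape(-1).astype(float)
    pcs[0] = (p - p.mean()) / p.std() * pcs[0].std() + pcs[0].mean()
    fused = W.T @ pcs + mu                         # W orthonormal: inverse = W.T
    return fused.reshape(bands, rows, cols)
```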


2. Literature review

Pohl and Van Genderen, 1998 describe image fusion as a tool to combine multisource imagery using advanced image processing techniques. According to them, the main objectives of image fusion are to sharpen images, improve geometric corrections, enhance features that are not visible in either of the images, replace defective data, complement data sets for improved classification, detect changes using multitemporal data, and substitute missing information in one image with signals from another source image.

According to Kasetkasem, Arora and Varshney (2004), merging methods are often divided into two categories: the first simultaneously takes into account all bands in the merging process (e.g. the Hue-Saturation-Value, Principal Component and Gram-Schmidt transformation techniques); the second deals separately with the spatial information and each spectral band (e.g. the Brovey and High-Pass-Filter transformation techniques).

Ali Darvishi et al., 2005 analysed the capability of two algorithms, the Gram-Schmidt and Principal Component transforms, in the spectral domain. For this purpose two datasets were used (Hyperion/QuickBird-MS and Hyperion/SPOT-Pan). The main objective of the study was the investigation of the two algorithms in the spectral domain and the statistical comparison of the fused images with the raw Hyperion. The study area was Central Sulawesi in Indonesia. The results of the fusion show that GST and PCT have almost similar ability in preserving the statistics of the raw Hyperion. The correlation analysis shows poor correlation between the raw Hyperion and the fused image bands. The results also show that bands located in the high-frequency region of the spectrum preserve the statistics better than bands located in the low-frequency region. Different statistical parameters, such as the standard deviation, mean, median, mode, maximum and minimum values of the raw Hyperion and the two fused images (GST and PCT), were compared for the analysis.

Gomez et al., 2001 studied the fusion of hyperspectral data with multispectral data using wavelet-based image fusion. In their study, two levels of hyperspectral data were fused with one band of multispectral data. The fused image had an RMSE (Root Mean Square Error) of 2.8 per pixel and an SNR (Signal to Noise Ratio) of 36 dB. The results show that the fusion produced a composite image with the high spatial resolution of the multispectral data and all the spectral characteristics of the hyperspectral data, with minimal artifacts. The study concluded that more than two datasets can be fused using the wavelet transform image fusion technique.

Chen et al., 2003 carried out a study that fused hyperspectral data from AVIRIS (Airborne Visible/Infrared Imaging Spectrometer) with TOPSAR (Topographic Synthetic Aperture Radar) data, which provides textural information, to obtain a composite image for studying an urban scene. The study was conducted for the urban area of Park City, Utah. The composite image was superimposed on the DEM (Digital Elevation Model) generated from the TOPSAR data to obtain a 3D perspective. The fused image, produced using the IHS (Intensity Hue Saturation) transform and possessing high spatial and spectral resolution, was interpreted for visual discrimination among various urban types. The objective of the study was to examine areas at risk from geological hazards such as avalanches and mudflows; the fused image was interpreted for information extraction for the assessment and mitigation of these hazards. The results of the fusion show better enhancement of the urban features. The spectral resolution of the AVIRIS data helped in better discrimination among urban features such as buildings and mining tailings. The MNF-transformed bands of the AVIRIS data also improved the discriminability among the various features. The combined use of the IHS-fused data, the MNF-transformed bands and the DEM of the area provided a better understanding of the urban features.

Ling et al., 2006 analysed the results of fusing high-resolution data such as IKONOS and QuickBird using the Fast Fourier Transform-enhanced IHS method. The study aimed at evaluating the ability of traditional methods such as IHS and PCA (Principal Component Analysis) to preserve the colour and spectral information in the fused product when fusing high-resolution data. The study integrated the IHS transform with FFT filtering of both the panchromatic image and the intensity component of the multispectral image, using IKONOS and QuickBird data. The analysis proved that the IHS transform with FFT filtering improved the results in preserving high spatial quality and the spectral characteristics.


3. Data used and study area

3.1. Hyperion

Hyperion is a sensor aboard EO-1 (Earth Observing-1), launched under NASA's New Millennium Program in November 2000. The Level 1 product used in the present study has 242 bands in the range of 355-2577 nm at 10 nm bandwidth (Table 1). Of these 242 bands, only 198 are calibrated; the uncalibrated bands are set to zero.

(a) Hyperion

Sensor Altitude: 705 km
Spatial Resolution: 30 m
Radiometric Resolution: 16 bit
Swath: 7.2 km
IFOV: 0.043 mrad
No. of Rows: 256
No. of Columns: 3128
VNIR Spectral Range: 0.45-1.35 µm
SWIR Spectral Range: 1.40-2.48 µm

(b) IKONOS

Altitude: 681 km
Inclination: 98.2°
Repeat Cycle: 14 days
Sensor: Optical Sensor Assembly
Swath width: 11 km
Off-Nadir Viewing: omnidirectional (±)
Revisit time: 1-3 days
Spatial Resolution: 1 m (PAN), 4 m (MSS)
Spectral Bands: 0.45-0.52 µm, 0.52-0.60 µm, 0.63-0.69 µm, 0.76-0.90 µm, 0.45-0.90 µm (PAN)

Table 1.

Technical specifications of the (a) Hyperion and (b) IKONOS (MSS and PAN) sensors

3.2. IKONOS (MSS & Panchromatic)

IKONOS was the first commercial high-resolution satellite to be placed in orbit. The IKONOS MSS image has 4 bands (red, green, blue, NIR) with 4 m spatial resolution, and IKONOS Pan has one band (0.4-0.9 µm) with 1 m spatial resolution (Table 1).

3.3. Study area

For the present study, datasets of two areas were selected: the Dehradun and Udaipur city areas.

The city of Dehradun is situated in the south-central part of Dehradun district, at 30°19' N and 78°20' E, at an altitude of 640 m above MSL. The lowest altitude is 600 m in the southern part of the city, which covers an area of 38.04 sq. km; the highest altitude is 1000 m in the northern part. The site slopes gently from north to south, and the northern part of the region is heavily dissected by a number of seasonal streams (Fig 1a). The study strip can be divided into two distinct land cover classes:

  1. The western portion is dominated with varied vegetation of Sal, Teak, Bamboo, etc.

  2. The southern part with the urban and some patches of vegetation.

The urban pattern in Dehradun city is rather scattered and irregular. The northern part again consists of varied LULC classes such as crop fields, fallow land, urban areas, grassland, shrubs and mixed vegetation. A seasonal river named Tons flows from north-east to south-west. Geomorphologically, the northern part of the region is occupied by a piedmont fan of post-Siwalik Dun gravels called the Donga fan, which contains varied LULC classes.

Udaipur, Rajasthan, India was selected as the second study area. Rajasthan is one of the mineral-rich states of India; this north-western state occupies a place of pride in the production and marketing of metallic and non-metallic minerals in India. The Aravalli range, one of the oldest mountain ranges in the world, runs along a NE-SW direction for more than 720 km, covering nearly 40,000 km². The study area (longitude 73°32'58" to 73°49'35" E, latitude 24°08'18" to 24°59'53" N) covers about 750 km² of the main block of the Aravalli range, corresponding to path/row 146/40 of a full Hyperion scene (Fig 1b).

Figure 1.

Study area (a) Dehradun (b) Udaipur


4. Methodology

The methodology was designed to analyse the performance of hyperspectral and high-resolution data fusion for classification. The major objective of the study was to comparatively evaluate three algorithms, the GS (Gram-Schmidt), PC (Principal Component) and CN (Colour Normalised) transforms, on the fusion of Hyperion data with high-spatial-resolution IKONOS (MSS) data. The fused images were analysed for the pros and cons of spectral-domain image fusion models. For analysing the spectral variation due to fusion, major land cover areas were identified, and the original Hyperion spectra over these land cover areas were compared with the fused spectra over the same areas. The analysis was carried out visually and statistically by comparing the spectral profiles of different features with the original Hyperion profiles (Fig 2). Overall classification accuracy was used to evaluate the Hyperion data, the multispectral IKONOS data and the fused data for the two study areas. The methodology is divided into three broad steps (Fig 2):

4.1. Pre-processing stage

The Hyperion Level 1R product used had many bad lines and columns in different bands. Radiometric correction for the removal of bad columns was therefore performed by replacing each bad column with the average of the DN values of the adjacent columns. Atmospheric correction techniques have been developed to allow the retrieval of pure ground radiances from the target materials: haze in the atmosphere reduces the solar radiation reaching the Earth's surface and causes blurring in the image, so atmospheric correction of the Hyperion image was considered important in the present study. The FLAASH model in ENVI 4.5 was chosen, a first-principles atmospheric correction modelling tool for retrieving spectral reflectance from Hyperion. Spectral subsets of the Hyperion data were created in the same wavelength range as IKONOS, i.e. 400-900 nm, so bands 12 to 55 were used; in total, the Hyperion image file was thus reduced from the resized 117 bands to 36 bands. Co-registration of the Hyperion image was done with the IKONOS MSS; the RMS error of the registration process was about 0.823 pixels.
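As an illustration of the bad-column repair, a minimal numpy sketch is given below; it assumes the defective column indices for a band are already known, which is not detailed in the chapter.

```python
# Minimal bad-column repair sketch (assumed inputs: `band` is a 2-D DN array,
# `bad_cols` the known defective column indices for that band).
import numpy as np

def repair_bad_columns(band: np.ndarray, bad_cols) -> np.ndarray:
    """Replace each bad column with the average of its immediate neighbours."""
    fixed = band.astype(float).copy()
    for c in bad_cols:
        left = fixed[:, max(c - 1, 0)]              # clamp at the image edges
        right = fixed[:, min(c + 1, band.shape[1] - 1)]
        fixed[:, c] = 0.5 * (left + right)
    return fixed
```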

Figure 2.

Methodology for comparative evaluation of fusion algorithms on classification accuracy

4.2. Image fusion

Image fusion is the combination of two or more images using an algorithm to obtain a composite image with better and enhanced spatial and spectral information. After developing the spectral subsets for the Hyperion image, its individual (R, G, B and NIR) band groups were fused with the R, G, B and NIR bands of the IKONOS data using the Principal Component, Gram-Schmidt (GST) and Colour Normalised transformations. Merging the spectral subsets of the Hyperion image file (R, G, B and NIR bands) with the IKONOS bands produced four separate fused images; these were then stacked to obtain one single 36-band image combining the spatial resolution of IKONOS with the spectral characteristics of the Hyperion image (Figs 3 and 4), as sketched below.
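A minimal sketch of this fuse-and-stack step is given below, assuming a per-band fusion function such as the wavelet sketch in section 1.1.3 and pre-grouped Hyperion spectral subsets; all names are illustrative.

```python
# Minimal fuse-and-stack sketch (assumed inputs: `hyperion_groups` is a list of
# four lists of 2-D Hyperion bands (R, G, B, NIR subsets), `ikonos_bands` the
# four matching IKONOS MSS bands, `fuse` a per-band fusion function).
import numpy as np

def fuse_and_stack(hyperion_groups, ikonos_bands, fuse):
    fused_groups = [
        np.stack([fuse(band, ik) for band in group])   # fuse band by band
        for group, ik in zip(hyperion_groups, ikonos_bands)
    ]
    return np.concatenate(fused_groups, axis=0)        # one 36-band cube
```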

Figure 3.

Hyperion and IKONOS MSS fused images for part of Dehradun area (a: Fusion using CN Transform, b: Fusion using PC Transform, c: Fusion using GS Transform)

Figure 4.

Hyperion and IKONOS MSS fused images for part of Udaipur area (a: Fusion using CN Transform, b: Fusion using PC Transform, c: Fusion using GS Transform)

4.3. Spectra comparison

The spectral profiles of the various land cover classes present in the scene, such as vegetation, bare soil, crop land and fallow land, in the three fused products were compared with those of the Hyperion image.

4.3.1. Vegetation

In the Hyperion image, the curve rises slowly from a wavelength of 500 nm to a value of more than 250, and there is a short peak in the blue region. At 700 nm there is a sharp rise in the curve, which reaches a value of 2000; some small peaks follow in the NIR region, which establishes that vegetation is best discriminated in this region. In the CN fused image the spectral profile is almost similar, with one remarkable difference in the reflectance values: the vegetation here shows a rise only up to 450-750. In the PC fused image the results differ from both the Hyperion and the CN fused image. The spectral profile of the GS fused image is almost similar to that of the PC fused image (Fig 5).

Figure 5.

Spectral profiles for Sal Vegetation

4.3.2. Building

In the Hyperion image, the curve rises slowly from the blue region up to a value of 1000; at the green edge the slope increases and the curve rises linearly up to a value of 2500, with small peaks at the NIR end. A similar pattern is observed in the CN fused image, but the values are limited to about 800. The results of the PC fused image are somewhat different: the building feature is enhanced, with values exceeding 4000. The rise starts from the blue region and reaches a peak at a value of about 1625; between the green and red regions the rise is continuous. The curve flattens at a value of 1500 but then rises steeply at 680 nm to a maximum of about 3900, with small peaks after the red region. The spectrum observed for the GS fused image is the same as for the PC fused image (Fig 6).

Figure 6.

Spectral profiles for Building with terracotta roof

4.3.3. Bare soil

The spectral profile of bare soil shows variations in the Hyperion image. The curve rises with a steep slope, with short peaks at 580 nm and 680 nm and the highest rise at 820 nm. The rise is not uniform, with several small peaks and dips. Only between the blue and green regions is the rise somewhat linear, with one peak at 580 nm. Between the green and red regions an enhanced peak can be observed at 680 nm, but near the red region there is a flat dip at a value of about 3360. In the NIR region some undulations are present, with a peak value of about 3680 at 820 nm. In the CN fused image a number of dips and peaks can be observed. The curve initially rises to a pointed peak with a value near 500 at 515 nm. Between the blue and green regions the curve rises steeply, with two peaks of 612.5 and 662.5 at 580 nm and 650 nm respectively, followed by a small dip after the green end. Between the green and red regions there is a peak of about 637.5 at 680 nm. After this rise there is a flat dip at the red region, but the curve rises again in the NIR region with some flat peaks and a sharp dip to 637.5 at 875 nm. In the PC and GS fused images the reflectance values range from 2000 to 4500. The profile of bare soil in the PC fused image shows some remarkable outcomes: the curve first rises slowly until the green end, then falls sharply to below 2000 at 630 nm; the dip remains constant until 690 nm, after which the curve rises steeply to about 4750 at the red end. Beyond the red end, in the NIR region, the curve runs almost flat with a small dip to 4500 at 875 nm. The profile in the GS fused image is almost similar to that in the PC fused image, with minor differences: initially the curve rises to a small peak of about 2250 at 520 nm and then rises again as in the PC fused image; it runs almost flat below about 2000 between 630 and 695 nm, after which it rises steeply until the red region. In the NIR region the outcomes are almost the same as in the PC fused image (Fig 7).

Figure 7.

Spectral profiles for Bare soil

4.3.4. Fallow land

In the Hyperion image, we observe a continuous rise in the curve up to a value of 3625. For fallow land, the rise starts from a wavelength of 500 nm and the slope of the curve is not very steep; the rise is continuous with no noteworthy dips. In the CN fused image the pattern is similar to the Hyperion, but the range of values is limited to about 400. In the PC fused image, the curve rises from 500 nm up to 750, followed by a small, unremarkable dip, after which the curve rises again. It rises linearly with a steep slope between the green and red regions up to a value of 2500, and in the NIR region (beyond 750 nm) small pronounced peaks are present. The spectral profile of fallow land in the GS fused image is almost comparable to that in the PC fused image (Fig 8).

Figure 8.

Spectral profiles for Fallow Land

4.3.5. Dry river

The spectral profile of the river in the Hyperion image shows numerous undulations, i.e. a number of peaks and dips. The curve rises sharply until it reaches a value of about 1820 at 520 nm, where there is a pointed peak, followed by a small dip at about 530 nm. The curve rises again to about 2050 at 590 nm, then falls to about 1900 at 665 nm, after which it rises again until it reaches the red end. In the NIR region the curve has two pointed peaks, at 775 nm and 600 nm, and then drops to a value of about 1965 at 890 nm. In the CN fused image, the curve rises gently to low values. There is a flattened peak in the blue region, after which the curve stays almost flat until 690 nm; it then rises sharply until the red region is encountered at 660 nm. In the NIR region the curve runs almost flat over wide contiguous bands. In the PC fused image, the curve starts at a value of about 1500 in the blue region and runs flat until about 1750 at 620 nm, after which it suddenly drops; the dip between the blue and red ends is almost flat. At 690 nm there is a steep rise in the curve until it reaches about 3375 at 775 nm, and in the NIR region the curve again runs flat with small flattened peaks. The spectral profile of the river in the GS fused image is almost similar to that in the PC fused image (Fig 9).

Figure 9.

Spectral profiles for Dry River Bed

4.3.6. Land with grass

In the Hyperion image, the curve rises slowly with an almost flat slope until it reaches a value of 1500 at a wavelength of 700 nm, after which it rises linearly with a steep slope between the green and red regions. At the red region this sharp rise slows, and in the NIR region the curve rises slowly, with one enhanced peak of 4500 at 875 nm. In the CN fused image, the curve rises slowly with a shallow slope, reaching 475 at 775 nm near the red region of the spectrum; in the NIR region some peaks are observed. In the PC fused image the curve shows some dips. Initially it rises slowly to 1350 at approximately 610 nm; after the green end it shows some variation, rising suddenly with a steep slope to 2750 at 680 nm, then decreasing to 2375 at the red region. In the NIR region the curve rises again with some small peaks. The spectral profile of ground with grass in the GS fused image shows almost the same outcomes as the PC fused image (Fig 10).

Figure 10.

Spectral profiles for Ground with grass

4.4. Classification

In the present work, the original and the three fused datasets were classified by the SAM (Spectral Angle Mapper) method of supervised classification. SAM is an automated method for directly comparing image spectra to a reference or endmember spectrum. It treats both spectra as vectors and calculates the spectral angle between them; since the algorithm uses only the vector direction and not the vector length, it is insensitive to illumination. The result of SAM classification is an image showing the best match at each pixel. The selection of the classification algorithm was also based on the characteristics of the image and the training data. The SAM decision rule classified the image into 9 classes: vegetation type 1, vegetation type 2, river, shrubs, urban features, grassland, fallow land, bare soil and crops.
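A minimal numpy sketch of the SAM decision rule is given below; the reflectance cube layout, the endmember matrix and the 0.1 radian rejection threshold mentioned in section 5 are the assumed inputs.

```python
# Minimal SAM classification sketch (assumed inputs: `cube` is a
# bands x rows x cols reflectance array, `endmembers` a classes x bands array).
import numpy as np

def sam_classify(cube: np.ndarray, endmembers: np.ndarray, max_angle: float = 0.1) -> np.ndarray:
    bands, rows, cols = cube.shape
    X = cube.reshape(bands, -1).astype(float)
    # Normalise: SAM uses only vector direction, so it is illumination-insensitive.
    Xn = X / (np.linalg.norm(X, axis=0, keepdims=True) + 1e-12)
    En = endmembers / (np.linalg.norm(endmembers, axis=1, keepdims=True) + 1e-12)
    angles = np.arccos(np.clip(En @ Xn, -1.0, 1.0))    # classes x pixels
    labels = angles.argmin(axis=0)                     # best match per pixel
    labels[angles.min(axis=0) > max_angle] = -1        # leave poor matches unclassified
    return labels.reshape(rows, cols)
```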

After classification, the classification accuracy was computed for the IKONOS, Hyperion and three merged images (Figs 11 and 12). Samples of each class from different locations in the Dehradun and Udaipur city areas were collected for accuracy assessment.
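A minimal sketch of the overall-accuracy computation from such samples is shown below; the label arrays are illustrative assumptions.

```python
# Minimal overall-accuracy sketch (assumed inputs: `predicted` and `reference`
# are equal-length class-label arrays at the collected sample locations).
import numpy as np

def overall_accuracy(predicted: np.ndarray, reference: np.ndarray) -> float:
    """Fraction of reference samples whose mapped class matches the reference."""
    valid = reference >= 0                      # ignore unlabelled samples
    return float((predicted[valid] == reference[valid]).mean())
```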

Figure 11.

Classified products for Hyperion and IKONOS MSS fused images for part of Dehradun area (a: Fusion using CN Transform, b: Fusion using PC Transform, c: Fusion using GS Transform)

Figure 12.

Classified images (Udaipur)


5. Results and discussions

Although many studies focus on the development of fusion techniques, fewer concentrate on the development of image assessment methods. This study uses statistical measures and classification accuracy to assess fusion performance. Statistical evaluation procedures have the advantage of being objective, quantitative and repeatable. The correlation coefficients between the original Hyperion bands and the equivalent fused bands, along with three other parameters (mean, standard deviation and median), were calculated.
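A minimal sketch of this band-wise comparison is given below; the array layout is an assumption.

```python
# Minimal band-wise statistics sketch (assumed inputs: `original` and `fused`
# are bands x rows x cols arrays over the same extent).
import numpy as np

def band_statistics(original: np.ndarray, fused: np.ndarray):
    """Per-band correlation with the original, plus mean, std and median."""
    stats = []
    for o, f in zip(original, fused):
        r = np.corrcoef(o.ravel(), f.ravel())[0, 1]    # correlation coefficient
        stats.append({"corr": r, "mean": f.mean(), "std": f.std(),
                      "median": np.median(f)})
    return stats
```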

The statistical parameters for the various fused products were plotted along with those of the raw Hyperion image. The graphs show no noticeable change in the statistics between the original Hyperion and the fused products. The PCT and GST fused images demonstrate comparable values for the mean, maximum, minimum and standard deviation, and have roughly the same ability to preserve the statistics. The CNT fused image shows much lower values than the raw Hyperion image (Fig 13 a & b).

Evaluating the spectral profiles, we observe that although the range of values in the CN fused image is not comparable to the Hyperion, in most cases the shape of the profile closely matches that of the corresponding feature in Hyperion. We can therefore infer that, spectrally, the CN (Colour Normalised) approach better preserves the spectral characteristics in the fused image. In terms of the visual discreteness or spatial characteristics of the various LULC classes in the fused images, the GS (Gram-Schmidt) and PC (Principal Component) transforms are best when compared to Hyperion, while compared to IKONOS there is almost no gain in spatial quality.

For performance analysis of the fusion, the classified images were analysed using reference data from the ground. The classification results of the PCT and GST fused images are almost similar, whereas the results for the CN fused image deteriorate because of artificial pixels that hinder the classification process (Figs 11 and 12). The overall classification accuracy was calculated for the IKONOS, Hyperion and three merged products. It was observed that the accuracy improves in the PCT and GST fused images while deteriorating in the CNT fused image (Table 2).

The comparison of the separability analysis for the original datasets and the three fused products shows that the separability of some of the classes increases after fusion, and hence higher classification accuracy is achieved (Fig 14). The classified images show some black pixels not belonging to any of the specified classes. Such pixels are left unclassified because they did not match the spectrum of any specified land cover class, or because they exhibited a large angular difference (greater than 0.1 radians) between the known and unknown pixel spectra.

Data Product Overall Accuracy Achieved (Dehradun) Overall Accuracy Achieved (Udaipur)
IKONOS 75.86% 79.72%
HYPERION 68.15% 63.14%
PCT FUSED IMAGE 80.23% 83.34%
GST FUSED IMAGE 81.12% 80.23%
CNT FUSED IMAGE 65.14% 68.57%

Table 2.

Classification Accuracy

Figure 13.

(a & b): Statistical comparison of fused images (a: Dehradun, b: Udaipur)

Figure 14.

Class separability analysis for original and fused images

References

  1. Aarthy, R.S. and Sanjeevi, S., 2007. Spectral studies of lunar equivalent rocks: A prelude to lunar mineral mapping. Journal of the Indian Society of Remote Sensing, 35 (2): 141-152.
  2. Aiazzi, B., Alparone, L., Baronti, S. and Selva, M., 2006. MS + Pan image fusion by an enhanced Gram-Schmidt spectral sharpening, Italy. www.igik.edu.pl/earsel2006/abstracts/data_fusion/aiazzi_baronti.pdf (Last accessed Nov. 2007)
  3. Ali Darvishi, B., Kappas, M. and Erasmi, S., 2005. Hyper-spectral/high-resolution data fusion: Assessing the quality of EO1-Hyperion/SPOT-Pan and QuickBird-MS fused images in the spectral domain. http://www.ipi.uni-hannover.de/fileadmin/institut/pdf/073-darvishi.pdf
  4. Alparone, L., Baronti, S., Garzelli, A. and Nencini, F., 2004. Landsat ETM+ and SAR image fusion based on generalized intensity modulation. IEEE Transactions on Geoscience and Remote Sensing, 42 (12): 2832-2839.
  5. Chavez, P.S., Sides, S.C. and Anderson, J.A., 1991. Comparison of three different methods to merge multi-resolution and multi-spectral data: Landsat TM and SPOT panchromatic. Photogrammetric Engineering and Remote Sensing, 57 (3): 295-303.
  6. Chen, C.-M., Hepner, G.F. and Forster, R.R., 2003. Fusion of hyperspectral and radar data using the IHS transformation to enhance urban surface features. ISPRS Journal of Photogrammetry and Remote Sensing, 58: 19-30.
  7. Gomez, B.R., Jazaeri, A. and Kafatos, M., 2001. Wavelet-based hyperspectral and multispectral image fusion. www.scs.gmu.edu/~rgomez/fall01/fusionpaper.pdf (Last accessed Nov. 2007)
  8. Kasetkasem, T., Arora, M.K. and Varshney, P.K., 2004. An MRF model based approach for sub-pixel mapping from hyperspectral data. In: P.K. Varshney and M.K. Arora (eds), Advanced Image Processing Techniques for Remotely Sensed Hyperspectral Data, Chapter 11, pp. 279-307. Springer Verlag.
  9. Li, J., Luo, J., Ming, D. and Shen, Z., 2005. A new method for merging IKONOS panchromatic and multispectral image data. Geoscience and Remote Sensing Symposium IGARSS, Vol. 6, pp. 3916-3919.
  10. Lillesand, M.T. and Kiefer, W.R., 2000. Remote Sensing and Image Interpretation. John Wiley and Sons, New York.
  11. Ling, Y., Ehlers, M., Usery, E.L. and Madden, M., 2006. FFT-enhanced IHS transform method for fusing high-resolution satellite images. ISPRS Journal of Photogrammetry and Remote Sensing, 61 (2007): 381-392.
  12. Pohl, C. and Van Genderen, J.L., 1998. Multisensor image fusion in remote sensing: Concepts, methods and applications (review article). International Journal of Remote Sensing, 19 (5): 823-854.
  13. Sanjeevi, S., 2006. Chapter on multisensor image fusion. Lecture notes on Advanced Image Processing. Photogrammetry and Remote Sensing Division, Indian Institute of Remote Sensing, Dehradun.
  14. Sanjeevi, S., 2008. Chapter on multisensor image fusion. Lecture notes on Advanced Image Processing. Photogrammetry and Remote Sensing Division, Indian Institute of Remote Sensing, Dehradun.
  15. Shippert, P., 2008. Introduction to hyperspectral image analysis. http://satjournal.tcom.ohiou.edu/pdf/shippert.pdf (Last accessed Nov. 2007)
