Fusion of Multisource Images for Update of Urban GIS

Written By

D. Amarsaikhan and M. Saandar

Submitted: 08 October 2010 Published: 24 June 2011

DOI: 10.5772/16293

From the Edited Volume

Image Fusion and Its Applications

Edited by Yufeng Zheng


1. Introduction

Image fusion is used for many purposes. Very often it is used to produce an image with an improved spatial resolution. The most common situation involves a pair of images in which the first, acquired by a multispectral sensor, has a pixel size greater than that of the second, acquired by a panchromatic sensor. Combining these images, fusion produces a new multispectral image with a spatial resolution equal to the panchromatic one. At the same time, image fusion can introduce significant distortion into the pixel spectra, which in turn affects the information content of remote sensing (RS) images (Teggi et al. 2003). Over the years, different fusion methods have been developed for improving the spatial and spectral resolutions of RS data sets. The techniques most often encountered in the literature are the intensity-hue-saturation (IHS) transform, the Brovey transform, the principal components analysis (PCA) method, the Gram-Schmidt method, the local mean matching method, the local mean and variance matching method, the least square fusion method, the wavelet-based fusion method, the multiplicative method and the Ehlers fusion (Karathanassi et al. 2007, Ehlers et al. 2008). Most fusion applications use modified approaches or combinations of these methods.

In the case of RS data sets, three different fusions can be conducted: fusion of optical data with optical data, fusion of microwave data with microwave data, and fusion of optical with microwave data. For several decades, fusion of multiresolution optical images has been successfully used to improve the information content of images for visual interpretation as well as to enhance land surface features. Many studies have been conducted on improving the spatial resolution of multispectral images by the use of the high frequencies of panchromatic images, while preserving the spectral information (Mascarenhas et al. 1996, Saraf 1999, Teoh et al. 2001, Teggi et al. 2003, Gonzalez et al. 2004, Colditz et al. 2006, Deng et al. 2008, Li and Leung 2009). A number of authors have successfully fused interferometric or multifrequency synthetic aperture radar (SAR) images (Soh and Tsatsoulis 1999, Verbyla 2001, Baghdadi et al. 2002, Costa 2005, Palubinskas and Datcu 2008). Unlike the fusion of optical images, most fusions of SAR data sets have aimed to increase the spectral variety of the classes.

Over the years, the fusion of optical and SAR data sets has been widely used for different applications. It has been found that images acquired in the optical and microwave ranges of the electromagnetic spectrum provide unique information when they are integrated (Amarsaikhan et al. 2007). Image fusion based on the integration of multispectral optical and multifrequency microwave data sets is now being used efficiently for the interpretation, enhancement and analysis of different land surface features. Optical data contain information on the reflective and emissive characteristics of the Earth's surface features, while SAR data contain information on the surface roughness, texture and dielectric properties of natural and man-made objects. A combined use of optical and SAR images therefore has a number of advantages, because a specific feature that is not seen on the passive sensor image might be seen on the microwave image and vice versa, owing to the complementary information provided by the two sources (Amarsaikhan et al. 2004, Amarsaikhan et al. 2007). Many authors have proposed and applied different techniques to combine optical and SAR images in order to enhance various features, and they all judged that the results from the fused images were better than the results obtained from the individual images (Wang et al. 1995, Pohl and Van Genderen 1998, Ricchetti 2001, Herold and Haack 2002, Amarsaikhan and Douglas 2004, Westra et al. 2005, Ehlers et al. 2008, Saadi and Watanabe 2009, Zhang 2010). Although many studies of image fusion have been conducted to derive new algorithms for the enhancement of different features, little research has been done on the influence of image fusion on the automatic extraction of thematic information within the urban environment.

For many years, different supervised and unsupervised classification methods have been applied for the extraction of thematic information from multispectral RS images (Storvik et al. 2005, Meher et al. 2007). Unlike single-source data, data sets from multiple sources have proved to offer better potential for discriminating between different land cover types. Many authors have assessed the potential of multisource images for the classification of different land cover classes (Munechika et al. 1993, Serpico and Roli 1995, Benediktsson et al. 1997, Hegarat-Mascle et al. 2000, Amarsaikhan and Douglas 2004, Amarsaikhan et al. 2007). In RS applications, the most widely used multisource classification techniques are statistical methods, the Dempster-Shafer theory of evidence, neural networks, the decision tree classifier and knowledge-based methods (Solberg et al. 1996, Franklin et al. 2002, Amarsaikhan et al. 2007).

The aim of this study is (a) to investigate and evaluate different image fusion techniques for the enhancement of spectral variations of urban land surface features and (b) to apply a knowledge-based classification method for the extraction of land cover information from the fused images in order to update an urban geographical information system (GIS). The proposed image fusion includes two different approaches: fusion of SAR data with SAR data (i.e., the SAR/SAR approach) and fusion of optical data with SAR data (i.e., the optical/SAR approach), while the knowledge-based method includes different rules based on spectral and spatial thresholds. For the actual analysis, multisource satellite images with different spatial resolutions, as well as some GIS data of the urban area in Mongolia, have been used.


2. Test site and data sources

As a test site, Ulaanbaatar, the capital city of Mongolia, has been selected. Ulaanbaatar is situated in the central part of Mongolia, on the Tuul River, at an average height of 1350 m above sea level, and currently has about 1 million inhabitants. The city is surrounded by mountains which are spurs of the Khentii Mountain Range. Founded in 1639 as a small town named Urga, today it has prospered as the main political, economic, business, scientific and cultural centre of the country.

The city extends about 30 km from west to east and about 20 km from north to south. However, the area chosen for the present study covers mainly the central and western parts and is characterized by such classes as built-up area, ger (Mongolian national dwelling) area, green area, soil and water. Figure 1 shows an ASTER image of the test site and some examples of its land cover.

Figure 1.

ASTER image of the selected part of Ulaanbaatar (B1=B, B3=G, B2=R). 1-built-up area; 2-ger area; 3-green area; 4-soil; 5-water. The size of the displayed area is about 8.01 km x 6.08 km.

In the present study, for the enhancement of urban features, ASTER data of 23 September 2008, ERS-2 SAR data of 25 September 1997 and ALOS PALSAR data of 25 August 2006 have been used. Although ASTER has 14 multispectral bands acquired in the visible, near infrared, middle infrared and thermal infrared ranges of the electromagnetic spectrum, in the current study the green (band 1), red (band 2) and near infrared (band 3) bands with a spatial resolution of 15 m have been used. ERS-2 SAR is a European RS radar satellite which acquires VV-polarized C-band data with a spatial resolution of 25 m. ALOS PALSAR is a Japanese Earth observation satellite carrying a cloud-piercing L-band radar which is designed to acquire fully polarimetric images. In the present study, the HH, VV and HV polarization intensity images of ALOS PALSAR have been used.

Advertisement

3. Co-registration of multisource images and speckle suppression of the SAR images

At the beginning, the ALOS PALSAR image was rectified to the coordinates of the ASTER image using 12 ground control points (GCPs) defined from a topographic map of the study area. The GCPs were selected on clearly delineated crossings of roads and streets and on city building corners. A second-order transformation with nearest-neighbour resampling was applied, and the related root mean square error (RMSE) was 0.94 pixel. Then, the ERS-2 SAR image was rectified and its coordinates were transformed to the coordinates of the rectified ALOS PALSAR image. In order to rectify the ERS-2 SAR image, 14 regularly distributed GCPs were selected from different parts of the image. Again, a second-order transformation with nearest-neighbour resampling was applied, and the related RMSE was 0.98 pixel.
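To make the rectification step concrete, the following is a minimal numpy sketch (not the authors' actual workflow, which used commercial software) of how a second-order polynomial transformation can be fitted to GCP pairs by least squares and how the RMSE is obtained; the function name and array layout are illustrative assumptions.

```python
import numpy as np

def fit_second_order_transform(src, dst):
    """Fit a 2nd-order polynomial mapping (x, y) -> (X, Y) from GCP pairs.
    src, dst: (n, 2) arrays of image and reference coordinates, n >= 6."""
    x, y = src[:, 0], src[:, 1]
    # Design matrix with the six 2nd-order terms: 1, x, y, x^2, xy, y^2
    A = np.column_stack([np.ones_like(x), x, y, x**2, x * y, y**2])
    coef, _, _, _ = np.linalg.lstsq(A, dst, rcond=None)  # (6, 2) coefficients
    residuals = A @ coef - dst                           # per-GCP error vectors
    rmse = np.sqrt((residuals**2).sum(axis=1).mean())    # RMSE in pixels
    return coef, rmse
```

With 12 GCPs, as used for the PALSAR image, the system is overdetermined (12 equations for 6 unknowns per output coordinate), so the least-squares fit also yields the residual RMSE of the kind reported above.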

Figure 2.

The comparison of the ALOS PALSAR images, speckle-suppressed by 3x3 size local region (a), Lee-sigma (b), Frost (c) and gamma MAP (d) filters.

As microwave images have a granular appearance due to the speckle formed by the coherent radiation used in radar systems, the reduction of speckle is a very important step before further analysis. The analysis of radar images must be based on techniques that remove the speckle effects while preserving the intrinsic texture of the image frame (Ulaby et al. 1986, Amarsaikhan and Douglas 2004, Serkan et al. 2008). In this study, four different speckle suppression techniques, namely the local region, Lee-sigma, Frost and gamma MAP filters (ERDAS 1999) of 3x3 and 5x5 sizes, were applied to the ALOS PALSAR image and compared in terms of delineation of urban features and texture information. After visual inspection of each image, it was found that the 3x3 gamma MAP filter created the best image in terms of delineation of different features as well as preservation of texture. In the output images, speckle noise was reduced with very little degradation of the textural information. The comparison of the speckle-suppressed images is shown in Figure 2.
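The gamma MAP filter itself solves a cubic equation per pixel; as a simpler illustration of the same adaptive principle (weighting between the local mean and the observed pixel according to local variance), a basic Lee-type filter can be sketched as follows. This is a generic sketch, not the ERDAS implementation used in the study.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def lee_filter(img, size=3, looks=4):
    """Basic Lee speckle filter for a multi-look intensity image."""
    mean = uniform_filter(img, size)        # local mean in a size x size window
    mean_sq = uniform_filter(img**2, size)
    var = mean_sq - mean**2                 # local variance
    cu2 = 1.0 / looks                       # squared speckle variation coefficient
    ci2 = var / np.maximum(mean**2, 1e-12)  # squared image variation coefficient
    # Weight approaches 1 in textured areas (keep detail) and 0 in
    # homogeneous areas (smooth towards the local mean)
    w = np.clip(1.0 - cu2 / np.maximum(ci2, 1e-12), 0.0, 1.0)
    return mean + w * (img - mean)
```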


4. Image fusion

The concept of image fusion refers to a process which integrates different images from different sources to obtain more information, with a minimum of loss or distortion of the original data. In other words, image fusion is the integration of different digital images in order to create a new image and obtain more information than can be derived separately from any of them (Pohl and Van Genderen 1998, Ricchetti 2001, Amarsaikhan et al. 2009a). In the case of the present study, for the urban areas, the radar images provide structural information about buildings and street alignment due to the double bounce effect, while the optical image provides information about the spectral variations of different urban features. Moreover, the SAR images contain multitemporal changes of land surface features and provide some additional information about soil moisture conditions due to the dielectric properties of the soil. Over the years, different data fusion techniques have been developed and applied, individually and in combination, providing users and decision-makers with various levels of information. Generally, image fusion can be performed at the pixel, feature and decision levels (Abidi and Gonzalez 1992, Pohl and Van Genderen 1998). In this study, data fusion has been performed at the pixel level and the following rather common and more complex techniques were compared: (a) multiplicative method, (b) Brovey transform, (c) PCA, (d) Gram-Schmidt fusion, (e) wavelet-based fusion, (f) Ehlers fusion. Each of these techniques is briefly discussed below.

Multiplicative method: This is the simplest image fusion technique. It takes two digital images, for example, high resolution panchromatic and low resolution multispectral data, and multiplies them pixel by pixel to get a new image (Seetha et al. 2007). It can be formulated as follows:

$$\mathrm{Red} = \mathrm{Low\ Resolution\ Band\ 1} \times \mathrm{High\ Resolution\ Band\ 1} \tag{1}$$

$$\mathrm{Green} = \mathrm{Low\ Resolution\ Band\ 2} \times \mathrm{High\ Resolution\ Band\ 2} \tag{2}$$

$$\mathrm{Blue} = \mathrm{Low\ Resolution\ Band\ 3} \times \mathrm{High\ Resolution\ Band\ 3} \tag{3}$$
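As an illustration, a minimal numpy sketch of the band-by-band product (assuming the two images are already co-registered and resampled to a common grid; the rescaling to 8-bit is an added assumption for display):

```python
import numpy as np

def multiplicative_fusion(ms, high):
    """Pixel-by-pixel product of each low-resolution band with the
    corresponding high-resolution band; ms and high: (bands, H, W)."""
    fused = ms.astype(np.float64) * high.astype(np.float64)
    # Stretch each band back to 0..255 for display
    mins = fused.min(axis=(1, 2), keepdims=True)
    maxs = fused.max(axis=(1, 2), keepdims=True)
    return (255 * (fused - mins) / (maxs - mins + 1e-12)).astype(np.uint8)
```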

Brovey transform: This is a simple numerical method used to merge different digital data sets. The algorithm normalises the multispectral bands used for a red, green, blue colour display and multiplies the result by the high resolution data to add the intensity or brightness component of the image (Vrabel 1996). The formulae used for the Brovey transform can be written as follows:

$$\mathrm{Red} = \frac{\mathrm{Band}_1}{\sum_{i=1}^{n} \mathrm{Band}_i} \times \mathrm{High\ Resolution\ Band} \tag{4}$$

$$\mathrm{Green} = \frac{\mathrm{Band}_2}{\sum_{i=1}^{n} \mathrm{Band}_i} \times \mathrm{High\ Resolution\ Band} \tag{5}$$

$$\mathrm{Blue} = \frac{\mathrm{Band}_3}{\sum_{i=1}^{n} \mathrm{Band}_i} \times \mathrm{High\ Resolution\ Band} \tag{6}$$
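A direct transcription of equations (4)-(6) in numpy (a sketch assuming co-registered arrays; the small epsilon guarding against division by zero is an added assumption):

```python
import numpy as np

def brovey_fusion(ms, high, eps=1e-12):
    """Brovey transform: each band is normalised by the per-pixel sum of all
    bands and multiplied by the high-resolution image. ms: (n_bands, H, W)."""
    ms = ms.astype(np.float64)
    total = ms.sum(axis=0) + eps       # per-pixel band sum
    return ms / total * high.astype(np.float64)
```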

PCA: The most common understanding of the PCA is that it is a data compression technique used to reduce the dimensionality of multidimensional data sets or bands (Richards and Jia 1999). The bands of the PCA data are uncorrelated and are often more interpretable than the source data. The process is easily explained if we consider a two-dimensional histogram which forms an ellipse. When the PCA is performed, the axes of the spectral space are rotated, changing the coordinates of each pixel in spectral space. The new axes are parallel to the axes of the ellipse. The length and direction of the widest transect of the ellipse are calculated using matrix algebra. The transect which corresponds to the major axis of the ellipse is called the first principal component of the data. The direction of the first principal component is the first eigenvector, and its length is the first eigenvalue. A new axis of the spectral space is defined by this first principal component. The second principal component is the widest transect of the ellipse that is perpendicular to the first principal component. As such, the second principal component describes the largest amount of variance in the data that is not already described by the first principal component. In a two-dimensional case, the second principal component corresponds to the minor axis of the ellipse (ERDAS 1999).

In n dimensions, there are n principal components. Each successive principal component is the widest transect of the ellipse that is orthogonal to the previous components in the n-dimensional space, and accounts for a decreasing amount of the variation in the data which is not already accounted for by previous principal components. Although there are n output bands in a PCA, the first few bands account for a high proportion of the variance in the data. Sometimes, useful information can be gathered from the principal component bands with the least variances and these bands can show subtle details in the image that were obscured by higher contrast in the original image (ERDAS 1999).

To compute a principal components transformation, a linear transformation is performed on the data meaning that the coordinates of each pixel in spectral space are recomputed using a linear equation. The result of the transformation is that the axes in n-dimensional spectral space are shifted and rotated to be relative to the axes of the ellipse. To perform the linear transformation, the eigenvectors and eigenvalues of the n principal components must be derived from the covariance matrix, as shown below:

$$E \,\mathrm{Cov}\, E^{T} = D = \begin{pmatrix} D_1 & & 0 \\ & \ddots & \\ 0 & & D_n \end{pmatrix} \tag{7}$$

Where:

E = matrix of eigenvectors

Cov = covariance matrix

T = transposition function

D = diagonal matrix of eigenvalues, in which all non-diagonal elements are zero and the diagonal elements are ordered from greatest to least, so that D1 > D2 > D3 > ... > Dn.
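A minimal numpy sketch of the forward transformation described above (the function name and array layout are illustrative assumptions):

```python
import numpy as np

def principal_components(bands):
    """Forward PCA on a stack of co-registered bands, shape (n_bands, H, W).
    Returns PC images ordered by decreasing eigenvalue, plus loadings."""
    n, h, w = bands.shape
    X = bands.reshape(n, -1).astype(np.float64)
    X -= X.mean(axis=1, keepdims=True)       # centre each band
    cov = np.cov(X)                          # n x n covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigh handles symmetric matrices
    order = np.argsort(eigvals)[::-1]        # greatest to least: D1 > D2 > ...
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    pcs = (eigvecs.T @ X).reshape(n, h, w)   # project pixels onto eigenvectors
    return pcs, eigvals, eigvecs
```

The eigenvalues returned here correspond to the variances of the kind reported in Table 1, and the columns of the eigenvector matrix to the principal component loadings.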

Gram-Schmidt fusion method: The Gram-Schmidt process is a procedure which takes a non-orthogonal set of linearly independent functions and constructs an orthogonal basis over an arbitrary interval with respect to an arbitrary weighting function. In other words, this method creates uncorrelated (or less correlated) components from correlated ones by applying an orthogonalization process (Karathanassi et al. 2008).

In any inner product space, we can choose the basis in which to work. It often simplifies the calculations to work in an orthogonal basis. Let us suppose that K = {v1, v2, ..., vn} is an orthogonal basis for an inner product space V. Then it is a simple matter to express any vector w ∈ V as a linear combination of the vectors in K:

$$w = \frac{\langle w, v_1 \rangle}{\|v_1\|^2}\, v_1 + \frac{\langle w, v_2 \rangle}{\|v_2\|^2}\, v_2 + \cdots + \frac{\langle w, v_n \rangle}{\|v_n\|^2}\, v_n \tag{8}$$

Given an arbitrary basis {u1, u2, ..., un} for an n-dimensional inner product space V, the Gram-Schmidt algorithm constructs an orthogonal basis {v1, v2, ..., vn} for V, and the process can be described as follows:

$$v_1 = u_1 \tag{9a}$$

$$v_2 = u_2 - \mathrm{proj}_{W_1} u_2 = u_2 - \frac{\langle u_2, v_1 \rangle}{\|v_1\|^2}\, v_1 \tag{9b}$$

$$v_3 = u_3 - \mathrm{proj}_{W_2} u_3 = u_3 - \frac{\langle u_3, v_1 \rangle}{\|v_1\|^2}\, v_1 - \frac{\langle u_3, v_2 \rangle}{\|v_2\|^2}\, v_2 \tag{9c}$$

$$v_4 = u_4 - \mathrm{proj}_{W_3} u_4 = u_4 - \frac{\langle u_4, v_1 \rangle}{\|v_1\|^2}\, v_1 - \frac{\langle u_4, v_2 \rangle}{\|v_2\|^2}\, v_2 - \frac{\langle u_4, v_3 \rangle}{\|v_3\|^2}\, v_3 \tag{9d}$$

Where:

W1 = space spanned by v1

proj_{W1} u2 = the orthogonal projection of u2 on v1

W2 = space spanned by v1 and v2

W3 = space spanned by v1, v2 and v3.

This process continues up to vn. The resulting orthogonal set {v1, v2, ..., vn} consists of n linearly independent vectors in V and forms an orthogonal basis for V.

Generally, orthogonalization is important in diverse applications in mathematics and the applied sciences because it can often simplify calculations or computations, for instance by making it possible to do the calculation in a recursive manner.
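For illustration, the classical Gram-Schmidt recursion of equations (9a)-(9d) can be written in a few lines of numpy (a generic sketch, not the pan-sharpening variant implemented in commercial software):

```python
import numpy as np

def gram_schmidt(U):
    """Classical Gram-Schmidt: the linearly independent columns of U are
    turned into an orthogonal basis V, with v_k = u_k minus the projections
    of u_k onto all previously constructed v_j."""
    U = U.astype(np.float64)
    V = np.zeros_like(U)
    for k in range(U.shape[1]):
        v = U[:, k].copy()
        for j in range(k):
            vj = V[:, j]
            v -= (U[:, k] @ vj) / (vj @ vj) * vj  # subtract projection onto v_j
        V[:, k] = v
    return V
```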

Wavelet-based fusion: The wavelet transform decomposes the signal using elementary functions, the wavelets. By using it, an image is decomposed into a set of multi-resolution images with wavelet coefficients. For each level, the coefficients contain the spatial differences between two successive resolution levels. The wavelet transform can be expressed as follows:

$$WT_f(a,b) = \frac{1}{\sqrt{a}} \int_{-\infty}^{+\infty} f(t)\, \psi\!\left(\frac{t-b}{a}\right) dt \tag{10}$$

Where:

a = scale parameter

b = translation parameter.

Practical implementation of the wavelet transform requires discretisation of its translation and scale parameters. In general, a wavelet-based image fusion can be performed either by replacing some wavelet coefficients of the low-resolution image with the corresponding coefficients of the high-resolution image, or by adding high resolution coefficients to the low-resolution data (Pajares and Cruz 2004). In the present study, the 'Wavelet Resolution Merge' tool of Erdas Imagine was used; the algorithm behind this tool uses biorthogonal transforms. The processing steps of the wavelet-based image fusion are as follows (a minimal sketch is given after the list):

  • Decompose a high resolution panchromatic image into a set of low resolution panchromatic images with wavelet coefficients for each level.

  • Replace low resolution panchromatic images with multispectral bands at the same spatial resolution level.

  • Perform a reverse wavelet transform to convert the decomposed and replaced panchromatic set back to the original panchromatic resolution level.
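A generic substitutive scheme of this kind can be sketched with PyWavelets; this is an illustrative stand-in for the proprietary Erdas tool, and it assumes the multispectral band has already been resampled to match the panchromatic approximation at the chosen decomposition level:

```python
import numpy as np
import pywt  # PyWavelets

def wavelet_fusion_band(ms_band, pan, wavelet="bior3.3", level=2):
    """Decompose the panchromatic image, replace its low-resolution
    approximation with the multispectral band, then reconstruct."""
    coeffs = pywt.wavedec2(pan.astype(np.float64), wavelet, level=level)
    # coeffs[0] is the approximation; the detail coefficients stay untouched
    assert ms_band.shape == coeffs[0].shape, "resample the MS band first"
    coeffs[0] = ms_band.astype(np.float64)
    return pywt.waverec2(coeffs, wavelet)
```

Applying the function once per multispectral band yields the fused multispectral image at the panchromatic resolution.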

Ehlers fusion: This is a fusion technique designed to preserve the spectral characteristics of multitemporal and multi-sensor data sets. The fusion is based on an IHS transformation combined with filtering in the Fourier domain, where the IHS transform is used for optimal colour separation. As the spectral characteristics of the multispectral bands are preserved during the fusion process, there is no dependency on the selection or order of bands for the IHS transform (Ehlers 2004).

The IHS method uses three positional parameters: intensity, hue and saturation. Intensity is the overall brightness of the scene, devoid of any colour content. Hue is the dominant wavelength of the light contributing to a colour. Saturation indicates the purity of a colour. In this method, the H and S components contain the spectral information, while the I component represents the spatial information (Pohl and Van Genderen 1998, Ricchetti 2001). The transformation from red, green, blue (RGB) colour space to IHS space is a nonlinear, lossless and reversible process. It is possible to vary each of the IHS components without affecting the others. The transformation is performed by a rotation of axes from the orthogonal RGB system to a new orthogonal IHS system. The equations describing the transformation to IHS (Pellemans et al. 1993) can be written as follows:

$$\begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} \frac{1}{\sqrt{3}} & \sqrt{\frac{2}{3}} & 0 \\ -\sqrt{\frac{2}{3}} & \frac{1}{\sqrt{3}} & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} \frac{1}{\sqrt{2}} & 0 & \frac{1}{\sqrt{2}} \\ 0 & 1 & 0 \\ -\frac{1}{\sqrt{2}} & 0 & \frac{1}{\sqrt{2}} \end{pmatrix} \begin{pmatrix} R \\ G \\ B \end{pmatrix} \tag{11}$$

$$I = \frac{x+y+z}{I_m(H,S)} \tag{12}$$

$$H = \tan^{-1}\!\left(\frac{-z}{x}\right) \tag{13}$$

$$S = \cos^{-1}\!\left(\frac{y}{x+y+z}\right) \Big/ K_m(H) \tag{14}$$

Where:

Im(H,S) = maximum intensity permitted at a given H and co-latitude

Km(H) = maximum co-latitude permitted at a given H.

Unlike the standard approach, the Ehlers fusion is extended to include more than three bands by using multiple IHS transforms until the number of bands is exhausted. A subsequent Fourier transform of the intensity component and the panchromatic image allows an adaptive filter design in the frequency domain.

By the use of fast Fourier transform (FFT) techniques, the spatial components to be enhanced or suppressed can be accessed directly. The intensity spectrum is filtered with a low pass (LP) filter, whereas the panchromatic spectrum is filtered with an inverse high pass (HP) filter. After filtering, the images are transformed back into the spatial domain with an inverse FFT and added together to form a fused intensity component that combines the low-frequency information of the low resolution multispectral image with the high-frequency information of the panchromatic image.

Figure 3.

Steps to implement the Ehlers fusion.

This new intensity component and the original hue and saturation components of the multispectral image form a new IHS image. As the last step, an inverse IHS transformation produces a fused RGB image (Ehlers et al. 2008). This procedure is illustrated in Figure 3.
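The central Fourier-domain step can be sketched as follows with numpy; the ideal circular masks and the cutoff value are simplifying assumptions, whereas the published method designs the filters adaptively:

```python
import numpy as np

def fft_intensity_merge(intensity, pan, cutoff=0.15):
    """Low-pass the intensity component, high-pass the panchromatic image,
    and add the results to form the new fused intensity component.
    cutoff is expressed as a fraction of the sampling frequency."""
    fy = np.fft.fftfreq(intensity.shape[0])[:, None]
    fx = np.fft.fftfreq(intensity.shape[1])[None, :]
    radius = np.sqrt(fx**2 + fy**2)
    lp = radius <= cutoff                              # ideal low-pass mask
    hp = ~lp                                           # complementary high-pass
    low = np.fft.ifft2(np.fft.fft2(intensity) * lp).real
    high = np.fft.ifft2(np.fft.fft2(pan) * hp).real
    return low + high
```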

4.1. Comparison of the fusion methods using the SAR/SAR approach

Generally, interpretation of microwave data is based on the backscatter properties of the surface features, and most SAR image analyses are based on them. The backscatter characteristics of the five available classes are described below. In the case of the two urban classes (i.e., built-up and ger areas), at both L-band and C-band frequencies the backscatter contains information about street alignment, building size, density, roofing material and its orientation, vegetation and soil, that is, it contains all kinds of scattering. Roads and buildings reflect a larger component of the radiation if they are aligned at right angles to the incident radiation. Here, the intersection of a road and a building tends to act as a corner reflector. The amount of backscatter is very sensitive to street alignment: areas of streets and buildings aligned at right angles to the incident radiation will appear very bright, and non-aligned areas darker, in the resulting image. Volume and surface scattering also play an important role in the response from urban areas. Therefore, these classes will have higher backscatter return, resulting in bright appearances on the images.

In the study site, the green area consists of some forest and vegetated surface. In the case of forest, at L-band frequency the wavelength will penetrate into the forest canopy and will cause volume scattering derived from multiple-path reflections among twigs, branches, trunks and the ground, while at C-band frequency only volume scattering from the top layer can be expected, because the wavelength is too short to penetrate into the forest layer. The vegetated surface acts as a mixture of small bush, grass and soil, and the backscatter depends on the volume of each of them. Plant geometry, density and water content are also main factors influencing the backscatter coming from the vegetation cover. As a result, green areas will have a brighter appearance on the image. The backscatter of soil depends on the surface roughness, texture, existing surface patterns and moisture content, as well as on the wavelength and incidence angle. The presence of water strongly affects the microwave emissivity and reflectivity of a soil layer. At low moisture levels, the dielectric constant increases only slightly; above a critical amount of moisture, it rises rapidly. This increase occurs when moisture begins to behave as free water, and the capacity of a soil to hold and retain moisture is directly related to its texture and structure. Thus, soil will have a bright appearance if it is wet and a dark appearance if it is dry. Water should have the lowest backscatter values and a dark appearance at both frequencies, because its specular reflection directs little energy back towards the radar antenna.

Figure 4.

Comparison of the fused images of ALOS PALSAR and ERS-2 SAR: (a) the image obtained by the multiplicative method; (b) Brovey transformed image; (c) PC image (red=PC1, green=PC2, blue=PC3); (d) the image obtained by Gram-Schmidt fusion; (e) the image obtained by wavelet-based fusion; (f) the image obtained by Ehlers fusion.

As can be seen from figure 4, the images created by the multiplicative method, Brovey transform and Gram-Schmidt fusion have very similar appearances. On these images, the built-up and ger areas have either similar (figure 4b) or mixed appearances (figure 4a, d). The green area has a similar appearance to the built-up area. This means that the backscatter from the double bounce effect in the built-up area is similar in power to the volume and diffuse scattering from the green area. Moreover, it is seen that on all images (except the PC image) the soil and water classes have dark appearances because of their specular reflection (though in some areas wet soil has increased brightness). As the original bands have been transformed to new principal components, it is not easy to recognize the available classes on the image created by the PCA (figure 4c). On the PC image, the two urban classes, some roads aligned at right angles to the radar antenna, and some areas affected by radar layover have magenta-reddish appearances, while the other classes form different mixed classes. On the image created by the wavelet-based fusion (figure 4e), it is not possible to distinguish much detail: the two urban classes and the green area, as well as the soil and water classes, have similar appearances. Furthermore, the image created by the Ehlers fusion (figure 4f) looks similar to the image created by the Gram-Schmidt fusion, but has a lighter appearance. Overall, it is seen that the fused SAR images cannot properly distinguish the available spectral classes.

4.2. Comparison of the fusion methods using the optical/SAR approach

Initially, the above-mentioned fusion methods were applied to such combinations as ASTER with the HH, HV and VV polarization components of PALSAR, as well as ASTER with ERS-2 SAR. Then, to obtain good colour images that illustrate the spectral and spatial variations of the classes of objects, the fused images were visually compared. In the case of the multiplicative method, the fused image of ASTER and PALSAR HH polarization (figure 5a) demonstrated a better result than the other combinations, while in the case of the Brovey transform the combination of ASTER and ERS-2 SAR (figure 5b) created a good image. On the image obtained by the multiplicative method, the built-up and ger areas have similar appearances; however, the green area, soil and water classes are completely separated. Likewise, on the image obtained by the Brovey transform, the built-up and ger areas have similar appearances, whereas the green area and soil classes are completely separated. Moreover, on this image, a part of the water class is mixed with other classes.

PCA was applied to such combinations as ASTER and ERS-2 SAR; ASTER and PALSAR; and ASTER, PALSAR and ERS-2 SAR. When the results of the PCA were compared, the combination of ASTER, PALSAR and ERS-2 SAR demonstrated a better result than the other two combinations. The result of the final PCA is shown in table 1. As can be seen from table 1, PALSAR HH polarisation and ERS-2 SAR have very high negative loadings in PC1 and PC2. In these PCs, the visible bands of ASTER also have moderate to high loadings.

This means that PC1 and PC2 contain the characteristics of both optical and SAR images. Although PC3 contained 7.0% of the overall variance and had moderate to high loadings of ASTER band 1, PALSAR HH polarisation and ERS-2 SAR, visual inspection revealed that it contained less information related to the selected classes. However, visual inspection of PC4, which contained 5.6% of the overall variance and in which the VV polarisation of PALSAR has a high loading, revealed that this feature contained useful information related to the textural difference between the built-up and ger areas. The inspection of the last PCs indicated that they contained noise from the total data set. The image obtained by the PCA is shown in figure 5c. As can be seen from figure 5c, although the PC image could separate the two urban classes, in some parts of the image it created a mixed class of green area and soil.

Figure 5.

Comparison of the fused optical and SAR images: (a) the image obtained by the multiplicative method (ASTER and PALSAR-HH); (b) Brovey transformed image (ASTER and ERS-2 SAR); (c) PC image (red=PC1, green=PC2, blue=PC3); (d) the image obtained by Gram-Schmidt fusion (ASTER and ERS-2 SAR); (e) the image obtained by wavelet-based fusion (ASTER and ERS-2 SAR); (f) the image obtained by Ehlers fusion (ASTER and PALSAR-VV).

In the case of the Gram-Schmidt fusion, the integrated image of ASTER and ERS-2 SAR (figure 5d) demonstrated a better result than the other combinations. Although the image contained some layover effects present in the ERS-2 image, it looked very similar to the image obtained by the multiplicative method. In the case of the wavelet-based fusion, the fused image of ASTER and ERS-2 SAR (figure 5e) also demonstrated a better result than the other combinations. Moreover, this image looked better than any of the images obtained by the other fusion methods: all five available classes could be distinguished by their spectral properties, and some textural information was added for differentiation between the built-up and ger areas. In the case of the Ehlers fusion, the integrated image of ASTER and PALSAR VV polarization (figure 5f) demonstrated a better result than the other combinations. Although this image had a blurred appearance due to speckle noise, it could still separate the green area, soil and water classes very well. Figure 5 shows the comparison of the images obtained by the different fusion methods.

| | PC1 | PC2 | PC3 | PC4 | PC5 | PC6 | PC7 |
|---|---|---|---|---|---|---|---|
| ASTER band 1 | 0.33 | 0.44 | 0.42 | 0.35 | 0.44 | 0.39 | 0.17 |
| ASTER band 2 | 0.50 | 0.37 | 0.34 | -0.34 | -0.38 | -0.33 | -0.32 |
| ASTER band 3 | 0.02 | 0.07 | 0.11 | -0.09 | -0.32 | -0.19 | 0.91 |
| PALSAR HH | -0.77 | 0.34 | 0.47 | -0.14 | 0.06 | -0.15 | -0.08 |
| PALSAR HV | 0.14 | -0.07 | -0.06 | -0.49 | 0.73 | -0.40 | 0.13 |
| PALSAR VV | 0.02 | -0.01 | 0.01 | 0.69 | 0.08 | -0.71 | -0.04 |
| ERS-2 SAR | 0.07 | -0.73 | 0.67 | 0.01 | -0.01 | 0.02 | -0.01 |
| Eigenvalues | 8873.3 | 4896.7 | 1159.7 | 934.6 | 459.2 | 147.7 | 81.7 |
| Variance (%) | 53.6 | 29.6 | 7.0 | 5.6 | 2.8 | 0.89 | 0.51 |

Table 1.

Principal component coefficients from ASTER, PALSAR and ERS-2 SAR images.


5. Evaluation of features and urban land cover classification

5.1. Evaluation of features using supervised classification

Initially, in order to define the sites for the training signature selection, two to four areas of interest (AOI) representing each of the selected five classes (built-up area, ger area, green area, soil and water) were selected from the multisensor images through thorough analysis using a polygon-based approach. The separability of the training signatures was first checked in feature space and then evaluated using the transformed divergence (TD) separability measure (table 2). The values of the TD separability measure range from 0 to 2000 and indicate how well the selected pairs are statistically separable; values greater than 1900 indicate that the pairs have good separability (ERDAS 1999, ENVI 1999). After the investigation, the samples that demonstrated the greatest separability were chosen to form the final signatures. The final signatures included 2669 pixels for built-up area, 592 pixels for ger area, 241 pixels for green area, 1984 pixels for soil and 123 pixels for water.
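For reference, the TD measure between two Gaussian class signatures can be computed as follows (a sketch using the standard divergence formula and the 0..2000 scaling; variable names are illustrative):

```python
import numpy as np

def transformed_divergence(m1, C1, m2, C2):
    """Transformed divergence between two class signatures, each given by a
    mean vector m and covariance matrix C, scaled to the 0..2000 range."""
    C1i, C2i = np.linalg.inv(C1), np.linalg.inv(C2)
    dm = (m1 - m2).reshape(-1, 1)
    # Divergence: covariance term plus mean-separation term
    div = 0.5 * np.trace((C1 - C2) @ (C2i - C1i)) \
        + 0.5 * np.trace((C1i + C2i) @ (dm @ dm.T))
    return 2000.0 * (1.0 - np.exp(-div / 8.0))
```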

In general, urban areas are complex and diverse in nature; many features have similar spectral characteristics, and it is not easy to separate them by the use of ordinary feature combinations. For the successful extraction of the urban land cover classes, reliable features derived from different sources should be used. In many cases, texture features derived from occurrence and co-occurrence measures are used as additional reliable sources (Amarsaikhan et al. 2010). However, in the present study, the main objective was to evaluate the features obtained by the use of the different fusion approaches. Therefore, the following feature combinations were used for the classification:

  1. The features obtained by the use of the multiplicative method using SAR/SAR approach

  2. The features obtained by the use of the multiplicative method using optical/SAR approach

  3. The features obtained by the use of the Brovey transform using SAR/SAR approach

  4. The features obtained by the use of the Brovey transform using optical/SAR approach

  5. The PC1, PC2 and PC3 of the PCA obtained using SAR/SAR approach

  6. The PC1, PC2 and PC4 of the PCA obtained using optical/SAR approach

  7. The features obtained by the use of the Gram-Schmidt fusion using SAR/SAR approach

  8. The features obtained by the use of the Gram-Schmidt fusion using optical/SAR approach

  9. The features obtained by the use of the wavelet-based fusion using SAR/SAR approach

  10. The features obtained by the use of the wavelet-based fusion using optical/SAR approach

  11. The features obtained by the use of the Ehlers fusion using SAR/SAR approach

  12. The features obtained by the use of the Ehlers fusion using optical/SAR approach

  13. The combined features of ASTER and PALSAR

  14. The combined features of ASTER and ERS-2 SAR

  15. The combined features of ASTER, PALSAR and ERS-2 SAR.

| | Built-up area | Ger area | Green area | Soil | Water |
|---|---|---|---|---|---|
| Built-up area | 0.000 | 787 | 1987 | 844 | 2000 |
| Ger area | 787 | 0.000 | 1999 | 1706 | 2000 |
| Green area | 1987 | 1999 | 0.000 | 1903 | 2000 |
| Soil | 844 | 1706 | 1903 | 0.000 | 2000 |
| Water | 2000 | 2000 | 2000 | 2000 | 0.000 |

Table 2.

Class pair separabilities measured by the TD separability measure.

For the actual classification, a supervised statistical maximum likelihood classification (MLC) was used, assuming that the training samples have a Gaussian distribution (Richards and Jia 1999). The final classified images are shown in figure 6(1-15). As seen from figure 6(1-15), the SAR/SAR approach gives the worst classification results, because there are high overlaps among the built-up area, ger area, soil and green area classes. However, these overlaps decrease on the other images, for whose classification optical as well as SAR bands were used. As could be seen from the overall classification results (table 3), although the combined use of optical and microwave data sets produced a better result than the single-source image, it is still very difficult to obtain a reliable land cover map by the use of the standard technique, specifically on the decision boundaries of the statistically overlapping classes.
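A compact numpy sketch of the Gaussian MLC decision rule (equal priors assumed; the class statistics are estimated from training signatures such as those described above):

```python
import numpy as np

def mlc_classify(pixels, means, covs):
    """Assign each pixel vector to the class with the highest Gaussian
    log-likelihood. pixels: (n_pixels, n_bands); means[k]: (n_bands,);
    covs[k]: (n_bands, n_bands) covariance matrix of class k."""
    scores = []
    for m, C in zip(means, covs):
        Ci = np.linalg.inv(C)
        _, logdet = np.linalg.slogdet(C)
        d = pixels - m
        maha = np.einsum("ij,jk,ik->i", d, Ci, d)  # quadratic form per pixel
        # Log-likelihood up to a constant: -0.5*ln|C| - 0.5*(x-m)^T C^-1 (x-m)
        scores.append(-0.5 * (logdet + maha))
    return np.argmax(np.stack(scores), axis=0)     # class index per pixel
```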

For the accuracy assessment of the classification results, the overall performance has been used. This approach creates a confusion matrix in which reference pixels are compared with the classified pixels, and an accuracy report is generated indicating the percentage of overall accuracy (ERDAS 1999). As ground truth information, different AOIs containing 12578 of the purest pixels were selected.

Figure 6.

Comparison of the MLC results for the selected classes (cyan-built-up area; dark cyan-ger area; green-green area; sienna-soil; blue-water). Classified images of: 1-multiplicative method using SAR/SAR approach, 2-multiplicative method using optical/SAR approach, 3-Brovey transform using SAR/SAR approach, 4-Brovey transform using optical/SAR approach, 5-PCA using SAR/SAR approach, 6-PCA using optical/SAR approach, 7-Gram-Schmidt fusion using SAR/SAR approach, 8-Gram-Schmidt fusion using optical/SAR approach, 9-wavelet-based fusion using SAR/SAR approach, 10-wavelet-based fusion using optical/SAR approach, 11-Ehlers fusion using SAR/SAR approach, 12-Ehlers fusion using optical/SAR approach, 13-features of ASTER and PALSAR, 14-features of ASTER and ERS-2 SAR, 15-features of ASTER, PALSAR and ERS-2 SAR.

The AOIs were selected on the principle that more pixels should be chosen for the evaluation of the larger classes, such as built-up area and ger area, than for the smaller classes, such as green area and water. The overall classification accuracies for the selected classes are shown in table 3.

| The bands (features) used for the MLC | Overall accuracy (%) |
|---|---|
| Multiplicative method using SAR/SAR approach | 46.12 |
| Multiplicative method using optical/SAR approach | 78.17 |
| Brovey transform using SAR/SAR approach | 41.57 |
| Brovey transform using optical/SAR approach | 74.34 |
| PCA using SAR/SAR approach | 71.83 |
| PCA using optical/SAR approach | 81.92 |
| Gram-Schmidt fusion using SAR/SAR approach | 40.86 |
| Gram-Schmidt fusion using optical/SAR approach | 74.08 |
| Wavelet-based fusion using SAR/SAR approach | 65.78 |
| Wavelet-based fusion using optical/SAR approach | 76.26 |
| Ehlers fusion using SAR/SAR approach | 51.72 |
| Ehlers fusion using optical/SAR approach | 60.08 |
| ASTER and PALSAR | 79.98 |
| ASTER and ERS-2 SAR | 78.43 |
| ASTER, PALSAR and ERS-2 SAR | 80.12 |

Table 3.

The overall classification accuracy of the classified images.

5.2. Knowledge-based classification

Over the years, knowledge-based techniques have been widely used for the classification of different RS images. The knowledge in image classification can be represented in different forms depending on the type of knowledge and the necessity of its usage. The most commonly used techniques for knowledge representation are a rule-based approach and neural network classification (Amarsaikhan and Douglas 2004). In the present study, a rule-based algorithm was constructed for the separation of the statistically overlapping classes. A rule-based approach uses a hierarchy of rules, or a decision tree, describing the conditions under which a set of low-level primary objects becomes abstracted into a set of high-level object classes. The primary objects contain the user-defined variables and include geographical objects represented in different structures, external programmes, scalars and spatial models (ERDAS 1999).

The constructed rule-based algorithm consists of two main hierarchies. In the upper hierarchy, on the basis of knowledge about the reflecting and backscattering characteristics of the selected five classes, a set of rules was constructed containing the initial image classification procedure, based on the Mahalanobis distance decision rule, and constraints in the form of spatial thresholds. The Mahalanobis distance decision rule can be written as follows:

$$MD_k = (x_i - m_k)^{T}\, V_k^{-1}\, (x_i - m_k) \tag{15}$$

Where:

xi = vector representing the pixel

mk = sample mean vector for class k

Vk = sample variance-covariance matrix of the given class.

It is clear that a spectral classifier will be ineffective if applied to statistically overlapping classes such as the built-up and ger areas, because they have very similar spectral characteristics in both the optical and microwave ranges. For such spectrally mixed classes, classification accuracies can be improved if the spatial properties of the classes of objects are incorporated into the classification criteria. The spatial thresholds can be determined on the basis of historical thematic spatial data sets or from local knowledge about the site. In this study, the spatial thresholds were defined based on local knowledge about the test area.
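A simplified sketch of such a two-level rule is given below; the class statistics, the boolean mask standing in for the spatial thresholds, and the "unknown" deferral are illustrative assumptions rather than the exact rule base used in the study:

```python
import numpy as np

def mahalanobis_distance(x, mean, cov_inv):
    """MD_k = (x - m_k)^T V_k^-1 (x - m_k) for one pixel vector, eq. (15)."""
    d = x - mean
    return d @ cov_inv @ d

def rule_based_label(x, row_col, stats, urban_mask):
    """Upper hierarchy: minimum Mahalanobis distance classification, then a
    spatial constraint on the spectrally overlapping urban classes.
    stats: {class_name: (mean_vector, inverse_covariance)}."""
    dists = {k: mahalanobis_distance(x, m, ci) for k, (m, ci) in stats.items()}
    label = min(dists, key=dists.get)
    if label in ("built-up", "ger") and not urban_mask[row_col]:
        label = "unknown"  # outside the spatial threshold: defer to lower-hierarchy rules
    return label
```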

Figure 7.

Classification result obtained by the knowledge-based classification (cyan-built-up area; dark cyan-ger area; green-green area; sienna-soil; blue-water).

In the initial image classification, the PC1, PC2 and PC4 of the PCA obtained using the optical/SAR approach were used, and the spatial thresholds were applied to separate the statistically overlapping classes. The pixels falling outside of the spatial thresholds were temporarily identified as unknown classes and were further classified using rules in which other spatial thresholds were applied. As can be seen from the pre-classification analysis, there are different statistical overlaps among the classes, but significant overlaps exist among the built-up area, ger area and soil classes. In the lower hierarchy of the rule base, different rules for the separation of these overlapping classes were constructed using spatial thresholds. The image classified by the constructed method is shown in Figure 7.

Figure 8.

The flowchart of the constructed knowledge-based classification.

For the accuracy assessment of the classification result, the overall performance has been used, taking the same number of sample points as in the previous classifications. The confusion matrix produced for the knowledge-based classification showed an overall accuracy of 90.92%. In order to allow an evaluation of the class-by-class results, the confusion matrices of the knowledge-based classification and of the best supervised classification (ASTER, PALSAR and ERS-2 SAR) are given in table 4a,b. As can be seen from figure 7 and table 4a,b, the result of the classification using the rule-based method is much better than the result of the standard method. The flowchart of the constructed rule-based classification procedure is shown in Figure 8.

| Classified data \ Reference data | Built-up area | Ger area | Green area | Soil | Water |
|---|---|---|---|---|---|
| Built-up area | 5212 | 0 | 0 | 267 | 0 |
| Ger area | 187 | 1305 | 0 | 77 | 0 |
| Green area | 18 | 0 | 911 | 61 | 0 |
| Soil | 297 | 98 | 126 | 3771 | 0 |
| Water | 0 | 0 | 0 | 11 | 237 |
| Total | 5714 | 1403 | 1037 | 4187 | 237 |

Overall accuracy = 90.92% (11436/12578)

Table 4. (a)

| Classified data \ Reference data | Built-up area | Ger area | Green area | Soil | Water |
|---|---|---|---|---|---|
| Built-up area | 4902 | 407 | 18 | 980 | 28 |
| Ger area | 567 | 980 | 19 | 52 | 0 |
| Green area | 98 | 16 | 868 | 0 | 0 |
| Soil | 109 | 0 | 132 | 3136 | 17 |
| Water | 38 | 0 | 0 | 19 | 192 |
| Total | 5714 | 1403 | 1037 | 4187 | 237 |

Overall accuracy = 80.12% (10078/12578)

Table 4. (b)

Table 4. Comparison of the detailed classification accuracies: confusion matrices for (a) the knowledge-based classification and (b) the best supervised classification (ASTER, PALSAR and ERS-2 SAR).


6. Update of urban GIS

In general, a GIS can be considered a spatial decision-making tool. For any decision-making, GIS systems use digital spatial information, for which various data creation methods are used. The most commonly used method of data creation is digitization, where hard copy maps or survey plans are transferred into digital formats through the use of special software programs and spatial-referencing capabilities. With the emergence of modern ortho-rectified images acquired from both space and air platforms, heads-up digitizing is becoming the main approach through which positional data are extracted (Amarsaikhan and Ganzorig 2010). Compared to the traditional method of tracing, heads-up digitizing involves the tracing of spatial data directly on top of the acquired imagery. Thus, due to the rapid development of science and technology, primary spatial data acquisition within a GIS is becoming more and more sophisticated.

Current GISs allow users and decision-makers to view, understand, question, interpret, analyze and visualize data sets in many different ways. The power of GIS systems comes from the ability to relate different information in a spatial context and to reach a conclusion about this relationship. Most of the information we have about our world contains a spatial reference, placing that information at some point on the Earth's surface. For example, when information about urban commercial buildings is collected, it is important to know where the buildings are located. This can be done by applying a spatial reference system that uses a special coordinate system. By comparing that information with other information, such as the location of the main infrastructure, one can evaluate the market values of the buildings. In this case, a GIS helps to reveal important new information that leads to better decision-making.

Figure 9.

The digitized map, created from a topographic map of 1984 (cyan-built-up area; dark cyan-ger area; green-green area; sienna-soil; blue-water).

At present, GISs are being widely used for urban planning and management. For efficient decision-making, one needs accurate and updated spatial information. In an urban context, spatial information can be collected from a number of sources such as city planning maps, topographic maps, digital cartography, thematic maps, the global positioning system, aerial photography and space RS. Of these, only RS can provide real-time information that can be used for real-time spatial analysis. Over the past few years, RS techniques and technologies, including system capabilities, have been significantly improved. Meanwhile, the costs of primary RS data sets have drastically decreased (Amarsaikhan et al. 2009b). This means that it is possible to extract different thematic information from RS images in a cost-effective way and to update different layers within a GIS.

Figure 10.

A diagram for the update of an urban GIS via processing of multisource RS images.

In the present study, it is assumed that there is an operational urban GIS that stores historical thematic layers and that a land cover layer needs to be updated. The current land cover layer was created from an existing topographic map of 1984, and the ArcGIS system was used for its digitizing. The digitized map is shown in Figure 9. As the overall classification accuracy of the classified multisource images exceeds 90%, the result can be used directly to update the land cover layer of the operational GIS. To this end, the raster thematic map (i.e., the classified image) extracted from the multisource RS data sets should be converted into a vector structure. After error cleaning and editing, the converted vector layer can be topologically structured and stored within the urban GIS. By comparing the land cover layers created from the topographic map and from the classified RS images, one can see what changes have occurred. A diagram for the update of a land cover layer of an urban GIS via processing of multisource RS images is shown in Figure 10.
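The raster-to-vector conversion step can be sketched with GDAL/OGR; the file names and attribute field are illustrative assumptions, and the polygonized layer would still need the error cleaning, editing and topological structuring described above:

```python
from osgeo import gdal, ogr

def classified_raster_to_vector(raster_path, out_path, field="class_id"):
    """Convert a classified (thematic) raster into polygons so that the
    result can be edited and loaded into the urban GIS as a vector layer."""
    src = gdal.Open(raster_path)
    band = src.GetRasterBand(1)
    drv = ogr.GetDriverByName("ESRI Shapefile")
    dst = drv.CreateDataSource(out_path)
    layer = dst.CreateLayer("landcover", srs=None, geom_type=ogr.wkbPolygon)
    layer.CreateField(ogr.FieldDefn(field, ogr.OFTInteger))
    # Each connected region of equal pixel value becomes one polygon,
    # with the class code written into the attribute field
    gdal.Polygonize(band, None, layer, 0, [])
    dst = None  # close the data source to flush features to disk
```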


7. Conclusions

The main purpose of the research was to compare the performances of different data fusion techniques for the enhancement of different surface features and to evaluate the features obtained by the fusion techniques in terms of the separation of urban land cover classes. For the data fusion, two different approaches were considered: fusion of SAR data with SAR data and fusion of optical data with SAR data. As the fusion techniques, the multiplicative method, Brovey transform, PCA, Gram-Schmidt fusion, wavelet-based fusion and Ehlers fusion were applied. In the case of the SAR/SAR approach, the fused SAR images could not properly distinguish the available spectral classes. In the case of the optical/SAR approach, although the fusion methods demonstrated different results, detailed analysis of each image revealed that the wavelet-based fusion gave a superior image in terms of the spatial and spectral separation among different urban features. For the classification of the fused images, the statistical MLC and a knowledge-based method were used and the results were compared. As could be seen from the classification results, the performance of the knowledge-based technique was much better than that of the standard method, and the output could be directly used to update an urban GIS. Overall, the research indicated that multisource information can significantly improve the interpretation and classification of land cover classes, and that the knowledge-based method is a powerful tool for the production of a reliable land cover map.

References

  1. Abidi, M.A. and Gonzalez, R.C. (1992). Data Fusion in Robotics and Machine Intelligence (New York: Academic Press).
  2. Amarsaikhan, D., Ganzorig, M., Batbayar, G., Narangerel, D. and Tumentsetseg, Sh. (2004). An integrated approach of optical and SAR images for forest change study, Asian Journal of Geoinformatics, 3, 27-33.
  3. Amarsaikhan, D. and Douglas, T. (2004). Data fusion and multisource data classification, International Journal of Remote Sensing, 17, 3529-3539.
  4. Amarsaikhan, D., Ganzorig, M., Ache, P. and Blotevogel, H.H. (2007). The integrated use of optical and InSAR data for urban land cover mapping, International Journal of Remote Sensing, 28, 1161-1171.
  5. Amarsaikhan, D., Blotevogel, H.H., Ganzorig, M. and Moon, T.H. (2009a). Applications of remote sensing and geographic information systems for urban land-cover change studies, Geocarto International, 24, 257-271.
  6. Amarsaikhan, D., Ganzorig, M., Blotevogel, H.H., Nergui, B. and Gantuya, R. (2009b). Integrated method to extract information from high and very high resolution RS images for urban planning, Journal of Geography and Regional Planning, 2(10), 258-267.
  7. Amarsaikhan, D., Blotevogel, H.H., Van Genderen, J.L., Ganzorig, M., Gantuya, R. and Nergui, B. (2010). Fusing high resolution TerraSAR and Quickbird images for urban land cover study in Mongolia, International Journal of Image and Data Fusion, 1, 83-97.
  8. Amarsaikhan, D. and Ganzorig, M. (2010). Principles of GIS for Natural Resources Management, 2nd edn (Ulaanbaatar: Academic Press).
  9. Baghdadi, N., King, N., Bourguignon, A. and Remond, A. (2002). Potential of ERS and Radarsat data for surface roughness monitoring over bare agricultural fields: application to catchments in Northern France, International Journal of Remote Sensing, 23, 3427-3442.
  10. Benediktsson, J.A., Sveinsson, J.R., Atkinson, P.M. and Tatnali, A. (1997). Feature extraction for multisource data classification with artificial neural networks, International Journal of Remote Sensing, 18, 727-740.
  11. Cao, X., Chen, J., Imura, H. and Higashi, O. (2009). A SVM-based method to extract urban areas from DMSP-OLS and SPOT VGT data, Remote Sensing of Environment, 10, 2205-2209.
  12. Colditz, R.R., Wehrmann, T., Bachmann, M., Steinnocher, K., Schmidt, M., Strunz, G. and Dech, S. (2006). Influence of image fusion approaches on classification accuracy: a case study, International Journal of Remote Sensing, 27, 3311-3335.
  13. Costa, M. (2005). Estimate of net primary productivity of aquatic vegetation of the Amazon floodplain using Radarsat and JERS-1, International Journal of Remote Sensing, 26, 4527-4536.
  14. Deng, J.S., Wang, K., Deng, K.D. and Qi, G.J. (2008). PCA-based land-use change detection and analysis using multitemporal and multisensor satellite data, International Journal of Remote Sensing, 29, 4823-4838.
  15. Ehlers, M. (2004). Spectral characteristics preserving image fusion based on Fourier domain filtering. Remote Sensing for Environmental Monitoring, GIS Applications, and Geology IV, Proceedings of SPIE, 93-116.
  16. Ehlers, M., Klonus, S. and Åstrand, P.J. (2008). Quality assessment for multi-sensor multi-date image fusion, CD-ROM Proceedings of ISPRS Congresses, Beijing, China, July 3-11.
  17. ENVI (1999). User's Guide (Research Systems).
  18. Erbek, F.S., Ozkan, C. and Taberner, M. (2004). Comparison of maximum likelihood classification method with supervised artificial neural network algorithms for land use activities, International Journal of Remote Sensing, 25, 1733-1748.
  19. ERDAS (1999). Field Guide, 5th edn (Atlanta, Georgia: ERDAS, Inc.).
  20. Franklin, S.E., Peddle, D.R., Dechka, J.A. and Stenhouse, G.B. (2002). Evidential reasoning with Landsat TM, DEM and GIS data for landcover classification in support of grizzly bear habitat mapping, International Journal of Remote Sensing, 23, 4633-4652.
  21. Gonzalez, A.M., Saleta, J.L., Catalan, R.G. and Garcia, R. (2004). Fusion of multispectral and panchromatic images using improved IHS and PCA mergers based on wavelet decomposition, IEEE Transactions on Geoscience and Remote Sensing, 6, 1291-1299.
  22. Hegarat-Mascle, S.L., Quesney, A., Vidal-Madjar, D., Taconet, O., Normand, M. and Loumagne, M. (2000). Land cover discrimination from multitemporal ERS images and multispectral Landsat images: a study case in an agricultural area in France, International Journal of Remote Sensing, 21, 435-456.
  23. Herold, N.D. and Haack, B.N. (2002). Fusion of radar and optical data for land cover mapping, Geocarto International, 17, 21-30.
  24. Karathanassi, V., Kolokousis, P. and Ioannidou, S. (2007). A comparison study on fusion methods using evaluation indicators, International Journal of Remote Sensing, 28, 2309-2341.
  25. Li, Z. and Leung, H. (2009). Fusion of multispectral and panchromatic images using a restoration-based method, IEEE Transactions on Geoscience and Remote Sensing, 5, 1482-1491.
  26. Mascarenhas, N.D.A., Banon, G.J.F. and Candeias, A.L.B. (1996). Multispectral image data fusion under a Bayesian approach, International Journal of Remote Sensing, 17, 1457-1471.
  27. Mather, P.M. (1999). Computer Processing of Remotely-sensed Images: an Introduction, 2nd edn (Chichester: John Wiley & Sons).
  28. Meher, S.K., Shankar, B.U. and Ghosh, A. (2007). Wavelet-feature-based classifiers for multispectral remote-sensing images, IEEE Transactions on Geoscience and Remote Sensing, 6, 1881-1886.
  29. Munechika, C.K., Warnick, J.S., Salvaggio, C. and Schott, J.R. (1993). Resolution enhancement of multispectral image data to improve classification accuracy, Photogrammetric Engineering and Remote Sensing, 59, 67-72.
  30. Pajares, G. and Cruz, J.M. (2004). A wavelet-based image fusion, Pattern Recognition, 1855-1872.
  31. Palubinskas, G. and Datcu, M. (2008). Information fusion approach for the data classification: an example for ERS-1/2 InSAR data, International Journal of Remote Sensing, 29, 4689-4703.
  32. Pellemans, A.H., Jordans, R.W. and Allewijn, R. (1993). Merging multispectral and panchromatic SPOT images with respect to the radiometric properties of the sensor, Photogrammetric Engineering and Remote Sensing, 59, 81-87.
  33. Pohl, C. and Van Genderen, J.L. (1998). Multisensor image fusion in remote sensing: concepts, methods and applications, International Journal of Remote Sensing, 19, 823-854.
  34. Ricchetti, E. (2001). Visible-infrared and radar imagery fusion for geological application: a new approach using DEM and sun-illumination model, International Journal of Remote Sensing, 22, 2219-2230.
  35. Richards, J.A. and Jia, S. (1999). Remote Sensing Digital Image Analysis: An Introduction, 3rd edn (Berlin: Springer-Verlag).
  36. Saadi, N.M. and Watanabe, K. (2009). Assessing image processing techniques for geological mapping: a case study in Eljufra, Libya, Geocarto International, 24, 241-253.
  37. Saraf, A.K. (1999). IRS-1C-LISS-III and PAN data fusion: an approach to improve remote sensing based mapping techniques, International Journal of Remote Sensing, 20, 90-96.
  38. Seetha, M., Malleswari, B.L., Muralikrishna, I.V. and Deekshatulu, B.L. (2007). Image fusion: a performance assessment, Journal of Geomatics, 1, 33-39.
  39. Serkan, M., Musaoglu, N., Kirkici, H. and Ormeci, C. (2008). Edge and fine detail preservation in SAR images through speckle reduction with an adaptive mean filter, International Journal of Remote Sensing, 29, 6727-6738.
  40. Serpico, S.B. and Roli, F. (1995). Classification of multisensor remote sensing images by structural neural networks, IEEE Transactions on Geoscience and Remote Sensing, 33, 562-578.
  41. Soh, L.K. and Tsatsoulis, C. (1999). Unsupervised segmentation of ERS and Radarsat sea ice images using multiresolution peak detection and aggregated population equalization, International Journal of Remote Sensing, 20, 3087-3109.
  42. Solberg, A.H.S., Taxt, T. and Jain, A.K. (1996). A Markov random field model for classification of multisource satellite imagery, IEEE Transactions on Geoscience and Remote Sensing, 34, 100-112.
  43. Storvik, G., Fjortoft, R. and Solberg, A.H.S. (2005). A Bayesian approach to classification of multiresolution remote sensing data, IEEE Transactions on Geoscience and Remote Sensing, 3, 539-547.
  44. Teggi, S., Cecchi, R. and Serafini, R. (2003). TM and IRS-1C-PAN data fusion using multiresolution decomposition methods based on the 'a trous' algorithm, International Journal of Remote Sensing, 24, 1287-1301.
  45. Teoh, C.C., Mansor, S.B., Mispan, M.R., Mohamed-Shariff, A.R. and Ahmad, N. (2001). Extraction of infrastructure details from fused image. Geoscience and Remote Sensing Symposium, IGARSS '01, IEEE 2001 International, 3, July 9-13 2001, 1490-1492.
  46. Verbyla, D.L. (2001). A test of detecting spring leaf flush within the Alaskan boreal forest using ERS-2 and Radarsat SAR data, International Journal of Remote Sensing, 22, 1159-1165.
  47. Vrabel, J. (1996). Multispectral imagery band sharpening study, Photogrammetric Engineering and Remote Sensing, 62, 1075-1083.
  48. Wang, Y., Koopmans, B.N. and Pohl, C. (1995). The 1995 flood in the Netherlands monitored from space: a multisensor approach, International Journal of Remote Sensing, 16, 2735-2739.
  49. Westra, T., Mertens, K.C. and De Wulf, R.R. (2005). ENVISAT ASAR wide swath and SPOT-Vegetation image fusion for wetland mapping: evaluation of different wavelet-based methods, Geocarto International, 20, 21-31.
  50. Zhang, J. (2010). Multi-source remote sensing data fusion: status and trends, International Journal of Image and Data Fusion, 1, 5-24.
