
Survey of Multispectral Image Fusion Techniques in Remote Sensing Applications

By Dong Jiang, Dafang Zhuang, Yaohuan Huang and Jinying Fu

Published: June 24th 2011

DOI: 10.5772/10548


1. Introduction

1.1. Definition of image fusion

With the development of multiple types of biosensors, chemical sensors, and remote sensors on board satellites, more and more data have become available for scientific research. As the volume of data grows, so does the need to combine data gathered from different sources to extract the most useful information. Various terms, such as data interpretation, combined analysis, and data integration, have been used for this process. Since the early 1990s, "data fusion" has been the widely adopted term. The definition of data fusion/image fusion varies. For example:

  • Data fusion is a process dealing with data and information from multiple sources to achieve refined/improved information for decision making (Hall 1992) [1].

  • Image fusion is the combination of two or more different images to form a new image by using a certain algorithm (Genderen and Pohl 1994) [2].

  • Image fusion is the process of combining information from two or more images of a scene into a single composite image that is more informative and more suitable for visual perception or computer processing (Guest editorial of Information Fusion, 2007) [3].

  • Image fusion is a process of combining images, obtained by sensors of different wavelengths simultaneously viewing the same scene, to form a composite image. The composite image is formed to improve image content and to make it easier for the user to detect, recognize, and identify targets and increase situational awareness (HCL Technologies, 2010; http://www.hcltech.com/aerospace-and-defense/enhanced-vision-system/).

Generally speaking, in data fusion the information of a specific scene acquired by two or more sensors at the same time or at separate times is combined to generate an interpretation of the scene not obtainable from a single sensor [4]. Image fusion is the subset of data fusion in which the data type is restricted to images (Figure 1). Image fusion is an effective way to make optimum use of large volumes of imagery from multiple sources. Multiple image fusion seeks to combine information from multiple sources to achieve inferences that are not feasible from a single sensor or source. The aim of image fusion is to integrate different data in order to obtain more information than can be derived from any of the single sensor data alone ('1+1=3') [4].

Figure 1.

Illustration of relationship of data fusion and image fusion

The literature on data fusion in computer vision, machine intelligence, and medical imaging is substantial, but will not be discussed here. This chapter focuses on multi-sensor data fusion in satellite remote sensing. The fusion of information from sensors with different physical characteristics enhances the understanding of our surroundings and provides the basis for planning, decision-making, and control of autonomous and intelligent machines [1].

1.2. Advance of image fusion

In the past decades image fusion has been applied to different fields such as pattern recognition, visual enhancement, object detection, and area surveillance [4]. In 1997, Hall and Llinas gave a general introduction to multi-sensor data fusion [1]. Another in-depth review paper on multiple sensor data fusion techniques was published in 1998 [4]; it explained the concepts, methods, and applications of image fusion as a contribution to multi-sensor integration oriented data processing. Since then, image fusion has received increasing attention. Further scientific papers on image fusion have been published with an emphasis on improving fusion quality and finding more application areas. As a case in point, Simone et al. describe three typical applications of data fusion in remote sensing: obtaining elevation maps from synthetic aperture radar (SAR) interferometers, the fusion of multi-sensor and multi-temporal images, and the fusion of multi-frequency, multi-polarization and multi-resolution SAR images [5]. Vijayaraj presented the concepts of image fusion in remote sensing applications [6]. Quite a few survey papers have been published recently, providing overviews of the history, developments, and current state of the art of image fusion in image-based application fields [7-9], but recent developments of multi-sensor data fusion in remote sensing have not been discussed in detail. The objective of this chapter is to present an overview of new advances in multi-sensor satellite image fusion, focusing on its main application fields in remote sensing (Table 1).

Data source | Objective | Authors | Year
SPOT HRV & ERS SAR | Automatic registration | Olivier Thepaut, Kidiyo Kpalma, Joseph Ronsin [10] | 1994
Hyperspectral image & SAR image | Automatic target cueing | Tamar Peli, Mon Young, Robert Knox, Ken Ellis, Fredrick Bennet [11] | 1999
Multifrequency, multipolarization SAR images | Land use classification | G. Simone, A. Farina, F.C. Morabito, S.B. Serpico, L. Bruzzone [5] | 2001
Landsat ETM+ Pan band & CBERS-1 multispectral data | Methods comparison | Marcia L.S. Aguena, Nelson D.A. Mascarenhas [12] | 2006
Landsat ETM+ & MODIS | Urban sprawl monitoring | Ying Lei, Dong Jiang, Xiaohuan Yang [13] | 2007
AVIRIS & LIDAR | Coastal mapping | Ahmed F. Elaksher [14] | 2008

Table 1.

Examples of application of image fusion

1.3. Categorization of image fusion techniques

Image fusion can be performed at roughly four different stages: signal level, pixel level, feature level, and decision level. Figure 2 illustrates the concept of the four fusion levels [15].

Figure 2.

An overview of categorization of the fusion algorithms [15].

  1. Signal level fusion. In signal-based fusion, signals from different sensors are combined to create a new signal with a better signal-to-noise ratio than the original signals.

  2. Pixel level fusion. Pixel-based fusion is performed on a pixel-by-pixel basis. It generates a fused image in which the information associated with each pixel is determined from a set of pixels in the source images, to improve the performance of image processing tasks such as segmentation (a minimal sketch follows this list).

  3. Feature level fusion. Fusion at the feature level requires the extraction of objects recognized in the various data sources. It relies on salient features, such as pixel intensities, edges, or textures, which depend on their environment; similar features from the input images are fused.

  4. Decision level fusion. Decision-based fusion merges information at a higher level of abstraction, combining the results of multiple algorithms to yield a final fused decision. Input images are processed individually for information extraction, and the obtained information is then combined by applying decision rules to reinforce a common interpretation.
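As a minimal illustration of the pixel-level case, the following Python sketch (the function name, the equal-weight choice, and the NumPy array inputs are our illustrative assumptions, not from the cited sources) fuses two co-registered images by per-pixel weighted averaging:

```python
import numpy as np

def pixel_level_fusion(img_a: np.ndarray, img_b: np.ndarray,
                       w_a: float = 0.5, w_b: float = 0.5) -> np.ndarray:
    """Fuse two co-registered, same-size images by per-pixel weighted averaging.

    This is the simplest pixel-level scheme; practical systems use more
    elaborate per-pixel rules (e.g., keeping the pixel with higher local
    contrast), but the principle of combining at the pixel stage is the same.
    """
    assert img_a.shape == img_b.shape, "inputs must be co-registered and same size"
    return w_a * img_a.astype(np.float64) + w_b * img_b.astype(np.float64)
```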

2. Advance in image fusion techniques

During the past two decades, several fusion techniques have been proposed. Most of these techniques balance the desired spatial enhancement against spectral consistency. Among the hundreds of variations of image fusion techniques, the widely used methods include, but are not limited to, intensity-hue-saturation (IHS), high-pass filtering, principal component analysis (PCA), arithmetic combinations (e.g. the Brovey transform), multi-resolution analysis-based methods (e.g. pyramid algorithms, the wavelet transform), and artificial neural networks (ANNs). This chapter provides a general introduction to these selected methods, with emphasis on new advances in the remote sensing field.

2.1. Traditional fusion algorithms

The PCA transform converts inter-correlated multi-spectral (MS) bands into a new set of uncorrelated components. In this approach, the principal components of the MS image bands are computed first. The first principal component, which contains most of the information in the image, is then substituted by the panchromatic image. Finally, the inverse principal component transform is applied to obtain the new RGB (red, green, and blue) bands of the multi-spectral image from the principal components.
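A minimal sketch of this procedure is given below (Python/NumPy; the variable names and the histogram matching of the PAN band to the first principal component are our assumptions, the latter being a common practical step not detailed above):

```python
import numpy as np

def pca_fusion(ms: np.ndarray, pan: np.ndarray) -> np.ndarray:
    """PCA pan-sharpening sketch: ms has shape (rows, cols, bands); pan has
    shape (rows, cols) and is assumed co-registered and resampled to the
    MS grid."""
    rows, cols, bands = ms.shape
    X = ms.reshape(-1, bands).astype(np.float64)
    mean = X.mean(axis=0)
    Xc = X - mean
    # Eigen-decomposition of the band covariance matrix
    eigvals, eigvecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
    order = np.argsort(eigvals)[::-1]        # sort components by variance
    eigvecs = eigvecs[:, order]
    pcs = Xc @ eigvecs                        # forward PCA transform
    # Match the PAN band to PC1's mean/std, then substitute it for PC1
    p = pan.reshape(-1).astype(np.float64)
    p = (p - p.mean()) / p.std() * pcs[:, 0].std() + pcs[:, 0].mean()
    pcs[:, 0] = p
    # Inverse transform back to the original band space
    fused = pcs @ eigvecs.T + mean
    return fused.reshape(rows, cols, bands)
```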

The intensity-hue-saturation (IHS) fusion converts a color MS image from the RGB space into the IHS color space. The IHS components can be defined as follows:

I = (R + G + B) / 3    (1)

H = (B − R) / [3(I − R)], S = 1 − R/I, when R = min(R, G, B)    (2)

H = (R − G) / [3(I − G)], S = 1 − G/I, when G = min(R, G, B)    (3)

H = (G − B) / [3(I − B)], S = 1 − B/I, when B = min(R, G, B)    (4)

where I, H, and S stand for the intensity, hue, and saturation components, respectively, and R, G, and B are the red, green, and blue bands of the multi-spectral image.

Because the intensity (I) band resembles a panchromatic (PAN) image, it is replaced by a high-resolution PAN image in the fusion. A reverse IHS transform is then performed on the PAN together with the hue (H) and saturation (S) bands, resulting in an IHS fused image.
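The substitution step can be sketched in code. Rather than computing H and S explicitly, the sketch below uses the additive "fast IHS" formulation (a known shortcut, not the procedure spelled out above): for the linear intensity of equation (1), replacing I with the PAN band and inverting the transform reduces to adding the difference (PAN − I) to each band. This is a Python/NumPy sketch assuming co-registered, radiometrically comparable inputs (e.g., PAN histogram-matched to I beforehand):

```python
import numpy as np

def ihs_fusion(r: np.ndarray, g: np.ndarray, b: np.ndarray,
               pan: np.ndarray) -> tuple:
    """IHS pan-sharpening via the additive 'fast IHS' shortcut.

    For I = (R + G + B) / 3, equation (1), substituting the PAN band for I
    and inverting the transform amounts to adding (PAN - I) to every band,
    so hue and saturation never need to be computed explicitly.
    """
    i = (r.astype(np.float64) + g + b) / 3.0   # equation (1)
    delta = pan.astype(np.float64) - i          # detail injected by PAN
    return r + delta, g + delta, b + delta
```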

Different arithmetic combinations have been developed for image fusion. The Brovey transform, Synthetic Variable Ratio (SVR), and Ratio Enhancement (RE) techniques are some successful examples [9]. The basic procedure of the Brovey transform first multiplies each MS band by the high resolution PAN band, and then divides each product by the sum of the MS bands. The algorithm is shown in equation (5).

DN_fused = DN_pan × DN_b1 / (DN_b1 + DN_b2 + DN_b3)    (5)

where DN_fused is the digital number (DN) of the resulting fused image; DN_b1, DN_b2, and DN_b3 are the pixel values of the three bands of the multi-spectral image; and DN_pan is the pixel value of the high-resolution PAN band.
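Equation (5) translates directly into code. The following Python sketch applies the Brovey transform to all three bands (the small epsilon guarding against division by zero in dark pixels is our addition, not part of the original formula):

```python
import numpy as np

def brovey_fusion(b1, b2, b3, pan, eps=1e-6):
    """Brovey transform of equation (5): each fused band is the PAN value
    scaled by that band's share of the MS band sum."""
    b1 = b1.astype(np.float64)
    b2 = b2.astype(np.float64)
    b3 = b3.astype(np.float64)
    total = b1 + b2 + b3 + eps               # MS sum, guarded against zero
    pan = pan.astype(np.float64)
    return pan * b1 / total, pan * b2 / total, pan * b3 / total
```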

The SVR and RE techniques are similar, but involve more sophisticated calculations of the MS sum for better fusion quality. For example (Figure 3), SPOT 5 PAN band data with a spatial resolution of 2.5 m of Yanqing, Beijing, China, acquired in 2005, was fused with the multi-spectral bands of Landsat TM data (spatial resolution: 30 m) from 2007. A simple Brovey transform fusion was used, with the 3rd, 4th, and 7th TM bands selected for the calculation. Building areas that remained unchanged from 2005 to 2007 appear grey-purple, while newly established buildings are highlighted (lime color in Figure 3) in the composite image and can be easily detected.

Figure 3.

An example of Brovey transform based image fusion

The traditional fusion algorithms mentioned above have been widely used as relatively simple and time-efficient fusion schemes. However, several problems must be considered before their application: (1) These fusion algorithms generate a fused image from a set of pixels in the various sources. Such pixel-level fusion methods are very sensitive to registration accuracy, so co-registration of the input images at the sub-pixel level is required. (2) A main limitation of IHS and the Brovey transform is that no more than three input multi-spectral bands can be used at a time. (3) These image fusion methods are often successful at improving the spatial resolution, but they tend to distort the original spectral signatures to some extent [16,17]. More recently, new techniques such as the wavelet transform have been shown to reduce the color distortion problem and to keep the statistical parameters invariable.

2.2. Multi-resolution analysis-based methods

Multi-resolution or multi-scale methods, such as pyramid transformation, have been adopted for data fusion since the early 1980s [18]. Pyramid-based image fusion methods, including the Laplacian pyramid transform, were developed from the Gaussian pyramid transform and have been modified and widely used [19,20].

In 1989, Mallat placed the methods of wavelet construction within the framework of functional analysis and described the fast wavelet transform algorithm and a general method of constructing orthonormal wavelet bases. On this basis, the wavelet transform could be practically applied to image decomposition and reconstruction [21-23]. Wavelet transforms provide a framework in which an image is decomposed, with each level corresponding to a coarser resolution band. For example, in the case of fusing an MS image with a high-resolution PAN image with wavelet fusion, the PAN image is first decomposed into a set of low-resolution PAN images with corresponding wavelet coefficients (spatial details) for each level. Individual bands of the MS image then replace the low-resolution PAN image at the resolution level of the original MS image. The high-resolution spatial detail is injected into each MS band by performing a reverse wavelet transform on each MS band together with the corresponding wavelet coefficients (Figure 4).

Figure 4.

Generic flowchart of wavelet-based image fusion
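As an illustration of this substitution scheme, the following minimal Python sketch (using the PyWavelets library; the function name and parameter defaults are our assumptions, and the MS band is assumed already co-registered and resampled to the PAN grid, with image sides compatible with the decomposition level) fuses a single MS band with a PAN image:

```python
import numpy as np
import pywt

def wavelet_fusion_band(ms_band: np.ndarray, pan: np.ndarray,
                        wavelet: str = "db2", level: int = 2) -> np.ndarray:
    """Substitution-style wavelet fusion for one MS band.

    The approximation (low-frequency) coefficients are taken from the MS
    band, preserving its spectral content; the detail (high-frequency)
    coefficients are taken from the PAN image, injecting spatial detail.
    """
    ms_coeffs = pywt.wavedec2(ms_band.astype(np.float64), wavelet, level=level)
    pan_coeffs = pywt.wavedec2(pan.astype(np.float64), wavelet, level=level)
    # coeffs[0] is the approximation; the rest are per-level detail tuples
    fused_coeffs = [ms_coeffs[0]] + list(pan_coeffs[1:])
    return pywt.waverec2(fused_coeffs, wavelet)
```

Applying the function once per MS band yields the fused multi-spectral product.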

In wavelet-based fusion schemes, detail information is extracted from the PAN image using wavelet transforms and injected into the MS image, so that distortion of the spectral information is minimized compared with the standard methods [24]. For example, a CBERS multi-spectral image (Figure 5, a) with a spatial resolution of 19.2 m of Yiwu City, Zhejiang Province, China, acquired in 2007, was fused with a CBERS-HR PAN image (Figure 5, b) with a spatial resolution of 2.4 m. Buildings and linear objects (roads, etc.) can be easily identified in the fused image (Figure 5, c).

Figure 5.

Example of wavelet-based image fusion

In order to achieve optimum fusion results, various wavelet-based fusion schemes have been tested by many researchers, and several new concepts/algorithms have been presented and discussed. Candes provided a method for fusing SAR and visible MS images using the Curvelet transform; the method proved more efficient than the wavelet transform for detecting edge information and denoising [25]. Curvelet-based image fusion has been used to merge Landsat ETM+ panchromatic and multi-spectral images, simultaneously providing richer information in the spatial and spectral domains [26]. Donoho et al. presented a flexible multi-resolution, local, and directional image expansion using contour segments, the Contourlet transform, to address the inability of the wavelet transform to efficiently represent linear and curved singularities in image processing [27,28]. The Contourlet transform provides a flexible number of directions and captures the intrinsic geometrical structure of images.

In general, as a typical feature level fusion method, wavelet-based fusion performs evidently better than conventional methods in terms of minimizing color distortion and denoising. It has become one of the most popular fusion methods in remote sensing in recent years and is a standard module in many commercial image processing software packages, such as ENVI, PCI, and ERDAS. Problems and limitations associated with it include: (1) its computational complexity compared with the standard methods; (2) the spectral content of small objects is often lost in the fused images; (3) it often requires the user to determine appropriate values for certain parameters (such as thresholds). The development of more sophisticated wavelet-based fusion algorithms (such as the Ridgelet, Curvelet, and Contourlet transforms) can improve performance, but these new schemes may entail greater complexity in computation and parameter setting.

2.3. Artificial neural network based fusion method

Artificial neural networks (ANNs) have proven to be a more powerful and self-adaptive method of pattern recognition as compared to traditional linear and simple nonlinear analyses [29,30]. The ANN-based method employs a nonlinear response function that iterates many times in a special network structure in order to learn the complex functional relationship between input and output training data. The general schematic diagram of the ANN-based image fusion method can be seen in Figure 6.

Figure 6.

General schematic diagram of the ANN-based image fusion method.

The input layer has several neurons, which represent the feature factors extracted and normalized from image A and image B. The function of each neuron is a sigmoid function given by:

f(x) = 1 / (1 + e^(−x))    (6)

In Figure 6, the hidden layer has several neurons and the output layer has one neuron (or more). The ith neuron of the input layer connects with the jth neuron of the hidden layer by weight W_ij, and the weight between the jth neuron of the hidden layer and the tth neuron of the output layer is V_jt (in this case t = 1). The weighting function is used to simulate and recognize the response relationship between the features of the fused image and the corresponding features of the original images (image A and image B). The ANN model is given as follows:

Y = 1 / (1 + exp[−(Σ_{j=1}^{q} V_j H_j − γ)])    (7)

In equation (7), Y = pixel value of the fused image output by the neural network model; q = number of hidden nodes (q ≈ 8 here); V_j = weight between the jth hidden node and the output node (in this case, there is only one output node); γ = threshold of the output node; and H_j = output of the jth hidden node:

H_j = 1 / (1 + exp[−(Σ_{i=1}^{n} W_ij a_i − θ_j)])    (8)

where W_ij = weight between the ith input node and the jth hidden node; a_i = value of the ith input factor; n = number of input nodes (n ≈ 5 here); and θ_j = threshold of the jth hidden node.
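The following sketch implements the forward pass of equations (6)-(8) in Python with NumPy. The weights and thresholds are assumed to have been obtained from training (e.g., by backpropagation, which is not shown); the shapes follow the notation above:

```python
import numpy as np

def ann_fusion_output(a: np.ndarray, W: np.ndarray, theta: np.ndarray,
                      V: np.ndarray, gamma: float) -> float:
    """Forward pass of the fusion network of equations (6)-(8).

    a     : input feature vector, shape (n,)      (n ~ 5 in the text)
    W     : input-to-hidden weights, shape (n, q) (q ~ 8 hidden nodes)
    theta : hidden-node thresholds, shape (q,)
    V     : hidden-to-output weights, shape (q,)
    gamma : output-node threshold
    """
    sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))   # equation (6)
    H = sigmoid(a @ W - theta)                      # equation (8)
    Y = sigmoid(V @ H - gamma)                      # equation (7)
    return float(Y)
```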

As the first step of ANN-based data fusion, the two registered images are decomposed into several blocks of size M × N (Figure 6). Features of the corresponding blocks in the two original images are then extracted, and the normalized feature vectors fed to the neural network can be constructed [31]. The features used here to evaluate the fusion effect are normally spatial frequency, visibility, and edges. The next step is to select some vector samples to train the neural network. An ANN is a universal function approximator that directly adapts to any nonlinear function defined by a representative set of training data. Once trained, the ANN model can remember a functional relationship and be used for further calculations. For these reasons, the ANN concept has been adopted to develop strongly nonlinear models for multiple sensor data fusion. Thomas et al. discussed the optimal fusion of TV and infrared images using artificial neural networks [32]. Since then, many neural network models have been proposed for image fusion, such as BP, SOFM, and ARTMAP neural networks. The BP algorithm has been used most often. However, the convergence of BP networks is slow and the global minimum of the error space may not always be reached [33]. As an unsupervised network, the SOFM network clusters input samples through competitive learning, but the number of output neurons must be set before the network is constructed [34]. An RBF neural network can approximate an objective function to any desired precision if enough hidden units are provided. The advantages of RBF network training include no iteration, few training parameters, high training speed, and simple processing and memory functions [35]. Hong explored the use of RBF neural networks combined with nearest-neighbor clustering, with membership weighting used for fusion; experiments showed that this method can achieve better cluster fusion with a proper width parameter [36].

Gail et al. used Adaptive Resonance Theory (ART) neural networks to form a new framework for self-organizing information fusion. The ARTMAP neural network can act as a self-organizing expert system to derive hierarchical knowledge structures from inconsistent training data [37]. ARTMAP information fusion resolves apparent contradictions in input pixel labels by assigning output classes to levels in a knowledge hierarchy [38]. Rong et al. presented a feature-level image fusion method based on segmentation region and neural networks. The results indicated that this combined fusion scheme was more efficient than that of traditional methods [39].

The ANN-based fusion method exploits the pattern recognition capabilities of artificial neural networks, and the learning capability of neural networks makes it feasible to customize the image fusion process. Many applications have indicated that ANN-based fusion methods have more advantages than traditional statistical methods, especially when the input multiple sensor data are incomplete or noisy. The approach often serves as an efficient decision level fusion tool because of its self-learning capability, especially in land use/land cover classification. In addition, the multiple-input, multiple-output framework makes ANNs a possible approach for fusing high-dimensional data, such as long-term time-series data or hyper-spectral data.

2.4. Dempster-Shafer evidence theory based fusion method

Dempster-Shafer decision theory is considered a generalized Bayesian theory, used when the data contributing to the analysis of the images are subject to uncertainty. It allows support for a proposition to be distributed not only to the proposition itself but also to unions of propositions that include it. Huadong Wu et al. presented a system framework that manages information overlap and resolves conflicts, providing generalizable architectural support that facilitates sensor fusion [40].

Compared with Bayesian theory, the Dempster-Shafer theory of evidence is closer to human perception and reasoning processes. Its capability to assign uncertainty or ignorance to propositions is a powerful tool for dealing with a large range of problems that would otherwise seem intractable [40]. The Dempster-Shafer theory of evidence has been applied to image fusion using SPOT/HRV images and the NOAA/AVHRR series; the results show unambiguously the major improvement brought by such data fusion and the performance of the proposed method [41]. H. Borotschnig et al. compared three frameworks for information fusion and view-planning using different uncertainty calculi: probability theory, possibility theory, and the Dempster-Shafer theory of evidence [42]. The results indicated that Dempster-Shafer decision theory based sensor fusion achieves a much higher performance improvement, and it provides estimates of the imprecision and uncertainty of the information derived from the different sources.
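To make the evidence-combination step concrete, the following toy Python sketch implements Dempster's rule of combination for two basic probability assignments (BPAs); the class labels and mass values in the example are illustrative assumptions, not taken from the cited studies:

```python
from itertools import product

def dempster_combine(m1: dict, m2: dict) -> dict:
    """Dempster's rule of combination for two BPAs.

    Focal elements are frozensets of class labels; each BPA's masses
    must sum to 1. Mass assigned to contradictory (empty) intersections
    is discarded and the remainder renormalized.
    """
    combined, conflict = {}, 0.0
    for (s1, v1), (s2, v2) in product(m1.items(), m2.items()):
        inter = s1 & s2
        if inter:
            combined[inter] = combined.get(inter, 0.0) + v1 * v2
        else:
            conflict += v1 * v2          # mass lost to the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: the sources are incompatible")
    return {s: v / (1.0 - conflict) for s, v in combined.items()}

# Hypothetical example: two sensors weighing in on 'building' vs 'vegetation'
m_lidar = {frozenset({"building"}): 0.6,
           frozenset({"building", "vegetation"}): 0.4}
m_spectral = {frozenset({"vegetation"}): 0.5,
              frozenset({"building", "vegetation"}): 0.5}
print(dempster_combine(m_lidar, m_spectral))
```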

2.5. Multiple algorithm fusion

As a coin has two sides, each fusion method has its own set of advantages and limitations. Combining several different fusion schemes has proven to be a useful strategy that may achieve better results [16,24]. As a case in point, quite a few researchers have focused on incorporating the traditional IHS method into wavelet transforms, since the IHS fusion method performs well spatially while the wavelet methods perform well spectrally [24,41]. However, the selection and arrangement of candidate fusion schemes are quite arbitrary and often depend upon the user's experience. An optimal strategy for combining different fusion algorithms, in other words an 'algorithm fusion' strategy, is thus urgently needed. Further investigations are necessary in the following aspects: 1) design of a general framework for the combination of different fusion approaches; 2) development of new approaches which can combine aspects of pixel/feature/decision level image fusion; 3) establishment of automatic quality assessment methods for the evaluation of fusion results.

3. Applications of image fusion

Remote sensing techniques have proven to be powerful tools for monitoring the Earth's surface and atmosphere on a global, regional, and even local scale, by providing important coverage, mapping, and classification of land cover features such as vegetation, soil, water, and forests [5]. The volume of remote sensing images continues to grow at an enormous rate due to advances in sensor technology for both high spatial and temporal resolution systems. Consequently, an increasing quantity of image data from airborne/satellite sensors has become available, including multi-resolution, multi-temporal, multi-frequency/spectral, and multi-polarization images. The goal of multiple sensor data fusion is to integrate complementary and redundant information to provide a composite image that allows a better understanding of the entire scene. It has been widely used in many fields of remote sensing, such as object identification, classification, and change detection. The following paragraphs describe recent achievements of image fusion in more detail.

3.1. Object identification

The feature enhancement capability of image fusion is visually apparent in VIR/VIR combinations that often result in images superior to the original data. In order to maximize the amount of information extracted from satellite image data, useful products can be found in fused images [4]. A Dempster-Shafer fusion method for urban building detection was presented in 2004, using first and last pulse LIDAR data and multi-spectral aerial imagery. Apart from buildings, the classes 'tree', 'grass land', and 'bare soil' were also distinguished by a classification method based on the Dempster-Shafer theory of data fusion. Identification of linear objects such as roads can also benefit from image fusion techniques. An integrated system for automatic road mapping from high-resolution multi-spectral satellite imagery by information fusion was discussed by Xiaoying et al. in 2005 [43]. Andrea presented a solution to enhance the spatial resolution of MS images with high-resolution PAN data. The proposed method exploits the undecimated discrete wavelet transform and the vector multi-scale Kalman filter, which is used to model the injection process of wavelet details. Fusion simulations on spatially degraded data and fusion tests at the full scale reveal that accurate and reliable PAN-sharpening is achieved by the proposed method [44]. A case study, which extracts crop planting areas using high spatial resolution images and images with high temporal repetitiveness, is presented below.

Figure 7.

NDVI profile for different crop types.

Identification of crop types from satellite imagery is a challenging task. Here we present an automatic approach for extracting planting areas in the mixed planting regions around Beijing using MODIS data and Landsat TM data. First, planting areas were distinguished from non-crop areas in the Landsat TM image using a traditional supervised classifier. Then, time series of NDVI derived from MODIS data were used to identify the different crop types. Because different crops have different growth stages, the maximum and minimum NDVI values differ among crops and occur on different dates (Figure 7).

After investigating the planting structure of the main crops and analyzing the NDVI values of the different crops from mid-March to mid-November 2002, the planting areas of winter wheat, spring maize, summer maize, and bean in Beijing were extracted (Figure 8).
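For reference, NDVI is computed per pixel as (NIR − Red)/(NIR + Red); the short Python sketch below builds that index, whose per-date values form the time series used above (the phenology rule in the closing comment is hypothetical, not the decision rule used in the study):

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """NDVI = (NIR - Red) / (NIR + Red), computed per pixel.

    Stacking this value over the season gives the per-pixel time series
    used to separate crops by their phenology."""
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    return (nir - red) / (nir + red + eps)

# Illustrative rule only: winter wheat peaks in spring, so a pixel whose
# April NDVI clearly exceeds its July NDVI could be flagged as winter wheat.
```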

Figure 8.

Spatial distribution of main crops of Beijing in 2002

3.2. Classification

Classification is one of the key tasks in remote sensing applications. The classification accuracy of remote sensing images is improved when multiple source image data are introduced to the processing [4]. Images from microwave and optical sensors offer complementary information that helps in discriminating the different classes. As discussed in the work of Wang et al., a multi-sensor decision level image fusion algorithm based on fuzzy theory is used for the classification of each sensor image, and the classification results are fused by the fusion rule. Interesting results were achieved, mainly in terms of high-speed classification and efficient fusion of complementary information [45]. Land use/land cover classification has been improved using data fusion techniques such as ANN and the Dempster-Shafer theory of evidence; experimental results show excellent classification performance compared with existing classification techniques [46,47]. Image fusion methods will lead to strong advances in land use/land cover classification by exploiting the complementarity of data with either high spatial resolution or high temporal repetitiveness.

For example, an Indian P5 panchromatic image (Figure 9, b) with a spatial resolution of 2.18 m of Yiwu City, Southeast China, acquired in 2007, was fused with the multi-spectral bands of China-Brazil CBERS data (spatial resolution: 19.2 m) (Figure 9, a) from 2007. The Brovey transform fusion method was used.

Figure 9.

Result of image fusion: CBERS MS and P5 PAN

Figure 10.

Land use classification of Yiwu City, 2007

The results indicated that the accuracy of the residential areas of Yiwu City derived from the fused image is much higher than that derived from the CBERS multi-spectral image alone (Table 2).

Data sources | Residential and built-up areas (km²) | Accuracy (%)
CBERS | 86 | 82
P5 + CBERS | 67 | 92
Statistical data | 73 | -

Table 2.

Comparison of land use classification results

3.3. Change detection

Change detection is the process of identifying differences in the state of an object or phenomenon by observing it at different times [48]. Change detection is an important process in monitoring and managing natural resources and urban development because it provides quantitative analysis of the spatial distribution of the population of interest [49]. Image fusion for change detection takes advantage of the different configurations of the platforms carrying the sensors: combining temporal images of the same place enhances information on changes that might have occurred in the observed area. Sensor image data with low temporal resolution and high spatial resolution can be fused with high temporal resolution data to enhance the change information for certain ground objects. Madhavan et al. presented a decision level fusion system that automatically fuses information from multi-spectral, multi-resolution, and multi-temporal high-resolution airborne data for change-detection analysis. Changes are automatically detected in buildings, building structures, roofs, roof color, industrial structures, smaller vehicles, and vegetation [50]. Two examples of change detection using image fusion methods follow.

1. Change detection using Landsat ETM+ and MODIS data

Recent studies have indicated that urban expansion can be efficiently monitored using satellite images with multi-temporal and multi-spatial resolution. For example, a Landsat ETM+ panchromatic image (Figure 11, a) with a spatial resolution of 10 m of Chongqing City, Southwest China, acquired in 2000, was fused with the daily-received multi-spectral bands of MODIS data (spatial resolution: 250 m) (Figure 11, b) from 2006. The Brovey transform fusion method was used. Building areas that remained unchanged from 2000 to 2006 appear grey-pink, while newly established buildings appear dark red in the composite image (Figure 12) and can be easily identified.

Figure 11.

Satellite images of Chongqing City

Figure 12.

Fusion result of multiple sources images of Chongqing City

2. Change detection using a former land-cover map and multi-spectral images

In the study area, Qingpu District of Shanghai, China, two kinds of data were fused for automatic urban sprawl monitoring: a land cover map and multi-spectral images from the Environment Satellite 1 (HJ-1). The land cover map of 2005 was used as prior knowledge for feature-space analysis and segmentation. The HJ-1 image of September 22, 2009 was geometrically and radiometrically corrected. HJ-1 images consist of four spectral bands: three visible bands and a near-infrared (NIR) band.

The two data layers were overlaid and the spectral DN values of the five land cover types were extracted. The results (Figure 13) show that the spectral DN values of the five land cover types mostly cluster within their respective three-dimensional ellipsoid spaces. Outliers were considered pixels with a higher probability of belonging to changed areas (a sketch of this test follows Figure 13). Based on the three-dimensional feature space analysis, a map of urban expansion could be derived.

Figure 13.

Three-dimensional scatter plots and feature space of five kinds of land cover types
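A minimal Python/NumPy sketch of this outlier test is given below; the Mahalanobis distance cutoff, the function name, and the array layout are our illustrative assumptions, as the chapter does not specify how the ellipsoids were delimited:

```python
import numpy as np

def change_outliers(pixels: np.ndarray, samples: np.ndarray,
                    threshold: float = 3.0) -> np.ndarray:
    """Flag pixels falling outside one class's ellipsoid in feature space.

    pixels  : candidate pixels, shape (m, 3) (e.g., three HJ-1 band values)
    samples : pixels of one land cover type from the prior map, shape (k, 3)
    Returns a boolean mask of likely-changed pixels. The cutoff of 3
    standard deviations (Mahalanobis) is an assumption for illustration.
    """
    mean = samples.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(samples, rowvar=False))
    d = pixels - mean
    # squared Mahalanobis distance of each pixel to the class centroid
    md2 = np.einsum("ij,jk,ik->i", d, cov_inv, d)
    return md2 > threshold ** 2
```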

In recent years, object-oriented processing techniques have become more popular than traditional pixel-based image analysis, and object-oriented change information is needed in decision support systems and uncertainty management strategies. An in-depth paper by Ruvimbo et al. introduced the concept and applications of object-oriented change detection for urban areas [49]. In general, due to the extensive statistical and derived information available with the object-oriented approach, a number of change images can be produced depending on the research objectives. In land use and land cover analysis, this level of precision is valuable, as analysis at the object level enables linkage with other GIS databases or derived socio-economic attributes.

3.4. Maneuvering target tracking

Maneuvering target tracking is a fundamental task in intelligent vehicle research. With the development of sensor techniques and signal/image processing methods, automatic maneuvering target tracking can be conducted operationally. Multi-sensor fusion has proven to be a powerful tool for improving tracking efficiency. The tracking of objects using distributed multiple sensors is an important field of work in the application areas of autonomous robotics, military applications, and mobile systems [51].

A number of papers focusing on the fusion of radar and image sensors for target tracking have appeared in recent years [52,53]. Fusion of radar data and infrared images can improve positioning accuracy and narrow down the image working area [54]. Vahdati-khajeh addressed the multi-target tracking problem for maneuvering targets in cluttered environments, using the multiple scan joint probabilistic data association (MJPDA) algorithm to overcome the problem of clutter points and targets with joint observations [55]. In order to overcome the defects of the current statistical model in non-maneuvering target tracking, Chen et al. presented a novel multi-sensor data fusion algorithm for tracking large-scale maneuvering targets. A fuzzy adaptive Kalman filtering algorithm with maneuvering detection was used, which extracts feature data from the Kalman filtering process to estimate the magnitude and time of the maneuver. The simulation results showed that a tracking system with active and passive radar has higher precision than one with a single sensor for large-scale problems [52].

4. Discussion and conclusions

Multi-sensor image fusion seeks to combine information from different images to obtain more inferences than can be derived from a single sensor. It is widely recognized as an efficient tool for improving overall performance in image-based applications. This chapter has provided a state-of-the-art review of multi-sensor image fusion in the field of remote sensing. Below are some emerging challenges and recommendations.

1. Improvements of fusion algorithms.

Among the hundreds of variations of image fusion techniques, the most widely used methods include IHS, PCA, the Brovey transform, the wavelet transform, and artificial neural networks (ANNs). For methods like IHS, PCA, and the Brovey transform, which have lower complexity and faster processing times, the most significant problem is color distortion [16]. Wavelet-based schemes perform better than those methods in terms of minimizing color distortion. The development of more sophisticated wavelet-based fusion algorithms (such as the Ridgelet, Curvelet, and Contourlet transforms) can evidently improve performance, but they often entail greater complexity in computation and parameter setting. Another challenge for existing fusion techniques is the ability to process hyper-spectral satellite sensor data; artificial neural networks seem to be one possible approach to handling the high-dimensional nature of such data.

2. Establishment of an automatic quality assessment scheme.

Automatic quality assessment is highly desirable for evaluating the possible benefits of fusion, determining an optimal setting of parameters for a certain fusion scheme, and comparing results obtained with different algorithms [34]. Mathematical methods have been used to judge the quality of merged imagery with respect to the improvement of spatial resolution while preserving the spectral content of the data. Statistical indices, such as cross entropy, mean square error, and signal-to-noise ratio, have been used for evaluation purposes. While a few image fusion quality measures have been proposed recently, analytical studies of these measures have been lacking. The work of Yin et al. focused on one popular mutual information-based quality measure and weighted averaging image fusion [56]. Jiying presented a new metric based on image phase congruency to assess the performance of image fusion algorithms [57]. However, in general, no automatic solution has been achieved that consistently produces high quality fusion for different data sets [58]. It is expected that fusing data from multiple independent sensors will offer the potential for better performance than can be achieved by either sensor alone, and will reduce vulnerability to sensor-specific countermeasures and deployment factors. We expect that future research will address new performance assessment criteria and automatic quality assessment methods [59].
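For reference, the two simplest of these statistical indices can be computed as follows (a minimal Python sketch; the reference image is assumed to be a spectrally faithful band at the fused resolution, an assumption that itself limits fully automatic use):

```python
import numpy as np

def mse(ref: np.ndarray, fused: np.ndarray) -> float:
    """Mean square error between a reference band and the fused band."""
    diff = ref.astype(np.float64) - fused.astype(np.float64)
    return float(np.mean(diff ** 2))

def snr_db(ref: np.ndarray, fused: np.ndarray) -> float:
    """Signal-to-noise ratio in dB, treating the fusion residual as noise."""
    signal = float(np.mean(ref.astype(np.float64) ** 2))
    return 10.0 * np.log10(signal / mse(ref, fused))
```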

© 2011 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution-NonCommercial-ShareAlike-3.0 License, which permits use, distribution and reproduction for non-commercial purposes, provided the original is properly cited and derivative works building on this content are distributed under the same license.
