
Image Fusion Based on Integer Lifting Wavelet Transform

Written By

Gang Hu, Yufeng Zheng and Xin-qiang Qin

Submitted: 09 February 2011 Published: 24 June 2011

DOI: 10.5772/16358

From the Edited Volume

Image Fusion and Its Applications

Edited by Yufeng Zheng


1. Introduction

Image fusion uses a mathematical model to synthesize images from different sensors into a single picture that meets the requirements of a specific application. It can effectively combine the advantages of the different images and improve analysis capability (Blum et al., 2005). In recent years, image fusion has found a very wide range of applications in automatic target recognition, computer vision, remote sensing, robotics, medical image processing and military fields.

Among the many image fusion technologies, fusion based on multi-resolution analysis has become a focus of current research. At present, multi-resolution fusion methods can be divided into three kinds (Hu et al., 2008). The first kind is based on pyramid decomposition (such as the Laplacian pyramid, ratio low-pass pyramid, contrast pyramid, gradient pyramid, etc.). The second kind is based on wavelet decomposition, such as the discrete wavelet transform (Li & Wu, 2003), wavelet and wavelet packet frames (Wang, 2004), the multi-wavelet transform (Zhang et al., 2005), the integer wavelet transform (Wang et al., 2008), the fast integer lifting wavelet transform (FILWT) (Li & Zhu, 2007) and the dual-tree complex wavelet transform (Yang et al., 2007). The third kind comprises newer multi-resolution methods, such as fusion based on the finite ridgelet transform (Liu et al., 2007), the curvelet transform (Filippo et al., 2007) and the contourlet transform (Li et al., 2008; Yang & Jiao, 2008).

In the multi-resolution fusion process, the choice of rules and operators is crucial and directly affects the quality of the fused image. However, most existing multi-resolution fusion research and experiments consider only two images, and the models are suited only to two-image fusion; they cannot be directly generalized to the fusion of more than two images. Although some of the simpler fusion rules can be extended to more than two images, such as averaging, choosing the pixel with the largest absolute value, or choosing by region features (such as regional energy, entropy, variance, average gradient, contrast, Mahalanobis distance, etc.), the results are very limited.

Therefore, in this chapter a novel fusion algorithm for multiple images based on the fast integer lifting wavelet transform is proposed. By using lifting wavelets, the algorithm takes both the quality and the speed of fusion into account. In addition, according to the characteristics of the different sub-bands of the lifting wavelet transform, two new fusion strategies for the high-frequency and low-frequency parts are proposed. Experimental results show that the algorithm is not only suitable for fusing multiple registered source images, but also achieves good fusion quality at high speed.


2. Integer lifting wavelet transform

In the mid-1990s, Sweldens at Bell Labs proposed the lifting scheme, a method of constructing wavelets that does not depend on the Fourier transform. It not only inherits the time-frequency localization features of traditional wavelets but also has several other advantages (Li & Zhu, 2007; Lin, 2005).

2.1. The basic principle of integer lifting wavelet transform

In the spatial domain, one level of the lifting wavelet transform is realized in three steps:

Step 1. Split. A simple lazy wavelet splits the original signal $s_{0,n}$ into two smaller, disjoint subsets $s_{l,k}^{0}$ and $d_{l,k}^{0}$ according to the parity of the sample index:

$\mathrm{split}(s_{0,n}) = (s_{l,k}^{0},\, d_{l,k}^{0})$   (E1)

where split(·) denotes the split operator.

Step 2. Predict. The prediction step is also called dual lifting. Because of the correlation between the data, the adjacent even-indexed sequence can be used to predict the odd-indexed sequence:

$d_{l,k}^{i} = d_{l,k}^{i-1} - \mathrm{predict}(s_{l,k}^{i-1})$   (E2)

where predict(·) denotes the prediction operator.

Step 3. Update. The update step further improves the properties of the lifting process. The basic idea is to find a better subset $s_{l,k}^{i}$ that retains some of the scale characteristics of the original subset $s_{l,k}^{0}$. The update process is expressed as:

$s_{l,k}^{i} = s_{l,k}^{i-1} + \mathrm{update}(d_{l,k}^{i})$   (E3)

where update(·) denotes the update operator.

After a finite number of lifting steps, the even sequence represents the low-frequency (approximation) coefficients of the wavelet decomposition and the odd sequence represents the high-frequency (detail) coefficients. Applying the same operations again to the low-frequency part yields the next level of the transform. The inverse lifting wavelet transform simply reverses the above steps.
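The split-predict-update steps above can be sketched for a one-dimensional signal. The Haar-like predict and update operators below are illustrative choices only, not the 9/7 operators used later in the chapter:

```python
import numpy as np

def lifting_forward(s, predict, update):
    """One level of the lifting scheme: split -> predict -> update.

    Split the signal into even and odd samples (lazy wavelet),
    predict the odd samples from the even ones (detail = odd - predict),
    then update the even samples so they keep a coarse characteristic
    (here, the pairwise mean) of the original signal.
    """
    even, odd = s[0::2].copy(), s[1::2].copy()   # split (E1)
    d = odd - predict(even)                      # predict step (E2)
    c = even + update(d)                         # update step (E3)
    return c, d

def lifting_inverse(c, d, predict, update):
    """Invert the lifting steps in reverse order with flipped signs."""
    even = c - update(d)
    odd = d + predict(even)
    s = np.empty(even.size + odd.size, dtype=even.dtype)
    s[0::2], s[1::2] = even, odd
    return s

# Haar-like lifting: predict an odd sample by its left even neighbour,
# update evens by half the detail so they store the pairwise mean.
predict = lambda even: even
update = lambda d: d / 2

x = np.array([2.0, 4.0, 6.0, 8.0, 5.0, 3.0])
c, d = lifting_forward(x, predict, update)
x_rec = lifting_inverse(c, d, predict, update)
print(np.allclose(x, x_rec))  # perfect reconstruction (prints True)
```

Because the inverse subtracts exactly the quantities the forward transform added, perfect reconstruction holds for any choice of predict and update operators.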

2.2. Integer lifting decomposition of the biorthogonal symmetric 9/7 wavelet

Any FIR wavelet filter can be factored into lifting steps and realized by the corresponding lifting scheme. The lifting wavelet transform used for image fusion in this chapter is based on the (9/7) filter, the filter most widely used in image processing. If the input signal is denoted $s = \{s_k \mid k \in Z\}$, the corresponding lifting implementation is (Lin, 2005):

$s_l^{(0)} = s_{2l}, \quad d_l^{(0)} = s_{2l+1}$
$d_l^{(1)} = d_l^{(0)} + \lfloor \alpha (s_l^{(0)} + s_{l+1}^{(0)}) + 1/2 \rfloor$
$s_l^{(1)} = s_l^{(0)} + \lfloor \beta (d_l^{(1)} + d_{l-1}^{(1)}) + 1/2 \rfloor$
$d_l^{(2)} = d_l^{(1)} + \lfloor \gamma (s_l^{(1)} + s_{l+1}^{(1)}) + 1/2 \rfloor$
$s_l^{(2)} = s_l^{(1)} + \lfloor \delta (d_l^{(2)} + d_{l-1}^{(2)}) + 1/2 \rfloor$
$s_l = s_l^{(2)} / K, \quad d_l = K \, d_l^{(2)}$   (E4)

where

$\alpha = -1.586134342, \quad \beta = -0.0529801185$

$\gamma = 0.8829110762, \quad \delta = 0.4435068522$

$K = 1.149604398$   (E5)

$s_l$ and $d_l$ are respectively the low-frequency and high-frequency components of the wavelet decomposition. The inverse transform of the low- and high-frequency components is obtained by simply reversing the steps above.

For images, the lifting wavelet transform is usually applied separately along rows and columns: first the lifting decomposition is applied to the rows of the image, then to the columns of the result. After one level of decomposition the source image yields a low-frequency sub-image, which reflects the smooth characteristics of the image, and three high-frequency sub-images, which reflect brightness changes and details in the horizontal, vertical and diagonal directions. Compared with the traditional convolution-based wavelet transform, the lifting scheme roughly halves the computation, and for two-dimensional image data it can reduce the computation by about 3/4 (Li & Zhu, 2007; Lin, 2005). Figure 1 shows two-dimensional integer lifting wavelet decompositions of a visible-light image with depth 2 and depth 3.
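A minimal sketch of the (9/7) integer lifting steps of (E4) for a one-dimensional signal follows. Two assumptions are made explicit: a simple symmetric boundary extension is used, and the final scaling by K is omitted so that the transform maps integers to integers and reconstructs exactly (the standard CDF 9/7 lifting constants are used, in which alpha and beta are negative):

```python
import numpy as np

# CDF 9/7 lifting coefficients (alpha and beta are negative)
ALPHA, BETA = -1.586134342, -0.0529801185
GAMMA, DELTA = 0.8829110762, 0.4435068522

def _right(a):
    # a[l] + a[l+1], with simple symmetric extension at the right edge
    return a + np.append(a[1:], a[-1])

def _left(a):
    # a[l] + a[l-1], with simple symmetric extension at the left edge
    return a + np.insert(a[:-1], 0, a[0])

def ilwt97_forward(x):
    """One level of integer 9/7 lifting per (E4); the K scaling is
    skipped so the transform stays integer-to-integer."""
    s, d = x[0::2].astype(np.int64), x[1::2].astype(np.int64)
    d = d + np.floor(ALPHA * _right(s) + 0.5).astype(np.int64)
    s = s + np.floor(BETA  * _left(d)  + 0.5).astype(np.int64)
    d = d + np.floor(GAMMA * _right(s) + 0.5).astype(np.int64)
    s = s + np.floor(DELTA * _left(d)  + 0.5).astype(np.int64)
    return s, d  # low-frequency, high-frequency coefficients

def ilwt97_inverse(s, d):
    """Undo the lifting steps in reverse order; rounding is
    deterministic, so reconstruction is exact."""
    s = s - np.floor(DELTA * _left(d)  + 0.5).astype(np.int64)
    d = d - np.floor(GAMMA * _right(s) + 0.5).astype(np.int64)
    s = s - np.floor(BETA  * _left(d)  + 0.5).astype(np.int64)
    d = d - np.floor(ALPHA * _right(s) + 0.5).astype(np.int64)
    x = np.empty(s.size + d.size, dtype=np.int64)
    x[0::2], x[1::2] = s, d
    return x

x = np.arange(64) % 251            # any integer signal of even length
s, d = ilwt97_forward(x)
assert np.array_equal(ilwt97_inverse(s, d), x)  # exact reconstruction
```

A 2D decomposition, as described above, applies this 1D transform first to every row and then to every column of the result.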


3. Image fusion based on integer lifting wavelet transform

Assume $I_1, I_2, \ldots, I_n$ are already-registered source images and $F$ is the fused image; let $J$ be the number of lifting wavelet decomposition levels. Let $C_{J,I_1}, C_{J,I_2}, \ldots, C_{J,I_n}$ and $C_{J,F}$ denote the low-frequency sub-images of $I_1, \ldots, I_n$ and $F$ at level $J$. Let $D^{\varepsilon}_{j,I_1}, D^{\varepsilon}_{j,I_2}, \ldots, D^{\varepsilon}_{j,I_n}$ and $D^{\varepsilon}_{j,F}$ denote the high-frequency sub-images of $I_1, \ldots, I_n$ and $F$ at decomposition scale $j$ ($1 \le j \le J$) in direction $\varepsilon$, where $\varepsilon = 1, 2, 3$ denotes the vertical, horizontal and diagonal directions respectively.

Figure 1.

(Top) Two-dimensional integer lifting wavelet decomposition of a visible-light image with depth 2. (Bottom) Two-dimensional integer lifting wavelet decomposition of a visible-light image with depth 3.


3.1. The fusion rules of the low-frequency area

First, let the absolute values of the low-frequency coefficients of $I_1, I_2, \ldots, I_n$ be $|C_{J,I_1}|, |C_{J,I_2}|, \ldots, |C_{J,I_n}|$, normalized as:

$NC_{J,I_i}(x,y) = |C_{J,I_i}(x,y)| \Big/ \sum_{i=1}^{n} |C_{J,I_i}(x,y)|, \quad (i = 1, 2, \ldots, n)$   (E6)

Then define a matching degree $MNC_J(x,y)$:

$MNC_J(x,y) = \max_{i \in \{1,\ldots,n\}} \{NC_{J,I_i}(x,y)\} - \min_{i \in \{1,\ldots,n\}} \{NC_{J,I_i}(x,y)\}$   (E7)

Finally, determine the fusion operator. Define a threshold $T$ (usually in the range 0.5 to 1). If $MNC_J(x,y) \ge T$, then

$C_{J,F}(x,y) = \sum_{i=1}^{n} p_{I_i} C_{J,I_i}(x,y)$   (E8)

where the complete algorithm for computing the weighting coefficients $p_{I_i}$ $(i = 1, 2, \ldots, n)$ is detailed in Table 1.

Table 1. The algorithm for computing the weighting coefficients $p_{I_i}$

If $MNC_J(x,y) < T$ (E9), then

$C_{J,F}(x,y) = \sum_{i=1}^{n} q_{I_i} C_{J,I_i}(x,y)$   (E10)

where $q_{I_i}$ is the new weighting coefficient, defined as:

$q_{I_i}(x,y) = NC_{J,I_i}(x,y), \quad (i = 1, 2, \ldots, n)$   (E11)

From the above, when the absolute values of the low-frequency coefficients differ greatly, the coefficient with the larger absolute value is chosen as the fused pixel; when they differ little, a weighted average determines the fused low-frequency coefficient. The low-frequency fusion rule can therefore dynamically select between the weighted-average method and the choose-the-larger-absolute-value method according to the characteristics of the image itself. It thus suits both images whose low-frequency parts are strongly complementary and images whose low-frequency parts are similar and weakly complementary.
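The low-frequency rule of (E6)-(E11) might be sketched as follows for n source images. Since Table 1 is not reproduced here, the selection branch is assumed, as the text suggests, to pick the coefficient with the largest absolute value:

```python
import numpy as np

def fuse_lowpass(coeffs, T=0.65):
    """Low-frequency fusion sketch for a list of n low-frequency
    sub-images (same shape). T is the matching-degree threshold."""
    C = np.stack(coeffs)                              # (n, H, W)
    A = np.abs(C)
    # (E6): normalized coefficient magnitudes (guard against /0)
    N = A / np.maximum(A.sum(axis=0), 1e-12)
    # (E7): matching degree = max - min of the normalized magnitudes
    match = N.max(axis=0) - N.min(axis=0)
    # selection branch (E8, assumed Table 1 rule): keep the
    # coefficient with the largest absolute value
    sel = np.take_along_axis(C, np.argmax(A, axis=0)[None], axis=0)[0]
    # weighted-average branch (E10, E11): weights are the
    # normalized magnitudes
    wavg = (N * C).sum(axis=0)
    return np.where(match >= T, sel, wavg)
```

For pixels where one image dominates (high matching degree) the larger coefficient is selected outright; elsewhere the normalized magnitudes act as averaging weights, matching the dynamic behaviour described above.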

3.2. The fusion rules of the high-frequency area

First, compute the local variance $MSE^{\varepsilon}_{j,I_i}$ $(i = 1, \ldots, n;\ \varepsilon = 1, 2, 3;\ 1 \le j \le J)$ of each image $I_i$ $(i = 1, \ldots, n)$ at decomposition scale $j$ in direction $\varepsilon$:

$MSE^{\varepsilon}_{j,I_i}(x,y) = \frac{1}{M \times N} \sum_{m=1}^{M} \sum_{n=1}^{N} \left[ D^{\varepsilon}_{j,I_i}\!\left(x + m - \frac{M+1}{2},\, y + n - \frac{N+1}{2}\right) - \bar{m}^{\varepsilon}_{j,I_i}(x,y) \right]^2$   (E12)

where $M$ and $N$ are respectively the numbers of rows and columns of the local region (generally taken to be odd), and $\bar{m}^{\varepsilon}_{j,I_i}(x,y)$ is the mean pixel value of the corresponding local region of $I_i$:

$\bar{m}^{\varepsilon}_{j,I_i}(x,y) = \frac{1}{M \times N} \sum_{m=1}^{M} \sum_{n=1}^{N} D^{\varepsilon}_{j,I_i}\!\left(x + m - \frac{M+1}{2},\, y + n - \frac{N+1}{2}\right)$   (E13)

The local variance $MSE^{\varepsilon}_{j,I_i}$ describes the variation and dispersion of the gray values in the local region; it fully represents the local saliency in each direction and reflects the detail and edge information of the image. The larger the local variance, the more the gray values in the region vary and the more dispersed they are. Therefore, the local variance is chosen as the activity measure for fusing the high-frequency components.

Then, normalize the local variance $MSE^{\varepsilon}_{j,I_i}$ of the source images at decomposition scale $j$ in direction $\varepsilon$:

$NMSE^{\varepsilon}_{j,I_i}(x,y) = MSE^{\varepsilon}_{j,I_i}(x,y) \Big/ \sum_{i=1}^{n} MSE^{\varepsilon}_{j,I_i}(x,y), \quad 1 \le j \le J;\ \varepsilon = 1, 2, 3;\ i = 1, 2, \ldots, n$   (E14)

Then a matching degree is defined as:

$MNMSE^{\varepsilon}_{j}(x,y) = \max_{i \in \{1,\ldots,n\}} \{NMSE^{\varepsilon}_{j,I_i}(x,y)\} - \min_{i \in \{1,\ldots,n\}} \{NMSE^{\varepsilon}_{j,I_i}(x,y)\}$   (E15)

Finally, a matching degree threshold $\alpha$ is defined, whose value is usually between 0.5 and 1. If $MNMSE^{\varepsilon}_{j}(x,y) \ge \alpha$, then

$D^{\varepsilon}_{j,F}(x,y) = \sum_{i=1}^{n} p_{I_i} D^{\varepsilon}_{j,I_i}(x,y)$   (E16)

where the complete algorithm for computing the weighting coefficients $p_{I_i}$ $(i = 1, 2, \ldots, n)$ is detailed in Table 2.

Table 2. The algorithm for computing the weighting coefficients $p_{I_i}$

That is, select as the fused wavelet coefficient the coefficient of the image whose local region, centred on the current pixel $(x, y)$, has the largest local variance.

If $MNMSE^{\varepsilon}_{j}(x,y) < \alpha$ (E17), then

$D^{\varepsilon}_{j,F}(x,y) = \sum_{i=1}^{n} q_{I_i} D^{\varepsilon}_{j,I_i}(x,y)$   (E18)

where $q_{I_i}$ is the new weighting coefficient, defined as:

$q_{I_i}(x,y) = NMSE^{\varepsilon}_{j,I_i}(x,y), \quad (i = 1, 2, \ldots, n)$   (E19)

The above fusion rules indicate that when the local variances of the high-frequency components of the source images differ greatly at the corresponding decomposition level and direction, one source image contains rich detail information while the others contain less, so the choose-the-larger-local-variance rule is used. Conversely, when the local variances differ little, all the high-frequency coefficients contain comparable detail information, and the weighted-average operator determines the fused wavelet coefficients. In this way significant details are clearly retained, loss of information is avoided, noise is reduced, and the consistency of the fused image is ensured.
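The high-frequency rule of (E12)-(E19) can be sketched in the same style. Table 2 is likewise not reproduced here, so the selection branch is assumed to pick the coefficient whose local region has the largest variance, as the text above suggests:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_variance(D, win=3):
    """Local variance of (E12) over a win x win window,
    computed as E[x^2] - (E[x])^2 with a moving-average filter."""
    mean = uniform_filter(D, size=win, mode='reflect')          # (E13)
    return uniform_filter(D * D, size=win, mode='reflect') - mean ** 2

def fuse_highpass(details, win=3, alpha=0.75):
    """High-frequency fusion sketch for one sub-band (one scale j,
    one direction) of n source images."""
    D = np.stack(details)                                # (n, H, W)
    V = np.stack([local_variance(d, win) for d in D])
    # (E14): normalized local variances (guard against /0)
    NV = V / np.maximum(V.sum(axis=0), 1e-12)
    # (E15): matching degree
    match = NV.max(axis=0) - NV.min(axis=0)
    # selection branch (E16, assumed Table 2 rule): coefficient of
    # the image with the largest local variance
    sel = np.take_along_axis(D, np.argmax(V, axis=0)[None], axis=0)[0]
    # weighted-average branch (E18, E19)
    wavg = (NV * D).sum(axis=0)
    return np.where(match >= alpha, sel, wavg)
```

Where one image's local region is clearly more active, its detail coefficient is selected; where activity is comparable, the normalized variances weight an average, mirroring the rule described above.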

3.3. The image fusion scheme

The fusion framework using the integer lifting wavelet transform is shown in Fig. 2. The approach to image fusion in the ILWT (integer lifting wavelet transform) domain is as follows.

Step 1. Apply the two-dimensional integer lifting wavelet decomposition separately to the already-registered source images $I_1, I_2, \ldots, I_n$.

Step 2. For the low-frequency decomposition coefficients, adopt the selection and weighted-average low-frequency fusion rules of Section 3.1; the fused low-frequency coefficients are obtained using (E8) and (E10).

Step 3. For the high-frequency decomposition coefficients in the vertical, horizontal and diagonal directions, obtain the fused high-frequency components using (E16) and (E18) of Section 3.2.

Step 4. Finally, determine the scale coefficients $C_{J,F}$ and the wavelet coefficients $D^{\varepsilon}_{j,F}$ ($1 \le j \le J$; $\varepsilon = 1, 2, 3$) of the fused image. The final fused image is obtained by the IILWT (inverse integer lifting wavelet transform).


4. Experimental study and analysis

To verify the effect and performance of the algorithm, we carried out fusion simulation experiments on three multi-focus source images. At the same time, the results were compared with those of two traditional wavelet image fusion algorithms. In our algorithm, the source images undergo a three-level integer lifting wavelet decomposition, and the local region takes a window with

Figure 2.

The fusion framework using integer lifting wavelet transform

size of 3×3; the matching degree thresholds of the high-frequency and low-frequency parts are set to 0.75 and 0.65 respectively. For ease of comparison, the two traditional wavelet image fusion algorithms are denoted wavelet fusion algorithm I and wavelet fusion algorithm II. In algorithm I the high- and low-frequency components use, respectively, the choose-the-larger-local-variance rule and the weighted-average rule, while in algorithm II they use, respectively, the choose-the-larger-absolute-value rule and the simple-average rule. In addition, wavelet fusion algorithms I and II both use the sym4 wavelet as the multi-scale decomposition and reconstruction tool, and their decomposition levels and local window sizes are the same as in our algorithm.

Figure 3 shows the fusion results for three multi-focus images. Figures 3(a), (b) and (c) are the source images focused on the left, middle and right targets respectively; each image is 512×512 and precisely registered. In Fig. 3(a) the left target is clearer, in Fig. 3(b) the middle target is clearer, and in Fig. 3(c) the right target is clearer. The purpose of fusion is to obtain an image in which the left, middle and right targets are all clear. Figures 3(d), (e) and (f) are the fused images obtained with the proposed algorithm, wavelet fusion algorithm I and wavelet fusion algorithm II respectively. The fusion results show that all three algorithms achieve fairly satisfactory visual effects, eliminate the focus differences of the source images as far as possible, and improve the overall clarity of the fused image. Comparison makes it evident, however, that the fused image obtained by the proposed algorithm is the best: all targets in it are the clearest. To compare the three algorithms further, Fig. 4 shows enlarged left, middle and right local regions of the three fused images. Figures 4(a), (d) and (g) are the left, middle and right local regions of the fused image from the proposed algorithm; Figs. 4(b), (e) and (h) are those from wavelet fusion algorithm I; and Figs. 4(c), (f) and (i) are those from wavelet fusion algorithm II. Figure 4 shows that in the fused image of the proposed algorithm the left, middle and right local regions are all very clear and the details are well preserved. For example, in Fig. 4(a) the rings and the cord (the part marked by the dotted box) are clearly retained; in Fig. 4(d) the saw teeth and the background lines are clearly visible; and in Fig. 4(g) the edge of the ball is the most distinct and the double-image artefacts are removed most cleanly. By contrast, the result of wavelet fusion algorithm I is second best and that of wavelet fusion algorithm II is worst: in Fig. 4(c) the rings and the cord are almost invisible, while the saw teeth and background lines in Fig. 4(f) and the ball in Fig. 4(i) are not very clear.

Subjective visual evaluation gives an intuitive comparison of the fusion results, but it is easily affected by personal experience and visual psychology, so it must be combined with objective evaluation criteria for a comprehensive assessment. Here we use information entropy (IE), average gradient (AG) and standard deviation (SD) as the objective performance criteria (Blum et al., 2005); Table 3 presents the objective evaluation of the above fusion results. The data in Table 3 show that, relative to wavelet fusion algorithms I and II, the proposed algorithm obtains better objective evaluation indices and fuses faster. The fusion results in Fig. 3 also show that, relative to the other two wavelet fusion methods, the edge details of the fused image obtained by the proposed algorithm are more significant. To assess more intuitively how well the algorithm preserves the edge detail of the source images, the Laplacian operator is used to extract the edges of the fused images in Fig. 3; the edge extraction results are shown in Fig. 5. Clearly, the edges of the fused image from the proposed algorithm contain richer detail, which is more conducive to further processing such as image segmentation and recognition.
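The three objective criteria can be computed as sketched below. The average-gradient formula used here is one common definition and may differ in detail from the one used to produce Table 3:

```python
import numpy as np

def entropy(img):
    """Information entropy (bits) of an 8-bit image's gray histogram."""
    hist = np.bincount(img.ravel().astype(np.uint8), minlength=256)
    p = hist / hist.sum()
    p = p[p > 0]                       # 0*log(0) terms contribute nothing
    return float(-(p * np.log2(p)).sum())

def average_gradient(img):
    """Mean of sqrt((dx^2 + dy^2) / 2) over the image interior,
    with dx and dy the forward differences along rows and columns."""
    f = img.astype(np.float64)
    dx = f[1:, :-1] - f[:-1, :-1]
    dy = f[:-1, 1:] - f[:-1, :-1]
    return float(np.mean(np.sqrt((dx ** 2 + dy ** 2) / 2)))

def standard_deviation(img):
    """Global standard deviation of the gray values."""
    return float(img.astype(np.float64).std())

demo = np.tile(np.arange(8, dtype=np.uint8), (8, 1))  # horizontal ramp
print(entropy(demo), average_gradient(demo), standard_deviation(demo))
```

Higher values of all three measures generally indicate a fused image with more information, sharper detail and higher contrast, which is how Table 3 is read.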


5. Conclusion

Based on an analysis of existing multi-resolution fusion methods, and of the problem that most current multi-resolution fusion models with good fusion effect are suitable only for two-image fusion and cannot easily accommodate more images, this chapter proposes a multiple-image fusion algorithm based on the integer lifting wavelet transform. Fusion simulation experiments were carried out on several multi-focus images, and the experimental results were compared and analysed. The results show that the algorithm is not only suitable for fusing multiple registered source images in real time, but also quickly obtains a fused image with better visual effect and more abundant detail. In addition, the objective evaluation indices of fusion quality could be introduced into an adaptive selection process for the matching degree thresholds, transforming the choice of optimal thresholds into an optimization problem; using intelligent optimization algorithms (such as immune genetic algorithms, clonal selection algorithms and particle swarm optimization) to realize the optimal choice of the matching degree thresholds requires further research.

Figure 3.

Multi-focus source images and fusion results

Figure 4.

Zoom-in of image fusion results

Fused image   Entropy   Average gradient   Standard deviation   Fusion time /s
Fig. 3(d)     7.1869    22.9037            58.1062              1.819
Fig. 3(e)     7.1043    20.4479            56.8517              2.358
Fig. 3(f)     7.0795    19.0536            55.9841              2.137

Table 3.

Performance comparison of different fusion schemes on multi-focus images

Figure 5.

Edge extraction from image fusion results

References

  1. Blum, R. S.; Xue, Z. Z. & Zhang, M. (2005). Multi-sensor Image Fusion and Its Applications, InTech, ISBN 0-84933-417-9, Boca Raton, USA
  2. Hu, G.; Liu, Z. & Xu, X. (2008). Research and Recent Development of Image Fusion at Pixel Level. Application Research of Computers, Vol. 25, No. 3, (March 2008), pp. 650-655, (in Chinese)
  3. Li, M. & Wu, S. J. (2003). A New Image Fusion Algorithm Based on Wavelet Transform, Proceedings of the International Conference on Computational Intelligence and Multimedia Applications, pp. 154-159, Phoenix, Arizona, USA, May 17-22, 2003
  4. Wang, H. (2004). A New Multiwavelet-based Approach to Image Fusion. Journal of Mathematical Imaging and Vision, Vol. 21, No. 3, (March 2004), pp. 177-192
  5. Zhang, X.; Pan, Q. & Zhao, Y. (2005). Image Fusion Method Based on Stationary Multi-wavelet Transform. Journal of Optoelectronics Laser, Vol. 16, No. 5, (May 2005), pp. 605-609
  6. Li, W. & Zhu, X. (2007). An Image Fusion Algorithm Based on Second Generation Wavelet Transform and Its Performance Evaluation. Acta Automatica Sinica, Vol. 33, No. 8, (August 2007), pp. 817-822
  7. Yang, X.; Jin, H. & Jiao, L. (2007). Adaptive Image Fusion Algorithm for Infrared and Visible Light Images Based on DT-CWT. Journal of Infrared and Millimeter Waves, Vol. 26, No. 6, (June 2007), pp. 419-424
  8. Wang, Z.; Yu, X. & Zhang, L. B. (2008). A Remote Sensing Image Fusion Algorithm Based on Integer Wavelet Transform. Journal of Optoelectronics Laser, Vol. 19, No. 11, (November 2008), pp. 1542-1545
  9. Filippo, N.; Andrea, G. & Stefano, B. (2007). Remote Sensing Image Fusion Using the Curvelet Transform. Information Fusion, Vol. 8, 2007, pp. 143-156
  10. Liu, K.; Guo, L. & Li, H. H. (2007). Image Fusion Based on Finite Ridgelet Transform. Journal of Optoelectronics Laser, Vol. 18, No. 11, (November 2007), pp. 1382-1385
  11. Yang, L.; Guo, B. L. & Ni, W. (2008). Multimodality Medical Image Fusion Based on Multiscale Geometric Analysis of Contourlet Transform. Neurocomputing, Vol. 72, 2008, pp. 203-211
  12. Yang, X. H. & Jiao, L. C. (2008). Fusion Algorithm for Remote Sensing Images Based on Nonsubsampled Contourlet Transform. Acta Automatica Sinica, Vol. 34, No. 3, (March 2008), pp. 274-281
  13. Lin, Z. X. D. (2005). Study on Algorithms of Wavelet Transform for Image Processing Via Lifting Scheme, In: Wavelet Technology, pp. 47-68, Xidian University, Xi'an, China
