Open access peer-reviewed chapter

Image Fusion Based on Shearlets

By Miao Qiguang, Shi Cheng and Li Weisheng

Submitted: April 17th 2012Reviewed: August 27th 2013Published: November 20th 2013

DOI: 10.5772/56945


1. Introduction

Image decomposition is important to image fusion: it affects the quality of information extraction and, ultimately, the quality of the whole fusion. Wavelet theory has been developed since the beginning of the last century. It was first applied to signal processing in the 1980s [1], and over the past decade it has been recognized as having great potential in image processing applications, including image fusion [2]. Wavelet transforms are more useful than Fourier transforms and are efficient in dealing with one-dimensional piecewise-smooth signals [3-5]. However, their limited directionality means they do not perform well on multidimensional data. Images contain sharp transitions such as edges, and wavelet transforms are not optimally efficient at representing them.

Recently, a theory for multidimensional data called multi-scale geometric analysis (MGA) has been developed. Many MGA tools have been proposed, such as the ridgelet, curvelet, bandelet and contourlet [6-9]. These new MGA tools provide higher directional sensitivity than wavelets. Shearlets, introduced in 2005, possess not only all of the above properties but also a rich mathematical structure similar to that of wavelets, which is associated with a multiresolution analysis. Shearlets form a tight frame at various scales and directions and are optimally sparse in representing images with edges. Only curvelets have similar properties [10-14], but the curvelet construction is not built directly in the discrete domain and does not provide a multiresolution representation of the geometry. The shearlet decomposition is similar to that of contourlets: the contourlet transform consists of a Laplacian pyramid followed by directional filtering, whereas for shearlets the directional filtering is replaced by a shear matrix. An important advantage of the shearlet transform over the contourlet transform is that it places no restrictions on the number of directions [15-19].

In recent years, the theory of shearlets as applied to image processing has gradually been studied. At present, shearlets are mainly applied to image denoising, sparse image representation [20] and edge detection [21, 22]. Their application to image fusion is still being explored.

2. Shearlets [12, 20]

2.1. The theory of Shearlets

In dimension n=2, the affine systems with composite dilations are defined as follows.

$$\mathcal{A}_{AS}(\psi)=\left\{\psi_{j,l,k}(x)=|\det A|^{j/2}\,\psi\!\left(S^{l}A^{j}x-k\right):\ j,l\in\mathbb{Z},\ k\in\mathbb{Z}^{2}\right\} \tag{1}$$

where $\psi\in L^{2}(\mathbb{R}^{2})$, and $A$, $S$ are both $2\times2$ invertible matrices with $|\det S|=1$. The elements of this system are called composite wavelets if $\mathcal{A}_{AS}(\psi)$ forms a tight frame for $L^{2}(\mathbb{R}^{2})$:

$$\sum_{j,l,k}\left|\langle f,\psi_{j,l,k}\rangle\right|^{2}=\|f\|^{2}.$$

Let $A$ denote the parabolic scaling matrix and $S$ denote the shear matrix. For each $a>0$ and $s\in\mathbb{R}$,

$$A=\begin{pmatrix}a&0\\0&\sqrt{a}\end{pmatrix},\qquad S=\begin{pmatrix}1&s\\0&1\end{pmatrix}.$$

The matrices described above play special roles in the shearlet transform. The scaling matrix $A$ controls the scale of the shearlets by applying a different dilation factor along each of the two axes, which ensures that the frequency support of the shearlets becomes increasingly elongated at finer scales. The shear matrix $S$, on the other hand, is not expansive and only controls the orientation of the shearlets. The size of the frequency support of the shearlets is illustrated in Fig. 1 for some particular values of $a$ and $s$.
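As a quick numerical illustration, the following NumPy sketch (the function names are ours, not from the chapter) builds $A$ and $S$ and checks that the shear is area-preserving while the scaling is expansive:

```python
import numpy as np

def scaling_matrix(a):
    """Parabolic scaling matrix A: dilation a along xi_1 and sqrt(a) along xi_2."""
    return np.array([[a, 0.0], [0.0, np.sqrt(a)]])

def shear_matrix(s):
    """Shear matrix S: slope s along the xi_1-axis, area-preserving."""
    return np.array([[1.0, s], [0.0, 1.0]])

A = scaling_matrix(4.0)  # the choice a = 4 used later in the chapter
S = shear_matrix(1.0)    # the choice s = 1 used later in the chapter

print(np.linalg.det(S))  # 1.0  -> |det S| = 1, S only reorients
print(np.linalg.det(A))  # 8.0  -> A is expansive (a^(3/2) = 8 for a = 4)

# The composite dilation S^l A^j both elongates and reorients a vector:
print(S @ A @ np.array([1.0, 1.0]))  # [6. 2.]
```

This makes concrete why the pair $(A, S)$ produces supports that are both anisotropic (from $A$) and tilted (from $S$).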

Figure 1.

Frequency support of the shearlets $\psi_{j,l,k}$ for different values of $a$ and $s$.

Following reference [12], assume $a=4$ and $s=1$, so that $A=A_{0}$ is the anisotropic dilation matrix and $S=S_{0}$ is the shear matrix, given by

$$A_{0}=\begin{pmatrix}4&0\\0&2\end{pmatrix},\qquad S_{0}=\begin{pmatrix}1&1\\0&1\end{pmatrix}.$$

For $\xi=(\xi_{1},\xi_{2})\in\hat{\mathbb{R}}^{2}$, $\xi_{1}\neq0$, let $\hat{\psi}^{(0)}(\xi)$ be given by

$$\hat{\psi}^{(0)}(\xi)=\hat{\psi}^{(0)}(\xi_{1},\xi_{2})=\hat{\psi}_{1}(\xi_{1})\,\hat{\psi}_{2}\!\left(\frac{\xi_{2}}{\xi_{1}}\right) \tag{2}$$

where $\hat{\psi}_{1}\in C^{\infty}(\mathbb{R})$ is a wavelet with $\operatorname{supp}\hat{\psi}_{1}\subset[-1/2,-1/16]\cup[1/16,1/2]$, and $\hat{\psi}_{2}\in C^{\infty}(\mathbb{R})$ with $\operatorname{supp}\hat{\psi}_{2}\subset[-1,1]$. This implies that $\hat{\psi}^{(0)}\in C^{\infty}(\mathbb{R}^{2})$ and $\operatorname{supp}\hat{\psi}^{(0)}\subset[-1/2,1/2]^{2}$.

In addition, we assume that

$$\sum_{j\ge0}\left|\hat{\psi}_{1}(2^{-2j}\omega)\right|^{2}=1\quad\text{for }|\omega|\ge\tfrac{1}{8}, \tag{3}$$

and, for each $j\ge0$,

$$\sum_{l=-2^{j}}^{2^{j}-1}\left|\hat{\psi}_{2}(2^{j}\omega-l)\right|^{2}=1\quad\text{for }|\omega|\le1. \tag{4}$$

There are several examples of functions $\psi_{1}$, $\psi_{2}$ satisfying the properties described above. Equations (3) and (4) imply that

$$\sum_{j\ge0}\sum_{l=-2^{j}}^{2^{j}-1}\left|\hat{\psi}^{(0)}(\xi A_{0}^{-j}S_{0}^{-l})\right|^{2}=\sum_{j\ge0}\sum_{l=-2^{j}}^{2^{j}-1}\left|\hat{\psi}_{1}(2^{-2j}\xi_{1})\right|^{2}\left|\hat{\psi}_{2}\!\left(2^{j}\frac{\xi_{2}}{\xi_{1}}-l\right)\right|^{2}=1 \tag{5}$$

for any $(\xi_{1},\xi_{2})\in D_{0}$, where $D_{0}=\left\{(\xi_{1},\xi_{2})\in\hat{\mathbb{R}}^{2}:\ |\xi_{1}|\ge\tfrac{1}{8},\ \left|\tfrac{\xi_{2}}{\xi_{1}}\right|\le1\right\}$ is the horizontal cone. That is, the functions $\{\hat{\psi}^{(0)}(\xi A_{0}^{-j}S_{0}^{-l})\}$ form a tiling of $D_{0}$, as illustrated in Fig. 2(a). This property implies that the collection

$$\left\{\psi_{j,l,k}^{(0)}(x)=2^{3j/2}\,\psi^{(0)}\!\left(S_{0}^{l}A_{0}^{j}x-k\right):\ j\ge0,\ -2^{j}\le l\le2^{j}-1,\ k\in\mathbb{Z}^{2}\right\} \tag{6}$$

is a Parseval frame for $L^{2}(D_{0})^{\vee}=\{f\in L^{2}(\mathbb{R}^{2}):\operatorname{supp}\hat{f}\subset D_{0}\}$. From the conditions on the supports of $\hat{\psi}_{1}$ and $\hat{\psi}_{2}$, one can easily observe that the functions $\psi_{j,l,k}^{(0)}$ have frequency support

$$\operatorname{supp}\hat{\psi}_{j,l,k}^{(0)}\subset\left\{(\xi_{1},\xi_{2}):\ \xi_{1}\in[-2^{2j-1},-2^{2j-4}]\cup[2^{2j-4},2^{2j-1}],\ \left|\frac{\xi_{2}}{\xi_{1}}+l\,2^{-j}\right|\le2^{-j}\right\} \tag{7}$$

That is, each element $\psi_{j,l,k}^{(0)}$ is supported on a pair of trapezoids of approximate size $2^{2j}\times2^{j}$, oriented along lines of slope $l\,2^{-j}$ (see Fig. 2(b)).

Figure 2.

(a) The tiling of the frequency by the shearlets; (b) The size of the frequency support of a shearlet ψj,l,k.

Similarly, we can construct a Parseval frame for $L^{2}(D_{1})^{\vee}$, where $D_{1}$ is the vertical cone,

$$D_{1}=\left\{(\xi_{1},\xi_{2})\in\hat{\mathbb{R}}^{2}:\ |\xi_{2}|\ge\tfrac{1}{8},\ \left|\tfrac{\xi_{1}}{\xi_{2}}\right|\le1\right\}. \tag{8}$$

Let

$$A_{1}=\begin{pmatrix}2&0\\0&4\end{pmatrix},\qquad S_{1}=\begin{pmatrix}1&0\\1&1\end{pmatrix}$$

and $\hat{\psi}^{(1)}(\xi)=\hat{\psi}^{(1)}(\xi_{1},\xi_{2})=\hat{\psi}_{1}(\xi_{2})\,\hat{\psi}_{2}\!\left(\tfrac{\xi_{1}}{\xi_{2}}\right)$, where $\hat{\psi}_{1}$ and $\hat{\psi}_{2}$ are defined as above. Then the Parseval frame for $L^{2}(D_{1})^{\vee}$ is

$$\left\{\psi_{j,l,k}^{(1)}(x)=2^{3j/2}\,\psi^{(1)}\!\left(S_{1}^{l}A_{1}^{j}x-k\right):\ j\ge0,\ -2^{j}\le l\le2^{j}-1,\ k\in\mathbb{Z}^{2}\right\}. \tag{9}$$

To make this discussion more rigorous, it is useful to examine the problem from the point of view of approximation theory. If $F=\{\psi_{\mu}:\mu\in I\}$ is a basis or, more generally, a tight frame for $L^{2}(\mathbb{R}^{2})$, then an image $f$ can be approximated by the partial sums

$$f_{M}=\sum_{\mu\in I_{M}}\langle f,\psi_{\mu}\rangle\,\psi_{\mu}, \tag{10}$$

where $I_{M}$ is the index set of the $M$ largest inner products $|\langle f,\psi_{\mu}\rangle|$. The resulting approximation error is

$$\varepsilon_{M}=\|f-f_{M}\|^{2}=\sum_{\mu\notin I_{M}}\left|\langle f,\psi_{\mu}\rangle\right|^{2}, \tag{11}$$

and this quantity approaches zero asymptotically as $M$ increases.

For cartoon-like images with edges, the approximation error of Fourier approximations is $\varepsilon_{M}\le CM^{-1/2}$, that of wavelets is $\varepsilon_{M}\le CM^{-1}$, and that of shearlets is $\varepsilon_{M}\le C(\log M)^{3}M^{-2}$, which is better than both Fourier and wavelet approximations.
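These rates are asymptotic upper bounds, each up to a constant $C$, but even a direct numerical comparison of the three bounds (taking $C=1$; the helper names below are ours) shows how much faster the shearlet error decays:

```python
import math

def fourier_bound(M):
    return M ** -0.5                    # C * M^(-1/2)

def wavelet_bound(M):
    return 1.0 / M                      # C * M^(-1)

def shearlet_bound(M):
    return math.log(M) ** 3 / M ** 2    # C * (log M)^3 * M^(-2)

# Compare the bounds for increasing numbers M of retained coefficients.
for M in (10**2, 10**4, 10**6):
    print(M, fourier_bound(M), wavelet_bound(M), shearlet_bound(M))
```

With $M=10^{4}$ retained coefficients, for example, the three bounds are roughly $10^{-2}$, $10^{-4}$ and $8\times10^{-6}$ respectively.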

2.2. Discrete Shearlets

It is convenient to describe the collection of shearlets presented above in a form more suitable for deriving a numerical implementation. For $\xi=(\xi_{1},\xi_{2})\in\hat{\mathbb{R}}^{2}$, $j\ge0$ and $l=-2^{j},\dots,2^{j}-1$, let

$$W_{j,l}^{(0)}(\xi)=\begin{cases}\hat{\psi}_{2}\!\left(2^{j}\frac{\xi_{2}}{\xi_{1}}-l\right)\chi_{D_{0}}(\xi)+\hat{\psi}_{2}\!\left(2^{j}\frac{\xi_{1}}{\xi_{2}}-l+1\right)\chi_{D_{1}}(\xi), & l=-2^{j}\\[4pt]\hat{\psi}_{2}\!\left(2^{j}\frac{\xi_{2}}{\xi_{1}}-l\right)\chi_{D_{0}}(\xi)+\hat{\psi}_{2}\!\left(2^{j}\frac{\xi_{1}}{\xi_{2}}-l-1\right)\chi_{D_{1}}(\xi), & l=2^{j}-1\\[4pt]\hat{\psi}_{2}\!\left(2^{j}\frac{\xi_{2}}{\xi_{1}}-l\right), & \text{otherwise}\end{cases} \tag{12}$$

and

$$W_{j,l}^{(1)}(\xi)=\begin{cases}\hat{\psi}_{2}\!\left(2^{j}\frac{\xi_{2}}{\xi_{1}}-l+1\right)\chi_{D_{0}}(\xi)+\hat{\psi}_{2}\!\left(2^{j}\frac{\xi_{1}}{\xi_{2}}-l\right)\chi_{D_{1}}(\xi), & l=-2^{j}\\[4pt]\hat{\psi}_{2}\!\left(2^{j}\frac{\xi_{2}}{\xi_{1}}-l-1\right)\chi_{D_{0}}(\xi)+\hat{\psi}_{2}\!\left(2^{j}\frac{\xi_{1}}{\xi_{2}}-l\right)\chi_{D_{1}}(\xi), & l=2^{j}-1\\[4pt]\hat{\psi}_{2}\!\left(2^{j}\frac{\xi_{1}}{\xi_{2}}-l\right), & \text{otherwise}\end{cases} \tag{13}$$

where $\hat{\psi}_{2}$, $D_{0}$ and $D_{1}$ are defined as in Section 2.1. For $-2^{j}+1\le l\le2^{j}-2$, each term $W_{j,l}^{(d)}(\xi)$ is a window function localized on a pair of trapezoids. When $l=-2^{j}$ or $l=2^{j}-1$, at the junction of the horizontal cone $D_{0}$ and the vertical cone $D_{1}$, $W_{j,l}^{(d)}(\xi)$ is the superposition of two such functions.

Using this notation, for $j\ge0$, $-2^{j}\le l\le2^{j}-1$, $k\in\mathbb{Z}^{2}$ and $d=0,1$, we can write the Fourier transform of the shearlets in the compact form

$$\hat{\psi}_{j,l,k}^{(d)}(\xi)=2^{-3j/2}\,V(2^{-2j}\xi)\,W_{j,l}^{(d)}(\xi)\,e^{-2\pi i\xi A_{d}^{-j}S_{d}^{-l}k}, \tag{14}$$

where $V(\xi_{1},\xi_{2})=\hat{\psi}_{1}(\xi_{1})\chi_{D_{0}}(\xi_{1},\xi_{2})+\hat{\psi}_{1}(\xi_{2})\chi_{D_{1}}(\xi_{1},\xi_{2})$.

The shearlet transform of $f\in L^{2}(\mathbb{R}^{2})$ can then be computed by

$$\langle f,\psi_{j,l,k}^{(d)}\rangle=2^{-3j/2}\int_{\hat{\mathbb{R}}^{2}}\hat{f}(\xi)\,\overline{V(2^{-2j}\xi)\,W_{j,l}^{(d)}(\xi)}\,e^{2\pi i\xi A_{d}^{-j}S_{d}^{-l}k}\,d\xi. \tag{15}$$

Indeed, one can easily verify that

$$\sum_{d=0}^{1}\sum_{l=-2^{j}}^{2^{j}-1}\left|W_{j,l}^{(d)}(\xi_{1},\xi_{2})\right|^{2}=1, \tag{16}$$

and from this it follows that

$$|\hat{\varphi}(\xi_{1},\xi_{2})|^{2}+\sum_{d=0}^{1}\sum_{j\ge0}\sum_{l=-2^{j}}^{2^{j}-1}\left|V(2^{-2j}\xi_{1},2^{-2j}\xi_{2})\right|^{2}\left|W_{j,l}^{(d)}(\xi_{1},\xi_{2})\right|^{2}=1, \tag{17}$$

where $\hat{\varphi}$ is the coarse-scale window covering the low frequencies.

3. Multi-focus image fusion based on Shearlets

3.1. Algorithm framework of multi-focus image fusion using Shearlets

3.1.1. Image decomposition

Image decomposition based on the shearlet transform is composed of two parts: multi-direction decomposition and multi-scale decomposition.

  1. Multi-direction decomposition of the image using the shear matrix $S_{0}$ or $S_{1}$.

  2. Multi-scale decomposition of each direction using wavelet packet decomposition.

In step (1), if the image is decomposed only by $S_{0}$, or only by $S_{1}$, the number of directions is $2(l+1)+1$; if the image is decomposed by both $S_{0}$ and $S_{1}$, the number of directions is $2(l+2)+2$. The framework of image decomposition with shearlets is shown in Fig. 3.

Figure 3.

Image decomposition framework with shearlets

3.1.2. Image fusion

Image fusion framework based on shearlets is shown in Fig. 4. The following steps of image fusion are adopted.

  1. The two images taking part in the fusion are geometrically registered to each other.

  2. Transform the original images using shearlets. Both the horizontal and vertical cones are adopted in this method, and the number of directions is 6. Wavelet packets are then used for the multi-scale decomposition, with $j=5$.

  3. A fusion rule based on the regional absolute value is adopted in this algorithm.

  a. The choice of low-frequency coefficients.

The low-frequency coefficients of the fused image are taken as the average of the low-frequency coefficients of the two source images.

  b. The choice of high-frequency coefficients.

$$D_{X}(i,j)=\sum_{i\in M,\ j\in N}\left|Y_{X}(i,j)\right|,\qquad X=A,B \tag{18}$$

Calculate the absolute value of the high-frequency coefficients in the neighborhood by Eq. (18), where $M=N=3$ is the size of the neighborhood, $X$ denotes one of the two source images, $D_{X}(i,j)$ is the regional absolute value of image $X$ within the $3\times3$ neighborhood centered at $(i,j)$, and $Y_{X}(i,j)$ is the coefficient value at $(i,j)$ from $X$.

Then select the high-frequency coefficients from the two source images:

$$F(i,j)=\begin{cases}A(i,j), & D_{A}(i,j)\ge D_{B}(i,j)\\ B(i,j), & D_{A}(i,j)<D_{B}(i,j)\end{cases} \tag{19}$$

where $F$ denotes the high-frequency coefficients of the fused image.

Finally, a region consistency check is performed based on the fusion decision map shown in Eq. (20).

$$\mathrm{Map}(i,j)=\begin{cases}1, & D_{A}(i,j)\ge D_{B}(i,j)\\ 0, & D_{A}(i,j)<D_{B}(i,j)\end{cases} \tag{20}$$

According to Eq. (20), if a certain coefficient in the fused image is to come from source image A but the majority of its surrounding neighbors come from B, this coefficient is switched to come from B.

  4. The fused image is obtained using the inverse shearlet transform.
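The high-frequency selection and consistency check above can be sketched in a few lines of NumPy. This is an illustrative reimplementation under our own naming, not the authors' code; it computes the regional absolute value of Eq. (18), the selection of Eq. (19), and a majority-vote consistency check in the spirit of Eq. (20):

```python
import numpy as np

def regional_abs(Y, size=3):
    """D_X of Eq. (18): sum of |coefficients| over a size x size window."""
    pad = size // 2
    Yp = np.pad(np.abs(Y), pad, mode="edge")
    D = np.zeros(Y.shape)
    for di in range(size):
        for dj in range(size):
            D += Yp[di:di + Y.shape[0], dj:dj + Y.shape[1]]
    return D

def fuse_highpass(YA, YB):
    """Eq. (19) selection followed by the Eq. (20) region consistency check."""
    DA, DB = regional_abs(YA), regional_abs(YB)
    decision = (DA >= DB).astype(float)   # 1 -> take A, 0 -> take B
    votes = regional_abs(decision)        # number of 'A' votes in each window
    decision = votes >= 5                 # switch pixels outvoted by neighbors
    return np.where(decision, YA, YB)
```

For two bands in which A carries the strong coefficients on the left half and B on the right half, the fused band keeps the strong coefficients of both sources.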

Figure 4.

Image fusion framework based on shearlets

3.2. Simulation experiments

  1. Multi-focus image of Bottle

The following group of images is selected to demonstrate the validity of the method proposed in this section.

The two source images, Fig. 5(a) and (b), are multi-focus images that focus on different parts of the scene. The fusion methods compared in these experiments are shearlets, contourlets, Haar, Daubechies, PCA and the Laplacian pyramid (LP). The fusion rule described above is used. The following image quality metrics are used in this experiment: standard deviation (STD), difference of entropy (DEN), overall cross entropy (OCE), entropy (EN), sharpness (SP), peak signal-to-noise ratio (PSNR), mean square error (MSE) and Q.

Figure 5.

Fusion results on experiment images

Fig. 5(c) is the ideal image, and Fig. 5(d)-(i) are the images fused with the different methods. From the subjective evaluation of Fig. 5 and the objective metrics in Table 1, we can see that the shearlet transform preserves more detail information, disperses the gray levels better and yields a sharper fused image than the other methods do.

| Metric | shearlet | contourlet | Haar | Daubechies | PCA | LP |
|--------|----------|------------|---------|------------|---------|---------|
| STD | 43.3322 | 43.3313 | 41.3589 | 41.2225 | 41.3253 | 44.1356 |
| DEN | 0.0021 | 0.0227 | 0.0150 | 0.0144 | 0.0113 | 0.0354 |
| OCE | 0.0107 | 0.0125 | 0.0442 | 0.0470 | 0.0484 | 0.0179 |
| EN | 6.9628 | 6.9577 | 6.9499 | 6.9493 | 6.9462 | 6.9703 |
| SP | 19.1502 | 18.7049 | 15.3007 | 14.8401 | 12.9532 | 19.4853 |
| PSNR | 40.8004 | 39.3935 | 31.4881 | 31.188 | 31.1887 | 40.3666 |
| MSE | 5.0067 | 7.0625 | 45.8016 | 49.0528 | 49.4549 | 5.9761 |
| Q | 0.9042 | 0.8703 | 0.8954 | 0.9010 | 0.9131 | 0.8809 |

Table 1.

Comparison of multi-focus image fusion

  2. Multi-focus images of CT and MRI

The source images are CT (computed tomography) and MRI (magnetic resonance imaging) images. Entropy (EN), sharpness (SP), standard deviation (STD) and Q are used to evaluate the fused images.

Fig. 6(a) is a CT image, whose brightness is related to tissue density: the bone is shown clearly, but soft tissue is invisible. Fig. 6(b) is an MRI image, whose brightness is related to the number of hydrogen atoms in the tissue, so the soft tissue is shown clearly, but the bone is invisible. The CT and MRI images are complementary, and their advantages can be fused into one image. A desired standard image cannot be acquired, so only reference-free metrics are adopted to evaluate the fusion result. The fusion rule described above is used in this experiment.

| Metric | Shearlet | Contourlet | Haar | Daubechies | PCA | Average |
|--------|----------|------------|---------|------------|---------|---------|
| EN | 6.1851 | 5.9189 | 5.9870 | 5.9784 | 5.8792 | 5.9868 |
| SP | 20.5271 | 24.8884 | 16.9938 | 14.8810 | 17.2292 | 16.9935 |
| STD | 45.0704 | 50.4706 | 35.8754 | 35.1490 | 45.3889 | 34.9141 |
| Q | 0.6881 | 0.3022 | 0.4960 | 0.4994 | 0.6847 | 0.4943 |

Table 2.

Comparison of medical image fusion

Figure 6.

Fusion results on experiment images

4. Remote sensing image fusion based on Shearlets and PCNN

4.1. Theory of PCNN

PCNN, often called a third-generation artificial neural network, is a feedback network formed by the connection of many neurons, inspired by the biological visual cortex. Every neuron is made up of three sections: the receptive field, the modulation field and the pulse generator, which can be described by discrete equations [23-25].

The receptive field receives input from other neurons or from the external environment and transmits it through two channels: the F-channel (feeding) and the L-channel (linking). In the modulation field, a positive offset is added to the signal $L_{j}$ from the L-channel, and the result multiplicatively modulates the signal $F_{j}$ from the F-channel. When the neuron threshold $\theta_{j}\ge U_{j}$, the pulse generator is turned off; otherwise, the pulse generator is turned on and outputs a pulse. The mathematical model of the PCNN is described below [26-30].

$$\begin{cases}F_{ij}[n]=e^{-\alpha_{F}}F_{ij}[n-1]+V_{F}\sum_{kl}m_{ijkl}Y_{kl}[n-1]+S_{ij}\\[2pt] L_{ij}[n]=e^{-\alpha_{L}}L_{ij}[n-1]+V_{L}\sum_{kl}w_{ijkl}Y_{kl}[n-1]\\[2pt] U_{ij}[n]=F_{ij}[n]\left(1+\beta L_{ij}[n]\right)\\[2pt] Y_{ij}[n]=\begin{cases}1, & U_{ij}[n]>\theta_{ij}[n]\\ 0, & \text{otherwise}\end{cases}\\[2pt] \theta_{ij}[n]=e^{-\alpha_{\theta}}\theta_{ij}[n-1]+V_{\theta}Y_{ij}[n-1]\end{cases} \tag{21}$$

where $\alpha_{F}$ and $\alpha_{L}$ are decay time constants, $\alpha_{\theta}$ is the threshold decay time constant, $V_{\theta}$ is the threshold amplitude coefficient, $V_{F}$ and $V_{L}$ are the link amplitude coefficients, $\beta$ is the link strength, and $m_{ijkl}$, $w_{ijkl}$ are the link weight matrices.
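The iteration of Eq. (21) is easy to prototype. The sketch below is a simplified PCNN in which, as is common in image fusion work, the feeding channel is taken to be just the stimulus ($F_{ij}[n]=S_{ij}$, so $\alpha_{F}$ and $V_{F}$ are not used); all function names are ours:

```python
import numpy as np

def _link(Y, W):
    """3x3 weighted sum of neighboring pulses ('same' correlation, zero pad)."""
    pad = W.shape[0] // 2
    Yp = np.pad(Y, pad)
    out = np.zeros(Y.shape)
    for i in range(W.shape[0]):
        for j in range(W.shape[1]):
            out += W[i, j] * Yp[i:i + Y.shape[0], j:j + Y.shape[1]]
    return out

def pcnn_fire_counts(S, n_iter=100, alpha_L=0.03, alpha_theta=0.1,
                     V_L=1.0, V_theta=10.0, beta=0.2):
    """Iterate Eq. (21) with F[n] = S and return per-pixel fire counts."""
    W = np.array([[0.5, 1.0, 0.5],
                  [1.0, 1.0, 1.0],
                  [0.5, 1.0, 0.5]])  # link weight matrix used in Sec. 4.3
    L = np.zeros(S.shape)
    Y = np.zeros(S.shape)
    theta = np.ones(S.shape)
    fire = np.zeros(S.shape)
    for _ in range(n_iter):
        L = np.exp(-alpha_L) * L + V_L * _link(Y, W)        # linking channel
        U = S * (1.0 + beta * L)                            # modulation field
        Y = (U > theta).astype(float)                       # pulse generator
        theta = np.exp(-alpha_theta) * theta + V_theta * Y  # dynamic threshold
        fire += Y
    return fire
```

Stronger stimuli (e.g. larger gradient-feature values) fire earlier and more often, which is why fire maps can serve as activity measures in the fusion rule of the next section.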

Figure 7.

The model of PCNN neuron

4.2. Algorithm framework of remote sensing image fusion using Shearlts and PCNN

When PCNN is used for image processing, it is a single-layer two-dimensional network. The number of neurons is equal to the number of pixels, and there is a one-to-one correspondence between image pixels and network neurons.

In this chapter, shearlets and PCNN are used together to fuse images. The steps are described below:

  1. Decompose the original images A and B into different directions $f_{N}^{A},\hat{f}_{N}^{A},f_{N}^{B},\hat{f}_{N}^{B}$ $(N=1,\dots,n)$ via the shear matrices (in this chapter, $n=3$).

  2. Calculate the gradient features in every direction to form the feature maps $\mathrm{Grad}f_{N}^{A},\mathrm{Grad}\hat{f}_{N}^{A},\mathrm{Grad}f_{N}^{B},\mathrm{Grad}\hat{f}_{N}^{B}$.

  3. Decompose the feature maps of all directions using the DWT; $DGf_{N}^{A},DG\hat{f}_{N}^{A},DGf_{N}^{B},DG\hat{f}_{N}^{B}$ are the high-frequency coefficients after the decomposition.

  4. Feed $DGf_{N}^{A},DG\hat{f}_{N}^{A},DGf_{N}^{B},DG\hat{f}_{N}^{B}$ into the PCNN to obtain the fire maps in all directions, $\mathrm{fire}f_{N}^{A},\mathrm{fire}\hat{f}_{N}^{A},\mathrm{fire}f_{N}^{B},\mathrm{fire}\hat{f}_{N}^{B}$.

  5. Apply the shearlet transform to the original images A and B; the high-frequency coefficients in all directions are $f_{N}^{Ah},\hat{f}_{N}^{Ah},f_{N}^{Bh},\hat{f}_{N}^{Bh}$, and the low-frequency coefficients are $f_{N}^{Al},\hat{f}_{N}^{Al},f_{N}^{Bl},\hat{f}_{N}^{Bl}$. The fused high-frequency coefficients in each direction are selected as follows:

$$f_{N}^{h}=\begin{cases}f_{N}^{Ah}, & \mathrm{fire}f_{N}^{A}\ge\mathrm{fire}f_{N}^{B}\\ f_{N}^{Bh}, & \mathrm{fire}f_{N}^{A}<\mathrm{fire}f_{N}^{B}\end{cases},\qquad \hat{f}_{N}^{h}=\begin{cases}\hat{f}_{N}^{Ah}, & \mathrm{fire}\hat{f}_{N}^{A}\ge\mathrm{fire}\hat{f}_{N}^{B}\\ \hat{f}_{N}^{Bh}, & \mathrm{fire}\hat{f}_{N}^{A}<\mathrm{fire}\hat{f}_{N}^{B}\end{cases}$$

The fusion rule for the low-frequency coefficients in each direction is described below:

$$f_{N}^{l}=\begin{cases}f_{N}^{Al}, & \mathrm{Var}f_{N}^{Al}\ge\mathrm{Var}f_{N}^{Bl}\\ f_{N}^{Bl}, & \mathrm{Var}f_{N}^{Al}<\mathrm{Var}f_{N}^{Bl}\end{cases},\qquad \hat{f}_{N}^{l}=\begin{cases}\hat{f}_{N}^{Al}, & \mathrm{Var}\hat{f}_{N}^{Al}\ge\mathrm{Var}\hat{f}_{N}^{Bl}\\ \hat{f}_{N}^{Bl}, & \mathrm{Var}\hat{f}_{N}^{Al}<\mathrm{Var}\hat{f}_{N}^{Bl}\end{cases}$$

where $\mathrm{Var}f$ is the variance of $f$.

  6. The fused image is obtained using the inverse shearlet transform.
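The selection rules of step 5 can be sketched as follows. This is our own illustrative code, with the fire maps and the variance comparison standing in for the full pipeline:

```python
import numpy as np

def fuse_direction(high_A, high_B, fire_A, fire_B, low_A, low_B):
    """Fuse one direction: pick high-pass coefficients per pixel by comparing
    PCNN fire maps, and pick the whole low-pass band by comparing variances."""
    fused_high = np.where(fire_A >= fire_B, high_A, high_B)
    fused_low = low_A if np.var(low_A) >= np.var(low_B) else low_B
    return fused_high, fused_low

# Tiny example: A fires more at the first pixel, B at the second; B's
# low-pass band has the larger variance and is kept whole.
hA = np.array([[1.0, 0.0]]); hB = np.array([[0.0, 2.0]])
fA = np.array([[5.0, 1.0]]); fB = np.array([[2.0, 4.0]])
lA = np.array([[1.0, 1.0]]); lB = np.array([[0.0, 2.0]])
fh, fl = fuse_direction(hA, hB, fA, fB, lA, lB)
print(fh)  # [[1. 2.]]
print(fl)  # [[0. 2.]]
```

The same comparison is repeated independently for every direction $N$ and for both cones before the inverse transform of step 6.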

Figure 8.

Image fusion framework with Shearlets and PCNN

4.3. Simulation experiments

In this section, three different examples (optical and SAR images, a remote sensing image, and a hyperspectral image) are provided to demonstrate the effectiveness of the proposed method. Several other methods, including Average, Laplacian pyramid (LP), gradient pyramid (GP), contrast pyramid (CP), Contourlet-PCNN (C-P) and Wavelet-PCNN (W-P), are compared with the proposed approach. Subjective visual perception gives direct comparisons, and some objective image quality assessments are also used to evaluate the performance of the proposed approach. The following image quality metrics are used: entropy (EN), overall cross entropy (OCE), standard deviation (STD), average gradient (Ave-grad), Q, and QAB/F.

In these three experiments, the PCNN parameter values are as follows, with link weight matrix

$$W=\begin{pmatrix}1/2&1&1/2\\1&1&1\\1/2&1&1/2\end{pmatrix}$$

and iteration number $n=100$ in every case:

Experiment 1: $\alpha_{L}=0.03$, $\alpha_{\theta}=0.1$, $V_{L}=1$, $V_{\theta}=10$, $\beta=0.2$.

Experiment 2: $\alpha_{L}=0.02$, $\alpha_{\theta}=0.05$, $V_{L}=1$, $V_{\theta}=15$, $\beta=0.7$.

Experiment 3: $\alpha_{L}=0.03$, $\alpha_{\theta}=0.1$, $V_{L}=1$, $V_{\theta}=15$, $\beta=0.5$.

Since optical, SAR, remote sensing and hyperspectral images are widely used in military applications, the study of their fusion is of great importance.

Figs. 9-11 give the images fused with Shearlet-PCNN and the other methods. From Figs. 9-11 and Table 3, we can see that image fusion based on shearlets and PCNN retains more information and introduces less distortion than the other methods. In experiment 1, the edge features from Fig. 9(a) and the spectral information from Fig. 9(b) are both kept in the fused image produced by the proposed method, shown in Fig. 9(c). In Fig. 9(d), fused by Contourlet-PCNN, the spectral character is distorted and, from a visual point of view, the color is too prominent. In Fig. 9(e)-(f), the spectral information of the fused images is lost and the edge features are vague. Fig. 10 shows the fused remote sensing images; SAR imagery can provide new information since it penetrates clouds, rain and even vegetation, and with different imaging modalities and different bands its features differ from image to image. In Fig. 10(a) and (b), band 8 has more river characteristics but less city information, while band 4 has the opposite imaging features. Fig. 10(c) is the image fused using shearlets and PCNN. The results in Fig. 10 and Table 3 show that this fused image keeps better river information and also preserves the city features well. In Fig. 10(d), the middle of the image fused using Contourlet-PCNN shows an obvious splicing effect. Fig. 11(c) is the fused hyperspectral image; Fig. 11(a) and (b) are the two original images. The airport runway is clear in Fig. 11(a), but some information about the planes is lost, while Fig. 11(b) shows different information. In the fused image, the runway information is clearer and the aircraft are more distinct, whereas the lines on the runways are not clear enough in the images fused by the other methods. From Table 3 we can see that most metric values of the proposed method are better than those of the other methods.

Figure 9.

Optical and SAR images fusion results based on Shearlets and PCNN

Figure 10.

Remote sensing image fusion results based on Shearlets and PCNN

Figure 11.

Hyperspectral image fusion results based on Shearlets and PCNN

| Dataset | Algorithm | QAB/F | Q | EN | STD | Ave-grad | OCE |
|---|---|---|---|---|---|---|---|
| Experiment 1 | Average | 0.1842 | 0.2908 | 6.3620 | 22.1091 | 0.0285 | 3.2870 |
| | LP | 0.3002 | 0.3017 | 6.5209 | 24.8906 | 0.0478 | 3.0844 |
| | GP | 0.2412 | 0.2953 | 6.3993 | 22.6744 | 0.0379 | 3.2336 |
| | CP | 0.2816 | 0.2961 | 6.4759 | 24.1864 | 0.0457 | 3.1292 |
| | C-P | 0.3562 | 0.4523 | 6.7424 | 31.2693 | 0.0665 | 0.5538 |
| | W-P | 0.3753 | 0.4976 | 6.6142 | 25.2683 | 0.0662 | 0.5689 |
| | proposed | 0.4226 | 0.5010 | 6.9961 | 34.1192 | 0.0575 | 0.5410 |
| Experiment 2 | Average | 0.4016 | 0.7581 | 6.1975 | 46.1587 | 0.0236 | 2.9600 |
| | LP | 0.5219 | 0.7530 | 6.9594 | 49.2283 | 0.0399 | 3.3738 |
| | GP | 0.4736 | 0.7599 | 6.9024 | 47.0888 | 0.0342 | 3.6190 |
| | CP | 0.5120 | 0.7475 | 6.9237 | 48.9839 | 0.0392 | 3.3812 |
| | C-P | 0.5658 | 0.7516 | 7.3332 | 54.3504 | 0.0390 | 3.0628 |
| | W-P | 0.4283 | 0.7547 | 6.8543 | 47.3304 | 0.0346 | 3.2436 |
| | proposed | 0.6212 | 0.7775 | 7.1572 | 56.2993 | 0.0381 | 2.9046 |
| Experiment 3 | Average | 0.5021 | 0.7955 | 6.5011 | 41.0552 | 0.0161 | 1.0939 |
| | LP | 0.6414 | 0.7728 | 6.8883 | 47.4990 | 0.0274 | 0.9959 |
| | GP | 0.5720 | 0.7898 | 6.5649 | 41.3974 | 0.0223 | 1.0249 |
| | CP | 0.5909 | 0.7469 | 6.7499 | 43.4631 | 0.0318 | 0.9834 |
| | C-P | 0.5838 | 0.7435 | 6.9451 | 46.5294 | 0.0262 | 1.1745 |
| | W-P | 0.5319 | 0.7788 | 6.5847 | 41.6623 | 0.0231 | 1.5318 |
| | proposed | 0.6230 | 0.7502 | 7.0791 | 55.9533 | 0.0246 | 0.5246 |

Table 3.

Comparison of image quality metrics

5. Conclusion

The theory of shearlets has been introduced in this chapter. As a novel MGA tool, shearlets offer advantages over other MGA tools. Their main advantage is that they can be studied within the framework of a generalized multiresolution analysis, with directional subdivision schemes generalizing those of traditional wavelets. This is very relevant for the development of fast algorithmic implementations of the many directional representation systems proposed in the last decade.

In this chapter, we have demonstrated that shearlets are very competitive for multi-focus image and remote sensing image fusion. As a new MGA tool, the shearlet is equipped with a rich mathematical structure similar to that of the wavelet and can capture information in any direction. Moreover, according to human visual perception, edge and orientation information is more salient than gray-level information. We take full advantage of the multi-directionality of shearlets and of gradient information to fuse images, and PCNN is selected as the fusion rule for choosing the fusion coefficients. Because directional and gradient characteristics readily motivate PCNN neurons, more precise image fusion results are obtained. The several kinds of images shown in the experiments prove that the new algorithm proposed in this chapter is effective.

After development in recent years, the theory of shearlets is gradually maturing, but the time complexity of the shearlet decomposition remains a focus of study and needs further work, in both theory and applications. We will focus on other image processing methods using shearlets in our future work.

How to cite and reference


Miao Qiguang, Shi Cheng and Li Weisheng (November 20th 2013). Image Fusion Based on Shearlets. In: New Advances in Image Fusion, Qiguang Miao (ed.), IntechOpen. DOI: 10.5772/56945.
