Image Fusion Based on Shearlets



Introduction
Image decomposition is important to image fusion: it affects the quality of information extraction and, in turn, the quality of the whole fusion. Wavelet theory has been developed since the beginning of the last century. It was first applied to signal processing in the 1980's [1], and over the past decade it has been recognized as having great potential in image processing applications, as well as in image fusion [2]. Wavelet transforms are more useful than Fourier transforms, and they are efficient in dealing with one-dimensional point-wise smooth signals [3][4][5]. However, their limited directional selectivity makes them perform poorly on multidimensional data. Images contain sharp transitions such as edges, and wavelet transforms are not optimally efficient in representing them.
Recently, a theory for multidimensional data called multi-scale geometric analysis (MGA) has been developed. Many MGA tools have been proposed, such as the ridgelet, curvelet, bandelet, and contourlet [6][7][8][9]. These new MGA tools provide higher directional sensitivity than wavelets. Shearlets, a new approach introduced in 2005, not only possess all the above properties but are also equipped with a rich mathematical structure similar to that of wavelets, being associated with a multiresolution analysis. Shearlets form a tight frame at various scales and directions and are optimally sparse in representing images with edges. Only curvelets have similar properties [10][11][12][13][14], but the construction of curvelets is not carried out directly in the discrete domain, and it does not provide a multiresolution representation of the geometry. The decomposition of shearlets is similar to that of contourlets: the contourlet transform consists of a Laplacian pyramid followed by directional filtering, whereas for shearlets the directional filtering is replaced by a shear matrix. An important advantage of the shearlet transform over the contourlet transform is that there is no restriction on the number of directions [15][16][17][18][19]. In recent years, the theory of shearlets as applied to image processing has gradually been studied. At present, shearlets are mainly applied to image denoising, sparse image representation [20] and edge detection [21,22]. Their applications in image fusion are still being explored [12,20].

The theory of Shearlets
In dimension n = 2, the affine systems with composite dilations are defined as

$$\mathcal{A}_{AS}(\psi) = \left\{ \psi_{j,l,k}(x) = |\det A|^{j/2}\, \psi(S^{l} A^{j} x - k) : j, l \in \mathbb{Z},\ k \in \mathbb{Z}^2 \right\},$$

where ψ ∈ L²(ℝ²) and A, S are 2 × 2 invertible matrices with |det S| = 1. The elements of this system are called composite wavelets if 𝒜_{AS}(ψ) forms a tight frame for L²(ℝ²), that is,

$$\sum_{j,l,k} |\langle f, \psi_{j,l,k} \rangle|^2 = \|f\|^2 \quad \text{for all } f \in L^2(\mathbb{R}^2).$$

Let A denote the parabolic scaling matrix and S denote the shear matrix. For each a > 0 and s ∈ ℝ,

$$A = \begin{pmatrix} a & 0 \\ 0 & \sqrt{a} \end{pmatrix}, \qquad S = \begin{pmatrix} 1 & s \\ 0 & 1 \end{pmatrix}.$$
The matrices described above play special roles in the shearlet transform. The scaling matrix A controls the scale of the shearlets: by applying a different dilation factor along each of the two axes, it ensures that the frequency support of the shearlets becomes increasingly elongated at finer scales. The shear matrix S, on the other hand, is not expansive and only controls the orientation of the shearlets. The size of the frequency support of the shearlets ψ_{j,l,k} is illustrated in Fig. 1 for some particular values of a and s.
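As a concrete numerical illustration (not part of the original derivation), the properties of these two matrices can be checked directly; the values a = 4 and s = 1 below are arbitrary examples:

```python
import numpy as np

a, s = 4.0, 1.0

# Parabolic scaling matrix: dilates by a along xi1 but only by sqrt(a)
# along xi2, so the support becomes increasingly elongated as a grows.
A = np.array([[a, 0.0],
              [0.0, np.sqrt(a)]])

# Shear matrix: |det S| = 1, so it changes orientation but not area.
S = np.array([[1.0, s],
              [0.0, 1.0]])

det_A = np.linalg.det(A)   # a * sqrt(a), i.e. 8 for a = 4
det_S = np.linalg.det(S)   # always 1

# Shearing maps the vertical direction (0, 1) to the slope-s direction
# (s, 1), which is how the parameter s controls orientation.
direction = S @ np.array([0.0, 1.0])
print(det_A, det_S, direction)
```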
In addition, we assume that

$$\sum_{j \ge 0} \left| \hat{\psi}_1(2^{-2j}\omega) \right|^2 = 1 \quad \text{for } |\omega| \ge \tfrac{1}{8},$$

and, for each j ≥ 0,

$$\sum_{l=-2^j}^{2^j-1} \left| \hat{\psi}_2(2^{j}\omega - l) \right|^2 = 1 \quad \text{for } |\omega| \le 1.$$

There are several examples of functions ψ₁, ψ₂ satisfying the properties described above.
For the best M-term approximation of cartoon-like images, the approximation error of Fourier approximations is ε_M ≤ CM^{−1/2}, that of wavelets is ε_M ≤ CM^{−1}, and that of shearlets is ε_M ≤ C(log M)³M^{−2}, which is better than both the Fourier and wavelet approximations.

Discrete Shearlets
It will be convenient to describe the collection of shearlets presented above in a way more suitable for deriving a numerical implementation. For ξ = (ξ₁, ξ₂) ∈ ℝ², j ≥ 0 and −2^j ≤ l ≤ 2^j − 1, let

$$W_{j,l}^{(0)}(\xi) = \hat{\psi}_2\!\left(2^{j}\tfrac{\xi_2}{\xi_1} - l\right), \qquad W_{j,l}^{(1)}(\xi) = \hat{\psi}_2\!\left(2^{j}\tfrac{\xi_1}{\xi_2} - l\right),$$

where the indices d = 0, 1 correspond to the horizontal cone D₀ and the vertical cone D₁, and W_{j,l}(ξ) is the superposition of two such functions near the boundary of the cones.

Using this notation, for j ≥ 0, −2^j + 1 ≤ l ≤ 2^j − 2, k ∈ ℤ², d = 0, 1, we can write the Fourier transform of the shearlets in the compact form

$$\hat{\psi}_{j,l,k}^{(d)}(\xi) = 2^{3j/2}\, V(2^{-2j}\xi)\, W_{j,l}^{(d)}(\xi)\, e^{-2\pi i \xi A_d^{-j} B_d^{-l} k},$$

where V(ξ₁, ξ₂) = ψ̂₁(ξ₁) χ_{D₀}(ξ) + ψ̂₁(ξ₂) χ_{D₁}(ξ). The shearlet transform of f ∈ L²(ℝ²) can then be computed by

$$\langle f, \psi_{j,l,k}^{(d)} \rangle = 2^{3j/2} \int_{\mathbb{R}^2} \hat{f}(\xi)\, \overline{V(2^{-2j}\xi)\, W_{j,l}^{(d)}(\xi)}\, e^{2\pi i \xi A_d^{-j} B_d^{-l} k}\, d\xi.$$

Indeed, one can easily verify that the windows V(2^{−2j}ξ) W_{j,l}^{(d)}(ξ) form a tiling of the frequency plane, and from this it follows that the discrete shearlets form a tight frame for L²(ℝ²).
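A toy frequency-domain sketch of extracting one (j, l) subband in the spirit of the formulas above; the Gaussian windows below are crude stand-ins for the exact ψ̂₁ (radial) and ψ̂₂ (angular) windows, so this only illustrates the frequency tiling and is not a tight-frame implementation:

```python
import numpy as np

def shearlet_subband(img, j, l):
    """Extract one directional subband by Fourier-domain windowing.

    Gaussian windows replace the exact psi1/psi2 windows of the shearlet
    construction; horizontal cone (d = 0) only. Assumes a square image.
    """
    n = img.shape[0]
    freq = np.fft.fftfreq(n)
    xi1, xi2 = np.meshgrid(freq, freq, indexing='ij')
    xi1 = np.where(xi1 == 0.0, 1e-12, xi1)   # avoid division by zero

    # Radial window: selects |xi1| ~ 2**(-2j), mimicking V(2**(-2j) xi).
    radial = np.exp(-0.5 * ((np.abs(xi1) * 2 ** (2 * j) - 0.25) / 0.1) ** 2)
    # Angular window: selects slope xi2/xi1 ~ l * 2**(-j), mimicking W_{j,l}.
    angular = np.exp(-0.5 * (2 ** j * xi2 / xi1 - l) ** 2)

    return np.real(np.fft.ifft2(np.fft.fft2(img) * radial * angular))

img = np.random.rand(64, 64)
band = shearlet_subband(img, 1, 0)
print(band.shape)
```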

Image decomposition
Image decomposition based on the shearlet transform consists of two parts: multi-direction decomposition and multi-scale decomposition.

1. Multi-direction decomposition of the image using the shear matrix S₀ or S₁.

2. Multi-scale decomposition of each direction using wavelet packet decomposition.
In step (1), if the image is decomposed only by S₀ or only by S₁, the number of directions is 2(l + 1) + 1. If the image is decomposed by both S₀ and S₁, the number of directions is 2(l + 2) + 2. The framework of image decomposition with shearlets is shown in Fig. 3.
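The two-step decomposition can be sketched roughly as follows, assuming scipy is available; the grid-resampling shear below is a simplification of the exact shearlet implementation, and the direction counts reproduce the formulas quoted above:

```python
import numpy as np
from scipy.ndimage import affine_transform

def shear_image(img, s):
    """Resample img under the shear matrix S = [[1, s], [0, 1]].

    Interpolation on the integer grid only approximates the shear step
    of a true discrete shearlet transform.
    """
    S = np.array([[1.0, s],
                  [0.0, 1.0]])
    return affine_transform(img, S, order=1, mode='nearest')

def num_directions(l, both_cones=True):
    # Direction counts from the text: 2(l + 1) + 1 when only S0 or only
    # S1 is used, 2(l + 2) + 2 when both shear matrices are used.
    return 2 * (l + 2) + 2 if both_cones else 2 * (l + 1) + 1

img = np.random.rand(64, 64)
sheared = shear_image(img, 0.5)   # one directional component
print(sheared.shape, num_directions(2), num_directions(2, both_cones=False))
```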

Image fusion
The image fusion framework based on shearlets is shown in Fig. 4. The following fusion steps are adopted.

1. The two images taking part in the fusion are geometrically registered to each other.

2. Transform the original images using shearlets. Both the horizontal and vertical cones are adopted in this method, giving 6 directions. Wavelet packets are then used for the multi-scale decomposition with j = 5.

3. A fusion rule based on the regional absolute value is adopted in this algorithm.
a. The choice of low-frequency coefficients.
The low-frequency coefficients of the fused image are taken as the average of the low-frequency coefficients of the two source images.
b. The choice of high-frequency coefficients.
Calculate the absolute value of the high-frequency coefficients in the neighborhood by Eq. (18), where M = N = 3 is the size of the neighborhood, X denotes one of the two source images, D_X(i, j) is the regional absolute value of image X within the 3 × 3 neighborhood centered at (i, j), and Y_X(i, j) is the pixel value of X at (i, j).
Then select the high-frequency coefficients from the two source images by Eq. (19), where F denotes the high-frequency coefficients of the fused image.
Finally, a region consistency check is performed on the fuse-decision map, as shown in Eq. (20).
According to Eq. (20), if a certain coefficient in the fused image is to come from source image A but the majority of its surrounding neighbors come from B, this coefficient is switched to come from B.

4. The fused image is obtained using the inverse shearlet transform.
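Since Eqs. (18)–(20) are not reproduced here, the sketch below reconstructs the described behaviour: averaging for the low-frequency band, selection of high-frequency coefficients by the 3 × 3 regional absolute value, and a majority-vote consistency check. The function names are ours, not from the original.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def fuse_low(low_a, low_b):
    # Low-frequency coefficients of the fused image: the average of the
    # low-frequency coefficients of the two source images.
    return 0.5 * (low_a + low_b)

def fuse_high(high_a, high_b, size=3):
    # D_X(i, j): regional absolute value of the coefficients in the
    # M x N = 3 x 3 neighbourhood centred at (i, j). uniform_filter
    # computes the neighbourhood mean, which ranks the same as the sum.
    d_a = uniform_filter(np.abs(high_a), size=size)
    d_b = uniform_filter(np.abs(high_b), size=size)
    from_a = d_a >= d_b                  # initial fuse-decision map

    # Region consistency check: a coefficient assigned to A whose
    # neighbours mostly come from B is switched to B (and vice versa).
    votes = uniform_filter(from_a.astype(float), size=size)
    from_a = votes > 0.5
    return np.where(from_a, high_a, high_b)

a = np.random.rand(32, 32)
b = np.random.rand(32, 32)
fused = fuse_high(a, b)
print(fused.shape)
```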

Multi-focus image of Bottle
The following group of images is selected to demonstrate the validity of the method proposed in this section.

Multi-modal Images of CT and MRI
The source images are CT (computed tomography) and MRI (magnetic resonance imaging) images, and entropy (EN), sharpness (SP), standard deviation (STD) and Q are used to evaluate the fused images. Fig. 6(a) is a CT image, whose brightness is related to tissue density: the bone is shown clearly, but soft tissue is invisible. Fig. 6(b) is an MRI image, whose brightness is related to the number of hydrogen atoms in the tissue, so soft tissue is shown clearly, but the bone is invisible. The CT and MRI images are thus complementary, and their advantages can be fused into one image. A desired standard reference image cannot be acquired, so only entropy and sharpness are adopted to evaluate the fusion result. The fusion rule described above is used in this experiment.

Theory of PCNN
PCNN, known as the third-generation artificial neural network, is a feedback network formed by connecting many neurons, inspired by the biological visual cortex. Every neuron is made up of three sections: the receptive field, the modulation field and the pulse generator, which can be described by discrete equations [23][24][25].
The receptive field receives input from other neurons or the external environment and transmits it through two channels: the F-channel and the L-channel. In the modulation field, a positive offset is added to the signal L_j from the L-channel, and the result multiplicatively modulates the signal F_j from the F-channel. When the neuron threshold satisfies θ_j ≥ U_j, the pulse generator is turned off; otherwise, the pulse generator is turned on and outputs a pulse. The mathematical model of PCNN is described below [26][27][28][29][30].
$$F_{ij}[n] = e^{-\alpha_F} F_{ij}[n-1] + V_F \sum_{k,l} m_{ijkl} Y_{kl}[n-1] + I_{ij}$$

$$L_{ij}[n] = e^{-\alpha_L} L_{ij}[n-1] + V_L \sum_{k,l} w_{ijkl} Y_{kl}[n-1]$$

$$U_{ij}[n] = F_{ij}[n] \left( 1 + \beta L_{ij}[n] \right)$$

$$Y_{ij}[n] = \begin{cases} 1, & U_{ij}[n] > \theta_{ij}[n] \\ 0, & \text{otherwise} \end{cases}$$

$$\theta_{ij}[n] = e^{-\alpha_\theta} \theta_{ij}[n-1] + V_\theta Y_{ij}[n-1]$$

where α_F, α_L are the decay time constants, α_θ is the threshold decay time constant, V_θ is the threshold amplitude coefficient, V_F, V_L are the link amplitude coefficients, β is the link strength, and m_ijkl, w_ijkl are the link weight matrices.
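A minimal numerical sketch of the discrete PCNN model above, with one neuron per pixel and a shared 3 × 3 link weight matrix (the parameter values are illustrative, not those used in this chapter):

```python
import numpy as np
from scipy.ndimage import convolve

def pcnn(stimulus, iters=10, alpha_f=0.1, alpha_l=1.0, alpha_t=0.2,
         v_f=0.5, v_l=0.2, v_t=20.0, beta=0.1):
    """Run a simplified PCNN and return the per-pixel firing counts."""
    f = np.zeros_like(stimulus)      # feeding channel F
    l = np.zeros_like(stimulus)      # linking channel L
    y = np.zeros_like(stimulus)      # pulse output Y
    theta = np.ones_like(stimulus)   # dynamic threshold
    w = np.array([[0.5, 1.0, 0.5],   # link weight matrix (m = w here)
                  [1.0, 0.0, 1.0],
                  [0.5, 1.0, 0.5]])
    fire_count = np.zeros_like(stimulus)
    for _ in range(iters):
        link = convolve(y, w, mode='constant')
        f = np.exp(-alpha_f) * f + v_f * link + stimulus   # receptive field
        l = np.exp(-alpha_l) * l + v_l * link
        u = f * (1.0 + beta * l)                           # modulation
        y = (u > theta).astype(float)                      # pulse generator
        theta = np.exp(-alpha_t) * theta + v_t * y         # threshold decay
        fire_count += y
    return fire_count

img = np.random.rand(16, 16)
fired = pcnn(img)
print(fired.shape)
```

Neurons with stronger stimuli fire earlier and more often, which is why firing counts (or first firing times) are commonly used as the fusion activity measure.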

Algorithm framework of remote sensing image fusion using Shearlets and PCNN
When PCNN is used for image processing, it is a single two-dimensional network. The number of the neurons is equal to the number of pixels. There is a one-to-one correspondence between the image pixels and the network neurons.
In this paper, Shearlets and PCNN are used to fuse images. The steps are described below:

Decompose the original images A and B respectively into directional components f_{NA}, f_{NB} (N = 1, ..., n) via the shear matrices (in this chapter, n = 3).
where Var f is the variance of f.

6. The fused image is obtained using the inverse shearlet transform.

Since optical, SAR, remote sensing and hyperspectral images are widely used in military applications, the study of fusing such images is very important. Figs. 9-11 give the fused images obtained with Shearlet-PCNN and several other methods. From Figs. 9-11 and Table 3, we can see that image fusion based on shearlets and PCNN obtains more information with less distortion than the other methods. In experiment 1, the edge features from Fig. 9(a) and the spectral information from Fig. 9(b) are kept in the fused image by the proposed method, as shown in Fig. 9(c). In Fig. 9(d), the spectral character of the image fused by contourlet and PCNN is distorted, and from a visual point of view the color of the image is too prominent. In Fig. 9(e)-(f), the spectral information of the fused images is lost and the edge features are vague. Fig. 10 shows the fused remote sensing images; this modality can provide new information since it is able to penetrate clouds, rain, and even vegetation, and with different imaging modalities and different bands, the features differ in each image. In Fig. 10(c) and (d), band 8 has more river characteristics but less city information, while band 4 has the opposite imaging features. Table 1 shows that the fused image based on shearlets and PCNN keeps better river information and even preserves excellent city features. In Fig. 10(d), the middle of the image fused using contourlet and PCNN shows an obvious splicing effect. Fig. 11(c) is the fused hyperspectral image, and Fig. 11(a) and (b) are the two original images. The airport runway is clear in Fig. 11(a), but some aircraft information is lost; Fig. 11(b) shows the complementary information. In the fused image, the runway information is clearer and the aircraft characteristics are more obvious, while the lines on the runways are not clear enough in the images fused by the other methods.
From Table 3 we can see that most metric values obtained with the proposed method are better than those of the other methods.

Table 3. Comparison of image quality metrics

Conclusion
The theory of shearlets has been introduced in this chapter. As a novel MGA tool, shearlets offer several advantages over other MGA tools. The main advantage of shearlets is that they can be studied within the framework of a generalized multiresolution analysis, with directional subdivision schemes generalizing those of traditional wavelets. This is very relevant for the development of fast algorithmic implementations of the many directional representation systems proposed in the last decade.
In this chapter, we have demonstrated that shearlets are very competitive for multi-focus and remote sensing image fusion. As a new MGA tool, the shearlet is equipped with a rich mathematical structure similar to that of wavelets and can capture information in any direction. Moreover, according to human visual perception, edge and orientation information is more salient than gray-level information. We take full advantage of the multi-directionality of shearlets and of gradient information to fuse images, and PCNN is selected as the fusion rule for choosing the fusion coefficients. Because the directional and gradient characteristics readily motivate the PCNN neurons, more precise image fusion results are obtained. The several kinds of images shown in the experiments prove that the new algorithm proposed in this chapter is effective.
After development in recent years, the theory of shearlets is gradually maturing. However, the time complexity of the shearlet decomposition remains a focus of study and needs further research, in both theory and applications. We will focus on other image processing methods using shearlets in our future work.