
Image Fusion Based on Color Transfer Technique

Written By

Guangxin Li

Submitted: 20 October 2010 Published: 24 June 2011

DOI: 10.5772/17413

From the Edited Volume

Image Fusion and Its Applications

Edited by Yufeng Zheng


1. Introduction

In 2001, working in the nonlinear lαβ space (Ruderman et al., 1998), Reinhard et al. (Reinhard et al., 2001) introduced a method to transfer colors between two color images. The goal of their work was to make a synthetic image take on another image's look and feel. Applying Reinhard's statistical color transfer strategy, Toet (Toet, 2003) subsequently developed a color-transfer-based image fusion algorithm. With an appropriate daylight color image as the target image, the method can produce a natural-appearing "daytime-like" color fused image and significantly improve observer performance. Therefore, Toet's approach has received considerable attention in recent years (Li & Wang, 2007b; Li et al., 2010a; Li et al., 2010b; Li et al., 2005; Tsagaris & Anastassopoulos, 2005; Toet & Hogervorst, 2008; Toet & Hogervorst, 2009; Wang et al., 2007; Zheng & Essock, 2008).

Although Toet's work implies that Reinhard's lαβ color transfer method can be successfully applied to image fusion, it is difficult to develop a fast color image fusion algorithm based on this color transfer technique. The main reason is that it is restricted to the nonlinear lαβ space (see Appendix A). Because this color space is logarithmic, the transformation between RGB and lαβ spaces must pass through the LMS and logLMS spaces, which increases the system's storage requirements and computational time. On the other hand, since the dynamic range of the achromatic component in lαβ space is very different from that of a normal grayscale image, it is inconvenient to enhance the luminance contrast of the final color fused image in lαβ space using conventional methods, such as directly replacing the luminance component of the color fused image with a high-contrast grayscale fused image.

To eliminate the limitations mentioned above, we (Li et al., 2010a) employed YCBCR space to implement Reinhard's color transfer scheme and applied the YCBCR color transfer technique to the fusion of infrared and visible images. Through a series of mathematical derivations and proofs, we (Li et al., 2010b) further presented a fast color-transfer-based image fusion algorithm. Our experiments demonstrate that the performance of the fast color-transfer-based image fusion method is superior to that of other related color image fusion methods, including Toet's approach.

The rest of this chapter is organized as follows. Section 2 reviews Reinhard's lαβ color transfer method. Section 3 outlines our YCBCR color transfer method. Section 4 describes two basic image fusion methods based on the YCBCR color transfer technique. Section 5 introduces our fast color-transfer-based image fusion method. Experimental results and associated discussions are provided in Section 6. Finally, in Section 7, conclusions are drawn and future work is suggested.


2. lαβ color transfer method

Studies by Reinhard et al. (Reinhard et al., 2001) have found that straightforward color statistics can capture some important subjective notions of style and appearance in images. Their work shows how to transfer these statistics from one image to another in order to copy some of the atmosphere and mood of a good picture. With the pixel data represented in lαβ space, Reinhard et al. transfer color statistics from the target image to the source image by applying a linear map to each axis separately:

$$\theta_s^* = \frac{\sigma_t^{\theta}}{\sigma_s^{\theta}}\left(\theta_s - \mu_s^{\theta}\right) + \mu_t^{\theta}, \qquad \text{for } \theta = l, \alpha, \beta \tag{1}$$

where the indexes 's' and 't' refer to the source and target images, respectively, and $\theta$ denotes an individual color component of an image in lαβ space. $(\mu_s^{\theta}, \sigma_s^{\theta})$ and $(\mu_t^{\theta}, \sigma_t^{\theta})$ are the means and standard deviations of the source and target images, respectively, both over the channel $\theta$. Following this step, the resulting source image data is converted back to RGB space for display. After the transform described in (Eq. 1), the first- and second-order statistics of the color distribution of the source image conform to those of the target color image. This color-statistics matching procedure is quite simple but can successfully transfer one image's color characteristics to another.
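Expressed as code, the per-channel mapping of (Eq. 1) is a single affine transform. The following NumPy sketch is illustrative only (the function name and the small guard against a zero standard deviation are our own additions, not part of the original formulation); in the lαβ method it would be applied once to each of the l, α and β channels.

```python
import numpy as np

def match_statistics(source_channel, target_channel):
    """Per-channel statistics matching (Eq. 1): shift and scale the source
    channel so that its mean and standard deviation match the target's."""
    mu_s, sigma_s = source_channel.mean(), source_channel.std()
    mu_t, sigma_t = target_channel.mean(), target_channel.std()
    # The epsilon guard against a zero standard deviation is our own addition.
    return (sigma_t / (sigma_s + 1e-12)) * (source_channel - mu_s) + mu_t
```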


3. YCBCR color transfer method

Instead of using the nonlinear lαβ space, our proposed YCBCR color transfer method (Li et al., 2010a) transfers the color distribution of the target image to the source image in the linear YCBCR color space. The forward and backward YCBCR transformations (Neelamani et al., 2006; Skodras et al., 2001) are achieved by means of (Eq. 2) and (Eq. 3), respectively.

$$\begin{bmatrix} Y \\ C_B \\ C_R \end{bmatrix} = \begin{bmatrix} 0.2990 & 0.5870 & 0.1140 \\ -0.1687 & -0.3313 & 0.5000 \\ 0.5000 & -0.4187 & -0.0813 \end{bmatrix}\begin{bmatrix} R \\ G \\ B \end{bmatrix} \tag{2}$$

$$\begin{bmatrix} R \\ G \\ B \end{bmatrix} = \begin{bmatrix} 1.0000 & 0.0000 & 1.4020 \\ 1.0000 & -0.3441 & -0.7141 \\ 1.0000 & 1.7720 & 0.0000 \end{bmatrix}\begin{bmatrix} Y \\ C_B \\ C_R \end{bmatrix} \tag{3}$$

where Y denotes the luminance and CB and CR are the two chromatic channels, which correspond to the color-difference model (Poynton, 2003; Jack, 2001). As shown in (Eq. 4), CB and CR stand for the blue and red color-difference channels, respectively.

$$C_B = \frac{0.5}{0.886}\,(B - Y), \qquad C_R = \frac{0.5}{0.701}\,(R - Y) \tag{4}$$

Since the YCBCR transformation is linear, its computational complexity is far lower than that of the lαβ conversion. Let N be the number of columns and rows of the input RGB image. The lαβ transformation requires a total of 22N² additions, 34N² multiplications, 3N² logarithm and 3N² exponential computations. In contrast, the YCBCR transformation avoids the logarithm and exponential operations; from (Eq. 2) and (Eq. 3), we can observe that only 10N² additions and 13N² multiplications are required for its implementation. Obviously, the simplicity of the YCBCR transformation enables a more efficient implementation of color transfer between images. Like Reinhard's scheme, the quality of the YCBCR color transfer result also depends on the compositional similarity of the two images.
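As a concrete illustration, the forward and inverse transforms of (Eq. 2) and (Eq. 3) amount to two fixed 3×3 matrix products per pixel. A minimal NumPy sketch follows; the array shapes and function names are our own conventions, not part of the published method.

```python
import numpy as np

# Forward and inverse YCBCR matrices from (Eq. 2) and (Eq. 3).
RGB2YCBCR = np.array([[ 0.2990,  0.5870,  0.1140],
                      [-0.1687, -0.3313,  0.5000],
                      [ 0.5000, -0.4187, -0.0813]])
YCBCR2RGB = np.array([[1.0000,  0.0000,  1.4020],
                      [1.0000, -0.3441, -0.7141],
                      [1.0000,  1.7720,  0.0000]])

def rgb_to_ycbcr(rgb):
    """rgb: H x W x 3 array; returns the Y, CB, CR planes of (Eq. 2)."""
    return rgb @ RGB2YCBCR.T

def ycbcr_to_rgb(ycbcr):
    """Inverse transform of (Eq. 3)."""
    return ycbcr @ YCBCR2RGB.T
```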

The YCBCR transformation can be extended into the general formalism defined in (Eq. 5) and (Eq. 6). For color transfer, using color spaces conforming to this general YCBCR space framework, such as YUV space (Pratt, 2001), produces the same recoloring results as using YCBCR space.

$$\begin{bmatrix} \tilde{Y} \\ \tilde{C}_B \\ \tilde{C}_R \end{bmatrix} = \begin{bmatrix} 0.2990x & 0.5870x & 0.1140x \\ -0.1687y & -0.3313y & 0.5000y \\ 0.5000z & -0.4187z & -0.0813z \end{bmatrix}\begin{bmatrix} R \\ G \\ B \end{bmatrix} + \begin{bmatrix} c_1 \\ c_2 \\ c_3 \end{bmatrix} \tag{5}$$

$$\begin{bmatrix} R \\ G \\ B \end{bmatrix} = \begin{bmatrix} 1.0000x^{-1} & 0.0000 & 1.4020z^{-1} \\ 1.0000x^{-1} & -0.3441y^{-1} & -0.7141z^{-1} \\ 1.0000x^{-1} & 1.7720y^{-1} & 0.0000 \end{bmatrix}\left(\begin{bmatrix} \tilde{Y} \\ \tilde{C}_B \\ \tilde{C}_R \end{bmatrix} - \begin{bmatrix} c_1 \\ c_2 \\ c_3 \end{bmatrix}\right) \tag{6}$$

where x, y, z, c1, c2 and c3 are constants, and x, y and z are nonzero. This fact can be proved by the following proposition.

Proposition 1: Let $[\tilde{R}_s^*, \tilde{G}_s^*, \tilde{B}_s^*]^T$ and $[R_s^*, G_s^*, B_s^*]^T$ be the recolored source images obtained by the $\tilde{Y}\tilde{C}_B\tilde{C}_R$-based and YCBCR-based color transfer methods, respectively. Suppose the two cases use the same target image; then, for a fixed source image,

$$[\tilde{R}_s^*, \tilde{G}_s^*, \tilde{B}_s^*]^T = [R_s^*, G_s^*, B_s^*]^T \tag{7}$$

Proof: See Appendix B.

The $\tilde{Y}\tilde{C}_B\tilde{C}_R$ transformation specified in (Eq. 5) and (Eq. 6) can be regarded as an extension of the YCBCR transformation. If x = y = z = 1 and c1 = c2 = c3 = 0, the $\tilde{Y}\tilde{C}_B\tilde{C}_R$ transformation becomes equivalent to the YCBCR transformation. If x = 1, y = 0.872, z = 1.23, and c1 = c2 = c3 = 0, the $\tilde{Y}\tilde{C}_B\tilde{C}_R$ transformation is equivalent to the YUV transformation, as follows.

$$\begin{bmatrix} Y \\ U \\ V \end{bmatrix} = \begin{bmatrix} 0.2990 & 0.5870 & 0.1140 \\ -0.1471 & -0.2888 & 0.4359 \\ 0.6148 & -0.5148 & -0.1000 \end{bmatrix}\begin{bmatrix} R \\ G \\ B \end{bmatrix} \tag{8}$$

$$\begin{bmatrix} R \\ G \\ B \end{bmatrix} = \begin{bmatrix} 1.0000 & 0.0000 & 1.1403 \\ 1.0000 & -0.3947 & -0.5808 \\ 1.0000 & 2.0325 & 0.0000 \end{bmatrix}\begin{bmatrix} Y \\ U \\ V \end{bmatrix} \tag{9}$$
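As a quick numerical sanity check of this equivalence (for the special case c1 = c2 = c3 = 0), the following sketch recolors a toy random image once through YCBCR and once through a YUV-style matrix built as in (Eq. 8) with x = 1, y = 0.872, z = 1.23, and verifies that the two results coincide. The toy data and all identifiers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
src = rng.random((8, 8, 3))   # toy "source" and "target" RGB images
tgt = rng.random((8, 8, 3))

A_ycbcr = np.array([[ 0.2990,  0.5870,  0.1140],
                    [-0.1687, -0.3313,  0.5000],
                    [ 0.5000, -0.4187, -0.0813]])
A_yuv = np.diag([1.0, 0.872, 1.23]) @ A_ycbcr   # (Eq. 8): x = 1, y = 0.872, z = 1.23

def transfer(src_rgb, tgt_rgb, A):
    # Forward transform, per-channel statistics match (Eq. 1)/(Eq. 12), inverse transform.
    s, t = src_rgb @ A.T, tgt_rgb @ A.T
    out = (t.std((0, 1)) / s.std((0, 1))) * (s - s.mean((0, 1))) + t.mean((0, 1))
    return out @ np.linalg.inv(A).T

r1 = transfer(src, tgt, A_ycbcr)
r2 = transfer(src, tgt, A_yuv)
print(np.allclose(r1, r2))   # True: the recolored results coincide (Proposition 1)
```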

4. Basic fusion method based on YCBCR color transfer technique

With the YCBCR color transfer method, it is very easy to develop color image fusion methods for merging infrared and visible images. We introduced two basic fusion approaches, one named StaCT (Standard Color-Transfer-Based Fusion Method) and the other called CECT (Contrast-Enhanced Version of the Color-Transfer-Based Fusion Method) (Li et al., 2010b). Both basic fusion methods have to produce a source false-color fused image; we employed the NRL (Naval Research Laboratory) scheme (Scribner et al., 1998; McDaniel et al., 1998) to generate it. The NRL algorithm is very simple and fast, which makes it well suited to developing fast image fusion methods.

4.1. StaCT method

The StaCT method uses the YCBCR color transfer method to directly recolor the source false-color fused image. Its complete procedure is as follows.

1. Construct the source false color fused image [Rf, Gf, Bf]T by using the NRL scheme:

$$[R_f, G_f, B_f]^T = [IR, \; Vis, \; Vis]^T \tag{10}$$

where IR and Vis represent the input infrared and visible images, respectively.

2. Convert the image $[R_f, G_f, B_f]^T$ to YCBCR space and produce the source YCBCR components $[Y_s, C_{B,s}, C_{R,s}]^T$:

$$\begin{bmatrix} Y_s \\ C_{B,s} \\ C_{R,s} \end{bmatrix} = \begin{bmatrix} Y_f \\ C_{B,f} \\ C_{R,f} \end{bmatrix} = \begin{bmatrix} 0.2990 & 0.5870 & 0.1140 \\ -0.1687 & -0.3313 & 0.5000 \\ 0.5000 & -0.4187 & -0.0813 \end{bmatrix}\begin{bmatrix} R_f \\ G_f \\ B_f \end{bmatrix} \tag{11}$$

3. Stretch the source YCBCR components to make their statistics match the color statistics of the target daylight color image:

$$Y_c = \frac{\sigma_t^{Y}}{\sigma_s^{Y}}\left(Y_s - \mu_s^{Y}\right) + \mu_t^{Y}, \qquad C_{B,c} = \frac{\sigma_t^{C_B}}{\sigma_s^{C_B}}\left(C_{B,s} - \mu_s^{C_B}\right) + \mu_t^{C_B}, \qquad C_{R,c} = \frac{\sigma_t^{C_R}}{\sigma_s^{C_R}}\left(C_{R,s} - \mu_s^{C_R}\right) + \mu_t^{C_R} \tag{12}$$

where $Y_c$, $C_{B,c}$ and $C_{R,c}$ are the three color components of the final color fused image $[R_c, G_c, B_c]^T$ in YCBCR space. The indexes 's' and 't' stand for the source and target images, respectively. $(\mu_s^{\theta}, \sigma_s^{\theta})$ and $(\mu_t^{\theta}, \sigma_t^{\theta})$, where $\theta = Y, C_B, C_R$, are the means and standard deviations of the source and target images, respectively, both over the channel $\theta$.

4. Transform the result back into RGB representation to obtain the final color fused image $[R_c, G_c, B_c]^T$:

$$\begin{bmatrix} R_c \\ G_c \\ B_c \end{bmatrix} = \begin{bmatrix} 1.0000 & 0.0000 & 1.4020 \\ 1.0000 & -0.3441 & -0.7141 \\ 1.0000 & 1.7720 & 0.0000 \end{bmatrix}\begin{bmatrix} Y_c \\ C_{B,c} \\ C_{R,c} \end{bmatrix} \tag{13}$$
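A minimal sketch of the four StaCT steps above is given below. It assumes the inputs are registered single-channel arrays scaled to [0, 1]; the function name, the use of NumPy, and the final clipping to the display range are our own choices rather than part of the published method.

```python
import numpy as np

A = np.array([[ 0.2990,  0.5870,  0.1140],      # forward YCBCR matrix (Eq. 2)
              [-0.1687, -0.3313,  0.5000],
              [ 0.5000, -0.4187, -0.0813]])
A_INV = np.array([[1.0000,  0.0000,  1.4020],   # inverse YCBCR matrix (Eq. 3)
                  [1.0000, -0.3441, -0.7141],
                  [1.0000,  1.7720,  0.0000]])

def stact(ir, vis, target_rgb):
    """StaCT sketch: ir, vis are grayscale arrays in [0, 1]; target_rgb is an
    H x W x 3 daylight color image. Names and clipping are illustrative."""
    # Step 1: NRL false-color fused image (Eq. 10).
    false_color = np.dstack([ir, vis, vis])
    # Step 2: forward YCBCR transform of source and target (Eq. 11).
    src = false_color @ A.T
    tgt = target_rgb @ A.T
    # Step 3: per-channel statistics matching (Eq. 12).
    matched = (tgt.std((0, 1)) / src.std((0, 1))) * (src - src.mean((0, 1))) + tgt.mean((0, 1))
    # Step 4: inverse transform back to RGB (Eq. 13).
    return np.clip(matched @ A_INV.T, 0.0, 1.0)
```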

Toet's algorithm (Toet, 2003) produces the color fused imagery by using lαβ space. The main difference between the StaCT method and Toet's method lies in the color space conversion of the images. As explained above, the linear YCBCR transformation has lower computational complexity than the nonlinear lαβ transformation. Hence, the StaCT method offers a more efficient and simpler way to produce a color fused image than Toet's method.

4.2. CECT method

The luminance contrast reduction issue, as described by Toet (Toet, 2003), may arise when the contrast of an image detail varies strongly among the different bands. In some conditions a detail may be represented with opposite contrast in different spectral bands. The combination of the individual image bands into a single color image may therefore significantly reduce the luminance contrast of an image detail. As a result, a detail that is noticeable in the individual image bands may be much less visible in the final color representation, due to the lack of luminance contrast. An appropriate grayscale fused image obtained by combining the individual sensor images can preserve all relevant contrast details of the individual bands. Hence, we can enhance the luminance contrast of the final color fused image by replacing the luminance component of the source false-color fused image with the grayscale fused image. Based on this strategy, we proposed the CECT method (Li et al., 2010b). Its complete procedure is as follows.

1. Construct the source false color fused image [Rf, Gf, Bf]T by using the NRL scheme given in (Eq. 10).

2. Convert the image [Rf, Gf, Bf]T to YCBCR space as in (Eq. 11).

3. Replace the luminance component Yf with the grayscale fused image F; F, together with the CB,f and CR,f components, is then used directly as the source YCBCR components:

$$[Y_s, C_{B,s}, C_{R,s}]^T = [F, \; C_{B,f}, \; C_{R,f}]^T \tag{14}$$

4. Stretch the source YCBCR components to make their statistics match the color statistics of the target daylight color image as shown in (Eq. 12).

5. Transform the result back into RGB representation as in (Eq. 13).
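The only change relative to StaCT is the luminance replacement in step 3. A minimal sketch of steps 1 to 3, under the same assumptions as the StaCT sketch above (registered inputs in [0, 1]; all names illustrative), is:

```python
import numpy as np

def cect_source_components(ir, vis, grayscale_fused):
    """CECT sketch of steps 1-3: build the NRL false-color image, convert it
    to YCBCR, and replace the luminance plane with the grayscale fused image F
    (Eq. 14). Steps 4-5 then proceed exactly as in the StaCT sketch above."""
    A = np.array([[ 0.2990,  0.5870,  0.1140],
                  [-0.1687, -0.3313,  0.5000],
                  [ 0.5000, -0.4187, -0.0813]])
    ycbcr = np.dstack([ir, vis, vis]) @ A.T    # (Eq. 10) and (Eq. 11)
    ycbcr[..., 0] = grayscale_fused            # (Eq. 14): Y_s = F
    return ycbcr
```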

The color spaces conforming to the $\tilde{Y}\tilde{C}_B\tilde{C}_R$ framework defined in (Eq. 5) and (Eq. 6) can also be used in the CECT method. When the constant x is positive, using any form of $\tilde{Y}\tilde{C}_B\tilde{C}_R$ space yields the same color fused image as in YCBCR space. This result can be proved by the following proposition.

Proposition 2: Let $[\tilde{R}_c, \tilde{G}_c, \tilde{B}_c]^T$ and $[R_c, G_c, B_c]^T$ be the color fused images obtained by the $\tilde{Y}\tilde{C}_B\tilde{C}_R$-based and YCBCR-based CECT methods, respectively. Suppose the two cases use the same grayscale fusion method and target image. In addition, assume that x defined in the $\tilde{Y}\tilde{C}_B\tilde{C}_R$ transform is positive; then, for a fixed pair of input images,

$$[\tilde{R}_c, \tilde{G}_c, \tilde{B}_c]^T = [R_c, G_c, B_c]^T \tag{15}$$

The proof of this proposition is similar to that of Proposition 1 and is therefore omitted here.

The achromatic component in lαβ space cannot be used directly for the luminance replacement process because its dynamic range is very different from that of the grayscale fused image. Therefore, Toet (Toet, 2003) suggested using HSV (Hue, Saturation, and Value) color space, in which the value component (V) has the same amplitude range as the grayscale fused image, to address this problem. The final high contrast color fused image is constructed by converting the recolored fused image into HSV space, replacing the value component by the high contrast grayscale fused image and then transforming back to RGB space. One can see that three color spaces (RGB, lαβ and HSV) are employed and four color space transformations (RGB to lαβ, lαβ to RGB, RGB to HSV, and HSV to RGB) are needed in the above manner. This is not an efficient way relative to the CECT method. Taking advantage of the linear YCBCR space, the CECT method only uses two color spaces (RGB and YCBCR) and requires two color space conversions (RGB to YCBCR and YCBCR to RGB) during the whole procedure to obtain the high contrast output image.


5. Fast fusion method based on YCBCR color transfer technique

For real-time image fusion systems, faster methods are always desirable. We proposed a fast algorithm by mathematically optimizing the CECT approach's architecture, named AOCT (Architecture-Optimized Version of the Color-Transfer-Based Fusion Method) (Li et al., 2010b). Fig. 1 illustrates the fusion architecture of the AOCT approach, and its complete procedure is as follows.

Figure 1.

Fusion architecture of the AOCT method

1. Use $[F, \; Vis-IR, \; IR-Vis]^T$ to form the source YCBCR components:

$$\begin{bmatrix} Y_s \\ C_{B,s} \\ C_{R,s} \end{bmatrix} = \begin{bmatrix} F \\ Vis - IR \\ IR - Vis \end{bmatrix} \tag{16}$$

Steps 2 and 3 are the same as steps 4 and 5 of the CECT method. Thus,

2. Perform color statistics matching as in (Eq. 12).

3. Perform the inverse YCBCR transform to obtain the final color fused image as in (Eq. 13).

We have demonstrated that the AOCT method has the same performance as the CECT method (Li et al., 2010b). The PA (Pixel Averaging) and MR (Multiresolution) fusion schemes are adopted to produce the grayscale fused image in the AOCT method. The AOCT approach using the PA grayscale fusion algorithm is named P-AOCT. The AOCT approach using the MR grayscale fusion algorithm is called M-AOCT.

Similar to the CECT approach, the AOCT method also allows us to choose other color spaces conforming to the $\tilde{Y}\tilde{C}_B\tilde{C}_R$ framework in (Eq. 5) and (Eq. 6). When the constants x, y and z are positive, using any form of $\tilde{Y}\tilde{C}_B\tilde{C}_R$ space produces the same color fusion result as in YCBCR space. This conclusion can be proved by the following proposition.

Proposition 3: Let $[\tilde{R}_c, \tilde{G}_c, \tilde{B}_c]^T$ and $[R_c, G_c, B_c]^T$ be the color fused images obtained by the $\tilde{Y}\tilde{C}_B\tilde{C}_R$-based and YCBCR-based AOCT methods, respectively. Suppose the two cases use the same grayscale fusion method and target image. In addition, assume that the constants x, y and z defined in the $\tilde{Y}\tilde{C}_B\tilde{C}_R$ transformation are positive; then, for a fixed pair of input images,

$$[\tilde{R}_c, \tilde{G}_c, \tilde{B}_c]^T = [R_c, G_c, B_c]^T \tag{17}$$

Proof: See Appendix C.

The construction process of the source chromatic components (CB,s and CR,s) in the AOCT method is much simpler than that in the CECT method. After performing the YCBCR transform and replacing the luminance component in the CECT method (i.e., by inserting (Eq. 10) and (Eq. 11) into (Eq. 14)), we arrive at

$$\begin{bmatrix} Y_s \\ C_{B,s} \\ C_{R,s} \end{bmatrix} = \begin{bmatrix} F \\ C_{B,f} \\ C_{R,f} \end{bmatrix} = \begin{bmatrix} F \\ -0.1687\,IR - 0.3313\,Vis + 0.5\,Vis \\ 0.5\,IR - 0.4187\,Vis - 0.0813\,Vis \end{bmatrix} \tag{18}$$

Let N be the number of columns and rows of the input images. According to (Eq. 18), the CECT method requires 4N² additions and 6N² multiplications to obtain the source chromatic components. The AOCT method skips the forward YCBCR transformation and directly uses the simple difference signals of the input images (that is, Vis−IR and IR−Vis) to form the source chromatic components. Equation (Eq. 16) shows that the AOCT method requires only 2N² additions in this construction process. Therefore, the AOCT method is faster and easier to implement than the CECT method.

The following strategies can help the AOCT method to be implemented in an extremely fast and memory efficient way.

1. We have proved that, when the P-AOCT method is adopted, i.e. when the grayscale fused image is obtained by pixel averaging,

$$F = 0.5\,(IR + Vis) \tag{19}$$

the source luminance component $Y_s$ can be obtained directly as (Li et al., 2010b)

$$Y_s = IR + Vis \tag{20}$$

2. From (Eq. 16), the source chromatic components satisfy

$$C_{R,s} = -C_{B,s} = -(Vis - IR) \tag{21}$$

From this, by applying the mean and standard deviation properties, we can derive that

$$\mu_s^{C_R} = -\mu_s^{C_B}, \qquad \sigma_s^{C_R} = \sigma_s^{C_B} \tag{22}$$

Equations (Eq. 21) and (Eq. 22) imply that, after computing $C_{B,s}$, $\mu_s^{C_B}$, and $\sigma_s^{C_B}$, there is no need to recalculate $C_{R,s}$, $\mu_s^{C_R}$, and $\sigma_s^{C_R}$. A more efficient way is to obtain the first two directly as the negatives of $C_{B,s}$ and $\mu_s^{C_B}$, and the last as $\sigma_s^{C_B}$ itself.

3. From (Eq. 12), we can see that only six target color statistical parameters ($\mu_t^{Y}$, $\mu_t^{C_B}$, $\mu_t^{C_R}$, $\sigma_t^{Y}$, $\sigma_t^{C_B}$ and $\sigma_t^{C_R}$) are required in the color statistics matching process. Hence, in practice, there is no real need to store target images. A system equipped with a look-up table of color statistical parameters for different types of backgrounds is sufficient to enable users to adjust the color to their specific needs.
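Putting these three strategies together, a compact P-AOCT sketch might look as follows. It assumes registered inputs in [0, 1] and a precomputed look-up table entry holding the six target statistics, and it adds an output clip for display; all identifiers are illustrative and not part of the published implementation.

```python
import numpy as np

def p_aoct(ir, vis, target_stats):
    """P-AOCT sketch exploiting (Eq. 20)-(Eq. 22): no forward YCBCR transform
    and no stored target image. `target_stats` is assumed to be a dict such as
    {'mu': (muY, muCB, muCR), 'sigma': (sY, sCB, sCR)} read from a look-up table."""
    A_INV = np.array([[1.0000,  0.0000,  1.4020],   # inverse YCBCR matrix (Eq. 3)
                      [1.0000, -0.3441, -0.7141],
                      [1.0000,  1.7720,  0.0000]])
    # Source components (Eq. 20) and (Eq. 21): Y_s = IR + Vis, C_B,s = Vis - IR, C_R,s = -C_B,s.
    y_s = ir + vis
    cb_s = vis - ir
    src = np.dstack([y_s, cb_s, -cb_s])
    # Source statistics; the C_R,s statistics follow from (Eq. 22) at no extra cost.
    mu_s = np.array([y_s.mean(), cb_s.mean(), -cb_s.mean()])
    sigma_s = np.array([y_s.std(), cb_s.std(), cb_s.std()])
    # Statistics matching (Eq. 12) followed by the inverse transform (Eq. 13).
    mu_t = np.asarray(target_stats['mu'])
    sigma_t = np.asarray(target_stats['sigma'])
    matched = (sigma_t / sigma_s) * (src - mu_s) + mu_t
    return np.clip(matched @ A_INV.T, 0.0, 1.0)
```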


      6. Experimental results

      6.1. Color transfer between landscape photographs

      Figure 2.

      Color transfer between landscape photographs. Left: the different source images. Second column: the corresponding target images. Third column: the results of the lαβ color transfer method. Right: the results of the YCBCR color transfer method. Top to bottom: trees, sunset, and mountains image pairs. (All the source and target images courtesy of © Ian Britton - FreeFoto.com.)

To demonstrate the potential of the YCBCR color transfer method, we applied it to some landscape photographs of scenes including trees, sunset, and mountains. The lαβ color transfer scheme (Reinhard et al., 2001) was selected to serve as a comparison. The source image of each scene is shown in the left column of Fig. 2. The second column shows the respective target images. Note that each source image has a composition similar to that of the corresponding target image. The third and right columns illustrate the results of applying the lαβ and YCBCR color transfer methods, respectively, to the source images in the left column, with the corresponding images in the second column as targets. lαβ space works very well on the task of recoloring these natural color photographs, which provides some validation for Reinhard et al.'s work (Reinhard et al., 2001). The images produced using YCBCR space look a little lighter than those produced in lαβ space, but the results are also pleasing and reasonable. This indicates that the YCBCR color transfer method can be applied to many image types.

      6.2. Comparison among different color image fusion methods

Fig. 3 shows some examples of color image fusion methods related to the color transfer technique, including the NRL (Scribner et al., 1998; McDaniel et al., 1998), Toet's (Toet, 2003), improved Toet's, StaCT, P-AOCT, and M-AOCT schemes. The CECT approach has the same fusion performance as the AOCT method (Li et al., 2010b); thus, we do not discuss the CECT approach in our experiments.

      Figure 3.

      Fusion results of applying different color image fusion methods to the UN Camp images of a nighttime scene representing a building, a path, heather, and a person walking along the fence. (a) The input 3-5 μm midwave infrared image. (b) The input visible image. (c) The target image. (d) The result of the NRL method. (e)-(i) are the results of the Toet’s method, the improved Toet’s method, the StaCT method, the P-AOCT method and the M-AOCT method respectively, all with (c) as the target image. (The target image courtesy of Alexander Toet.)

As described above, Toet's method (Toet, 2003) produces a color fused image by using the lαβ color transfer approach to recolor a source false-color fused image. In his study, the source false-color fused image is generated by mapping the infrared image and the visible image respectively to the green and blue channels of an RGB image, while the red channel is set to zero (black). Similar to the Toet approach, the improved Toet's method also employs lαβ space to perform color transfer; the difference is that the improved Toet's method uses the NRL scheme to yield the source false-color fused image.

In this study, we employ a discrete wavelet transform based fusion scheme (Li et al., 2007a) to obtain the grayscale fused image in the M-AOCT method, and use a 4-level wavelet transform with '5-3' biorthogonal filters (Cohen et al., 1992; Daubechies, 1992) to implement the MR decompositions of the input images. The approximation images at the highest level (lowest resolution) are combined by taking their average. The detail images at each high-frequency band of each decomposition level are fused by a weighted average fusion rule based on the local amplitude ratio. This weighted average fusion rule achieves a fused image with fewer 'ringing' artifacts compared to the widely used maximum selection fusion rule (Zhang & Blum, 1999; Piella, 2003).
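For readers who want to experiment with the MR branch, the following sketch approximates the grayscale fusion step described above using PyWavelets. It assumes 'bior2.2' as a stand-in for the 5-3 biorthogonal filters and uses a per-pixel amplitude-ratio weighting as one plausible reading of the weighted average rule; the exact rule of Li & Wang (2007a) may differ.

```python
import numpy as np
import pywt

def mr_fuse(ir, vis, wavelet='bior2.2', levels=4, eps=1e-12):
    """Sketch of an MR grayscale fusion step: average the coarsest
    approximations, weighted-average the detail subbands."""
    c_ir = pywt.wavedec2(ir, wavelet, level=levels)
    c_vis = pywt.wavedec2(vis, wavelet, level=levels)
    # Average the approximation images at the coarsest level.
    fused = [0.5 * (c_ir[0] + c_vis[0])]
    # Weighted average of detail subbands, weights given by the local amplitude ratio.
    for d_ir, d_vis in zip(c_ir[1:], c_vis[1:]):
        fused_level = []
        for a, b in zip(d_ir, d_vis):
            w = np.abs(a) / (np.abs(a) + np.abs(b) + eps)
            fused_level.append(w * a + (1.0 - w) * b)
        fused.append(tuple(fused_level))
    return pywt.waverec2(fused, wavelet)
```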

The original registered infrared (3-5 μm) and visible test images are supplied by Toet at TNO Human Factors and are available online at www.imagefusion.org. Since the corresponding daylight color photographs are not available for these images, we adopt an arbitrary color image as the target.

Fig. 3 corresponds to a nighttime scene representing a building, a path, heather, and a person walking along the fence. The input infrared (3-5 μm) and visible images are shown in Fig. 3(a) and (b), respectively. Fig. 3(c) shows an arbitrary daytime color image with a color distribution similar to that of the source scene. Fig. 3(d) shows the false-color fused image constructed by the NRL method. Most salient features from the inputs are clearly visible in this image, but the color appearance is rather unnatural. Fig. 3(e) shows the fusion result obtained by Toet's method with Fig. 3(c) as target. The image has a comparatively natural color appearance, but its color characteristics are only roughly close to those of the target image; the trees' green colors in the target image are not preserved very well, and some heather appears yellow. Moreover, an excessive saturation phenomenon emerges in the resulting color fused image, producing a glaring white that severely hides some salient detail information contained in the input images, such as the textures of the road. In addition, some image details, such as the poles of the fence and the outlines of the building, are represented with low contrast in Fig. 3(e). Fig. 3(f) shows the fused image produced by the improved Toet's method with Fig. 3(c) as target. The image takes on the target image's color characteristics and has high contrast, but the excessive saturation phenomenon does not disappear: some salient details, such as the textures of the road, are hidden by the glaring white. Fig. 3(g) shows the fused image generated by the StaCT method with Fig. 3(c) as target. The image not only gains the natural color appearance of the target image but also contains most of the salient information from the input images, and it avoids the excessive saturation phenomenon. Fig. 3(h) shows the fused image produced by the P-AOCT method with Fig. 3(c) as target. The image has better visual quality than the StaCT result shown in Fig. 3(g); the visibility of image details, such as the poles of the fence and the outlines of the person, is evidently higher than with the StaCT algorithm. Fig. 3(i) illustrates the fused image achieved by the M-AOCT method with Fig. 3(c) as target. Clearly, the M-AOCT approach has the best overall performance.

The M-AOCT method employs the MR fusion scheme, while the Toet's and improved Toet's methods utilize the lαβ color transfer strategy. Therefore, the computational complexities of these three methods are all higher than that of the P-AOCT method. In fact, the StaCT method has the same complexity as the P-CECT method (the CECT approach using the PA fusion algorithm to implement grayscale fusion). Thus, according to the algorithm analysis in Section 5, we can confirm that the computational cost of the P-AOCT method is lower than that of the StaCT method. Hence, among the five color-transfer-based image fusion algorithms (the Toet's, improved Toet's, StaCT, P-AOCT, and M-AOCT methods), the P-AOCT method has the lowest implementation complexity.

Although Fig. 3 shows that the salient information from the input images is represented with higher contrast in the fused image obtained by the M-AOCT method than in those produced by the other approaches, this superior performance comes at the cost of increased computational complexity of the fusion process. In contrast, the P-AOCT method is much faster and easier to implement. Moreover, this algorithm can provide a visually pleasing color fused image with adequate contrast, and the salient information contained in the input images is also represented quite well. Hence, unless there is a special requirement, in practice it is not necessary to utilize the M-AOCT method to merge images; the low-computational-cost P-AOCT method is generally sufficient to fulfill user needs.

      6.3. Choice of target image

      For the AOCT method, the actual choice of the target image is not critical. The method can provide a natural looking color fused image as long as the color distribution of the target image is to some extent similar to that of the source scene.

      Figure 4.

Fusion results of applying the P-AOCT method with different target images to the Dune images of a nighttime scene representing a person walking over a dune area covered with semi-shrubs and sandy paths. Top left: the input 3-5 μm midwave infrared image. Top right: the input visible image. Middle: four different target images. Bottom: the fusion results corresponding to each target image. (The two left target images courtesy of www.pics4learning.com; the two right target images courtesy of www.bigfoto.com.)

To demonstrate this fact, we applied the P-AOCT method to merging the Dune images with different target images. The original registered infrared (3-5 μm) and visible test images are shown at the top left and right of Fig. 4, respectively. In this scene, a person is walking over a dune area covered with semi-shrubs and sandy paths. These images are also supplied by Toet at TNO Human Factors and are available online at www.imagefusion.org. The second row of Fig. 4 shows the different target images. The third row illustrates the results of applying the P-AOCT approach to the input images at the top row, with the corresponding image in the second row as the target. The second column from the left of Fig. 4 shows an interesting example in which a color photograph representing a grassland area with an elk in it was adopted as the target image. The content of the target image in this case is quite dissimilar to that of the source scene, but the target image has a color distribution similar to that of the source scene. As a result, the corresponding color fused images still have a fairly natural appearance. Another special example, depicted in the right column of Fig. 4, shows that the P-AOCT method fails when the color compositions of the target image and the source scene are too dissimilar. In this case, the target image also displays a dune-like scene, but has more bright blue sky in the background. Consequently, the appearance of the resulting fused images is quite unnatural and the sandy paths are rendered in an unreasonable blue.

From the above examples, we can see that the depicted scenes of the target and source images do not have to be identical, as long as their color distributions resemble each other to some extent. In practice, surveillance systems usually register a fixed scene, so a daylight color image of the same scene being monitored could be used as an optimal target image.


      7. Conclusion

In this chapter, we introduced color transfer techniques and some typical image fusion methods based on the color transfer technique. More importantly, we presented a fast image fusion algorithm based on the YCBCR color transfer technique, named AOCT. Based on the PA and MR grayscale image fusion schemes, we developed two solutions, namely the P-AOCT and M-AOCT methods, to fulfill different user needs. The P-AOCT method answers the need for easy implementation and speed of use, while the M-AOCT method answers the need for high-quality fused products. Experimental results demonstrate that the AOCT method can effectively produce a natural-appearing "daytime-like" color fused image with good contrast. Even the low-complexity P-AOCT method can provide a satisfactory result.

Another important contribution of this chapter is that we have mathematically proved some useful propositions about color-transfer-based image fusion. These results clarify that other color spaces founded on luminance, blue, and red color-difference components, such as YUV space, can be used as alternatives to YCBCR space in the image fusion approaches based on the YCBCR color transfer technique, including the StaCT, CECT, and AOCT methods.

Currently, the AOCT method only supports the fusion of imagery from two sensors. Future research will extend the AOCT method to accept imagery from three or four spectral bands, e.g. the visible, short-wave infrared, mid-wave infrared, and long-wave infrared bands. In addition, designing quantitative measures of color image fusion performance is another worthwhile and challenging research topic.


      8. Appendix

      A. RGB to lαβ transform

      In this Appendix we present the RGB to lαβ transform (Ruderman et al., 1998). This transform is derived from a principal component transform of a large ensemble of hyperspectral images that represents a good cross-section of natural scenes. The resulting data representation is compact and symmetrical, and provides automatic decorrelation to higher than second order.

      The actual transform is as follows. First the RGB tristimulus values are converted to LMS space by

$$\begin{bmatrix} L \\ M \\ S \end{bmatrix} = \begin{bmatrix} 0.3811 & 0.5783 & 0.0402 \\ 0.1967 & 0.7244 & 0.0782 \\ 0.0241 & 0.1288 & 0.8444 \end{bmatrix}\begin{bmatrix} R \\ G \\ B \end{bmatrix} \tag{23}$$

      The data in this color space shows a great deal of skew, which is largely eliminated by taking a logarithmic transform:

$$\begin{bmatrix} L' \\ M' \\ S' \end{bmatrix} = \begin{bmatrix} \log_{10} L \\ \log_{10} M \\ \log_{10} S \end{bmatrix} \tag{24}$$

      The inverse transform from LMS cone space back to RGB space is as follows. First, the LMS pixel values are raised to the power ten to go back to linear LMS space.

$$\begin{bmatrix} L \\ M \\ S \end{bmatrix} = \begin{bmatrix} 10^{L'} \\ 10^{M'} \\ 10^{S'} \end{bmatrix} \tag{25}$$

      Then, the data can be converted from LMS to RGB using the inverse transform of (Eq. 23):

$$\begin{bmatrix} R \\ G \\ B \end{bmatrix} = \begin{bmatrix} 4.4679 & -3.5873 & 0.1193 \\ -1.2186 & 2.3809 & -0.1624 \\ 0.0497 & -0.2439 & 1.2045 \end{bmatrix}\begin{bmatrix} L \\ M \\ S \end{bmatrix} \tag{26}$$

Ruderman et al. (Ruderman et al., 1998) presented the following simple transform to decorrelate the axes in the logarithmic LMS space:

$$\begin{bmatrix} l \\ \alpha \\ \beta \end{bmatrix} = \begin{bmatrix} 1/\sqrt{3} & 0 & 0 \\ 0 & 1/\sqrt{6} & 0 \\ 0 & 0 & 1/\sqrt{2} \end{bmatrix}\begin{bmatrix} 1 & 1 & 1 \\ 1 & 1 & -2 \\ 1 & -1 & 0 \end{bmatrix}\begin{bmatrix} L' \\ M' \\ S' \end{bmatrix} \tag{27}$$

      If we think of the L channel as red, the M as green, and S as blue, we see that this is a variant of a color opponent model:

$$\text{Achromatic} \propto R + G + B, \qquad \text{Yellow-blue} \propto R + G - B, \qquad \text{Red-green} \propto R - G \tag{28}$$

After processing the color signals in lαβ space, the inverse transform of (Eq. 27) can be used to return to the logarithmic LMS space:

$$\begin{bmatrix} L' \\ M' \\ S' \end{bmatrix} = \begin{bmatrix} 1 & 1 & 1 \\ 1 & 1 & -1 \\ 1 & -2 & 0 \end{bmatrix}\begin{bmatrix} 1/\sqrt{3} & 0 & 0 \\ 0 & 1/\sqrt{6} & 0 \\ 0 & 0 & 1/\sqrt{2} \end{bmatrix}\begin{bmatrix} l \\ \alpha \\ \beta \end{bmatrix} \tag{29}$$
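For completeness, the forward and backward lαβ conversions of this appendix can be sketched as follows. The small offset guarding against log10(0) and the use of numerical matrix inversion in place of the tabulated inverse of (Eq. 26) are our own simplifications.

```python
import numpy as np

# RGB -> LMS (Eq. 23) and the lαβ decorrelation (Eq. 27).
RGB2LMS = np.array([[0.3811, 0.5783, 0.0402],
                    [0.1967, 0.7244, 0.0782],
                    [0.0241, 0.1288, 0.8444]])
LMS2LAB = np.diag([1 / np.sqrt(3), 1 / np.sqrt(6), 1 / np.sqrt(2)]) @ np.array(
    [[1,  1,  1],
     [1,  1, -2],
     [1, -1,  0]], dtype=float)

def rgb_to_lab(rgb, eps=1e-6):
    """RGB -> lαβ sketch of (Eq. 23), (Eq. 24) and (Eq. 27); rgb is H x W x 3.
    The eps guard against log10(0) is our own addition."""
    lms = rgb @ RGB2LMS.T
    return np.log10(lms + eps) @ LMS2LAB.T

def lab_to_rgb(lab):
    """lαβ -> RGB sketch of (Eq. 29), (Eq. 25) and (Eq. 26)."""
    log_lms = lab @ np.linalg.inv(LMS2LAB).T
    lms = 10.0 ** log_lms
    return lms @ np.linalg.inv(RGB2LMS).T
```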

      B. Proof of Proposition 1

      From (Eq. 5) we derive that

$$\begin{bmatrix} \tilde{Y} \\ \tilde{C}_B \\ \tilde{C}_R \end{bmatrix} = \begin{bmatrix} x & 0 & 0 \\ 0 & y & 0 \\ 0 & 0 & z \end{bmatrix}\begin{bmatrix} 0.2990 & 0.5870 & 0.1140 \\ -0.1687 & -0.3313 & 0.5000 \\ 0.5000 & -0.4187 & -0.0813 \end{bmatrix}\begin{bmatrix} R \\ G \\ B \end{bmatrix} + \begin{bmatrix} c_1 \\ c_2 \\ c_3 \end{bmatrix} = \begin{bmatrix} x & 0 & 0 \\ 0 & y & 0 \\ 0 & 0 & z \end{bmatrix}\begin{bmatrix} Y \\ C_B \\ C_R \end{bmatrix} + \begin{bmatrix} c_1 \\ c_2 \\ c_3 \end{bmatrix} = \begin{bmatrix} xY \\ yC_B \\ zC_R \end{bmatrix} + \begin{bmatrix} c_1 \\ c_2 \\ c_3 \end{bmatrix} \tag{30}$$

      Hence, we can prove that the original source and target images respectively satisfy

$$\begin{bmatrix} \tilde{Y}_s \\ \tilde{C}_{B,s} \\ \tilde{C}_{R,s} \end{bmatrix} = \begin{bmatrix} xY_s \\ yC_{B,s} \\ zC_{R,s} \end{bmatrix} + \begin{bmatrix} c_1 \\ c_2 \\ c_3 \end{bmatrix} \quad \text{and} \quad \begin{bmatrix} \tilde{Y}_t \\ \tilde{C}_{B,t} \\ \tilde{C}_{R,t} \end{bmatrix} = \begin{bmatrix} xY_t \\ yC_{B,t} \\ zC_{R,t} \end{bmatrix} + \begin{bmatrix} c_1 \\ c_2 \\ c_3 \end{bmatrix} \tag{31}$$

where $[\tilde{Y}_s, \tilde{C}_{B,s}, \tilde{C}_{R,s}]^T$ and $[Y_s, C_{B,s}, C_{R,s}]^T$ are the three components of the source image in $\tilde{Y}\tilde{C}_B\tilde{C}_R$ and YCBCR spaces, respectively. $[\tilde{Y}_t, \tilde{C}_{B,t}, \tilde{C}_{R,t}]^T$ and $[Y_t, C_{B,t}, C_{R,t}]^T$ represent the three components of the target image in $\tilde{Y}\tilde{C}_B\tilde{C}_R$ and YCBCR spaces, respectively. We know that the mean and standard deviation respectively have the following properties:

$$\mu(\lambda X + c) = \lambda\,\mu(X) + c \quad \text{and} \quad \sigma(\lambda X + c) = |\lambda|\,\sigma(X) \tag{32}$$

where $\lambda$ and $c$ are constants and $X$ is a random variable. Thus, for the original source and target images, the means of their achromatic components in $\tilde{Y}\tilde{C}_B\tilde{C}_R$ and YCBCR spaces respectively satisfy

$$\mu_s^{\tilde{Y}} = x\mu_s^{Y} + c_1 \quad \text{and} \quad \mu_t^{\tilde{Y}} = x\mu_t^{Y} + c_1 \tag{33}$$

      The corresponding standard deviations respectively satisfy

$$\sigma_s^{\tilde{Y}} = |x|\,\sigma_s^{Y} \quad \text{and} \quad \sigma_t^{\tilde{Y}} = |x|\,\sigma_t^{Y} \tag{34}$$

Let $[\tilde{Y}_s^*, \tilde{C}_{B,s}^*, \tilde{C}_{R,s}^*]^T$ be the three components of $[\tilde{R}_s^*, \tilde{G}_s^*, \tilde{B}_s^*]^T$ in $\tilde{Y}\tilde{C}_B\tilde{C}_R$ space, and $[Y_s^*, C_{B,s}^*, C_{R,s}^*]^T$ be the three components of $[R_s^*, G_s^*, B_s^*]^T$ in YCBCR space. By inserting (Eq. 33), (Eq. 34), and $\tilde{Y}_s = xY_s + c_1$ into

$$\tilde{Y}_s^* = \frac{\sigma_t^{\tilde{Y}}}{\sigma_s^{\tilde{Y}}}\left(\tilde{Y}_s - \mu_s^{\tilde{Y}}\right) + \mu_t^{\tilde{Y}} \tag{35}$$

we can derive the relationship between $\tilde{Y}_s^*$ and $Y_s^*$:

$$\tilde{Y}_s^* = xY_s^* + c_1 \tag{36}$$

      In a similar way, we can obtain

$$\tilde{C}_{B,s}^* = yC_{B,s}^* + c_2 \quad \text{and} \quad \tilde{C}_{R,s}^* = zC_{R,s}^* + c_3 \tag{37}$$

      From (Eq. 6) we have

$$\begin{bmatrix} \tilde{R}_s^* \\ \tilde{G}_s^* \\ \tilde{B}_s^* \end{bmatrix} = \begin{bmatrix} 1.0000x^{-1} & 0.0000 & 1.4020z^{-1} \\ 1.0000x^{-1} & -0.3441y^{-1} & -0.7141z^{-1} \\ 1.0000x^{-1} & 1.7720y^{-1} & 0.0000 \end{bmatrix}\left(\begin{bmatrix} \tilde{Y}_s^* \\ \tilde{C}_{B,s}^* \\ \tilde{C}_{R,s}^* \end{bmatrix} - \begin{bmatrix} c_1 \\ c_2 \\ c_3 \end{bmatrix}\right) \tag{38}$$

      By inserting (Eq. 36) and (Eq. 37) in (Eq. 38), we derive that

$$\begin{aligned}
\begin{bmatrix} \tilde{R}_s^* \\ \tilde{G}_s^* \\ \tilde{B}_s^* \end{bmatrix} &= \begin{bmatrix} 1.0000 & 0.0000 & 1.4020 \\ 1.0000 & -0.3441 & -0.7141 \\ 1.0000 & 1.7720 & 0.0000 \end{bmatrix}\begin{bmatrix} x^{-1} & 0 & 0 \\ 0 & y^{-1} & 0 \\ 0 & 0 & z^{-1} \end{bmatrix}\left(\begin{bmatrix} xY_s^* + c_1 \\ yC_{B,s}^* + c_2 \\ zC_{R,s}^* + c_3 \end{bmatrix} - \begin{bmatrix} c_1 \\ c_2 \\ c_3 \end{bmatrix}\right) \\
&= \begin{bmatrix} 1.0000 & 0.0000 & 1.4020 \\ 1.0000 & -0.3441 & -0.7141 \\ 1.0000 & 1.7720 & 0.0000 \end{bmatrix}\begin{bmatrix} x^{-1} & 0 & 0 \\ 0 & y^{-1} & 0 \\ 0 & 0 & z^{-1} \end{bmatrix}\begin{bmatrix} xY_s^* \\ yC_{B,s}^* \\ zC_{R,s}^* \end{bmatrix} \\
&= \begin{bmatrix} 1.0000 & 0.0000 & 1.4020 \\ 1.0000 & -0.3441 & -0.7141 \\ 1.0000 & 1.7720 & 0.0000 \end{bmatrix}\begin{bmatrix} Y_s^* \\ C_{B,s}^* \\ C_{R,s}^* \end{bmatrix} = \begin{bmatrix} R_s^* \\ G_s^* \\ B_s^* \end{bmatrix}
\end{aligned} \tag{39}$$

      This completes the proof.

      C. Proof of Proposition 3

Let $[\tilde{Y}_s, \tilde{C}_{B,s}, \tilde{C}_{R,s}]^T$ and $[Y_s, C_{B,s}, C_{R,s}]^T$ be the source color components in the $\tilde{Y}\tilde{C}_B\tilde{C}_R$-based and YCBCR-based AOCT methods, respectively. Thus, from the given condition we have

$$\begin{bmatrix} \tilde{Y}_s \\ \tilde{C}_{B,s} \\ \tilde{C}_{R,s} \end{bmatrix} = \begin{bmatrix} Y_s \\ C_{B,s} \\ C_{R,s} \end{bmatrix} = \begin{bmatrix} F \\ Vis - IR \\ IR - Vis \end{bmatrix} \tag{40}$$

Let $[\tilde{Y}_t, \tilde{C}_{B,t}, \tilde{C}_{R,t}]^T$ and $[Y_t, C_{B,t}, C_{R,t}]^T$ be the three components of the target image in $\tilde{Y}\tilde{C}_B\tilde{C}_R$ and YCBCR spaces, respectively. We know from (Eq. 30) that the target image satisfies

$$\begin{bmatrix} \tilde{Y}_t \\ \tilde{C}_{B,t} \\ \tilde{C}_{R,t} \end{bmatrix} = \begin{bmatrix} xY_t \\ yC_{B,t} \\ zC_{R,t} \end{bmatrix} + \begin{bmatrix} c_1 \\ c_2 \\ c_3 \end{bmatrix} \tag{41}$$

      Then, under the given assumption that x, y and z > 0, using the mean and standard deviation properties, from (Eq. 40) and (Eq. 41) we can derive

$$\begin{bmatrix} \mu_s^{\tilde{Y}} \\ \mu_s^{\tilde{C}_B} \\ \mu_s^{\tilde{C}_R} \end{bmatrix} = \begin{bmatrix} \mu_s^{Y} \\ \mu_s^{C_B} \\ \mu_s^{C_R} \end{bmatrix},\quad \begin{bmatrix} \sigma_s^{\tilde{Y}} \\ \sigma_s^{\tilde{C}_B} \\ \sigma_s^{\tilde{C}_R} \end{bmatrix} = \begin{bmatrix} \sigma_s^{Y} \\ \sigma_s^{C_B} \\ \sigma_s^{C_R} \end{bmatrix},\quad \begin{bmatrix} \mu_t^{\tilde{Y}} \\ \mu_t^{\tilde{C}_B} \\ \mu_t^{\tilde{C}_R} \end{bmatrix} = \begin{bmatrix} x\mu_t^{Y} \\ y\mu_t^{C_B} \\ z\mu_t^{C_R} \end{bmatrix} + \begin{bmatrix} c_1 \\ c_2 \\ c_3 \end{bmatrix},\quad \begin{bmatrix} \sigma_t^{\tilde{Y}} \\ \sigma_t^{\tilde{C}_B} \\ \sigma_t^{\tilde{C}_R} \end{bmatrix} = \begin{bmatrix} x\sigma_t^{Y} \\ y\sigma_t^{C_B} \\ z\sigma_t^{C_R} \end{bmatrix} \tag{42}$$

Let $[\tilde{Y}_c, \tilde{C}_{B,c}, \tilde{C}_{R,c}]^T$ be the three components of $[\tilde{R}_c, \tilde{G}_c, \tilde{B}_c]^T$ in $\tilde{Y}\tilde{C}_B\tilde{C}_R$ space, and $[Y_c, C_{B,c}, C_{R,c}]^T$ be the three components of $[R_c, G_c, B_c]^T$ in YCBCR space. By inserting (Eq. 40), (Eq. 41) and (Eq. 42) in (Eq. 12), we deduce that

$$\begin{bmatrix} \tilde{Y}_c \\ \tilde{C}_{B,c} \\ \tilde{C}_{R,c} \end{bmatrix} = \begin{bmatrix} xY_c \\ yC_{B,c} \\ zC_{R,c} \end{bmatrix} + \begin{bmatrix} c_1 \\ c_2 \\ c_3 \end{bmatrix} \tag{43}$$

      From (Eq. 6) we have

$$\begin{bmatrix} \tilde{R}_c \\ \tilde{G}_c \\ \tilde{B}_c \end{bmatrix} = \begin{bmatrix} 1.0000x^{-1} & 0.0000 & 1.4020z^{-1} \\ 1.0000x^{-1} & -0.3441y^{-1} & -0.7141z^{-1} \\ 1.0000x^{-1} & 1.7720y^{-1} & 0.0000 \end{bmatrix}\left(\begin{bmatrix} \tilde{Y}_c \\ \tilde{C}_{B,c} \\ \tilde{C}_{R,c} \end{bmatrix} - \begin{bmatrix} c_1 \\ c_2 \\ c_3 \end{bmatrix}\right) \tag{44}$$

      Thus, by inserting (Eq. 43) in (Eq. 44), we can derive that

$$\begin{aligned}
\begin{bmatrix} \tilde{R}_c \\ \tilde{G}_c \\ \tilde{B}_c \end{bmatrix} &= \begin{bmatrix} 1.0000 & 0.0000 & 1.4020 \\ 1.0000 & -0.3441 & -0.7141 \\ 1.0000 & 1.7720 & 0.0000 \end{bmatrix}\begin{bmatrix} x^{-1} & 0 & 0 \\ 0 & y^{-1} & 0 \\ 0 & 0 & z^{-1} \end{bmatrix}\left(\begin{bmatrix} xY_c + c_1 \\ yC_{B,c} + c_2 \\ zC_{R,c} + c_3 \end{bmatrix} - \begin{bmatrix} c_1 \\ c_2 \\ c_3 \end{bmatrix}\right) \\
&= \begin{bmatrix} 1.0000 & 0.0000 & 1.4020 \\ 1.0000 & -0.3441 & -0.7141 \\ 1.0000 & 1.7720 & 0.0000 \end{bmatrix}\begin{bmatrix} x^{-1} & 0 & 0 \\ 0 & y^{-1} & 0 \\ 0 & 0 & z^{-1} \end{bmatrix}\begin{bmatrix} xY_c \\ yC_{B,c} \\ zC_{R,c} \end{bmatrix} \\
&= \begin{bmatrix} 1.0000 & 0.0000 & 1.4020 \\ 1.0000 & -0.3441 & -0.7141 \\ 1.0000 & 1.7720 & 0.0000 \end{bmatrix}\begin{bmatrix} Y_c \\ C_{B,c} \\ C_{R,c} \end{bmatrix} = \begin{bmatrix} R_c \\ G_c \\ B_c \end{bmatrix}
\end{aligned} \tag{45}$$

      This finishes the proof.


      Acknowledgments

The author thanks everyone who contributed images to this chapter. The author also thanks TNO Human Factors, ImageFusion.org, Pics4Learning.com, BigFoto.com, and FreeFoto.com for providing the test images.

      References

1. Cohen, A., Daubechies, I., & Feauveau, J. C. (1992). Biorthogonal Bases of Compactly Supported Wavelets. Commun. Pure Appl. Math., 45, 485-560.
2. Daubechies, I. (1992). Ten Lectures on Wavelets. SIAM, Philadelphia, PA.
3. Jack, K. (2001). Video Demystified, 3rd ed. LLH Technology Publishing, Eagle Rock, VA.
4. Li, G., & Wang, K. (2007a). Merging Infrared and Color Visible Images with a Contrast Enhanced Fusion Method. Proc. SPIE, 6571, 657108-1-657108-12.
5. Li, G., & Wang, K. (2007b). Applying Daytime Colors to Nighttime Imagery with an Efficient Color Transfer Method. Proc. SPIE, 6559, 65590L-1-65590L-12.
6. Li, G., Xu, S., & Zhao, X. (2010a). An Efficient Color Transfer Algorithm for Recoloring Multiband Night Vision Imagery. Proc. SPIE, 7689, 76890A-1-76890A-12.
7. Li, G., Xu, S., & Zhao, X. (2010b). Fast Color-Transfer-Based Image Fusion Method for Merging Infrared and Visible Images. Proc. SPIE, 7710, 77100S-1-77100S-12.
8. Li, Z., Jing, Z., Yang, X., et al. (2005). Color Transfer Based Remote Sensing Image Fusion Using Non-separable Wavelet Frame Transform. Pattern Recognition Letters, 26(13), 2006-2014.
9. McDaniel, R. V., Scribner, D. A., Krebs, W. K., et al. (1998). Image Fusion for Tactical Applications. Proc. SPIE, 3436, 685-695.
10. Neelamani, R., de Queiroz, R., Fan, Z., et al. (2006). JPEG Compression History Estimation for Color Images. IEEE Trans. Image Process., 15(6), 1365-1378.
11. Poynton, C. (2003). Digital Video and HDTV: Algorithms and Interfaces. Morgan Kaufmann, San Francisco, CA.
12. Pratt, W. K. (2001). Digital Image Processing, 3rd ed. Wiley, New York.
13. Piella, G. (2003). A General Framework for Multiresolution Image Fusion: From Pixels to Regions. Inf. Fusion, 4, 259-280.
14. Ruderman, D. L., Cronin, T. W., & Chiao, C. C. (1998). Statistics of Cone Responses to Natural Images: Implications for Visual Coding. J. Optical Soc. of America, 15(8), 2036-2045.
15. Reinhard, E., Ashikhmin, M., Gooch, B., et al. (2001). Color Transfer between Images. IEEE Comput. Graph. Appl., 21(5), 34-41.
16. Scribner, D. A., Schuler, J. M., Warren, P. R., et al. (1998). Infrared Color Vision: Separating Objects from Backgrounds. Proc. SPIE, 3379, 2-13.
17. Skodras, A., Christopoulos, C., & Ebrahimi, T. (2001). The JPEG 2000 Still Image Compression Standard. IEEE Signal Processing Mag., 18(5), 36-58.
18. Toet, A. (2003). Natural Colour Mapping for Multiband Nightvision Imagery. Inf. Fusion, 4, 155-166.
19. Tsagaris, V., & Anastassopoulos, V. (2005). Fusion of Visible and Infrared Imagery for Night Color Vision. Displays, 26(4-5), 191-196.
20. Toet, A., & Hogervorst, M. A. (2008). Portable Real-time Color Night Vision. Proc. SPIE, 6974, 697402-1-697402-12.
21. Toet, A., & Hogervorst, M. A. (2009). The TRICLOBS Portable Triband Lowlight Color Observation System. Proc. SPIE, 7345, 734503-1-734503-11.
22. Wang, L., Zhao, Y., Jin, W., et al. (2007). Real-time Color Transfer System for Low-light Level Visible and Infrared Images in YUV Color Space. Proc. SPIE, 6567, 65671G-1-65671G-8.
23. Zhang, Z., & Blum, R. S. (1999). A Categorization and Study of Multiscale-Decomposition-Based Image Fusion Schemes with a Performance Study for a Digital Camera Application. Proc. IEEE, 87(8), 1315-1326.
24. Zheng, Y., & Essock, E. A. (2008). A Local-coloring Method for Night-vision Colorization Utilizing Image Analysis and Fusion. Inf. Fusion, 9(2), 186-199.
