Although Toet’s work implies that Reinhard’s lαβ color transfer method can be successfully applied to image fusion, it is difficult to develop a fast color image fusion algorithm based on this color transfer technique. The main reason is that it is restricted to the nonlinear lαβ space (see Appendix A). Because this color space is logarithmic, the transformation between RGB and lαβ spaces must pass through the LMS and logLMS spaces, which increases the system’s storage requirements and computational time. Moreover, since the dynamic range of the achromatic component in lαβ space is very different from that of a normal grayscale image, it is inconvenient to enhance the luminance contrast of the final color fused image in lαβ space with conventional methods, such as directly replacing the luminance component of the color fused image with a high contrast grayscale fused image.
To eliminate the limitations mentioned above, we (Li et al., 2010a) employed YCBCR space to implement Reinhard’s color transfer scheme and applied the YCBCR color transfer technique to the fusion of infrared and visible images. Through a series of mathematical derivations and proofs, we (Li et al., 2010b) further presented a fast color-transfer-based image fusion algorithm. Our experiments demonstrate that the performance of the fast color-transfer-based image fusion method is superior to that of other related color image fusion methods, including Toet’s approach.
The rest of this chapter is organized as follows. Section 2 reviews Reinhard’s lαβ color transfer method. Section 3 outlines our YCBCR color transfer method. Section 4 describes two basic image fusion methods based on the YCBCR color transfer technique. Section 5 introduces our fast color-transfer-based image fusion method. Experimental results and associated discussions are provided in Section 6. Finally, in Section 7, conclusions are drawn and future work is suggested.
2. Reinhard’s lαβ color transfer method

Studies by Reinhard et al. (Reinhard et al., 2001) have found that straightforward color statistics can capture some important subjective notions of style and appearance in images. Their work shows how to transfer these statistics from one image to another in order to copy some of the atmosphere and mood of a good picture. With the pixel data represented in lαβ space, Reinhard et al. transfer color statistics from the target image to the source image by applying a linear map to each axis separately:
$$\theta_s^* = \frac{\sigma_t^{\theta}}{\sigma_s^{\theta}}\left(\theta_s - \mu_s^{\theta}\right) + \mu_t^{\theta}, \quad \text{for } \theta = l, \alpha, \beta \qquad \text{(Eq. 1)}$$
where the indexes ‘s’ and ‘t’ refer to the source and target images, respectively, and θ denotes an individual color component of an image in lαβ space. (μsθ, σsθ) and (μtθ, σtθ) are the means and standard deviations of the source and target images, respectively, both over the channel θ. Following this step, the resulting source image data is converted back to RGB space for display. After the transform described in (Eq. 1), the first- and second-order statistics of the color distribution of the source image conform to those of the target color image. This color statistics matching procedure is quite simple, yet it can successfully transfer one image’s color characteristics to another.
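To make the matching rule concrete, the per-channel statistics matching of (Eq. 1) can be sketched in a few lines of NumPy. The function name and the H x W x 3 array layout are illustrative choices, not part of Reinhard et al.’s published implementation; the arrays are assumed to already be expressed in the chosen color space (lαβ for Reinhard’s method, YCBCR for ours).

```python
import numpy as np

def match_channel_statistics(source, target):
    """Match the mean and standard deviation of each channel of `source`
    to those of `target`, as in (Eq. 1). Both arrays are H x W x 3 floats."""
    result = np.empty_like(source, dtype=np.float64)
    for ch in range(source.shape[2]):
        mu_s, sigma_s = source[..., ch].mean(), source[..., ch].std()
        mu_t, sigma_t = target[..., ch].mean(), target[..., ch].std()
        # theta_s* = (sigma_t / sigma_s) * (theta_s - mu_s) + mu_t
        result[..., ch] = (sigma_t / (sigma_s + 1e-12)) * (source[..., ch] - mu_s) + mu_t
    return result
```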
3. YCBCR color transfer method

Instead of using the nonlinear lαβ space, our proposed YCBCR color transfer method (Li et al., 2010a) transfers the color distribution of the target image to the source image in the linear YCBCR color space. The forward and backward YCBCR transformations (Neelamani et al., 2006; Skodras et al., 2001) are achieved by means of (Eq. 2) and (Eq. 3), respectively.
where Y denotes the luminance, and CB and CR are two chromatic channels that correspond to the color difference model (Poynton, 2003; Jack, 2001). As shown in (Eq. 4), CB and CR stand for the blue and red color difference channels, respectively.
$$C_B = \frac{0.5}{0.886}\,(B - Y), \qquad C_R = \frac{0.5}{0.701}\,(R - Y) \qquad \text{(Eq. 4)}$$
Since the YCBCR transformation is linear, its computational complexity is far lower than that of the lαβ conversion. Let N be the number of columns and rows of the input RGB image. The lαβ transformation requires a total of 22N² additions, 34N² multiplications, 3N² logarithm and 3N² exponential computations. In contrast, the YCBCR transformation avoids the logarithm and exponential operations; from (Eq. 2) and (Eq. 3), we can observe that only 10N² additions and 13N² multiplications are required for its implementation. Obviously, the simplicity of the YCBCR transformation enables a more efficient implementation of color transfer between images. Like Reinhard’s scheme, the quality of the YCBCR color transfer results also depends on the compositional similarity of the two images.
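For readers who want to experiment, the following sketch implements forward and inverse YCBCR transforms. Since (Eq. 2) and (Eq. 3) are not reproduced above, the standard JPEG/JFIF full-range YCBCR matrices (without the usual chroma offset) are used here as an assumption; they are at least consistent with the color difference definitions in (Eq. 4).

```python
import numpy as np

# Standard JPEG/JFIF full-range YCBCR matrices, assumed here to correspond
# to (Eq. 2); Cb/Cr are kept zero-centered (no +128 offset), which does not
# affect the statistics matching step.
RGB_TO_YCBCR = np.array([[ 0.299,     0.587,     0.114   ],
                         [-0.168736, -0.331264,  0.5     ],
                         [ 0.5,      -0.418688, -0.081312]])
YCBCR_TO_RGB = np.linalg.inv(RGB_TO_YCBCR)

def rgb_to_ycbcr(rgb):
    """Forward YCBCR transform of an H x W x 3 float RGB array."""
    return rgb @ RGB_TO_YCBCR.T

def ycbcr_to_rgb(ycbcr):
    """Inverse YCBCR transform back to RGB."""
    return ycbcr @ YCBCR_TO_RGB.T
```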
The YCBCR transformation can be extended into a general formalism defined in (Eq. 5) and (Eq. 6). For color transfer, using color spaces conforming to this general YCBCR framework, such as YUV space (Pratt, 2001), produces the same recoloring results as using YCBCR space.
where x, y, z, c1, c2 and c3 are constants, and x, y and z are nonzero. This fact can be proved by the following proposition.
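Because (Eq. 5) and (Eq. 6) are not reproduced above, the following explicit form, read off from (Eq. 41) in Appendix C, is offered as a reconstruction of the forward part of the generalized transformation:

$$\tilde{Y} = xY + c_1, \qquad \tilde{C}_B = yC_B + c_2, \qquad \tilde{C}_R = zC_R + c_3$$

with the backward transformation obtained by solving these relations for Y, CB and CR.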
Proposition 1: Let [R˜s*,G˜s*,B˜s*]T and [Rs*,Gs*,Bs*]T be the recolored source images obtained by the Y˜C˜BC˜R and YCBCR based color transfer methods, respectively. Suppose the two cases use the same target image, then for a fixed source image,
$$[\tilde{R}_s^*, \tilde{G}_s^*, \tilde{B}_s^*]^T = [R_s^*, G_s^*, B_s^*]^T \qquad \text{(Eq. 7)}$$
Proof: See Appendix B.
The Y˜C˜BC˜R transformation specified in (Eq. 5) and (Eq. 6) can be regarded as an extension of the YCBCR transformation. If x = y = z = 1 and c1 = c2 = c3 = 0, the Y˜C˜BC˜R transformation becomes equivalent to the YCBCR transformation. If x = 1, y = 0.872, z = 1.23, and c1 = c2 = c3 = 0, the Y˜C˜BC˜R transformation is equivalent to the YUV transformation, as follows.
4. Basic fusion method based on YCBCR color transfer technique
With the YCBCR color transfer method, it is very easy to develop color image fusion methods for merging infrared and visible images. We introduced two basic fusion approaches: one named StaCT (Standard Color-Transfer-Based Fusion Method) and the other called CECT (Contrast Enhanced Version of the Color-Transfer-Based Fusion Method) (Li et al., 2010b). Both methods require a source false color fused image; we employed the NRL (Naval Research Laboratory) scheme (Scribner et al., 1998; McDaniel et al., 1998) to generate it. The NRL algorithm is very simple and fast, which makes it well suited to developing fast image fusion methods.
4.1. StaCT method
The StaCT method uses the YCBCR color transfer method to directly recolor the source false color fused image. Its complete steps are as follows.
1. Construct the source false color fused image [Rf, Gf, Bf]T by using the NRL scheme:
$$[R_f, G_f, B_f]^T = [\mathrm{IR}, \mathrm{Vis}, \mathrm{Vis}]^T \qquad \text{(Eq. 10)}$$
where IR and Vis represent the input infrared and visible images, respectively.
2. Convert the image [Rf, Gf, Bf]T to YCBCR space to produce the source YCBCR components [Ys, CB,s, CR,s]T (Eq. 11).
3. Stretch the source YCBCR components so that their color statistics match those of the target daylight color image:

$$\theta_c = \frac{\sigma_t^{\theta}}{\sigma_s^{\theta}}\left(\theta_s - \mu_s^{\theta}\right) + \mu_t^{\theta}, \quad \text{for } \theta = Y, C_B, C_R \qquad \text{(Eq. 12)}$$

where Yc, CB,c and CR,c are the three color components of the final color fused image [Rc, Gc, Bc]T in YCBCR space. The indexes ‘s’ and ‘t’ stand for the source and target images, respectively. (μsθ, σsθ) and (μtθ, σtθ), where θ = Y, CB, CR, are the means and standard deviations of the source and target images, respectively, both over the channel θ.
4. Transform the result back into RGB representation to obtain the final color fused image [Rc, Gc, Bc]T (Eq. 13).
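As an illustration (not the published implementation), the four steps can be sketched end to end, reusing the illustrative helpers rgb_to_ycbcr, ycbcr_to_rgb and match_channel_statistics from the earlier sketches:

```python
import numpy as np

def stact_fuse(ir, vis, target_rgb):
    """StaCT sketch: NRL false-color fusion followed by YCBCR color transfer.
    `ir` and `vis` are registered H x W images; `target_rgb` is a daylight
    reference image (its size may differ from the inputs)."""
    ir = np.asarray(ir, dtype=np.float64)
    vis = np.asarray(vis, dtype=np.float64)
    # Step 1: NRL false-color fused image [Rf, Gf, Bf] = [IR, Vis, Vis] (Eq. 10).
    false_color = np.dstack([ir, vis, vis])
    # Step 2: convert source and target to YCBCR space.
    src_ycbcr = rgb_to_ycbcr(false_color)
    tgt_ycbcr = rgb_to_ycbcr(np.asarray(target_rgb, dtype=np.float64))
    # Step 3: match first- and second-order color statistics (Eq. 12).
    matched = match_channel_statistics(src_ycbcr, tgt_ycbcr)
    # Step 4: transform back to RGB to obtain the final color fused image (Eq. 13).
    return ycbcr_to_rgb(matched)
```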
Toet’s algorithm (Toet, 2003) produces the color fused imagery by using lαβ space. The main difference between the StaCT method and Toet’s method lies in the color space conversion of the images. As explained above, the linear YCBCR transformation has a lower computational complexity than the nonlinear lαβ transformation. Hence, the StaCT method offers a simpler and more efficient way to produce a color fused image than Toet’s method.
4.2. CECT method
The luminance contrast reduction issue, as described by Toet (Toet, 2003), may arise when the contrast of an image detail varies strongly among the different bands. In some conditions a detail may be represented with opposite contrast in different spectral bands. The combination of the individual image bands into a single color image may therefore significantly reduce the luminance contrast of an image detail. As a result, a detail that is noticeable in the individual image bands may be much less visible in the final color representation, due to the lack of luminance contrast. An appropriate grayscale fused image obtained by combining the individual sensor images can preserve all relevant contrast details of the individual bands. Hence, we can enhance the luminance contrast of the final color fused image by replacing the luminance component of the source false color fused image with the grayscale fused image. Based on this strategy, we proposed the CECT method (Li et al., 2010b). Its complete steps are as follows.
1. Construct the source false color fused image [Rf, Gf, Bf]T by using the NRL scheme given in (Eq. 10).
2. Convert the image [Rf, Gf, Bf]T to YCBCR space as in (Eq. 11).
3. Replace the luminance component Yf with the grayscale fused image F; then F, together with the CB,f and CR,f components, is used directly as the source YCBCR components:
$$[Y_s, C_{B,s}, C_{R,s}]^T = [F, C_{B,f}, C_{R,f}]^T \qquad \text{(Eq. 14)}$$
4. Stretch the source YCBCR components to make their statistics match the color statistics of the target daylight color image as shown in (Eq. 12).
5. Transform the result back into RGB representation as in (Eq. 13).
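The CECT steps differ from StaCT only in the luminance replacement of step 3. A corresponding sketch, again using the illustrative helpers defined earlier and with `grayscale_fuse` standing in for any grayscale fusion scheme:

```python
import numpy as np

def cect_fuse(ir, vis, target_rgb, grayscale_fuse):
    """CECT sketch: like StaCT, but the luminance channel of the source
    false-color image is replaced by a grayscale fused image (Eq. 14)."""
    ir = np.asarray(ir, dtype=np.float64)
    vis = np.asarray(vis, dtype=np.float64)
    false_color = np.dstack([ir, vis, vis])          # NRL scheme (Eq. 10)
    src_ycbcr = rgb_to_ycbcr(false_color)            # (Eq. 11)
    src_ycbcr[..., 0] = grayscale_fuse(ir, vis)      # Y_f -> F (Eq. 14)
    tgt_ycbcr = rgb_to_ycbcr(np.asarray(target_rgb, dtype=np.float64))
    matched = match_channel_statistics(src_ycbcr, tgt_ycbcr)  # (Eq. 12)
    return ycbcr_to_rgb(matched)                     # (Eq. 13)
```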
The color spaces conforming to the Y˜C˜BC˜R framework defined in (Eq. 5) and (Eq. 6) can also be used in the CECT method. When the constant x is positive, using any form of Y˜C˜BC˜R space can yield the same color fused image as in YCBCR space. This result can be proved by the following proposition.
Proposition 2: Let [R˜c,G˜c,B˜c]T and [Rc,Gc,Bc]T be the color fused images obtained by the Y˜C˜BC˜R and YCBCR based CECT methods, respectively. Suppose the two cases use the same grayscale fusion method and target image. In addition, assume x defined in the Y˜C˜BC˜R transform is positive, then for a fixed pair of input images,
$$[\tilde{R}_c, \tilde{G}_c, \tilde{B}_c]^T = [R_c, G_c, B_c]^T \qquad \text{(Eq. 15)}$$
The proof of this proposition is similar to that of Proposition 1 and is therefore omitted here.
The achromatic component in lαβ space cannot be used directly for the luminance replacement process because its dynamic range is very different from that of the grayscale fused image. Therefore, Toet (Toet, 2003) suggested using HSV (Hue, Saturation, and Value) color space, in which the value component (V) has the same amplitude range as the grayscale fused image, to address this problem. The final high contrast color fused image is constructed by converting the recolored fused image into HSV space, replacing the value component by the high contrast grayscale fused image and then transforming back to RGB space. One can see that three color spaces (RGB, lαβ and HSV) are employed and four color space transformations (RGB to lαβ, lαβ to RGB, RGB to HSV, and HSV to RGB) are needed in this manner, which is inefficient relative to the CECT method. Taking advantage of the linear YCBCR space, the CECT method uses only two color spaces (RGB and YCBCR) and requires only two color space conversions (RGB to YCBCR and YCBCR to RGB) over the whole procedure to obtain the high contrast output image.
5. Fast fusion method based on YCBCR color transfer technique
For real-time image fusion systems, faster methods are always desirable. We proposed a fast algorithm, named AOCT (Architecture-Optimized Version of the Color-Transfer-Based Fusion Method), by mathematically optimizing the architecture of the CECT approach (Li et al., 2010b). Fig. 1 illustrates the fusion architecture of the AOCT approach; its complete steps are as follows.
Figure 1.
Fusion architecture of the AOCT method
1. Use [F, Vis-IR, IR-Vis]T to form the source YCBCR components:
$$[Y_s, C_{B,s}, C_{R,s}]^T = [F, \mathrm{Vis} - \mathrm{IR}, \mathrm{IR} - \mathrm{Vis}]^T \qquad \text{(Eq. 16)}$$
Steps 2 and 3 are the same as steps 4 and 5 of the CECT method. Thus,
2. Perform color statistics matching as in (Eq. 12).
3. Perform the inverse YCBCR transform to obtain the final color fused image as in (Eq. 13).
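For illustration, the three AOCT steps can be sketched as follows, reusing the earlier helper functions; `grayscale_fuse` is a placeholder for whichever grayscale fusion scheme is adopted:

```python
import numpy as np

def aoct_fuse(ir, vis, target_rgb, grayscale_fuse):
    """AOCT sketch: the source YCBCR components are formed directly from the
    grayscale fused image and the difference signals (Eq. 16), so no forward
    YCBCR transform of the source image is needed."""
    ir = np.asarray(ir, dtype=np.float64)
    vis = np.asarray(vis, dtype=np.float64)
    src_ycbcr = np.dstack([grayscale_fuse(ir, vis),  # Y_s = F
                           vis - ir,                 # C_B,s = Vis - IR
                           ir - vis])                # C_R,s = IR - Vis
    tgt_ycbcr = rgb_to_ycbcr(np.asarray(target_rgb, dtype=np.float64))
    matched = match_channel_statistics(src_ycbcr, tgt_ycbcr)  # (Eq. 12)
    return ycbcr_to_rgb(matched)                              # (Eq. 13)
```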
We have demonstrated that the AOCT method has the same performance as the CECT method (Li et al., 2010b). The PA (Pixel Averaging) and MR (Multiresolution) fusion schemes are adopted to produce the grayscale fused image in the AOCT method. The AOCT approach using the PA grayscale fusion algorithm is named P-AOCT. The AOCT approach using the MR grayscale fusion algorithm is called M-AOCT.
Similar to the CECT approach, the AOCT method also allows us to choose other color spaces conforming to the Y˜C˜BC˜R framework in (Eq. 5) and (Eq. 6). When the constants x, y and z are positive, using any form of Y˜C˜BC˜R space can produce the same color fusion result as in YCBCR space. This conclusion can be proved by the following proposition.
Proposition 3: Let [R˜c,G˜c,B˜c]T and [Rc,Gc,Bc]T be the color fused images obtained by the Y˜C˜BC˜R based and YCBCR based AOCT methods, respectively. Suppose the two cases use the same grayscale fusion method and target image. In addition, assume the constants x, y and z defined in the Y˜C˜BC˜R transformation are positive, then for a fixed pair of input images,
$$[\tilde{R}_c, \tilde{G}_c, \tilde{B}_c]^T = [R_c, G_c, B_c]^T \qquad \text{(Eq. 17)}$$
Proof: See Appendix C.
In contrast with the CECT method, the construction of the source chromatic components (CB,s and CR,s) in the AOCT method is much simpler. After performing the YCBCR transform and replacing the luminance component in the CECT method (i.e., by inserting (Eq. 10) and (Eq. 11) into (Eq. 14)), we arrive at
Let N be the number of columns and rows of the input images. According to (Eq. 18), the CECT method requires 4N² additions and 6N² multiplications to obtain the source chromatic components. The AOCT method skips the forward YCBCR transformation and directly uses the simple difference signals of the input images (that is, Vis − IR and IR − Vis) to form the source chromatic components. Equation (Eq. 16) shows that the AOCT method requires only 2N² additions in this construction process. Therefore, the AOCT method is faster and easier to implement than the CECT method.
The following strategies can help the AOCT method to be implemented in an extremely fast and memory-efficient way.

1. The PA grayscale fusion scheme computes the fused image as

$$F = 0.5\,(\mathrm{IR} + \mathrm{Vis}) \qquad \text{(Eq. 19)}$$

We have proved (Li et al., 2010b) that, when the P-AOCT method is adopted, the source luminance component Ys can be obtained directly as

$$Y_s = \mathrm{IR} + \mathrm{Vis} \qquad \text{(Eq. 20)}$$

the factor 0.5 can be dropped because the color statistics matching in (Eq. 12) is invariant to a positive scaling of Ys.
2. From (Eq. 16), the red color difference component is simply the negative of the blue one:

$$C_{R,s} = -C_{B,s} = -(\mathrm{Vis} - \mathrm{IR}) \qquad \text{(Eq. 21)}$$

From this, by applying the mean and standard deviation properties, we can derive that

$$\mu_s^{C_R} = -\mu_s^{C_B}, \qquad \sigma_s^{C_R} = \sigma_s^{C_B} \qquad \text{(Eq. 22)}$$

Equations (Eq. 21) and (Eq. 22) imply that, after computing CB,s, μsCB and σsCB, there is no need to recalculate CR,s, μsCR and σsCR; they can be obtained directly as the negatives of CB,s and μsCB, and as σsCB itself, respectively.
3. From (Eq. 12), we can see that only six target color statistical parameters (μtY, μtCB, μtCR, σtY, σtCB and σtCR) are required in the color statistics matching process. Hence, in practice, there is no real need to store target images. A system equipped with a look-up table of color statistical parameters for different types of backgrounds is sufficient to enable users to adjust the colors to their specific needs.
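Combining these three shortcuts, a memory-efficient P-AOCT sketch might look as follows; the dictionary of six precomputed target statistics and the reuse of the illustrative ycbcr_to_rgb helper are assumptions made for this sketch:

```python
import numpy as np

def p_aoct_fuse_fast(ir, vis, target_stats):
    """P-AOCT sketch exploiting the shortcuts above. `target_stats` holds the
    six precomputed target parameters: mu_Y, mu_CB, mu_CR, sigma_Y, sigma_CB,
    sigma_CR (e.g. taken from a look-up table for the scene type)."""
    ir = np.asarray(ir, dtype=np.float64)
    vis = np.asarray(vis, dtype=np.float64)
    y_s = ir + vis                  # (Eq. 20): the 0.5 of PA fusion is dropped
    cb_s = vis - ir                 # (Eq. 16)
    mu_y, sigma_y = y_s.mean(), y_s.std()
    mu_cb, sigma_cb = cb_s.mean(), cb_s.std()
    # (Eq. 21)/(Eq. 22): C_R,s and its statistics follow from C_B,s for free.
    y_c = (target_stats["sigma_Y"] / (sigma_y + 1e-12)) * (y_s - mu_y) + target_stats["mu_Y"]
    cb_c = (target_stats["sigma_CB"] / (sigma_cb + 1e-12)) * (cb_s - mu_cb) + target_stats["mu_CB"]
    cr_c = (target_stats["sigma_CR"] / (sigma_cb + 1e-12)) * (-cb_s + mu_cb) + target_stats["mu_CR"]
    return ycbcr_to_rgb(np.dstack([y_c, cb_c, cr_c]))
```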
6. Experimental results and discussions

6.1. Color transfer between natural color images

To demonstrate the potential of the YCBCR color transfer method, we applied it to some landscape photographs of scenes including trees, sunsets, and mountains. The lαβ color transfer scheme (Reinhard et al., 2001) was selected as a comparison. The source image of each scene is shown in the left column of Fig. 2, and the second column shows the respective target images. Note that each source image has a composition similar to that of the corresponding target image. The third and right columns illustrate the results of applying the lαβ and YCBCR color transfer methods, respectively, to the source image in the left column with the corresponding image in the second column as the target. lαβ space works very well on the task of recoloring these natural color photographs, which provides some validation of Reinhard et al.’s work (Reinhard et al., 2001). The images produced using YCBCR space look a little lighter than those produced in lαβ space, but the results are also pleasing and reasonable. This indicates that the YCBCR color transfer method can be applied to many image types.
6.2. Comparison among different color image fusion methods
Fig. 3 shows some examples of color image fusion methods related to the color transfer technique, including the NRL (Scribner et al., 1998; McDaniel et al., 1998), Toet’s (Toet, 2003), improved Toet’s, StaCT, P-AOCT and M-AOCT schemes. The CECT approach has the same fusion performance as the AOCT method (Li et al., 2010b); thus, we do not discuss the CECT approach in our experiments.
Figure 3.
Fusion results of applying different color image fusion methods to the UN Camp images of a nighttime scene representing a building, a path, heather, and a person walking along the fence. (a) The input 3-5 μm midwave infrared image. (b) The input visible image. (c) The target image. (d) The result of the NRL method. (e)-(i) are the results of the Toet’s method, the improved Toet’s method, the StaCT method, the P-AOCT method and the M-AOCT method respectively, all with (c) as the target image. (The target image courtesy of Alexander Toet.)
As described above, Toet’s method (Toet, 2003) produces a color fused image by using the lαβ color transfer approach to recolor a source false color fused image. In his study, the source false color fused image is generated by mapping the infrared image and the visible image to the green and blue channels of an RGB image, respectively, while the red channel is set to zero (black). Similar to Toet’s approach, the improved Toet’s method also employs lαβ space to perform color transfer; the difference is that the improved Toet’s method uses the NRL scheme to produce the source false color fused image.
In this study, we employ a discrete wavelet transform based fusion scheme (Li et al., 2007a) to obtain the grayscale fused image in the M-AOCT method, and use a 4-level wavelet transform with ‘5-3’ biorthogonal filters (Cohen et al., 1992; Daubechies, 1992) to implement the MR decompositions of the input images. The approximation images at the highest level (lowest resolution) are combined by averaging. The detail images in each high frequency band at each decomposition level are fused by a weighted average fusion rule based on the local amplitude ratio. This weighted average fusion rule yields a fused image with fewer ‘ringing’ artifacts than the widely used maximum selection fusion rule (Zhang & Blum, 1999; Piella, 2003).
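For orientation, here is a sketch of this kind of wavelet-based grayscale fusion using PyWavelets. The ‘bior2.2’ wavelet and the per-coefficient amplitude-ratio weighting below are stand-ins chosen for illustration; they are not necessarily the exact 5-3 filters and local-ratio rule of (Li et al., 2007a).

```python
import numpy as np
import pywt

def mr_grayscale_fuse(ir, vis, wavelet="bior2.2", levels=4):
    """Wavelet-based grayscale fusion sketch: average the coarsest
    approximation, weighted-average the detail coefficients according to
    their amplitude ratio."""
    c1 = pywt.wavedec2(np.asarray(ir, dtype=np.float64), wavelet, level=levels)
    c2 = pywt.wavedec2(np.asarray(vis, dtype=np.float64), wavelet, level=levels)
    fused = [0.5 * (c1[0] + c2[0])]                  # approximation: plain average
    for (h1, v1, d1), (h2, v2, d2) in zip(c1[1:], c2[1:]):
        bands = []
        for a, b in zip((h1, v1, d1), (h2, v2, d2)):
            w = np.abs(a) / (np.abs(a) + np.abs(b) + 1e-12)  # amplitude-ratio weight
            bands.append(w * a + (1.0 - w) * b)
        fused.append(tuple(bands))
    return pywt.waverec2(fused, wavelet)
```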
The original registered infrared (3-5 μm) and visible test images were supplied by Toet of TNO Human Factors and are available online at www.imagefusion.org. Since corresponding daylight color photographs are not available for these images, we adopt an arbitrary color image as the target.
Fig. 3 corresponds to a nighttime scene representing a building, a path, heather, and a person walking along the fence. The input infrared (3-5 μm) and visible images are shown in Fig. 3(a) and (b), respectively. Fig. 3(c) shows an arbitrary daytime color image with a color distribution similar to that of the source scene. Fig. 3(d) shows the false color fused image constructed by the NRL method. Most salient features from the inputs are clearly visible in this image, but the color appearance is rather unnatural. Fig. 3(e) shows the fusion result obtained by Toet’s method with Fig. 3(c) as the target. The image has a comparatively natural color appearance, but its color characteristics are only roughly close to those of the target image: it does not preserve the green colors of the trees in the target image very well, and some heather appears yellow. Moreover, an excessive saturation phenomenon emerges in the resulting color fused image, producing a glaring white that severely hides some salient detail information contained in the input images, such as the textures of the road. In addition, some image details, such as the poles of the fence and the outlines of the building, are represented with low contrast in Fig. 3(e). Fig. 3(f) shows the fused image produced by the improved Toet’s method with Fig. 3(c) as the target. The image takes on the target image’s color characteristics and has high contrast, but the excessive saturation phenomenon does not disappear, and some salient details, such as the textures of the road, are hidden by the glaring white. Fig. 3(g) shows the fused image generated by the StaCT method with Fig. 3(c) as the target. This image not only gains the natural color appearance of the target image, but also contains most of the salient information from the input images and avoids the excessive saturation phenomenon. Fig. 3(h) shows the fused image produced by the P-AOCT method with Fig. 3(c) as the target. It has better visual quality than the StaCT result shown in Fig. 3(g): the visibility of image details in Fig. 3(h), such as the poles of the fence and the outlines of the person, is evidently higher than with the StaCT algorithm. Fig. 3(i) illustrates the fused image achieved by the M-AOCT method with Fig. 3(c) as the target. Clearly, the M-AOCT approach has the best overall performance.
The M-AOCT method employs the MR fusion scheme, while Toet’s and the improved Toet’s methods utilize the lαβ color transfer strategy. Therefore, the computational complexities of these three methods are all higher than that of the P-AOCT method. In fact, the StaCT method has the same complexity as the P-CECT method (the CECT approach using the PA fusion algorithm to implement grayscale fusion). Thus, according to the algorithm analysis in Section 5, we can confirm that the computational cost of the P-AOCT method is lower than that of the StaCT method. Hence, of the five color-transfer-based image fusion algorithms (the Toet’s, improved Toet’s, StaCT, P-AOCT, and M-AOCT methods), the P-AOCT method has the lowest implementation complexity.
Although it can be seen from Fig. 3 that the salient information from the input images is represented with higher contrast in the fused image obtained by the M-AOCT method than in those produced by the other approaches, this superior performance comes at the cost of increased computational complexity of the fusion process. In contrast, the P-AOCT method is much faster and easier to implement. Moreover, this algorithm can provide a visually pleasing color fused image with adequate contrast, in which the salient information contained in the input images is also represented quite well. Hence, unless there is a special requirement, in practice it is not necessary to utilize the M-AOCT method to merge images; the low computational-cost P-AOCT method is generally sufficient to fulfill user needs.
6.3. Choice of target image
For the AOCT method, the actual choice of the target image is not critical. The method can provide a natural looking color fused image as long as the color distribution of the target image is to some extent similar to that of the source scene.
Figure 4.
Fusion results of applying the P-AOCT method with different target images to the Dune images of a nighttime scene representing a person walking over a dune area covered with semi-shrubs and sandy paths. Top left: the input 3-5 μm midwave infrared image. Top right: the input visible image. Middle: four different target images. Bottom: the fusion results corresponding to each target image. (The two left target images courtesy of www.pics4learning.com, and the two right target images courtesy of www.bigfoto.com.)
To demonstrate this fact, we applied the P-AOCT method to merge the Dune images with different target images. The original registered infrared (3-5 μm) and visible test images are shown at the top left and right of Fig. 4, respectively. In this scene, a person is walking over a dune area covered with semi-shrubs and sandy paths. These images were also supplied by Toet of TNO Human Factors and are available online at www.imagefusion.org. The second row of Fig. 4 shows the different target images. The third row illustrates the results of applying the P-AOCT approach to the input images in the top row with the corresponding image in the second row as the target. The second column of Fig. 4 shows an interesting example in which a color photograph of a grassland area with an elk in it was adopted as the target image. The content of the target image in this case is quite dissimilar to that of the source scene, but the target image has a color distribution similar to that of the source scene; as a result, the corresponding color fused image still has a fairly natural appearance. Another example, depicted in the right column of Fig. 4, shows that the P-AOCT method fails when the color compositions of the target image and the source scene are too dissimilar. In this case, the target image also displays a dune-like scene, but has more bright blue sky in the background. Consequently, the appearance of the resulting fused image is quite unnatural and the sandy paths are rendered in an unreasonable blue.
From the above examples, we can see that the depicted scenes of the target and source images do not have to be identical, as long as their color distributions resemble each other to some extent. In practice, surveillance systems usually register a fixed scene, so a daylight color image of the same scene that is being monitored could be used as an optimal target image.
7. Conclusions and future work

In this chapter, we introduced color transfer techniques and some typical image fusion methods based on the color transfer technique. More importantly, we presented a fast image fusion algorithm based on the YCBCR color transfer technique, named AOCT. Building on the PA and MR grayscale image fusion schemes, we developed two solutions, namely the P-AOCT and M-AOCT methods, to fulfill different user needs. The P-AOCT method answers the need for easy implementation and speed of use. The M-AOCT method answers the need for high quality fused products. Experimental results demonstrate that the AOCT method can effectively produce a natural-appearing “daytime-like” color fused image with good contrast. Even the low-complexity P-AOCT method can provide a satisfactory result.
Another important contribution of this chapter is that we have mathematically proved some useful propositions about color-transfer-based image fusion. These results clarify that other color spaces founded on luminance, blue color difference and red color difference components, such as YUV space, can be used as alternatives to YCBCR space in the image fusion approaches based on the YCBCR color transfer technique, including the StaCT, CECT, and AOCT methods.
Currently, the AOCT method only supports the fusion of imagery from two sensors. Future research will extend the AOCT method to accept imagery from three or four spectral bands, e.g. the visible, short-wave infrared, mid-wave infrared, and long-wave infrared bands. In addition, designing quantitative measures of color image fusion performance is another worthwhile and challenging research topic.
Appendix A. RGB to lαβ transform

In this Appendix we present the RGB to lαβ transform (Ruderman et al., 1998). This transform is derived from a principal component transform of a large ensemble of hyperspectral images that represents a good cross-section of natural scenes. The resulting data representation is compact and symmetrical, and provides automatic decorrelation to higher than second order.
The actual transform is as follows. First the RGB tristimulus values are converted to LMS space by
The data in this color space shows a great deal of skew, which is largely eliminated by taking a logarithmic transform:
$$\mathbf{L} = \log L, \qquad \mathbf{M} = \log M, \qquad \mathbf{S} = \log S \qquad \text{(Eq. 24)}$$
The inverse transform from LMS cone space back to RGB space is as follows. First, ten is raised to the power of the logarithmic pixel values to go back to linear LMS space:
$$L = 10^{\mathbf{L}}, \qquad M = 10^{\mathbf{M}}, \qquad S = 10^{\mathbf{S}} \qquad \text{(Eq. 25)}$$
Then, the data can be converted from LMS to RGB using the inverse transform of (Eq. 23):
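For comparison with the YCBCR route, the whole RGB to lαβ round trip can be sketched as follows. The matrix entries are the values commonly quoted from Reinhard et al. (2001) and are reproduced here as assumptions, since (Eq. 23) and (Eq. 26) themselves are not shown above.

```python
import numpy as np

# Matrices as commonly quoted from Reinhard et al. (2001); treated here as
# assumed values for illustration only.
RGB_TO_LMS = np.array([[0.3811, 0.5783, 0.0402],
                       [0.1967, 0.7244, 0.0782],
                       [0.0241, 0.1288, 0.8444]])
LMS_TO_LAB = np.diag([1/np.sqrt(3), 1/np.sqrt(6), 1/np.sqrt(2)]) @ \
             np.array([[1.0,  1.0,  1.0],
                       [1.0,  1.0, -2.0],
                       [1.0, -1.0,  0.0]])

def rgb_to_lalphabeta(rgb, eps=1e-6):
    """RGB -> LMS -> log10(LMS) -> l-alpha-beta (Eq. 23 and Eq. 24)."""
    lms = np.asarray(rgb, dtype=np.float64) @ RGB_TO_LMS.T
    return np.log10(lms + eps) @ LMS_TO_LAB.T

def lalphabeta_to_rgb(lab):
    """l-alpha-beta -> log10(LMS) -> LMS via 10**x (Eq. 25) -> RGB (Eq. 26)."""
    log_lms = np.asarray(lab, dtype=np.float64) @ np.linalg.inv(LMS_TO_LAB).T
    return (10.0 ** log_lms) @ np.linalg.inv(RGB_TO_LMS).T
```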
Appendix B. Proof of Proposition 1

where [Y˜s, C˜B,s, C˜R,s]T and [Ys, CB,s, CR,s]T are the three components of the source image in Y˜C˜BC˜R and YCBCR spaces, respectively, and [Y˜t, C˜B,t, C˜R,t]T and [Yt, CB,t, CR,t]T are the three components of the target image in Y˜C˜BC˜R and YCBCR spaces, respectively. We know that the mean and standard deviation respectively have the following properties:
$$\mu(\lambda X + c) = \lambda\,\mu(X) + c \qquad \text{and} \qquad \sigma(\lambda X + c) = |\lambda|\,\sigma(X) \qquad \text{(Eq. 32)}$$
where λ and c are constants, λ, c ∈ ℝ, and X is a random variable. Thus, for the original source and target images, the means of their achromatic components in Y˜C˜BC˜R and YCBCR spaces respectively satisfy
$$\mu_s^{\tilde{Y}} = x\mu_s^{Y} + c_1 \qquad \text{and} \qquad \mu_t^{\tilde{Y}} = x\mu_t^{Y} + c_1 \qquad \text{(Eq. 33)}$$
The corresponding standard deviations respectively satisfy
$$\sigma_s^{\tilde{Y}} = |x|\,\sigma_s^{Y} \qquad \text{and} \qquad \sigma_t^{\tilde{Y}} = |x|\,\sigma_t^{Y} \qquad \text{(Eq. 34)}$$
Let [Y˜s*, C˜B,s*, C˜R,s*]T be the three components of [R˜s*, G˜s*, B˜s*]T in Y˜C˜BC˜R space, and [Ys*, CB,s*, CR,s*]T be the three components of [Rs*, Gs*, Bs*]T in YCBCR space. By inserting (Eq. 33), (Eq. 34) and Y˜s = xYs + c1 in
$$\tilde{Y}_s^* = \frac{\sigma_t^{\tilde{Y}}}{\sigma_s^{\tilde{Y}}}\left(\tilde{Y}_s - \mu_s^{\tilde{Y}}\right) + \mu_t^{\tilde{Y}} \qquad \text{(Eq. 35)}$$
we can derive the relationship between Y˜s* and Ys*:
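Since the original equation that follows is not reproduced above, the following reconstruction, obtained by combining (Eq. 33)–(Eq. 35) with Y˜s = xYs + c1, indicates how the relationship is derived:

$$\tilde{Y}_s^* = \frac{|x|\,\sigma_t^{Y}}{|x|\,\sigma_s^{Y}}\bigl(xY_s + c_1 - x\mu_s^{Y} - c_1\bigr) + x\mu_t^{Y} + c_1 = x\!\left[\frac{\sigma_t^{Y}}{\sigma_s^{Y}}\bigl(Y_s - \mu_s^{Y}\bigr) + \mu_t^{Y}\right] + c_1 = xY_s^* + c_1$$

The chromatic components satisfy analogous relations with (y, c2) and (z, c3), so applying the backward Y˜C˜BC˜R transform to [Y˜s*, C˜B,s*, C˜R,s*]T yields the same RGB result as the YCBCR path, which is the claim of Proposition 1.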
Appendix C. Proof of Proposition 3

Let [Y˜s, C˜B,s, C˜R,s]T and [Ys, CB,s, CR,s]T be the source color components in the Y˜C˜BC˜R based and YCBCR based AOCT methods, respectively. Thus, from the given condition we have
$$[\tilde{Y}_s, \tilde{C}_{B,s}, \tilde{C}_{R,s}]^T = [Y_s, C_{B,s}, C_{R,s}]^T = [F, \mathrm{Vis} - \mathrm{IR}, \mathrm{IR} - \mathrm{Vis}]^T \qquad \text{(Eq. 40)}$$
Let [Y˜t, C˜B,t, C˜R,t]T and [Yt, CB,t, CR,t]T be the three components of the target image in Y˜C˜BC˜R and YCBCR spaces, respectively. We know from (Eq. 30) that the target image satisfies
$$[\tilde{Y}_t, \tilde{C}_{B,t}, \tilde{C}_{R,t}]^T = [xY_t, yC_{B,t}, zC_{R,t}]^T + [c_1, c_2, c_3]^T \qquad \text{(Eq. 41)}$$
Then, under the given assumption that x, y and z > 0, using the mean and standard deviation properties, from (Eq. 40) and (Eq. 41) we can derive
Let [Y˜c, C˜B,c, C˜R,c]T be the three components of [R˜c, G˜c, B˜c]T in Y˜C˜BC˜R space, and [Yc, CB,c, CR,c]T be the three components of [Rc, Gc, Bc]T in YCBCR space. By inserting (Eq. 40), (Eq. 41) and (Eq. 42) in (Eq. 12), we deduce that
Acknowledgements

The author thanks everyone who contributed images to this chapter. The author also thanks TNO Human Factors, ImageFusion.org, Pics4Learning.com, BigFoto.com, and FreeFoto.com for providing the test images.
References
1. Cohen, A., Daubechies, I. & Feauveau, J. C. (1992). Biorthogonal Bases of Compactly Supported Wavelets. Commun. Pure Appl. Math., Vol. 45, pp. 485-560.
2. Daubechies, I. (1992). Ten Lectures on Wavelets, SIAM, Philadelphia, PA.
3. Jack, K. (2001). Video Demystified, 3rd ed., LLH Technology Publishing, Eagle Rock, VA.
4. Li, G. & Wang, K. (2007a). Merging Infrared and Color Visible Images with a Contrast Enhanced Fusion Method. Proc. SPIE, Vol. 6571, pp. 657108-1-657108-12.
5. Li, G. & Wang, K. (2007b). Applying Daytime Colors to Nighttime Imagery with an Efficient Color Transfer Method. Proc. SPIE, Vol. 6559, pp. 65590L-1-65590L-12.
6. Li, G., Xu, S. & Zhao, X. (2010a). An Efficient Color Transfer Algorithm for Recoloring Multiband Night Vision Imagery. Proc. SPIE, Vol. 7689, pp. 76890A-1-76890A-12.
7. Li, G., Xu, S. & Zhao, X. (2010b). Fast Color-Transfer-Based Image Fusion Method for Merging Infrared and Visible Images. Proc. SPIE, Vol. 7710, pp. 77100S-1-77100S-12.
8. Li, Z., Jing, Z., Yang, X. et al. (2005). Color Transfer Based Remote Sensing Image Fusion Using Non-separable Wavelet Frame Transform. Pattern Recognition Letters, Vol. 26, No. 13, pp. 2006-2014.
9. McDaniel, R. V., Scribner, D. A., Krebs, W. K. et al. (1998). Image Fusion for Tactical Applications. Proc. SPIE, Vol. 3436, pp. 685-695.
10. Neelamani, R., de Queiroz, R., Fan, Z. et al. (2006). JPEG Compression History Estimation for Color Images. IEEE Trans. Image Process., Vol. 15, No. 6, pp. 1365-1378.
11. Poynton, C. (2003). Digital Video and HDTV: Algorithms and Interfaces, Morgan Kaufmann, San Francisco, CA.
12. Pratt, W. K. (2001). Digital Image Processing, 3rd ed., Wiley, New York.
13. Piella, G. (2003). A General Framework for Multiresolution Image Fusion: From Pixels to Regions. Inf. Fusion, Vol. 4, pp. 259-280.
14. Ruderman, D. L., Cronin, T. W. & Chiao, C. C. (1998). Statistics of Cone Responses to Natural Images: Implications for Visual Coding. J. Optical Soc. of America, Vol. 15, No. 8, pp. 2036-2045.
15. Reinhard, E., Ashikhmin, M., Gooch, B. et al. (2001). Color Transfer between Images. IEEE Comput. Graph. Appl., Vol. 21, No. 5, pp. 34-41.
16. Scribner, D. A., Schuler, J. M., Warren, P. R. et al. (1998). Infrared Color Vision: Separating Objects from Backgrounds. Proc. SPIE, Vol. 3379, pp. 2-13.
17. Skodras, A., Christopoulos, C. & Ebrahimi, T. (2001). The JPEG 2000 Still Image Compression Standard. IEEE Signal Processing Mag., Vol. 18, No. 5, pp. 36-58.
18. Toet, A. (2003). Natural Colour Mapping for Multiband Nightvision Imagery. Inf. Fusion, Vol. 4, pp. 155-166.
19. Tsagaris, V. & Anastassopoulos, V. (2005). Fusion of Visible and Infrared Imagery for Night Color Vision. Displays, Vol. 26, No. 4-5, pp. 191-196.
20. Toet, A. & Hogervorst, M. A. (2008). Portable Real-time Color Night Vision. Proc. SPIE, Vol. 6974, pp. 697402-1-697402-12.
21. Toet, A. & Hogervorst, M. A. (2009). The TRICLOBS Portable Triband Lowlight Color Observation System. Proc. SPIE, Vol. 7345, pp. 734503-1-734503-11.
22. Wang, L., Zhao, Y., Jin, W. et al. (2007). Real-time Color Transfer System for Low-light Level Visible and Infrared Images in YUV Color Space. Proc. SPIE, Vol. 6567, pp. 65671G-1-65671G-8.
23. Zhang, Z. & Blum, R. S. (1999). A Categorization and Study of Multiscale-Decomposition-Based Image Fusion Schemes with a Performance Study for a Digital Camera Application. Proc. IEEE, Vol. 87, No. 8, pp. 1315-1326.
24. Zheng, Y. & Essock, E. A. (2008). A Local-coloring Method for Night-vision Colorization Utilizing Image Analysis and Fusion. Inf. Fusion, Vol. 9, No. 2, pp. 186-199.
Written by Guangxin Li. Submitted: October 20th, 2010. Published: June 24th, 2011.