The Objective Evaluation Index (OEI) for Evaluation of Night Vision Colorization Techniques

A night vision colorization technique can produce colorized imagery with a naturalistic and stable color appearance by processing multispectral night vision (NV) imagery. The multispectral images typically include visual-band (e.g., red, green, and blue (RGB), or intensified) imagery and infrared imagery (e.g., near infrared (NIR) and long wave infrared (LWIR)). Although appropriately false-colored imagery is often helpful for human observers in improving their performance on scene classification and reaction time tasks (Waxman et al., 1996; Essock et al., 1999), inappropriate color mappings can also be detrimental to human performance (Toet et al., 2001; Varga, 1999). A possible reason is the lack of physical color constancy. Another drawback with false coloring is that observers need specific training with each of the false color schemes so that they can correctly and quickly recognize objects; whereas with colorized nighttime imagery rendered with natural colors, users should be able to readily recognize and identify objects without any training.


Introduction
Several night vision (NV) colorization techniques have been developed in past decades. Toet (2003) proposed an NV colorization method that transfers the color characteristics of daylight imagery into multispectral NV images. Essentially, this color-mapping method matches the statistical properties (i.e., mean and standard deviation) of the NV imagery to those of a natural daylight color image (manually selected as the "target" color distribution). Zheng and Essock (2008) presented a "local coloring" method that can colorize NV images to look more like daylight imagery by using histogram matching. The local-coloring method renders the multispectral images with natural colors segment by segment (i.e., "segmentation-based"), and also provides automatic association between the source and target images. Zheng (2011) recently introduced a channel-based color fusion method, which is fast enough for real-time applications. Note that the term "color fusion" in this chapter refers to combining multispectral images into a color-version image with the purpose of resembling natural scenes. Hogervorst and Toet (2008, 2012) recently proposed a new color mapping method using a lookup table (LUT). The LUT is created between a false-colored image (formed with multispectral NV images) and its color reference image (aiming at the same scene but taken at daytime). The colors in the resulting colored NV image resemble the colors in the daytime color image. This LUT-mapping method runs fast enough for real-time implementations. The LUT-mapping method and the statistic-matching method are also summarized in their recent paper (Toet & Hogervorst, 2012). Most recently, Zheng (2012) developed a joint-histogram matching method for NV colorization.
The quality of colorized images can be assessed by subjective and/or objective measures. However, subjective evaluation normally costs time and resources. Moreover, subjective evaluation methods cannot be readily and routinely used for real-time and automated systems. On the other hand, objective evaluation metrics can automatically and quantitatively measure image quality (Liu et al., 2012; Blasch et al., 2008). Over the past decade, many objective metrics for grayscale image evaluation have been proposed (Alparone et al., 2004; Wald et al., 1997; Tsagaris & Anastassopoulos, 2006). However, the metrics for grayscale images cannot be directly extended to the evaluation of colorized images. Recently, some objective evaluations of color images have been reported in the literature. To objectively assess a color fusion method, Tsagaris (2009) proposed a color image fusion measure (CIFM) using the amount of common information between the source images and the colorized image, as well as the distribution of color information. Yuan et al. (2011) presented an objective evaluation method for visible and infrared color fusion utilizing four metrics: an image sharpness metric, an image contrast metric, a color colorfulness metric, and a color naturalness metric. In this chapter, we introduce an objective evaluation index (OEI) to quantitatively evaluate colorized images. Given a reference (daylight color) image and several versions of the colorized NV images from different coloring techniques, all color images are first converted into International Commission on Illumination (CIE) LAB space, with dimension L for lightness and a and b for the color-opponent dimensions (Malacara, 2002). The OEI metric is then computed from four established metrics: the phase congruency metric (PCM), gradient magnitude metric (GMM), image contrast metric (ICM), and color natural metric (CNM).
Certainly, a color presentation of multispectral night vision images can provide a better visual result for human users. We would prefer color images resembling the natural daylight pictures that we are used to; meanwhile, the coloring process should ideally be efficient enough for real-time applications. In this chapter, we discuss and explore how to objectively evaluate the quality of colorized images. The remainder of this chapter is organized as follows. Six NV colorization techniques are briefly reviewed in Section 2. Next, four image quality metrics are described in Section 3. A new colorization metric, the objective evaluation index (OEI), is introduced in Section 4. The experiments and discussions are presented in Section 5. Conclusions are finally drawn in Section 6.

Overview of night vision colorization techniques
All color mapping methods described in Subsections 2.2-2.6 are performed in lαβ color space. Thus, the color space conversion from RGB to lαβ must be done prior to color mapping, and the inverse transformation back to RGB space is necessary after the mapping. The details of the lαβ color space transformation are given elsewhere (Toet, 2003; Zheng & Essock, 2008). Two images, a source image and a target image, are involved in a color mapping process. The source image is usually a color fusion image (in Subsections 2.2-2.5) or a false-colored image (in Subsection 2.6), while the target image is normally a daylight picture containing a similar scene. The target image may have a different resolution, as depicted in Subsections 2.2-2.5; however, the LUT described in Subsection 2.6 is established using a registered target (reference) image.

Channel-based color fusion (CBCF)
A fast color fusion method, termed channel-based color fusion (CBCF), was introduced to facilitate real-time applications (Zheng, 2011). Notice that the term "color fusion" means combining multispectral images into a color-version image with the purpose of resembling natural scenes. Relative to the "segmentation-based colorization" (Zheng & Essock, 2008), color fusion trades color realism for speed.
The general framework of channel-based color fusion is as follows: (i) prepare for color fusion with preprocessing (denoising, normalization, and enhancement) and image registration; (ii) form a color fusion image by properly assigning the multispectral images to the red, green, and blue channels; (iii) fuse the multispectral images (gray fusion) using the aDWT algorithm (Zheng et al., 2005); and (iv) replace the value component of the color fusion in HSV color space with the gray-fusion image, and finally transform back to RGB space.
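As a rough illustration of steps (ii) and (iv), the sketch below assigns two NV bands to RGB channels and replaces the HSV value component with a gray-fusion image. The particular channel assignment and the rescaling trick (multiplying RGB by newV/oldV, which leaves hue and saturation unchanged since V = max(R, G, B)) are our own illustrative choices, not the chapter's Eqs. (1):

```python
import numpy as np

def cbcf_sketch(nir, lwir, gray_fusion):
    """Illustrative CBCF sketch (bands and fusion values assumed in [0, 1]).
    Step (ii): assign bands to RGB channels (an assumed assignment, not the
    chapter's Eqs. (1)). Step (iv): replace the HSV value component with the
    gray-fusion image, done here by rescaling RGB so max(R,G,B) equals it."""
    # assumed channel assignment: R = LWIR, G = mean(NIR, LWIR), B = NIR
    rgb = np.stack([lwir, (nir + lwir) / 2.0, nir], axis=-1).astype(float)
    v = rgb.max(axis=-1, keepdims=True)                  # HSV value = max(R,G,B)
    scale = np.divide(gray_fusion[..., None], v,
                      out=np.ones_like(rgb[..., :1]), where=v > 0)
    return np.clip(rgb * scale, 0.0, 1.0)                # hue/saturation preserved
```

Scaling all three channels by the same factor keeps the hue and saturation of the fused color while imposing the gray-fusion intensity.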
In NV imaging, several bands of images may be available, for example, visible (RGB), image intensified (II), near infrared (NIR), medium wave infrared (MWIR), and long wave infrared (LWIR). Given the available images and the context, we only discuss two two-band color fusions, (II ⊕ LWIR) and (NIR ⊕ LWIR). The symbol '⊕' denotes the fusion of multiband images.
A color fusion of NIR and LWIR is formulated by Eqs. (1) (Zheng et al., 2005). Although the limits given in contrast stretching were obtained empirically according to the night vision images that we had, it is viable to formulate the expressions and automate the fusion based upon a set of conditions (imaging devices, imaging time, and application location). Notice that the transform parameters in Eqs. (1) were applied to all color fusions in our experiments (see Fig. 3d).

Statistic matching
Statistic matching (stat-match) is used to transfer the color characteristics from natural daylight imagery to false-color night-vision imagery, and is formulated as:

I_C^k = (I_S^k − μ_S^k) · (σ_T^k / σ_S^k) + μ_T^k

where I_C is the colored image and I_S is the source (false-color) image in lαβ space; μ denotes the mean and σ denotes the standard deviation; the subscripts 'S' and 'T' refer to the source and target images, respectively; and the superscript 'k' denotes one of the color components: {l, α, β}.
After this transformation, the pixels comprising the multispectral source image have means and standard deviations that conform to the target daylight color picture in lαβ space. The colored image is transformed back to the RGB space through the inverse transforms (Zheng & Essock, 2008; see Fig. 3e).
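The statistic-matching transform above can be sketched in a few lines of numpy; the function name and the per-plane loop are illustrative, and the three planes stand in for the l, α, β components:

```python
import numpy as np

def stat_match(source, target):
    """Match the mean and standard deviation of each color component of
    `source` to those of `target` (both shaped H x W x 3, e.g., the
    l, alpha, beta planes). Implements (s - mu_S) * (sigma_T/sigma_S) + mu_T."""
    matched = np.empty_like(source, dtype=float)
    for k in range(source.shape[2]):              # one plane per color component
        s = source[..., k].astype(float)
        t = target[..., k].astype(float)
        sigma_s = s.std() or 1.0                  # guard against a flat plane
        matched[..., k] = (s - s.mean()) * (t.std() / sigma_s) + t.mean()
    return matched
```

By construction, each plane of the output has exactly the target plane's mean and standard deviation.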

Histogram matching (HM)
Histogram matching (i.e., histogram specification) is usually used to enhance an image when histogram equalization fails (Gonzalez & Woods, 2002). Given the shape of the histogram that we want the enhanced image to have, histogram matching can generate a processed (i.e., matched) image that has the specified histogram. In particular, by specifying the histogram of a target image (with daylight natural colors), a source image (with false colors) resembles the target image in terms of histogram distribution after histogram matching.
Histogram matching (hist-match) can be implemented as follows. First, the normalized cumulative histograms of the source image and the target image (h_S and h_T) are calculated, respectively:

h_S = S(u_k) = (1/N) Σ_{j=0}^{k} n_j,  k = 0, 1, ..., L−1

New Advances in Image Fusion
where N is the total number of pixels in the image, n_k is the number of pixels that have gray level u_k, and L is the number of gray (bin) levels in the image. Typically, L = 256 for a digital image, but we can quantize the image down to m levels (m < L, e.g., m = 64), in which case its histogram is called an m-bin histogram. Clearly, S(u_k) is a non-decreasing function. Similarly, h_T = T(v_k) can be computed (see the "Target" curve in Fig. 1c).
Second, setting h_S = h_T (i.e., S(u_k) = T(v_k)) for histogram matching, the matched image is accordingly computed as

v_k = T^{-1}[S(u_k)]

It is straightforward to find a discrete solution of the inverse transform T^{-1}[S()], as both T() and S() can be implemented with lookup tables.
Similar to statistic matching (described in Subsection 2.2), histogram matching also serves for color mapping (see Fig. 3f) and is performed component by component in lαβ space. Specifically, for each color component (say the α component, treated as a grayscale image) of a false-colored image, we can compute S(u_k). With a selected target image, T(v_k) can be calculated for the same color component (α). Using Eq. (5), histogram matching can then be completed for that color component. Histogram matching and statistic matching can be applied separately or jointly. When applied together, for instance, the combination is referred to as "statistic matching then histogram matching" (Zheng & Essock, 2008).
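A minimal discrete implementation of hist-match, realizing T^{-1}[S()] with a lookup table as described above (the bin count and range are illustrative):

```python
import numpy as np

def hist_match(source, target, bins=256):
    """Map `source` gray levels so its histogram matches `target`'s,
    via the discrete inverse T^{-1}[S(u_k)] realized as a lookup table."""
    s_hist, _ = np.histogram(source, bins=bins, range=(0, bins))
    t_hist, _ = np.histogram(target, bins=bins, range=(0, bins))
    S = np.cumsum(s_hist) / source.size       # normalized cumulative histograms
    T = np.cumsum(t_hist) / target.size
    # for each source level u_k, find the smallest v with T(v) >= S(u_k)
    lut = np.clip(np.searchsorted(T, S), 0, bins - 1)
    return lut[source.astype(int)]
```

`np.searchsorted` exploits the fact that T() is non-decreasing, so the inverse lookup is a single vectorized call.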

Joint histogram matching (JHM)
As described in Subsection 2.3, histogram matching is applied to each color component (plane) separately, which can easily distort the color distributions of the mapped image (see Fig. 3f). To avoid color distortion, we introduce a new color mapping method, joint histogram matching (joint-HM).
In lαβ space, α and β represent the color distributions, while l is the intensity component. A joint histogram (also called a 2D histogram) of the two color planes (α versus β) is calculated and then matched from source to target. The intensity component (l) is matched individually. The joint histogram is actually the joint (2D) intensity distribution of the two images, which is often used to compute the joint entropy (Hill & Batchelor, 2001) for image registration.
How to calculate the normalized cumulative histogram (denoted as h) from a 2D joint histogram (denoted as H_J) needs further discussion. To perform histogram matching, h is expected to be a non-decreasing function. We propose to form a one-dimensional (1D) histogram by stacking H_J column by column and then perform histogram matching as defined in Eq. (10).
Of course, to correctly index the 1D transform (T^{-1}()), the joint index u_m (with m bins) must be properly calculated from the two gray (bin) levels. If H_J is computed as (β vs. α), its matching process is denoted as joint-HM(βα). Eventually, the histogram of the mapped image is a tradeoff between the two histograms, "Source" and "Target". This is expected, since we want no color distortion (i.e., we preserve the image's own colors to some extent) during color mapping (see Fig. 3g).
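The column-by-column stacking of H_J can be sketched as follows; assume the two color planes are already quantized to m bin levels, and note that the particular joint-index encoding (column * m + row) is an illustrative choice:

```python
import numpy as np

def joint_hist_match(src_a, src_b, tgt_a, tgt_b, m=64):
    """Sketch of joint-HM: build m x m joint histograms of two color planes,
    stack them column by column into 1D, and histogram-match the stacked
    distributions. Inputs are assumed quantized to integer levels 0..m-1."""
    def joint_cdf(a, b):
        H, _, _ = np.histogram2d(a.ravel(), b.ravel(),
                                 bins=m, range=[[0, m], [0, m]])
        h = H.ravel(order='F')                 # stack column by column
        return np.cumsum(h) / h.sum()          # non-decreasing h
    S = joint_cdf(src_a, src_b)
    T = joint_cdf(tgt_a, tgt_b)
    lut = np.clip(np.searchsorted(T, S), 0, m * m - 1)
    # per-pixel joint index u = column * m + row, matched through the LUT
    idx = (src_b.astype(int) * m + src_a.astype(int)).clip(0, m * m - 1)
    new_idx = lut[idx]
    return new_idx % m, new_idx // m           # back to (alpha, beta) bin levels
```

Matching the stacked 1D distribution, rather than each plane separately, moves (α, β) pairs jointly and so limits the color distortion described above.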

Statistic matching then joint-histogram matching (SM-JHM)
The joint-HM can be applied together with statistic matching, such as "stat-match then joint-HM", which usually results in a better NV colorization. The statistic matching globally "paints" the image, while the joint-HM makes the colors more like those of the daylight picture in detail (see Fig. 3h).

Lookup table (LUT)
Hogervorst and Toet (2008) proposed a color mapping method using a lookup table (LUT). The LUT is created using a false-colored image (formed with two-band NV images) and the reference (i.e., target) daylight image. This method yields a colored NV image similar in color to the daytime image. The implementation of this LUT method proceeds entry by entry: for each index value (from 1 to 65535), the pixels in the false-colored image having that two-band index are located, and the corresponding LUT entry is derived from the colors of those pixel locations in the reference image. After all index values have been processed, the LUT is established.
Once the LUT is created, the LUT-based mapping procedure is simple and fast (see Fig. 3i), and thus can be deployed in real time. However, the LUT creation relies entirely on an aligned reference image aimed at the same scene. Any misalignment, use of a different reference color image, or coloring of different NV imagery (i.e., aimed in a different direction) will usually result in a poor colorization (see Fig. 5i).
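A hedged sketch of the LUT creation and mapping: each two-band index is mapped to the mean reference color over the pixels sharing that index. Averaging is one plausible way to derive an entry; the chapter does not spell out the exact rule:

```python
import numpy as np

def build_lut(band1, band2, reference, levels=256):
    """Sketch of LUT creation: each two-band index (band1, band2) maps to
    the mean reference color over all pixels sharing that index (the
    averaging rule is an assumption). `reference` is H x W x 3."""
    idx = band1.astype(np.int64) * levels + band2.astype(np.int64)
    lut = np.zeros((levels * levels, 3))
    counts = np.bincount(idx.ravel(), minlength=levels * levels)
    for c in range(3):                         # accumulate reference colors per index
        sums = np.bincount(idx.ravel(), weights=reference[..., c].ravel(),
                           minlength=levels * levels)
        lut[:, c] = sums / np.maximum(counts, 1)
    return lut, idx

def apply_lut(lut, idx):
    return lut[idx]                            # fast per-pixel lookup
```

Once built, applying the LUT is a single indexing operation per frame, which is what makes the method attractive for real-time use.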

Four image quality metrics
Three image quality metrics for grayscale images and one metric for color images are reviewed in this section. The color-related metrics are defined in the CIELAB space (Malacara, 2002) specified by the International Commission on Illumination. The perceptually uniform CIELAB space consists of an achromatic luminosity component L* (black-white) and two chromatic values a* (green-magenta) and b* (blue-yellow). The coordinates L*a*b* (CIE 1976) can be calculated from the CIE XYZ tristimulus values (Malacara, 2002).

Phase Congruency Metric (PCM)
The phase congruency (PC) model, also called the "local energy model", was developed by Morrone et al. (1986). Let M_n^e and M_n^o denote the even-symmetric and odd-symmetric filters of a quadrature pair at scale n, and let e_n(x) and o_n(x) denote the responses of a 1D signal f(x) to this filter pair. The local energy is

E(x) = sqrt(F(x)^2 + H(x)^2), with F(x) = Σ_n e_n(x) and H(x) = Σ_n o_n(x),

and the local amplitude at scale n is

A_n(x) = sqrt(e_n(x)^2 + o_n(x)^2).

The one-dimensional (1D) phase congruency metric (PCM) can be computed as

PC(x) = E(x) / (ε + Σ_n A_n(x)),

where ε is a small positive constant.
To calculate the quadrature pair of filters M_n^e and M_n^o, Gabor filters (Gabor, 1946) or log-Gabor filters (Mancas-Thillou & Gosselin, 2006) can be applied. In this chapter, we use log-Gabor filters (e.g., wavelets at scale n = 4) for the following two features: (i) log-Gabor filters, by definition, have no direct current (DC) component; and (ii) the transfer function of the log-Gabor filter has an extended tail at the high-frequency end, which makes it better able to encode natural images than ordinary Gabor filters (Zhang et al., 2011). The transfer function of a log-Gabor filter in the frequency domain is

G(ω) = exp( −(log(ω/ω_0))^2 / (2 σ_r^2) ),

where ω_0 is the filter's center frequency and σ_r controls the filter's bandwidth.
To compute the PCM of two-dimensional (2D) grayscale images, we can apply the 1D analysis over several orientations and then combine the results according to some rules. The 1D log-Gabor filters described above can be extended to 2D by applying a Gaussian function across the filter, perpendicular to its orientation (Kovesi, 1999; Fischer et al., 2007; Wang et al., 2008). The 2D log-Gabor function has the following transfer function

G_2(ω, θ_j) = exp( −(log(ω/ω_0))^2 / (2 σ_r^2) ) · exp( −(θ − θ_j)^2 / (2 σ_θ^2) ),

where θ_j is the orientation angle of the filter and σ_θ determines the filter's angular bandwidth. The two-dimensional PCM at (x, y) is defined as

PC_2D(x, y) = Σ_j E_θj(x, y) / (ε + Σ_j Σ_n A_n,θj(x, y)),

where ε is a small positive constant. It should be noted that PC_2D(x, y) is a real number within [0, 1]. The PCM of a whole image is the average of PC_2D(x, y) over all pixels, where M×N is the size of the image. The range of PCM is [0, 1].
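A compact 1D phase-congruency sketch following the definitions above, with log-Gabor filters applied in the frequency domain. All parameter values and the scale spacing are illustrative, not the chapter's, and no noise compensation is included:

```python
import numpy as np

def pc_1d(signal, n_scales=4, omega0=0.25, sigma_r=0.55, eps=1e-4):
    """Minimal 1D phase congruency: quadrature responses e_n, o_n are the
    real/imaginary parts of filtering with a one-sided (positive-frequency)
    log-Gabor, so the log-Gabor has no DC component by construction."""
    N = len(signal)
    F = np.fft.fft(signal)
    freqs = np.fft.fftfreq(N)
    sum_e = np.zeros(N)
    sum_o = np.zeros(N)
    sum_amp = np.zeros(N)
    for s in range(n_scales):
        w0 = omega0 / (2.0 ** s)               # one center frequency per scale
        G = np.zeros(N)
        pos = freqs > 0                        # one-sided filter -> quadrature pair
        G[pos] = np.exp(-(np.log(freqs[pos] / w0) ** 2)
                        / (2 * np.log(sigma_r) ** 2))
        resp = np.fft.ifft(F * G)
        e, o = resp.real, resp.imag            # even/odd responses e_n, o_n
        sum_e += e
        sum_o += o
        sum_amp += np.sqrt(e ** 2 + o ** 2)    # local amplitude A_n
    energy = np.sqrt(sum_e ** 2 + sum_o ** 2)  # local energy E
    return energy / (eps + sum_amp)            # PC = E / (eps + sum A_n)
```

Because |Σ_n (e_n + i·o_n)| ≤ Σ_n |e_n + i·o_n|, the returned PC values necessarily lie in [0, 1].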

Gradient Magnitude Metric (GMM)
Let G_x and G_y denote the horizontal and vertical gradients of an image f(x, y), which can be computed by convolving f(x, y) with gradient operators (e.g., the Sobel masks). The gradient magnitude (GM) of f(x, y) at pixel (x, y) is defined as

G(x, y) = sqrt(G_x^2 + G_y^2).

The GM averaged over all pixels is called the image gradient magnitude metric (GMM),

GMM = (1/(MN)) Σ_x Σ_y G(x, y),

where M×N is the size of the image.
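The GMM reduces to a couple of numpy calls; finite differences stand in here for whichever gradient operator is used:

```python
import numpy as np

def gmm(image):
    """Gradient magnitude metric: average of sqrt(Gx^2 + Gy^2) over all
    pixels. np.gradient's finite differences stand in for the gradient
    operator (e.g., Sobel), which is an implementation choice."""
    gx, gy = np.gradient(image.astype(float))   # vertical and horizontal gradients
    return np.sqrt(gx ** 2 + gy ** 2).mean()
```

A flat image yields GMM = 0, while a unit-slope ramp yields GMM = 1, matching the definition above.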

Image Contrast Metric (ICM)
An image with excellent contrast has a wide dynamic range of intensity levels and appropriate intensity. Both the dynamic range of intensity levels and the overall intensity distribution of an image can be read from its histogram, so a global contrast metric is proposed using histogram characteristics. The histogram of an image with levels in the range [0, N−1] is a frequency-distribution function describing the overall intensity distribution of the image,

h(x_k) = n_k,

where x_k is the k-th input level and n_k is the number of pixels in the image having level x_k. The probability density function (PDF) is computed as

P(x_k) = n_k / n,

where n is the total number of pixels in the image. From the histogram, a dynamic range value β and a dynamic range metric α are defined, where α ∈ [0, 1] and a larger value of α means a wider dynamic range in the histogram, which leads to better contrast. The image contrast metric is then defined from α and the PDF.

For color images, the image contrast metric is determined by both gray contrast and color contrast. Because human perception is more sensitive to luminance in contrast evaluation, we employ the L* channel in CIELAB space to evaluate the color contrast. Thus, image contrast is determined by the histogram of the gray intensity and the histogram of the color luminance L* (see Fig. 1). For the gray intensity I, the gray contrast metric C_g is defined from α_I and P(I_k), calculated as above for the gray intensity. For the L* channel, the color contrast metric C_c is defined from α_c and P(L_k), calculated as above for the L* channel. The global image contrast metric (ICM) is defined as

ICM = ω_1 C_g + ω_2 C_c,

where ω_1 and ω_2 are the weights of C_g and C_c. For simplicity, we choose ω_1 = ω_2 = 0.5. ICM varies within [0, 1]. The evaluation of the image contrast metric for a color fusion image is shown in Fig. 1.
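The histogram and PDF parts of the ICM can be sketched directly. Since the chapter's exact β and α formulas are not reproduced above, the occupied fraction of the histogram's range serves here as a stand-in dynamic-range term α (an assumption), and the equal weighting ω_1 = ω_2 = 0.5 follows the text:

```python
import numpy as np

def contrast_from_hist(channel, bins=256, lo=0.0, hi=256.0):
    """Histogram-based contrast term in [0, 1]. As a stand-in for the
    chapter's alpha/beta definitions, alpha is taken to be the fraction
    of the histogram's range spanned by occupied bins (an assumption)."""
    n_k, _ = np.histogram(channel, bins=bins, range=(lo, hi))
    p = n_k / channel.size                      # PDF: P(x_k) = n_k / n
    occupied = np.nonzero(n_k)[0]
    if occupied.size == 0:
        return 0.0
    return (occupied[-1] - occupied[0] + 1) / bins

def icm(gray, lightness, w1=0.5, w2=0.5):
    """ICM = w1 * C_g + w2 * C_c with equal weights, per the chapter;
    the L* channel is assumed to lie in [0, 100]."""
    return w1 * contrast_from_hist(gray) + w2 * contrast_from_hist(lightness, hi=100.0)
```

A full-range gray channel drives C_g toward 1, while a constant L* plane contributes almost nothing, so the combined score behaves as the text describes.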

Color Natural Metric (CNM)
Given a daylight image f_1(x, y) and a colorized image f_2(x, y), if the colorized image is similar to the daylight image, then the colorized image is considered to be of good quality. Since humans are sensitive to hue in addition to luminance, we compare the a* and b* channels of the reference image with those of the colorized image using gray relational analysis (GRA) theory (Ma et al., 2005).
We first convert the two images, f_1 and f_2, to L*a*b* space. Let L_i(x,y), a_i(x,y), and b_i(x,y) denote the L*a*b* values of f_i at pixel (x, y). The gray relational coefficient between a_1 and a_2 at pixel (x, y) is defined as

ξ_a(x,y) = (Δ_a,min + ε Δ_a,max) / (|a_1(x,y) − a_2(x,y)| + ε Δ_a,max),

where ε is a small positive constant, and Δ_a,min and Δ_a,max denote the minimum and maximum of |a_1 − a_2| over the whole image. The gray relational coefficient between b_1 and b_2 at pixel (x, y) is defined analogously as

ξ_b(x,y) = (Δ_b,min + ε Δ_b,max) / (|b_1(x,y) − b_2(x,y)| + ε Δ_b,max).

In the definitions of ξ_a(x,y) and ξ_b(x,y), min() and max() are taken over the whole image; however, it is also possible to take min() and max() over a small neighborhood of (x, y).

The gray relational degrees of the a* and b* information for the two images are defined as

r_a = Σ_{x,y} ω(x,y) ξ_a(x,y),  r_b = Σ_{x,y} ω(x,y) ξ_b(x,y),

where ω(x,y) is the weight of the gray relational coefficient, which satisfies Σ_{x,y} ω(x,y) = 1. The color natural metric (CNM) is then obtained from the gray relational degrees r_a and r_b.
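The gray relational coefficient and degree can be sketched as follows; treating ε as the GRA distinguishing coefficient and defaulting to uniform weights are our assumptions:

```python
import numpy as np

def gra_coefficient(ref_plane, col_plane, eps=0.5):
    """Gray relational coefficient between two chroma planes (e.g., a* of
    the reference vs. a* of the colorized image). eps plays the role of
    the distinguishing coefficient; min/max are taken over the whole image."""
    delta = np.abs(ref_plane.astype(float) - col_plane.astype(float))
    d_min, d_max = delta.min(), delta.max()
    if d_max == 0:
        return np.ones_like(delta)             # identical planes: full relation
    return (d_min + eps * d_max) / (delta + eps * d_max)

def gra_degree(xi, weights=None):
    """Gray relational degree: weighted average with weights summing to 1
    (uniform weights by default, an illustrative choice)."""
    if weights is None:
        weights = np.full(xi.shape, 1.0 / xi.size)
    return float((weights * xi).sum())
```

Identical planes give a degree of 1; larger per-pixel chroma differences pull the coefficients, and hence the degree, toward 0.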

Objective Evaluation Index (OEI)
With the four metrics defined in Section 3, a new objective evaluation index (OEI) is proposed to quantitatively evaluate the quality of colorized images. Given the reference image f_1 and the colorized image f_2, the OEI is calculated in two steps. First, the local similarity maps of the two images are computed; then the similarity maps are integrated into a single similarity score.
The two images are first converted into L*a*b* space. For the L* information, the PC maps are calculated and denoted as PC_1 and PC_2 for images f_1 and f_2, respectively; similarly, the GM maps are denoted as G_1 and G_2. The similarity between PC_1 and PC_2 at pixel (x, y) is defined as

S_PC(x, y) = (2 PC_1(x,y) PC_2(x,y) + K_1) / (PC_1(x,y)^2 + PC_2(x,y)^2 + K_1),

and the similarity between G_1 and G_2 is defined as

S_G(x, y) = (2 G_1(x,y) G_2(x,y) + K_2) / (G_1(x,y)^2 + G_2(x,y)^2 + K_2),

where K_1 and K_2 are small positive constants. The two measures are combined into a luminance similarity,

S_L(x, y) = [S_PC(x, y)]^λ1 · [S_G(x, y)]^λ2,

where λ_1 and λ_2 are parameters to adjust the relative importance of the PC and GM features.
With the aid of the similarity S_L(x,y) at each pixel (x, y), the overall similarity between f_1 and f_2 could be calculated as the average of S_L(x,y) over all pixels. However, the image saliency (i.e., local significance) usually varies with pixel location. For example, edges convey more crucial information than smooth areas. Specifically, humans are sensitive to phase-congruent structures (Henriksson et al., 2009), and thus a larger PC(x, y) value between f_1 and f_2 implies a higher impact on evaluating the similarity between f_1 and f_2 at location (x, y). Therefore, we use PC_max(x,y) = max(PC_1(x,y), PC_2(x,y)) to weigh the importance of S_L(x,y) in formulating the overall similarity. Accordingly, the objective evaluation index (OEI) between f_1 and f_2 is defined as follows,

OEI = [ Σ_{x,y} S_L(x,y) PC_max(x,y) / Σ_{x,y} PC_max(x,y) ]^γ1 · S_C^γ2 · CNM^γ3,

where S_C = (2 ICM_1 ICM_2 + K_3) / (ICM_1^2 + ICM_2^2 + K_3) compares the image contrast metrics of the two images, CNM is previously defined, and K_3 and γ_i (i = 1, 2, 3) are positive constants. The diagram for calculating the OEI is shown in Fig. 2. The range of OEI is [0, 1]. The larger the OEI value of a colorized image, the more similar (i.e., the better) the colorized image is to the reference image. Error pooling is the integration of the component measures, with the tradeoffs among them controlled by γ_1, γ_2, and γ_3.
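A hedged sketch of the OEI error pooling: FSIM-style similarity maps for PC and GM are combined, weighted by PC_max, and fused with contrast and color-naturalness terms. All constants and the exact fusion form here are illustrative assumptions, not the chapter's tuned values:

```python
import numpy as np

def oei_pool(pc1, pc2, g1, g2, icm1, icm2, cnm,
             k1=0.85, k2=160.0, k3=0.01, lam1=1.0, lam2=1.0,
             g_exp=(1.0, 1.0, 1.0)):
    """Sketch of OEI pooling. pc/g inputs are per-pixel maps; icm1, icm2,
    and cnm are scalar metric values. Constants k1..k3, lam1/lam2, and the
    exponents g_exp are illustrative placeholders."""
    s_pc = (2 * pc1 * pc2 + k1) / (pc1 ** 2 + pc2 ** 2 + k1)   # PC similarity
    s_g = (2 * g1 * g2 + k2) / (g1 ** 2 + g2 ** 2 + k2)        # GM similarity
    s_l = (s_pc ** lam1) * (s_g ** lam2)                       # luminance similarity
    pc_max = np.maximum(pc1, pc2)                              # saliency weight
    pooled = (s_l * pc_max).sum() / pc_max.sum()
    s_c = (2 * icm1 * icm2 + k3) / (icm1 ** 2 + icm2 ** 2 + k3)
    e1, e2, e3 = g_exp
    return (pooled ** e1) * (s_c ** e2) * (cnm ** e3)
```

When the two images agree exactly in every component, each similarity term equals 1 and the pooled OEI is 1, consistent with the stated [0, 1] range.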

Experimental results and discussions
In our experiments, five triplets of multispectral NV images (color RGB, near infrared (NIR), and long wave infrared (LWIR); shown in Figs. 3-7; collected at Alcorn State University) were colorized using the six coloring methods described in Section 2. The three-band input images are shown in Figs. 3-7a, b, and c, respectively. The image resolutions and capture times are given in the figure captions. The RGB and LWIR images were taken by a FLIR SC620 two-in-one camera, which has an LWIR camera (640×480-pixel original resolution and 7.5-13 μm spectral range) and an integrated visible-band digital camera (2048×1536-pixel original resolution). The NIR images were taken by a FLIR SC6000 camera (640×512-pixel original resolution and 0.9-1.7 μm spectral range). The two cameras (SC620 and SC6000) were placed on the same fixture and aimed at the same location. The images were typically captured at sunset and dusk during a fall season. One exception is shown in Fig. 7, which was taken at noon.

Fig. 3. Night-vision coloring comparison (Case# AT008, taken at sunset; 640×480 pixels): (a-c) Color RGB, NIR, and LWIR images, respectively; (d-f) The colorized images using channel-based color fusion of (NIR ⊕ LWIR), statistic-matching, and histogram-matching, respectively; (g-i) The colorized images using joint-HM, stat-match then joint-HM, and LUT-mapping, respectively. The settings in the color mappings of (e-i) are source = (d) and target = (a). Notice that the contrasts of all color images were increased by 10%, and the brightness of (a) and (i) was increased by 10%.
Of course, image registration and fusion (Hill & Batchelor, 2001) were applied to the three-band images shown in Figs. 3-7; manual alignment was employed for the RGB images shown in Figs. 5-6a since they are very dark and noisy. To better present the color images (including the daylight RGB images and the colorized NV images), contrast and brightness adjustments (as described in the figure captions) were applied. Notice that piecewise contrast stretching (Eq. (2)) was used for NIR enhancement. As referenced in Eq. (1d), the fused images (shown elsewhere (Zheng & Essock, 2008)) were obtained using the aDWT algorithm (Zheng et al., 2005). The channel-based color fusion (CBCF, defined in Eqs. (1)) was applied to the NIR and LWIR images (shown in Figs. 3-7b & c), and the results are illustrated in Figs. 3-7d. The images resulting from two-band color fusion (Figs. 3-7d) resemble natural colors, which makes scene classification easier. The paved ground appears reddish since it has strong heat radiation (at dusk) and thus causes strong responses in the LWIR images. In the color-fusion images, the trees, buildings, and grass can be easily distinguished from the ground (parking lots) and sky. For example, the car is clearly identified in Fig. 5d, where the water area (between the ground and trees, shown in cyan) is certainly noticeable. However, it is hard to discern any water area in the original images (Figs. 5a-c).
All color mapping methods were applied to the five triplets, and their results are presented in Figs. 3-7(e-i). In each of Figs. 4-7, panels (d-f) show the colorized images using channel-based color fusion of (NIR ⊕ LWIR), statistic-matching, and histogram-matching, respectively, and panels (g-i) show the colorized images using joint-HM, stat-match then joint-HM, and LUT-mapping, respectively. The settings in the color mappings of (e-i) are source = (d) and target = (a), except that the targets for Figs. 5 and 6 are the daylight images in Figs. 8a and 8b, respectively, due to the dark RGB images in their panels (a). The contrast and brightness adjustments are given in the figure captions.

Fig. 8. Color RGB images for night-vision colorization, taken at the locations of (a) Fig. 5 and (b) Fig. 6 (ST029). Notice that their contrasts were increased by 10%.

The color-mapped images are better than the color fusions but preserve much of the reddish color existing in the source images. The "stat-match then joint-HM" (SM-JHM) means that a joint-HM is performed with inputs of (source = the SM-colored image in Fig. 3e; target = the RGB image in Fig. 3a). The SM-JHM results are presented in Figs. 3-7h, which are sometimes better than the results from either stat-match or joint-HM alone (e.g., Fig. 3h). Visual inspection of colorized images can generally tell which one is better or the best when there are big enough differences between several versions of colorized images. For example, casual inspection may easily confirm that the top three methods are SM, SM-JHM, and LUT; that HM and JHM are poor; and that CBCF is medium. However, subjective evaluation becomes more and more difficult with a larger number of color images, and it is also hard with small or diverse differences. In other words, it is hard for subjective evaluation to give an exact ordering of the six colorization methods. Let us examine the objective evaluations.
The objective evaluations using the OEI metric defined in Eq. (14) (refer to Section 4) are presented in Table 1 (corresponding to Figs. 3-7, respectively), where the rank orders of the metric values (1 for the smallest OEI) are given within round parentheses. Keep in mind that the larger the OEI value of a colorized image, the better quality (i.e., the higher order number) the colorized image has. According to the OEI values in Table 1, the quality order of the colorized images varies with the figures (cases). To obtain an overall impression, the sums of the order numbers over the five cases (i.e., Figs. 3-7) were calculated and are shown in the rightmost column of Table 1. The quality order of each colorization method (6 for the best) is given within curly brackets. The order of the colorization methods from the best to the worst is: SM, SM-JHM, LUT, CBCF, HM, JHM. The subjective evaluations of night vision colorization here are based on casual visual inspection; more qualitative measurements, subjective evaluations (by a group of subjects), and statistical analyses will be conducted in the future. The quantitative (objective) evaluations using the objective evaluation index (OEI) require a reference (daylight) image, so we will continue to improve the OEI metric by relaxing the requirement of a reference image, and we will conduct more comprehensive comparisons.

Conclusions
In this chapter, we review six night-vision colorization techniques: the channel-based color fusion (CBCF) procedure, statistic matching (SM), histogram matching (HM), joint histogram matching (JHM), the stat-match then joint-HM (SM-JHM) method, and the LUT-based approach. An objective evaluation metric for NV colorization, the objective evaluation index (OEI), is introduced. The experimental results with five case analyses showed the order of the colorization methods from the best to the worst to be: SM, SM-JHM, LUT, CBCF, HM, JHM. The order of the objective evaluations complies with the order of the subjective evaluations.
An accurate objective metric such as the OEI will help develop, select, and/or tune a better NV colorization technique. Ideally colorized NV imagery can significantly enhance night vision targeting by human users and will eventually lead to improved performance in remote sensing, nighttime perception, and situational awareness.