Open access peer-reviewed chapter

Image Enhancement Methods for Remote Sensing: A Survey

Written By

Nur Huseyin Kaplan, Isin Erer and Deniz Kumlu

Reviewed: 23 May 2021 Published: 18 August 2021

DOI: 10.5772/intechopen.98527


Abstract

The quality of the images obtained from remote sensing devices is very important for many image processing applications. Most enhancement methods are based on histogram modification or on transforms. Histogram modification based methods aim to modify the histogram of the input image to obtain a more uniform distribution. Transform based methods apply a certain transform to the input image, enhance the image in the transform domain, and then apply the inverse transform. In this work, both histogram modification and transform domain methods have been considered, as well as hybrid methods. Moreover, a new hybrid algorithm is proposed for remote sensing image enhancement. Visual comparisons as well as quantitative comparisons have been carried out for different enhancement methods. For the objective comparison, quality metrics, namely Contrast Gain, Enhancement Measurement, Discrete Entropy, and Absolute Mean Brightness Error, have been used. The comparisons show that the histogram modification methods achieve a better contrast improvement, while transform domain methods perform better in edge enhancement and color preservation. Moreover, hybrid methods which combine the two former approaches have higher potential.

Keywords

  • Remote Sensing
  • Image Enhancement
  • Histogram Modification
  • Transform Domain Methods
  • Image Decomposition

1. Introduction

Widely used remote sensing applications, such as mapping, classification, soil moisture detection, target detection and tracking, etc. require high quality images. To meet the increasing need for higher quality images, image enhancement methods which improve the contrast and edge information of the input image are applied to the raw input images.

Images provided by remote sensing devices have to be enhanced by special methods instead of standard enhancement methods. Since applications like classification, target detection and target tracking are automated applications, the original reflectance values of the input image should be preserved as much as possible, which makes enhancing the remotely sensed image a challenging problem [1, 2]. Remote sensing image enhancement techniques should improve the visibility, contrast and edge information of the image while preserving the original reflectance values.

In recent years, many remote sensing image enhancement methods have been developed to increase the quality of these images. Image enhancement methods can be divided into two main groups as direct and indirect methods [3, 4, 5]. Direct methods aim to enhance the images by using a defined contrast measure [6, 7, 8, 9], while the indirect methods try to improve the dynamic range of the images without a contrast measurement [10, 11, 12, 13, 14, 15].

In direct methods, contrast measurements can be global or local. In general, local measurements give better results [9]. Dhawan et al. [6] proposed a local contrast function based on the relative difference between a central region and a neighboring region for a given pixel. Beghdadi and Negrate [7] introduced an improvement of [6] by defining the contrast with consideration of the edge information. Dash and Chatterji [8] proposed an adaptive contrast enhancement method where the contrast amplification is based on the brightness estimated from local image statistics. Cheng and Xu [9] propose another adaptive enhancement method based on the fuzzy entropy principle and fuzzy set theory.

The direct methods have a low computational cost but accordingly show a poor image enhancement performance. The state-of-the-art methods are generally indirect methods, which provide better enhancement performance compared to the direct methods. The indirect methods can be divided into two subcategories: histogram modification based methods [3, 4, 16, 17, 18, 19, 20, 21, 22] and transform domain methods [1, 2, 21, 23, 24, 25].

The simplest histogram modification method is Histogram Equalization (HE) [16]. In this method, the histogram of the input image is remapped toward a uniform distribution. This method is able to improve the contrast; however, HE-enhanced images generally suffer from undersaturation or oversaturation, which results in poor quality images. To alleviate this problem, more efficient histogram modification methods have been proposed in recent years, such as Bi-Histogram Equalization (BHE) [17] and Recursive Mean-Separate Histogram Equalization (RMSHE) [18]. In both methods, the original histogram of the input image is divided into sub-histograms. After obtaining the sub-histograms, separate histogram equalizations are applied to these sub-histograms. Finally, the divided histograms are merged to obtain the enhanced image [17, 18]. The images obtained by these methods have higher quality compared to the classical HE method; however, the undersaturation and oversaturation problems are not resolved. 2-D histogram based methods have also been proposed for image enhancement [19, 20]. These methods provide better results than the aforementioned methods; however, the computational cost of 2-D histogram creation is too high, which makes them unsuitable for automated applications. Moreover, there are faster methods with higher enhancement performance. Another method proposed in this subcategory is the Adaptive Gamma Correction with Weighting Distribution (AGCWD) method [4]. In this method, a weighted distribution of the original histogram of the input image is obtained, followed by Gamma correction. The most important benefit of this method is its ability to preserve the original reflectance values, which is needed for remotely sensed image enhancement; however, this method too suffers from saturation artifacts. Moreover, the edge information is lost, especially in the brighter regions [2, 21]. Histogram modification methods have a good performance if the histogram of the input image is smooth. Moreover, this group of methods eliminates the lower-scale details [22]. The histogram modification methods have a higher performance for low resolution images and images containing larger-scale details.

Transform domain based image enhancement methods use certain transformations to decompose the image into subbands and improve the contrast by modifying specific components [1, 2, 23, 24, 25]. The first method in this category uses a combination of the discrete wavelet transform and singular value decomposition (DWT-SVD) [23]. In the DWT-SVD method, the discrete wavelet transform (DWT) is first applied both to the input image and to the input image equalized by a general histogram equalization method. Since the details and edge information are kept in the high pass subbands, the method concentrates on the approximation subbands. After obtaining both approximation subbands, a Singular Value Decomposition (SVD) is applied to the approximation subbands of the input and the equalized images. The singular values calculated from the input image are weighted by the singular values of the equalized image to obtain enhanced singular values. Finally, the inverse SVD followed by the inverse DWT is applied to obtain the enhanced image. A more recent transform domain method uses Bilateral Filtering (BF) for image enhancement [1]. The input image is decomposed into its approximation and detail layers by a multiscale BF. Finally, the obtained detail layers are added to the original image in a weighted manner to obtain an edge-enhanced image. Another method is the remote sensing image enhancement based on the hazy image model [2]. In this method, the commonly used hazy image model [26] is adapted for image enhancement applications. Here, the two unknown parameters of the hazy image model, namely the airlight and the transmission, are estimated from simple statistical properties of the input image to obtain the enhanced image. A more recent work is based on Robust Guided Filtering [24]. In this method, the robust guided filter described in [27] is applied to the input image, and the difference between the original image and the filtered image is considered as a detail subband, as in DWT. The detail subbands are amplified and added to the original image to obtain the final enhanced image. Although they show a better performance, the methods in this group suffer from blocking artifacts or, in some cases, are unable to enhance the image globally [22]. The overall performance of transform domain methods is better than that of the histogram modification methods. Moreover, the performance of this group of methods is significantly better for high resolution images and images containing both low and high scale details. There are also hybrid methods combining histogram and transform methods. One hybrid method is based on Regularized Histogram Equalization and the Discrete Cosine Transform (RHE-DCT) [25]. In this technique, a global enhancement is first applied to the input image by a Regularized Histogram Equalization (RHE), where the equalization is made by using a sigmoid function. After obtaining the equalized image, the Discrete Cosine Transform (DCT) is applied to it to obtain the DCT coefficients. These coefficients are then modified to locally improve the contrast of the image. Finally, the inverse DCT is applied to obtain the enhanced image.

In addition to all these methods, a hybrid algorithm combining the BF [1] and HIM [2] methods is proposed. In this hybrid algorithm, the BF method described above is applied to the image to obtain a globally enhanced image. Then, the HIM method is applied block by block to this globally enhanced image to obtain a local enhancement.


2. Remote sensing image enhancement methods

The quality of remote sensing images depends upon numerous factors, such as noise, illumination or equipment conditions during the acquisition procedure [28]. The data obtained by optic sensors (multispectral, hyperspectral, panchromatic sensors) are degraded by atmospheric effects and instrumental noises, namely thermal (Johnson) noise, quantization noise and shot (photon) noise, which corrupt the spectral bands to varying degrees [29]. On the other hand, SAR images (radar sensors), which offer many benefits such as operating around the clock and in all weather conditions, suffer from multiplicative speckle noise [28].

These degradations reduce the contrast in the resulting images and can highly affect human perception or the accuracy of computer assisted applications [25]. Thus, contrast enhancement, besides noise removal, constitutes a primary step for various applications of remote sensing image processing for better information representation and visual perception.

2.1 Adaptive gamma correction with weighting distribution (AGCWD)

In this method, a weighting distribution of the original histogram of the input image is obtained followed by Gamma correction.

First, an adaptive Gamma correction is applied to the input image:

$$T(l) = l_{\max}\left(\frac{l}{l_{\max}}\right)^{\gamma} = l_{\max}\left(\frac{l}{l_{\max}}\right)^{1-F(l)} \tag{1}$$

Here, l is the intensity value of the current pixel and l_max is the maximum intensity value of the input image. γ is a varying adaptive parameter equal to 1 − F(l), where F(l) is the cumulative distribution function. The cumulative distribution function is used so that the Gamma parameter follows the intensity changes between the pixels of the image.

In order to avoid adverse effects, a weighting distribution function is used to slightly modify the histogram as follows:

$$f_w(l) = f_{\max}\left(\frac{f(l)-f_{\min}}{f_{\max}-f_{\min}}\right)^{\alpha} \tag{2}$$

Here, α is the adjustment parameter, f is the probability density function, and f_max and f_min are the maximum and minimum values of f. Using (2), the modified cumulative distribution function F_w is evaluated as:

$$F_w(k) = \sum_{l=0}^{k}\frac{f_w(l)}{\sum f_w} \tag{3}$$

where

$$\sum f_w = \sum_{l=0}^{l_{\max}} f_w(l) \tag{4}$$

Finally, the Gamma parameter of (1) is modified as:

$$\gamma = 1 - F_w(l) \tag{5}$$

The modified Gamma parameter and Eq. (1) are used to obtain the enhanced image.
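As a rough illustration, the following minimal NumPy sketch applies Eqs. (1)–(5) through a lookup table; the 8-bit grayscale input, the choice α = 0.5, and all function and variable names are our assumptions, not taken from the reference implementation of [4].

```python
# Minimal AGCWD sketch (assumptions: 8-bit grayscale input, alpha = 0.5;
# names are illustrative, not from the original paper's code).
import numpy as np

def agcwd(img, alpha=0.5):
    """Adaptive gamma correction with weighting distribution, Eqs. (1)-(5)."""
    img = img.astype(np.uint8)
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    f = hist / hist.sum()                                              # pdf of the input
    f_w = f.max() * ((f - f.min()) / (f.max() - f.min())) ** alpha     # Eq. (2)
    F_w = np.cumsum(f_w) / f_w.sum()                                   # Eqs. (3)-(4)
    gamma = 1.0 - F_w                                                  # Eq. (5)
    l_max = max(int(img.max()), 1)
    levels = np.arange(256) / l_max
    lut = np.clip(l_max * levels ** gamma, 0, 255)                     # Eq. (1) as a lookup table
    return lut[img].astype(np.uint8)
```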

2.2 Discrete wavelet transform and singular value decomposition based method (DWT-SVD)

In this method, a combination of the discrete wavelet transform (DWT) and singular value decomposition (SVD) is used for enhancement purposes. In the classical one-dimensional (1D) DWT, the input signal is decomposed into its low (L) and high (H) frequency components. In order to perform a two-dimensional (2D) transform, the 1D DWT is applied to the rows of the image followed by the columns of the image, or vice versa. After applying the 2D DWT, four different subbands are obtained, namely LL, LH, HL, and HH. The approximation subband LL contains the low frequency components, while the diagonal subband HH contains the high frequency components for both rows and columns of the image. The horizontal and vertical subbands LH and HL contain the low frequency components for the rows and the high frequency components for the columns, and vice versa, respectively.

SVD is used to decompose a matrix into two orthogonal square matrices (U and V) and a diagonal matrix containing the singular values Σ as shown:

$$I = U_I\,\Sigma_I\,V_I^{T} \tag{6}$$

The enhancement method first applies a general histogram equalization to the input image I to obtain the equalized image Ĩ. Then the discrete wavelet transform is applied to both the input and the equalized images so as to obtain the subbands LL_I, LH_I, HL_I, HH_I and LL_Ĩ, LH_Ĩ, HL_Ĩ, HH_Ĩ, respectively.

Since the rough information about the image is present in the LL subbands, SVD is applied to these subbands to obtain the singular values. As aforementioned, the singular values carry the intensity information of the image; therefore, the equalization is performed on the singular values. Here, the Σ components of LL_I and LL_Ĩ are weighted to obtain a correction coefficient ξ:

$$\xi = \frac{\max\left(\Sigma_{LL_{\tilde{I}}}\right)}{\max\left(\Sigma_{LL_{I}}\right)} \tag{7}$$

where Σ_LL_Ĩ is the singular value matrix of the equalized image, derived from its LL_Ĩ subband, and Σ_LL_I is the singular value matrix of the input image, obtained from its LL_I subband. After determining the correction coefficient, the corrected singular value matrix is obtained as:

$$\bar{\Sigma} = \xi\,\Sigma_{LL_I} \tag{8}$$

The new LL subband is then constructed from the corrected singular value matrix as:

$$\overline{LL} = U_{LL_I}\,\bar{\Sigma}\,V_{LL_I}^{T} \tag{9}$$

After constructing the new LL subband, the enhanced image is obtained by applying the inverse DWT to this new LL subband together with the detail subbands of the original image.
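A minimal sketch of this pipeline, assuming a grayscale image in [0, 1], a Haar wavelet, and a simple CDF-based histogram equalization (PyWavelets and NumPy only; the helper names are illustrative), could look as follows.

```python
# Minimal DWT-SVD enhancement sketch, Eqs. (6)-(9).
import numpy as np
import pywt

def histogram_equalize(img, bins=256):
    hist, edges = np.histogram(img, bins=bins, range=(0.0, 1.0))
    cdf = np.cumsum(hist) / img.size
    return np.interp(img.ravel(), edges[:-1], cdf).reshape(img.shape)

def dwt_svd_enhance(img):
    eq = histogram_equalize(img)
    LL,  (LH,  HL,  HH)  = pywt.dwt2(img, 'haar')    # subbands of the input image
    LLe, (LHe, HLe, HHe) = pywt.dwt2(eq,  'haar')    # subbands of the equalized image
    U, s, Vt   = np.linalg.svd(LL,  full_matrices=False)
    _, s_eq, _ = np.linalg.svd(LLe, full_matrices=False)
    xi = s_eq.max() / s.max()                        # correction coefficient, Eq. (7)
    LL_new = (U * (xi * s)) @ Vt                     # Eqs. (8)-(9)
    return pywt.idwt2((LL_new, (LH, HL, HH)), 'haar')
```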

2.3 Regularized histogram equalization and discrete cosine transform based method (RHE-DCT)

This method basically consists of two steps: Regularized Histogram Equalization (RHE) followed by Discrete Cosine Transform (DCT). The first one performs a global contrast enhancement and the second one enhances the local contrast.

RHE performs a histogram equalization of the input image in a regularized manner:

$$f(k) = s(k)\,\big(1 + h(k)\big) \tag{10}$$

Here, f(k) is the probability density function of the equalized histogram, h(k) is the normalized histogram of the input image, and s(k) is the sigmoid function defined as:

$$s(k) = \frac{1}{1 + e^{-(k-1)}} - \frac{1}{2} \tag{11}$$

By this modification, the minimum value of the equalized image is assured to be equal to 0. The f(k) obtained is normalized as:

$$f(k) \leftarrow \frac{f(k)}{\sum_{t=1}^{K} f(t)} \tag{12}$$

Here, K is the number of gray levels. The cumulative distribution function F(k) is obtained as:

$$F(k) = \sum_{t=1}^{k} f(t) \tag{13}$$

and new gray levels are evaluated as:

$$y(k) = F(k)\,\big(y_{\max} - y_{\min}\big) + y_{\min} \tag{14}$$

Finally, the globally equalized image Y_eq is obtained by applying these new gray levels through a standard lookup-table-based HE procedure.

In order to perform a local enhancement, the DCT coefficients of the globally equalized image are used. For this purpose, the DCT is first applied to the equalized image as:

$$C(h,\omega) = c(h)\,c(\omega)\sum_{k=0}^{M-1}\sum_{l=0}^{N-1} Y_{eq}(k,l)\cos\!\left(\frac{(2k+1)h\pi}{2M}\right)\cos\!\left(\frac{(2l+1)\omega\pi}{2N}\right) \tag{15}$$

c(h) and c(ω) are computed by:

$$c(h) = \begin{cases}\sqrt{1/M}, & h = 0\\ \sqrt{2/M}, & 1 \le h \le M-1\end{cases} \tag{16}$$
$$c(\omega) = \begin{cases}\sqrt{1/N}, & \omega = 0\\ \sqrt{2/N}, & 1 \le \omega \le N-1\end{cases} \tag{17}$$

The lower absolute values of C should be adjusted to perform local enhancement, while the higher values should be maintained to avoid drastic changes. In this way, the new DCT coefficients are obtained as:

$$D(h,\omega) = \begin{cases} C(h,\omega), & |C(h,\omega)| > 0.01\,|C(0,0)|\\ \alpha\, C(h,\omega), & |C(h,\omega)| \le 0.01\,|C(0,0)|\end{cases} \tag{18}$$

Here α is the adjustment parameter and is automatically determined as:

$$\alpha = 1 + \frac{\mathrm{std}(Y_{global}) - \mathrm{std}(X)}{2^{B} - 1} \tag{19}$$

After obtaining the new DCT coefficients, inverse DCT is applied to obtain the final enhanced image.
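The following sketch strings Eqs. (10)–(19) together, assuming an 8-bit grayscale input, B = 8, and the sigmoid form reconstructed in Eq. (11); it illustrates the flow rather than reproducing the reference implementation of [25].

```python
# Minimal RHE-DCT sketch (assumptions: 8-bit grayscale input, B = 8,
# output gray range taken from the input; names are illustrative).
import numpy as np
from scipy.fft import dctn, idctn

def rhe_dct_enhance(img):
    img = img.astype(np.float64)
    K = 256
    hist, _ = np.histogram(img, bins=K, range=(0, K))
    h = hist / hist.sum()
    k = np.arange(1, K + 1)
    s = 1.0 / (1.0 + np.exp(-(k - 1))) - 0.5           # Eq. (11)
    f = s * (1.0 + h)                                   # Eq. (10)
    f = f / f.sum()                                     # Eq. (12)
    F = np.cumsum(f)                                    # Eq. (13)
    y = F * (img.max() - img.min()) + img.min()         # Eq. (14)
    Y = y[img.astype(np.uint8)]                         # globally equalized image Y_eq
    C = dctn(Y, norm='ortho')                           # Eq. (15), orthonormal DCT-II
    alpha = 1.0 + (Y.std() - img.std()) / (2 ** 8 - 1)  # Eq. (19), B = 8 assumed
    D = np.where(np.abs(C) > 0.01 * np.abs(C[0, 0]), C, alpha * C)   # Eq. (18)
    return idctn(D, norm='ortho')
```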

2.4 Bilateral filtering based method (BF)

This method is based on multiscale bilateral filtering. In classical bilateral filtering, the filter output is determined as:

$$BF(I)_p = \frac{1}{W_p}\sum_{q\in S} G_{\sigma_s}\!\left(\lVert p-q\rVert\right)\, G_{\sigma_r}\!\left(\lvert I_p - I_q\rvert\right)\, I_q \tag{20}$$

where

$$W_p = \sum_{q\in S} G_{\sigma_s}\!\left(\lVert p-q\rVert\right)\, G_{\sigma_r}\!\left(\lvert I_p - I_q\rvert\right) \tag{21}$$

Here, G_σs and G_σr are Gaussian kernels controlling the spatial and range (intensity) weighting, respectively. I_p is the intensity value of the pixel at location p, and I_q is the intensity value of a neighboring pixel at location q within the window S. The difference between the input image and the filter output gives the detail layer of the image.

$$D^{1} = I - BF(I) \tag{22}$$

Here, D^1 is the first detail layer of the image. In order to continue the decomposition, bilateral filtering is applied again to the filter output. Here, to guarantee shift invariance, σ_s is doubled and σ_r is halved at each level. The detail layer of level j is obtained by subtracting two adjacent filter outputs:

$$D^{j} = BF^{\,j-1}(I) - BF^{\,j}(I) \tag{23}$$

Here, j corresponds to the decomposition level.

In order to reconstruct the input image from an L-level decomposition, one can simply add all detail layers to the final filtering output:

$$I = \sum_{j=1}^{L} D^{j} + BF^{\,L}(I) \tag{24}$$

The bilateral filtering based method first decomposes the input image as in (22)–(24).

After obtaining the detail layers for L levels, the details are amplified and added, in a weighted manner, to the final filtering output to obtain the enhanced image:

$$I_E = BF^{\,L}(I) + \sum_{j=1}^{L}\omega_j D^{j} \tag{25}$$

Here, I_E is the enhanced image and ω_j are the weighting factors for the corresponding detail layers D^j.

The parameter determination is very important in order to achieve a good enhancement result. Therefore, the σ_r, σ_s, and S parameters of the bilateral filter, as well as the decomposition level and the weights, have to be determined. To achieve this, a comparison between the enhancement results obtained with differing parameters is made. As a result of this comparison, σ_r is chosen as 0.6, σ_s is chosen as 1.8, S is chosen as a 5×5 window, the decomposition level is chosen as 4, and the weights (ω_1, ω_2, ω_3, ω_4) are all chosen as 2 [1].
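A compact sketch of the multiscale decomposition and detail boosting, using OpenCV's bilateralFilter as the BF operator and the parameter values quoted above, is given below; a grayscale float32 input in [0, 1] is assumed.

```python
# Minimal multiscale-BF enhancement sketch, Eqs. (20)-(25).
import cv2
import numpy as np

def bf_enhance(img, levels=4, weight=2.0, sigma_s=1.8, sigma_r=0.6, d=5):
    img = img.astype(np.float32)
    base, details = img, []
    for _ in range(levels):
        filtered = cv2.bilateralFilter(base, d, sigma_r, sigma_s)   # Eqs. (20)-(21)
        details.append(base - filtered)                             # Eqs. (22)-(23)
        base = filtered
        sigma_s, sigma_r = 2 * sigma_s, sigma_r / 2                 # coarser level
    return base + weight * sum(details)                             # Eq. (25)
```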

2.5 Adaptive cuckoo search based enhancement algorithm (ACSEA)

In this method, the image enhancement is performed by optimizing a predefined enhancement kernel [30]. The enhancement process of ACSEA is given below:

$$I_E(i,j) = \left[\mu_L(i,j)\right]^{a} + F_e(i,j)\left[I(i,j) - c\,\mu_L(i,j)\right] \tag{26}$$

where

$$F_e(i,j) = \frac{k\,\mu_G}{\sigma_L(i,j) + b} \tag{27}$$

Here, F_e(i,j) is the image enhancement function, calculated from the mean value and the standard deviation of the image, and (i,j) is the location of the current pixel. σ_L(i,j) is the local standard deviation and μ_L(i,j) is the local mean value calculated in an N×N window centered at (i,j), while μ_G is the global mean value. The method focuses on optimizing the parameters a, b, c, and k, where 0 ≤ a ≤ 1.5, 0 ≤ b ≤ 0.5, 0 ≤ c ≤ 1, and 0.5 ≤ k ≤ 1.5.

In order to optimize the enhancement formula given in (26), a chaotic initialization is made and an objective fitness function is used as given below:

$$F(I_E) = \log\!\Big(\log\!\big(E(I_{ES}) + e\big)\Big)\cdot \frac{n_e(I_{ES})}{M \times N}\cdot e^{H(I_E)} \tag{28}$$

where

$$I_{ES} = \sqrt{\left(\nabla_x I_E\right)^2 + \left(\nabla_y I_E\right)^2} \tag{29}$$

In (28), E(·) is the expected value operator, H(·) is the entropy operator, n_e(·) denotes the number of edge pixels, and M×N is the image size. In (29), ∇_x and ∇_y are the horizontal and vertical gradients; I_ES is the Sobel edge-detected image.

In order to obtain the enhanced image I_E, the objective function given in (28) is optimized with a chaotic initialization so as to obtain the best enhancement result.
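For illustration, the sketch below evaluates the enhancement kernel (26)–(27) and the fitness (28)–(29) for a single candidate parameter set; the cuckoo search loop itself is omitted, and the 3×3 window and the edge threshold are our assumptions, not values from [30].

```python
# Sketch of the ACSEA enhancement kernel and fitness for one candidate (a, b, c, k).
import numpy as np
from scipy.ndimage import uniform_filter, sobel

def acsea_transform(img, a, b, c, k, win=3):
    mu_local = uniform_filter(img, size=win)                          # local mean
    var_local = uniform_filter(img ** 2, size=win) - mu_local ** 2
    sigma_local = np.sqrt(np.maximum(var_local, 0))
    F_e = k * img.mean() / (sigma_local + b)                          # Eq. (27)
    return mu_local ** a + F_e * (img - c * mu_local)                 # Eq. (26)

def fitness(enhanced, edge_thresh=0.1):
    edges = np.hypot(sobel(enhanced, axis=0), sobel(enhanced, axis=1))  # Eq. (29)
    hist, _ = np.histogram(enhanced, bins=256)
    p = hist[hist > 0] / hist.sum()
    H = -(p * np.log2(p)).sum()                                       # entropy of I_E
    n_edges = (edges > edge_thresh * edges.max()).sum()               # edge-pixel count (assumed rule)
    return np.log(np.log(edges.sum() + np.e)) * n_edges / enhanced.size * np.exp(H)  # Eq. (28)
```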

2.6 Hazy image model based enhancement (HIM)

This method is based on the commonly used hazy image model [26, 31].

$$I = J\,t + A\,(1 - t) \tag{30}$$

where I is the input image, A is the airlight coefficient, t is the transmission map, and J is the haze-free image. In order to obtain the haze-free image J, A and t have to be estimated.

For dehazing purposes, the airlight coefficient is generally estimated from the brightest pixels of the input image. For enhancement, the mean of the image is instead taken as the airlight coefficient [2].

$$A = \frac{1}{KL}\sum_{k=1}^{K}\sum_{l=1}^{L} I(k,l) \tag{31}$$

Here, I is the input image with dimensions K×L and I(k,l) is the intensity value of the pixel at location (k,l). In dehazing algorithms, the transmission map is generally estimated from the airlight coefficient and the input image normalized by it. In a similar manner, this method normalizes the image with the estimated airlight and estimates the transmission as:

$$t = 1 - \omega\,\frac{I}{A} \tag{32}$$

Here, ω is an arbitrary coefficient, which can be set to the standard deviation σ of the input image [2]. Finally, the enhanced image is obtained by solving (30) for J:

$$J = \frac{I - A\,(1 - t)}{t} \tag{33}$$
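Because the method reduces to three closed-form steps, a few lines of NumPy suffice as a sketch; a grayscale float image in (0, 1] is assumed, and the clipping of t is added only to avoid division by values near zero.

```python
# Minimal HIM enhancement sketch, Eqs. (31)-(33).
import numpy as np

def him_enhance(img, eps=1e-6):
    A = img.mean()                                  # airlight, Eq. (31)
    omega = img.std()                               # omega taken as the image std, as stated above
    t = 1.0 - omega * (img / (A + eps))             # transmission, Eq. (32)
    t = np.clip(t, eps, 1.0)                        # numerical safeguard (assumption)
    return (img - A * (1.0 - t)) / t                # enhanced image, Eq. (33)
```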

2.7 Robust guided filtering based method (SDF)

This method uses the Robust Guided Filtering described in [27], which uses two guidance images, namely a dynamic guidance and a static guidance. In order to perform Robust Guided Filtering, the following cost function should be minimized:

$$u = \arg\min_{u}\left\{\sum_{i} c_i\,(u_i - f_i)^2 + \lambda\,\Omega_{u,g}\right\} \tag{34}$$

Here, f is the input image, u is the dynamic guidance, and g is the static guidance. λ is the regularization parameter and c_i ≥ 0 is the confidence level. The regularizer Ω_{u,g} can be defined as [27]:

$$\Omega_{u,g} = \sum_{(i,j)\in N} \phi_{\mu}(g_i - g_j)\,\varphi_{\nu}(u_i - u_j) \tag{35}$$

where

$$\varphi_{\nu}(x) = \frac{1 - \phi_{\nu}(x)}{\nu} \quad\text{and}\quad \phi_{\mu}(x) = e^{-\mu x^{2}} \tag{36}$$

N is the neighborhood, whose size is 8×8, while μ and ν are parameters controlling the smoothness level.

In order to perform image enhancement, a multi-scale decomposition based on Robust Guided Filtering, similar to the multi-scale bilateral filtering of [1], is used [24]. The filtering output is considered as the first approximation layer of the original image:

$$A_1 = SDF(I) \tag{37}$$

Here, I is the input image, A_1 is the first level approximation layer, and the SDF operator stands for Robust Guided Filtering. In order to obtain further approximation layers, SDF is applied to the previous approximation layer:

$$A_l = SDF(A_{l-1}) \tag{38}$$

with the initial value A_0 = I. The difference between two adjacent approximation layers gives the detail layer of the corresponding level:

$$D_l = A_{l-1} - A_l \tag{39}$$

One can obtain the original image by simply adding the detail layers to the final level approximation layer.

$$I = \sum_{j=1}^{L} D_j + A_L \tag{40}$$

SDF based enhancement first decomposes the input image using (37)–(39).

After obtaining the detail layers, the details are amplified and added, in a weighted manner, to the final approximation layer to obtain the enhanced image I_E:

$$I_E = \sum_{j=1}^{L}\omega_j D_j + A_L \tag{41}$$

The decomposition level and the weights are determined by comparing results for different numbers of levels and weights, and the best-performing values are then used for all images. The decomposition level is chosen as 4, and the weights (ω_1, ω_2, ω_3, ω_4) are all chosen as 2 [24].
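A sketch of this multi-scale detail boosting is given below; since the robust SDF filter of [27] is not available as a standard routine, OpenCV's guided filter (opencv-contrib, cv2.ximgproc) is substituted purely as a stand-in edge-preserving smoother, which is an assumption on our part.

```python
# Sketch of SDF-style multi-scale detail boosting, Eqs. (37)-(41),
# with cv2.ximgproc.guidedFilter standing in for the robust SDF filter.
import cv2
import numpy as np

def sdf_style_enhance(img, levels=4, weight=2.0, radius=8, eps=1e-3):
    img = img.astype(np.float32)
    approx, details = img, []
    for _ in range(levels):
        smoothed = cv2.ximgproc.guidedFilter(approx, approx, radius, eps)  # A_l, Eq. (38)
        details.append(approx - smoothed)                                  # D_l, Eq. (39)
        approx = smoothed
    return approx + weight * sum(details)                                  # Eq. (41)
```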

2.8 Hybrid bilateral filtering and hazy image model method (BF-HIM)

The BF based enhancement method [1] provides good enhancement; however, color distortion is present. The HIM method [2], on the other hand, has good color preservation but a lower enhancement performance. Therefore, a hybrid method combining these two methods is a good candidate for obtaining both a good enhancement performance and good color preservation.

The hybrid method first applies the multi-scale bilateral filtering given in (24) to the input image to obtain the bilateral filtering outputs and detail layers. Since the HIM step follows, the decomposition level is chosen as 2. Then, the detail layers are amplified as given in (25) to obtain a pre-enhanced image. This pre-enhanced image is divided into non-overlapping blocks, and the HIM method given above is applied to these blocks separately to perform a local enhancement. Finally, the enhanced blocks are combined to construct the final enhancement result.

Here, the choice of the block size is important. A smaller block size is expected to give a better local enhancement result. Therefore, the block size is chosen as 3×3.
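Reusing the bf_enhance and him_enhance sketches from the previous sections, the hybrid scheme can be outlined as follows (2 BF levels and 3×3 non-overlapping blocks, as stated above; boundary blocks simply take whatever pixels remain, which is an assumption).

```python
# Minimal BF-HIM hybrid sketch (grayscale float image in (0, 1] assumed).
import numpy as np

def bf_him_enhance(img, block=3):
    pre = bf_enhance(img, levels=2)                    # global step: multiscale BF, Eq. (25)
    out = np.empty_like(pre)
    K, L = pre.shape
    for i in range(0, K, block):
        for j in range(0, L, block):
            patch = pre[i:i + block, j:j + block]
            out[i:i + block, j:j + block] = him_enhance(patch)   # local step, Eqs. (31)-(33)
    return out
```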


3. Evaluation criteria

It is possible to assess the performance of an image enhancement method visually. However, a visual conclusion may not be objective. Therefore, in order to make objective comparisons, evaluation criteria have been developed. The choice of criteria is also important: each criterion gives an idea about one property of the resulting image, so criteria covering different properties should be used, and all of them should be considered together to obtain an overall impression of the image. The criteria presented below give an idea of the performance of the enhancement methods, but they should all be considered together, along with the visual results.

3.1 Contrast gain (CG)

The first criterion used to measure the performance of an enhancement method is the Contrast Gain (CG) [32]. This criterion focuses on the contrast improvement of the image as follows:

$$CG = \frac{\bar{C}_Y}{\bar{C}_X} \tag{42}$$

where C̄ is the average of the local Michelson contrast, calculated over 3×3 windows within an image; the Michelson contrast of a window is given as:

$$C = \frac{I_{\max} - I_{\min}}{I_{\max} + I_{\min}} \tag{43}$$

The higher CG value indicates that the contrast improvement is better.
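A simple way to compute CG is sketched below, assuming non-overlapping 3×3 windows and a small constant to avoid division by zero (both assumptions on our part).

```python
# Contrast Gain sketch, Eqs. (42)-(43): X is the input image, Y the enhanced image.
import numpy as np

def mean_michelson_contrast(img, win=3, eps=1e-6):
    contrasts = []
    for i in range(0, img.shape[0] - win + 1, win):
        for j in range(0, img.shape[1] - win + 1, win):
            block = img[i:i + win, j:j + win]
            contrasts.append((block.max() - block.min()) /
                             (block.max() + block.min() + eps))      # Eq. (43)
    return np.mean(contrasts)

def contrast_gain(X, Y):
    return mean_michelson_contrast(Y) / mean_michelson_contrast(X)   # Eq. (42)
```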

3.2 Enhancement measurement (EME)

This criterion also considers the contrast improvement within the enhanced image and is defined as follows [22]:

$$EME_{\alpha,k_1 k_2}(\varphi) = \frac{1}{k_1 k_2}\sum_{l=1}^{k_1}\sum_{k=1}^{k_2} \alpha\left(\frac{I^{\varphi}_{\max}(k,l)}{I^{\varphi}_{\min}(k,l) + c}\right)^{\alpha} \ln\!\left(\frac{I^{\varphi}_{\max}(k,l)}{I^{\varphi}_{\min}(k,l) + c}\right) \tag{44}$$

Here, the image I is split into k_1 × k_2 blocks. I_max(k,l) and I_min(k,l) are the maximum and minimum values within block (k,l), while c is a small constant to avoid division by zero. EME_{α,k1k2}(φ) is called the Enhancement Measurement of Entropy with respect to the transform φ.

The higher EME value indicates that the contrast improvement is better.
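A sketch of EME over a k1 × k2 block grid is given below; the identity transform is used for φ, α and c are set to illustrative values, and each block is assumed to have a nonzero maximum.

```python
# EME sketch, Eq. (44), with an identity transform for phi.
import numpy as np

def eme(img, k1=8, k2=8, alpha=1.0, c=1e-4):
    h, w = img.shape[0] // k1, img.shape[1] // k2
    total = 0.0
    for l in range(k1):
        for k in range(k2):
            block = img[l * h:(l + 1) * h, k * w:(k + 1) * w]
            ratio = block.max() / (block.min() + c)
            total += alpha * ratio ** alpha * np.log(ratio)   # per-block term of Eq. (44)
    return total / (k1 * k2)
```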

3.3 Discrete entropy (DE)

Discrete entropy of an image can be evaluated as:

$$DE = -\sum_{k=1}^{K} p(x_k)\,\log\!\big(p(x_k)\big) \tag{45}$$

Here, p(x_k) is the probability of occurrence of the gray level x_k. A higher DE value indicates a more uniformly distributed histogram, which may indicate a higher contrast.
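For an 8-bit image, DE can be computed directly from the normalized histogram, as in the sketch below (log base 2 assumed).

```python
# Discrete entropy sketch, Eq. (45), for an 8-bit grayscale image.
import numpy as np

def discrete_entropy(img):
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist[hist > 0] / hist.sum()
    return -(p * np.log2(p)).sum()
```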

3.4 Absolute mean brightness error (AMBE)

Absolute Mean Brightness Error (AMBE) [18] is an error function calculated between image X and image Y as:

$$AMBE = \frac{1}{MN}\sum_{m=1}^{M}\sum_{n=1}^{N}\big|X(m,n) - Y(m,n)\big| \tag{46}$$

Here, M and N are the dimensions of the images and (m,n) is the pixel location.

A lower AMBE value indicates better brightness preservation.
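AMBE is a one-liner over same-sized arrays scaled to the same range, as sketched below.

```python
# AMBE sketch, Eq. (46): mean absolute difference between input X and enhanced Y.
import numpy as np

def ambe(X, Y):
    return np.abs(X.astype(np.float64) - Y.astype(np.float64)).mean()
```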


4. Experimental setup

The enhancement methods described above have been applied to several images. Comparisons of the methods are made both visually and quantitatively. Before applying the enhancement methods, the parameters of each method are determined.

4.1 Visual comparison

Visual comparisons are performed for different images and they are available online.1

The first image used for comparison is a tank image taken by a digital imaging system, as shown in Figure 1(a). Figure 1(b)–(i) show the enhancement results obtained by the AGCWD, DWT-SVD, RHE-DCT, ACSEA, BF, HIM, SDF, and BF-HIM methods, respectively. For a closer look, a zoomed version of the area inside the red square is given inside the green square. As seen in Figure 1(b), the AGCWD method has improved the contrast; however, its color preservation is not good. The contrast improvement of DWT-SVD seems to be low, as seen in Figure 1(c). Even though its color preservation seems to be good, the edge information is lost, as seen in the zoomed area. The RHE-DCT method, shown in Figure 1(d), has a good contrast improvement; however, the color preservation is not good. The edge enhancement of RHE-DCT is better than that of the AGCWD and DWT-SVD methods. The ACSEA method in Figure 1(e) demonstrates a better color preservation; however, the contrast improvement is not as good as that of the other methods. Moreover, the edge enhancement is lower than that of the RHE-DCT method. As seen in Figure 1(f), the BF method preserves the color similarly to the ACSEA method and has a good edge enhancement performance. Figure 1(g) shows that the HIM method has a good color preservation capability; however, its edge enhancement performance is not good compared to the BF method. The SDF method, given in Figure 1(h), has a very good edge enhancement performance, but its color preservation is lower than that of the ACSEA, HIM and BF methods. As demonstrated in Figure 1(i), the hybrid BF-HIM method preserves the colors, staying close to the ACSEA and BF methods, and enhances the edge information better than the former methods.

Figure 1.

(a) Input image, enhancement results for (b) AGCWD (c) DWT-SVD, (d) RHE-DCT, (e) ACSEA, (f) BF, (g) HIM, (h) SDF, and (i) BF-HIM methods.

The second image used for comparison is an aerial image taken by a digital imaging system mounted on an air vehicle, as shown in Figure 2(a). Figure 2(b)–(i) show the enhancement results obtained by the AGCWD, DWT-SVD, RHE-DCT, ACSEA, BF, HIM, SDF, and BF-HIM methods, respectively. For a closer look, a zoomed version of the area inside the red square is given in the green square.

Figure 2.

(a) Input image, enhancement results for (b) AGCWD (c) DWT-SVD, (d) RHE-DCT, (e) ACSEA, (f) BF, (g) HIM, (h) SDF, and (i) BF-HIM methods.

As seen in Figure 2(b), the AGCWD method has improved the contrast; however, its color preservation is not good. The car within the zoomed area is visible. The contrast improvement of DWT-SVD is better than that of the AGCWD method, as seen in Figure 2(c). Its color preservation is lower than that of the AGCWD method, and the visibility of the car in the zoomed area is not as good as with the AGCWD method. The RHE-DCT method, shown in Figure 2(d), has a good contrast improvement and better color preservation than the AGCWD and DWT-SVD methods. The edge enhancement of RHE-DCT is close to that of the AGCWD method, as seen in the zoomed area. The ACSEA method in Figure 2(e) demonstrates a better color preservation; however, the contrast improvement is not as good as that of the other methods. Moreover, the edge enhancement is lower than that of the RHE-DCT method. As seen in Figure 2(f), the BF method preserves the color similarly to the ACSEA method and has a better edge enhancement performance than the RHE-DCT method, as seen in the zoomed area. Figure 2(g) shows that the HIM method has a good color preservation capability; however, its edge enhancement performance is not good compared to the BF method. The SDF method, given in Figure 2(h), has a very good edge enhancement performance, but its color preservation is lower than that of the ACSEA, HIM and BF methods. As demonstrated in Figure 2(i), the hybrid BF-HIM method preserves the colors, staying close to the HIM method. A closer look demonstrates that its edge improvement is better than that of the former methods.

The final image used for comparison is an aerial image of an area containing a harbor and an airport, taken by a digital imaging system mounted on an air vehicle, as shown in Figure 3(a). Figure 3(b)–(i) show the enhancement results obtained by the AGCWD, DWT-SVD, RHE-DCT, ACSEA, BF, HIM, SDF, and BF-HIM methods, respectively. For a closer look, the area shown in the red square is zoomed and given within the green square.

Figure 3.

(a) Input image, enhancement results for (b) AGCWD (c) DWT-SVD, (d) RHE-DCT, (e) ACSEA, (f) BF, (g) HIM, (h) SDF, and (i) BF-HIM methods.

As seen in Figure 3(b), the AGCWD method has improved the contrast; however, its color preservation is not good. Moreover, the edge information is lost, as seen in the zoomed area. The contrast improvement of DWT-SVD seems to be low, as seen in Figure 3(c). Even though its color preservation seems to be good, the edges have not been improved, as seen in the zoomed area. The RHE-DCT method, shown in Figure 3(d), has a good contrast improvement, and its color preservation is good. The edge enhancement of RHE-DCT is better than that of the AGCWD and DWT-SVD methods. The ACSEA method in Figure 3(e) demonstrates a good color preservation and a fine contrast improvement. Moreover, its edge improvement seems to be better than that of the RHE-DCT method. As seen in Figure 3(f), the BF method preserves the color better than the ACSEA method and has a good edge enhancement performance. Figure 3(g) shows that the HIM method has a good color preservation capability; however, its edge enhancement performance is not good enough compared to the BF method. The SDF method, given in Figure 3(h), has a very good edge enhancement performance, but its color preservation is lower than that of the ACSEA, HIM and BF methods. As demonstrated in Figure 3(i), the hybrid BF-HIM method preserves the colors, staying close to the ACSEA method, and enhances the edge information better than the former methods.

For an objective visual evaluation, the profiles along the horizontal lines marked in Figure 1(a), Figure 2(a) and Figure 3(a) are extracted for the enhancement methods; the profiles for the original image and the enhanced images are given in Figure 4(a)–(c), respectively.

Figure 4.

Drawn profiles for input and enhanced images for (a) Figure 1, (b) Figure 2, (c) Figure 3.

According to Figure 4(a), the DWT-SVD and ACSEA methods cannot follow the changes, which means the details of the image are lost for these methods. The BF-HIM method follows the changes better. Moreover, the BF-HIM method has increased the intensity range more than the other methods, which indicates that the contrast improvement is better for the BF-HIM method.

According to Figure 4(b), all three methods seem to follow the pattern of the original image properly, in general. The ACSEA method has lost the pattern in some parts, while the SDF and BF-HIM methods follow it better. Moreover, BF-HIM seems to have a slightly wider range than the SDF method as well.

According to Figure 4(c), all three methods seem to follow the pattern of the original image properly. The AGCWD method has increased the intensity values in general, which results in a brighter region; as a result, the contrast improvement is not good enough. Similarly, the HIM method has decreased the intensity values, which results in a darker region. The BF-HIM method has improved the contrast better than the other methods.

Therefore, according to the visual comparisons, the larger the scale of the details within the image, the better the results for methods like AGCWD and RHE-DCT, as expected, since both methods rely on histogram modification.

It can also be concluded that histogram modification methods like AGCWD and RHE-DCT have a good performance if the resolution is low (Figure 1) and/or the input image contains higher-scale edge information (Figure 2), while transform domain methods are generally better for high resolution images and/or images containing small-scale details. Transform domain methods also show a solid performance for high-scale details.

In addition to this, considering all aspects of the resulting images, in terms of color preservation, contrast improvement, and edge enhancement, the BF and hybrid BF-HIM methods seem to have better results. Moreover, the hybrid BF-HIM method seems to be the best method when all three aspects are taken into account.

4.2 Quantitative comparison

In order to perform an objective comparison, the aforementioned criteria are evaluated for the enhancement results obtained by the methods whose visual results are given in Figures 1–3. The quantitative results are provided in Tables 1–4, where the best results are emphasized in bold. The first criterion used for comparison is the Contrast Gain (CG). Table 1 shows the CG values obtained for the AGCWD, DWT-SVD, RHE-DCT, ACSEA, BF, HIM, SDF, and BF-HIM methods.

Method      AGCWD    DWT-SVD   RHE-DCT   ACSEA    BF       HIM      SDF      BF-HIM
Figure 1    1.0600   0.7468    1.6442    1.4369   1.9099   1.3171   2.2560   2.0457
Figure 2    1.5302   1.1635    2.9214    1.6665   1.8910   1.6529   2.0120   2.0517
Figure 3    1.0555   0.5505    1.3110    1.7592   1.9522   1.1654   2.0217   2.0789

Table 1.

CG values obtained for the enhancement methods.

According to Table 1, for Figure 1, the best score is obtained by the SDF method, followed by the BF-HIM method. For Figure 2, the best score is obtained by the RHE-DCT method, followed by the BF-HIM method. For Figure 3, the best score is achieved by the hybrid BF-HIM method, followed by the SDF method. Therefore, it is possible to say that the RHE-DCT method has a better contrast gain for images containing high-scale details, such as Figure 2.

The second criterion used for comparison is the Enhancement Measurement (EME). Table 2 shows the EME values obtained for AGCWD, DWT-SVD, RHE-DCT, ACSEA, BF, HIM, SDF, and BF-HIM methods.

Method      AGCWD    DWT-SVD   RHE-DCT   ACSEA    BF       HIM      SDF      BF-HIM
Figure 1    1.47     1.91      327.05    4.11     329.11   6.00     384.70   407.07
Figure 2    1.29     7.92      5.86      7.44     1.81     1.66     4.94     6.95
Figure 3    2.59     5.01      4.41      1.35     6.91     2.00     7.98     9.29

Table 2.

EME values (×10^4) obtained for the enhancement methods.

According to Table 2, for Figure 1 the best EME score is obtained by the BF-HIM method, followed by the SDF method. For Figure 2, the best score is obtained by the DWT-SVD method, followed by the ACSEA method. For Figure 3, the best EME score is again obtained by the BF-HIM method, followed by the SDF method. Therefore, it is possible to say that the DWT-SVD method has a better enhancement performance for images containing high-scale details, as in Figure 2.

The third criterion used for comparison is the Discrete Entropy (DE). Table 3 shows the DE values obtained for AGCWD, DWT-SVD, RHE-DCT, ACSEA, BF, HIM, SDF, and BF-HIM methods.

Method      AGCWD    DWT-SVD   RHE-DCT   ACSEA    BF       HIM      SDF      BF-HIM
Figure 1    6.5601   6.8245    7.8527    7.3600   7.3778   6.4493   7.6648   7.6723
Figure 2    6.6672   6.8799    7.6940    7.0889   7.1340   6.7327   7.4061   7.2010
Figure 3    5.9251   5.7998    6.3719    6.4604   6.4508   5.9944   6.6397   6.6682

Table 3.

DE values obtained for the enhancement methods.

According to Table 3, for Figure 1, the best score is obtained by the RHE-DCT method, followed by the BF-HIM method. For Figure 2, the best score is also obtained by the RHE-DCT method, followed by the SDF method. As seen in the DE values, the larger the scale of the details within the images, the higher the performance of the RHE-DCT method. The BF-HIM method has the best DE value for Figure 3, and it has scores close to the RHE-DCT method for Figures 1 and 2.

The fourth criterion used for comparison is the Absolute Mean Brightness Error (AMBE). Table 4 shows the AMBE values obtained for AGCWD, DWT-SVD, RHE-DCT, ACSEA, BF, HIM, SDF, and BF-HIM methods.

Method      AGCWD    DWT-SVD   RHE-DCT   ACSEA    BF       HIM      SDF      BF-HIM
Figure 1    0.2798   0.0657    0.1871    0.0149   0.0652   0.1014   0.0692   0.0459
Figure 2    0.0904   0.1083    0.1145    0.1503   0.0364   0.1265   0.0355   0.0254
Figure 3    0.1121   0.0331    0.0811    0.0629   0.0713   0.0651   0.0697   0.0326

Table 4.

AMBE values obtained for the enhancement methods.

According to Table 4, for Figure 1, the best score is obtained by the ACSEA method, followed by the BF-HIM method. For Figure 2, the best score is obtained by the BF-HIM method, followed by the BF method. For Figure 3, the best value is obtained by the BF-HIM method, followed by the DWT-SVD method. Since AMBE is the error between the original image and the enhanced image, a smaller AMBE value indicates better color preservation. This criterion does not give an idea about the enhancement performance itself.

As a result, even though the visual comparison may give the observer an idea about the enhancement performance, a quantitative comparison has to be made to reach a more objective conclusion. Here, the choice of the quantitative criterion is also important. As noted, each criterion indicates a different aspect of the resulting images. For instance, CG gives an idea about the contrast improvement, while AMBE is about the color preservation. If the aim is to compare the overall performance of the aforementioned methods, all criteria should be considered together. Thus, the quantitative comparisons, as well as the visual comparisons, demonstrate that hybrid methods combining different approaches, such as BF-HIM, result in better enhanced images.


5. Conclusion

The use of image enhancement methods which improve the contrast and edge information of the image is vital for remote sensing applications. In this work, different remote sensing image enhancement methods based on histogram modification techniques (HE, AGCWD) and transform domain methods (DWT-SVD, ACSEA, RHE-DCT, BF, HIM, and SDF) have been reviewed. The resulting images have been compared visually and quantitatively. For the quantitative comparison, several image quality criteria have been used. The resolution and the detail scales of the image affect the performance of the enhancement methods. For instance, the detail scales of the input image deeply affect the performance of the RHE-DCT and AGCWD methods. Since both are histogram modification methods, even though RHE-DCT also uses a transformation, it can be concluded that histogram modification based methods are better if there are higher-scale details within the image or if the image has a lower resolution. The transform domain methods have a better performance for images with low-scale details, but their results are also very solid compared to the histogram based methods for high-scale details.

Another contribution of this work is the introduction of a hybrid method, which combines bilateral filtering with the hazy image model. The visual and quantitative results demonstrate that the hybrid method has a superior performance to the methods applied separately. Therefore, future research on remote sensing image enhancement should focus on hybrid methods.

References

  1. Kaplan, N.H., Erer, I., Gulmus, N. Remote sensing image enhancement via bilateral filtering. In: Proceedings of the 8th International Conference on Recent Advances in Space Technologies (RAST17), 19-22 June 2017. Istanbul, Turkey: IEEE; 2017. p. 139–142.
  2. Kaplan, N.H. Remote sensing image enhancement using hazy image model. Optik - International Journal for Light and Electron Optics. 2018;155:139–148.
  3. Arici, T., Dikbas, S., Altunbasak, Y. A histogram modification framework and its application for image contrast enhancement. IEEE Transactions on Image Processing. 2009;18(9):1921–1935.
  4. Huang, S.C., Cheng, F.C., Chiu, Y.S. Efficient contrast enhancement using adaptive gamma correction with weighting distribution. IEEE Transactions on Image Processing. 2013;22(3):1032–1041.
  5. Hanmandlu, M., Jha, D. An optimal fuzzy system for color image enhancement. IEEE Transactions on Image Processing. 2006;15(10):2956–2966.
  6. Dhawan, A.P., Buelloni, G., Gordon, R. Enhancement of mammographic features by optimal adaptive neighborhood image processing. IEEE Transactions on Medical Imaging. 1986;5:8–15.
  7. Beghdadi, A., Negrate, A.L. Contrast enhancement technique based on local detection of edges. Computer Vision, Graphics, and Image Processing. 1989;46(2):162–174.
  8. Laxmikant Dash, Chatterji, B.N. Adaptive contrast enhancement and de-enhancement. Pattern Recognition. 1991;24:289–302.
  9. Cheng, H.D., Xu, H.J. A novel fuzzy logic approach to contrast enhancement. Pattern Recognition. 2000;33(5):809–819.
  10. Sherrier, R., Johnson, G. Regionally adaptive histogram equalization of the chest. IEEE Transactions on Medical Imaging. 1987;MI-6(1):1–7.
  11. Sattar, F., Floreby, L., Salomonsson, G., Lovstrom, B. Image enhancement based on a nonlinear multiscale method. IEEE Transactions on Image Processing. 1997;6(6):888–895.
  12. Polesel, A., Ramponi, G., Mathews, V. Image enhancement via adaptive unsharp masking. IEEE Transactions on Image Processing. 2000;9(3):505–510.
  13. Salari, E., Zhang, S. Integrated recurrent neural network for image resolution enhancement from multiple image frames. IEE Proceedings - Vision, Image and Signal Processing. 2003;150(5):299–305.
  14. Lee, E., Kang, W., Kim, S., Paik, J. Color shift model-based image enhancement for digital multifocusing based on a multiple color-filter aperture camera. IEEE Transactions on Consumer Electronics. 2010;56(2):317–323.
  15. Wong, T., Bouman, C.A., Pollak, I. Image Enhancement Using the Hypothesis Selection Filter: Theory and Application to JPEG Decoding. IEEE Transactions on Image Processing. 2013;22(3):898–913.
  16. Gonzalez, R.C., Woods, R.E. Digital Image Processing. 2nd ed. Addison-Wesley Longman Publishing; 2001.
  17. Kim, Y.-T. Contrast enhancement using brightness preserving bi-histogram equalization. IEEE Transactions on Consumer Electronics. 1997;43(1):1–8.
  18. Chen, S.-D., Ramli, A.R. Contrast enhancement using recursive mean-separate histogram equalization for scalable brightness preservation. IEEE Transactions on Consumer Electronics. 2003;49(4):1301–1309.
  19. Celik, T., Tjahjadi, T. Contextual and variational contrast enhancement. IEEE Transactions on Image Processing. 2011;20(12):3431–3441.
  20. Celik, T. Two-dimensional histogram equalization and contrast enhancement. Pattern Recognition. 2012;45(10):3810–3824.
  21. Demirel, H., Anbarjafari, G. Image Resolution Enhancement by Using Discrete and Stationary Wavelet Decomposition. IEEE Transactions on Image Processing. 2011;20(5):1458–1460.
  22. Agaian, S.S., Silver, B., Panetta, K.A. Transform coefficient histogram-based image enhancement algorithms using contrast entropy. IEEE Transactions on Image Processing. 2007;16(3):741–758.
  23. Demirel, H., Ozcinar, C., Anbarjafari, G. Satellite image contrast enhancement using discrete wavelet transform and singular value decomposition. IEEE Geoscience and Remote Sensing Letters. 2010;7(2):333–337.
  24. Kaplan, N.H., Erer, I. Remote Sensing Image Enhancement via Robust Guided Filtering. In: Proceedings of the 9th International Conference on Recent Advances in Space Technologies (RAST19), 11-14 June 2019. Istanbul, Turkey: IEEE; 2019. p. 447–450.
  25. Fu, X., Wang, J., Zeng, D., Huang, Y., Ding, X. Remote sensing image enhancement using regularized-histogram equalization and DCT. IEEE Geoscience and Remote Sensing Letters. 2015;12(11):2301–2305.
  26. Narasimhan, S.G., Nayar, S.K. Contrast restoration of weather degraded images. IEEE Transactions on Pattern Analysis and Machine Intelligence. 2003;25(6):713–724.
  27. Ham, B., Cho, M., Ponce, J. Robust guided image filtering using nonconvex potentials. IEEE Transactions on Pattern Analysis and Machine Intelligence. 2018;40(1):192–207.
  28. Soni, V., Bhandari, A.K., Kumar, A., Singh, G.K. Improved sub-band adaptive thresholding function for denoising of satellite image based on evolutionary algorithms. IET Signal Processing. 2013;7(8):720–730.
  29. Rasti, B., Scheunders, P., Ghamisi, P., Licciardi, G., Chanussot, J. Noise Reduction in Hyperspectral Imagery: Overview and Application. Remote Sensing. 2018;10(3):482.
  30. Suresh, S., Lal, S., Reddy, C.S., Kiran, M.S. A Novel Adaptive Cuckoo Search Algorithm for Contrast Enhancement of Satellite Images. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing. 2017;10(8):3665–3676.
  31. Kaplan, N.H., Dumlu, A., Ayten, K.K. Single image dehazing based on multiscale product prior and application to vision control. Signal, Image and Video Processing. 2017;11(8):1389–1396.
  32. Shin, J., Park, R.H. Histogram-based locality-preserving contrast enhancement. IEEE Signal Processing Letters. 2015;22(9):1293–1296.

Notes

  • http://sipi.usc.edu/database/
