Open access peer-reviewed chapter

A New Pansharpening Approach for Hyperspectral Images

Written By

Chiman Kwan, Jin Zhou and Bence Budavari

Submitted: June 13th, 2017 Reviewed: September 15th, 2017 Published: December 20th, 2017

DOI: 10.5772/intechopen.71023


Abstract

We first briefly review recent papers for pansharpening of hyperspectral (HS) images. We then present a recent pansharpening approach called hybrid color mapping (HCM). A few variants of HCM are then summarized. Using two hyperspectral images, we illustrate the advantages of HCM by comparing HCM with 10 state-of-the-art algorithms.

Keywords

  • hyperspectral images
  • pansharpening
  • hybrid color mapping
  • sparsity
  • image fusion

1. Introduction

Hyperspectral (HS) images have found a wide range of applications in terrestrial and planetary missions. NASA is planning a HyspIRI mission [1–4] that will perform vegetation monitoring for the whole Earth. The spatial resolution of HyspIRI is 60 m. The Compact Reconnaissance Imaging Spectrometer for Mars (CRISM) [5] hyperspectral imager, with 100-m resolution, has been monitoring the Martian surface since 2006. Although the above imagers have sufficient resolution for their respective missions, many other applications, such as drought monitoring and fire damage assessment, require higher resolutions. Other notable applications of multispectral and hyperspectral images include target detection [6–13], anomaly and change detection [14–23], tunnel monitoring [24, 25], and Mars exploration [26, 27].

Pansharpening of hyperspectral images usually refers to the fusion of a high-resolution (HR) panchromatic (pan) band with a low-resolution (LR) hyperspectral image cube. A generalization of the above is the fusion of high-resolution multispectral bands with low-resolution hyperspectral bands. According to Loncan et al. [28], pansharpening techniques for HS images can be classified into the following categories. The first category is the component substitution (CS) approach, which is based on substituting a component with the pan image. Well-known CS approaches include Principal Component Analysis (PCA) [29], Gram-Schmidt (GS) [30], GS Adaptive (GSA) [31], and others. The second category is the multiresolution analysis (MRA) approach, which relies on injecting spatial details, obtained through a multiresolution decomposition of the pan image, into the resampled hyperspectral bands. Some well-known algorithms in this category are the Modulation Transfer Function Generalized Laplacian Pyramid (MTF-GLP) [32], MTF-GLP with High-Pass Modulation (MTF-GLP-HPM) [33], Hysure [34, 35], and Smoothing Filter-based Intensity Modulation (SFIM) [36]. The third category contains the hybrid approaches, which combine concepts from different classes of methods, namely CS and MRA; for example, guided filter PCA (GFPCA) [37] belongs to this group. The fourth category involves Bayesian inference, which interprets the fusion process through the posterior distribution of the Bayesian fusion model. Because the fusion problem is ill-posed, the Bayesian methodology can regularize it by defining a prior distribution for the scene of interest. Exemplar algorithms include Bayesian naive [38] and Bayesian sparse [39]. The fifth category is known as non-negative matrix factorization (NMF); representative methods in this category include the coupled non-negative matrix factorization (CNMF) [40] method.

In addition to the above five categories, deep learning-based approaches have been investigated in recent years to address pansharpening of hyperspectral images. In Licciardi et al. [41], the authors proposed the first deep learning-based fusion method, which used an autoencoder. In 2015, a modified sparse tied-weights denoising autoencoder was proposed by Huang et al. [42]. The authors assumed that there exists a mapping function between LR and HR images for both pan and MS or HS. During training, an LR pan image was generated from the interpolated MS image, and the mapping function was then learned using the LR pan patches as input and the HR pan patches as output. In 2016, a supervised three-layer convolutional neural network was proposed in [43] to learn the mapping between the input (the HR pan image together with the interpolated LR MS image) and the output HR MS image. In Qu et al. [44], a new pansharpening algorithm based on a deep learning autoencoder was proposed. Preliminary pansharpening results using both hyperspectral and multispectral images are encouraging.

From the application viewpoint, we categorize the various pansharpening algorithms into three groups. Group 1 methods include coupled non-negative matrix factorization (CNMF) [40], Bayesian naive [38], and Bayesian sparse [39]. These methods require the point spread function (PSF) to be available, and they perform better than other methods in most cases. Group 2 methods do not require the PSF and contain Principal Component Analysis (PCA) [29], guided filter PCA (GFPCA) [37], Gram-Schmidt (GS) [30], GS Adaptive (GSA) [31], Modulation Transfer Function Generalized Laplacian Pyramid (MTF-GLP) [32], MTF-GLP with High-Pass Modulation (MTF-GLP-HPM) [33], Hysure [34, 35], and Smoothing Filter-based Intensity Modulation (SFIM) [36]. Group 3 methods use only the LR HS images and contain the super-resolution (SR) method [45], the bicubic method [46], and the plug-and-play alternating direction method of multipliers (PAP-ADMM) [47].

This chapter is organized as follows. In Section 2, we review the basic idea of color mapping and its variants. In Section 3, we include extensive experiments to illustrate the performance of various pansharpening algorithms. Finally, we conclude our chapter with some future research directions.


2. Proposed hybrid color mapping algorithm and its variants

2.1. Basic idea of color mapping for generating HR HS images

As shown in Figure 1, the idea of color mapping is to map a color pixel c(i, j) at location (i, j) with R, G, B bands to a hyperspectral pixel X(i, j) at the same location. This mapping is represented by a transformation matrix T, that is,

X(i, j) = T c(i, j)   (1)

Figure 1.

System flow of color mapping. LR denotes low resolution; HR denotes high resolution; LR C denotes the set of low-resolution color pixels; LR H denotes the set of low-resolution hyperspectral pixels; HR Hyper denotes the high-resolution hyperspectral image.

where X(i, j) ∈ R^N is a single hyperspectral pixel with N spectral bands, T ∈ R^(N×M), c(i, j) ∈ R^M is a color pixel with M spectral bands, and N >> M. Here, M can be just one band, such as the pan band. Hence, color mapping is quite general, as it encompasses pan, color, and MS images. Our goal is to generate an HR HS image given an HR color image and an LR HS image. To determine T in Eq. (1), we simulate an LR color image by down-sampling the HR color image. The LR color image and the LR HS image are then used to determine T, which is then used to generate the HR HS image pixel by pixel.

Let us denote H as the set of all hyperspectral pixels X(i, j) and C as the set of all color pixels c(i, j), for all (i, j) in the image, with i = 1, …, N_R and j = 1, …, N_C, where N_R is the number of rows and N_C is the number of columns in the image. Since X(i, j) and c(i, j) are vectors, H and C can be expressed as

H = [X(1, 1) X(1, 2) … X(N_R, N_C)],  C = [c(1, 1) c(1, 2) … c(N_R, N_C)].   (2)

We call the mapping in Eq. (1) the global version when all pixels in C and H are used in estimating T.

To estimate T, we use the least-square approach, which minimizes the error

E = ‖H − TC‖_F².   (3)

Solving for T in Eq. (3) yields [48]

T = HCᵀ(CCᵀ)⁻¹.   (4)

To avoid instability, we can add a regularization term in Eq. (3). That is,

T = argmin_T ‖H − TC‖_F² + λ‖T‖_F².   (5)

and the optimal T becomes [48]

T = HCᵀ(CCᵀ + λI)⁻¹   (6)

where λ is a regularization parameter and I is an identity matrix with the same dimensions as CCᵀ.
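As an illustration, the global mapping above can be sketched in a few lines of NumPy. All sizes and data below are synthetic, and the code is a simplified sketch rather than our actual implementation:

```python
import numpy as np

# Global color mapping sketch (Eqs. 1-6); sizes and data are illustrative.
rng = np.random.default_rng(0)
N, M, npix = 30, 3, 500            # hyperspectral bands, color bands, pixels

C = rng.random((M, npix))          # LR color pixels, one column per pixel
T_true = rng.random((N, M))        # ground-truth mapping (for this synthetic test)
H = T_true @ C                     # LR hyperspectral pixels via Eq. (1)

# Regularized least-squares estimate of T, Eq. (6): T = H C^T (C C^T + lam I)^-1
lam = 1e-8
T = H @ C.T @ np.linalg.inv(C @ C.T + lam * np.eye(M))

# Apply T pixel by pixel to HR color pixels to synthesize the HR HS cube.
c_hr = rng.random((M, 1))          # one HR color pixel
x_hr = T @ c_hr                    # its predicted N-band spectrum
```

With noiseless synthetic data the estimate recovers the true mapping almost exactly; in practice the LR color pixels come from down-sampling the HR color image, as described above.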

2.2. Hybrid color mapping

For many hyperspectral images, the band wavelengths range from 0.4 to 2.5 μm. For color images, the R, G, and B wavelengths are 0.65, 0.51, and 0.475 μm, respectively. Because the three color bands may have little correlation with the higher-number bands in the hyperspectral image, we found it beneficial to extract several higher-number bands from the LR HS image and stack them with the LR color bands. This idea is illustrated in Figure 2; details can be found in [48]. Moreover, we also noticed that by adding a white band, that is, a band in which all pixel values are 1, we can deal with atmospheric and other bias effects.

Figure 2.

System flow of hybrid color mapping.

Using the same treatment as earlier, T can be obtained by minimizing the mean square error [48]

T = argmin_T ‖H − TC_h‖_F²   (7)

where H is the set of hyperspectral pixels and C_h is the set of hybrid color pixels. All the pixels in H and C_h are used.

The optimal T can be determined as

T = HC_hᵀ(C_hC_hᵀ)⁻¹   (8)

With regularization, Eq. (8) becomes [48]

T = HC_hᵀ(C_hC_hᵀ + λI)⁻¹.   (9)
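As an illustration, building the hybrid pixel matrix C_h and solving Eq. (9) can be sketched as follows. The chosen band indices, the regularization weight, and all sizes are hypothetical:

```python
import numpy as np

# Hybrid color mapping sketch (Eqs. 7-9). Band picks and sizes are
# illustrative assumptions, not the settings used in the experiments.
rng = np.random.default_rng(2)
N, npix = 50, 600
H = rng.random((N, npix))              # LR hyperspectral pixels (columns)
rgb = rng.random((3, npix))            # LR color pixels

extra = H[[30, 40, 45], :]             # a few higher-number HS bands (hypothetical picks)
white = np.ones((1, npix))             # white band to absorb bias effects
Ch = np.vstack([rgb, extra, white])    # hybrid pixel matrix

lam = 1e-4
M = Ch.shape[0]
T = H @ Ch.T @ np.linalg.inv(Ch @ Ch.T + lam * np.eye(M))   # Eq. (9)
```

The white band adds a constant row to C_h, so the mapping effectively gains an additive offset per output band.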

2.3. Local HCM

We further enhance our method by applying color mapping patch by patch. As shown in Figure 3, a patch of size p × p is a sub-image of the original image, and patches may overlap. This idea allows spatial correlation to be exploited. Our experiments showed that the mapping becomes more accurate with this local patch idea. Another advantage of using patches is that the task is split into many small jobs, so parallel processing is possible.

Figure 3.

Local color mapping to further enhance the SR performance. The patches apply to LR color and LR hyperspectral images.
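The patch-by-patch idea can be sketched as follows; for simplicity the sketch uses non-overlapping patches and applies each local mapping back to its own LR patch, and all sizes are illustrative:

```python
import numpy as np

# Local (patch-wise) color mapping sketch: one T per p x p patch.
rng = np.random.default_rng(3)
N, M, rows, cols, p = 20, 3, 16, 16, 4

hs = rng.random((rows, cols, N))       # LR hyperspectral image
color = rng.random((rows, cols, M))    # LR color image
out = np.zeros_like(hs)

for r in range(0, rows, p):            # non-overlapping patches for simplicity
    for c in range(0, cols, p):
        Hp = hs[r:r+p, c:c+p].reshape(-1, N).T      # N x p^2 patch pixels
        Cp = color[r:r+p, c:c+p].reshape(-1, M).T   # M x p^2 patch pixels
        Tp = Hp @ Cp.T @ np.linalg.inv(Cp @ Cp.T + 1e-6 * np.eye(M))
        # In practice Tp would be applied to the HR color patch; here we
        # apply it to the same LR patch just to show the data flow.
        out[r:r+p, c:c+p] = (Tp @ Cp).T.reshape(p, p, N)
```

Each patch's least-squares solve is independent, which is what makes the parallel-processing claim above straightforward to realize.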

2.4. Incorporation of PSF into HCM

Based on observations in [28] and our own investigations [48], some pansharpening algorithms that incorporate the PSF can yield better performance than those without it. This motivated us to incorporate the PSF into our HCM. Our idea is illustrated in Figure 4. The first component is a single-image super-resolution algorithm that enhances the LR hyperspectral image cube. Single-image super-resolution algorithms are well known [45, 46]; the idea is to improve the resolution of an LR image by using internal image statistics. The second component is the HCM algorithm, which fuses a high-resolution color image with the enhanced hyperspectral image coming out of the first component. Recently, HCM has been applied to several applications, including enhancing Worldview-3 images [49], fusion of Landsat and MODIS images [50], pansharpening of Mastcam images [51], and fusion of THEMIS and TES images [52].

Figure 4.

An outline of the proposed method. We use hybrid color mapping (HCM) to fuse low-resolution (LR) and high-resolution (HR) images. For LR images, we use a single-image super-resolution algorithm where PSF is incorporated to first enhance the resolution before feeding to the HCM.

This idea was summarized in a paper [53] presented at ICASSP 2017. The results are comparable to other state-of-the-art algorithms in the literature [39, 40].

2.5. Sparsity-based variants using L1 and L0 norms

In [54], we proposed two sparsity-based variants of HCM. From Eq. (5), the HCM method can be viewed as an L2 regularization problem; the variants instead apply L1 and L0 norms to the regularization term in Eq. (5). We adopt two approaches: orthogonal matching pursuit (OMP) [55] and l1-minimization via augmented Lagrangian multiplier (ALM-l1) [56]. OMP solves the l0-based minimization problem:

T = argmin_T ‖H − TC‖_F  s.t.  ‖T‖₀ ≤ K   (10)

where K is the sparsity level of the matrix T (K << M × N), and ALM-l1 solves the l1-minimization convex relaxation:

T = argmin_T ‖H − TC‖_F + λ‖T‖₁   (11)

where the positive weighting parameter λ provides the trade-off between the two terms.
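To make the l1 variant concrete, here is a minimal iterative soft-thresholding (ISTA) sketch for a problem of the form of Eq. (11), assuming the usual squared-loss formulation min_T 0.5‖H − TC‖_F² + λ‖T‖₁. Note that [54] uses OMP and ALM-l1; ISTA is a simpler stand-in shown only to illustrate how the l1 term induces sparsity in T, and all sizes and data are synthetic:

```python
import numpy as np

# ISTA sketch for an l1-regularized mapping (stand-in for ALM-l1 of [56]).
rng = np.random.default_rng(4)
N, M, npix = 10, 6, 200
C = rng.random((M, npix))
T_true = np.zeros((N, M))
T_true[:, :2] = rng.random((N, 2))     # sparse ground truth: 2 active columns
H = T_true @ C

lam = 0.01
step = 1.0 / np.linalg.norm(C @ C.T, 2)   # 1/L, L = Lipschitz constant of the gradient
T = np.zeros((N, M))
for _ in range(500):
    grad = (T @ C - H) @ C.T               # gradient of 0.5*||H - TC||_F^2
    Z = T - step * grad
    T = np.sign(Z) * np.maximum(np.abs(Z) - step * lam, 0.0)  # soft threshold
```

With exact synthetic data the recovered T keeps the two active columns and drives the inactive ones to (near) zero, which is the behavior Figures 27 and 28 illustrate for the sparse variants.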

2.6. Application of pansharpening algorithms to debayering

Debayering, or demosaicing, refers to the reconstruction of missing pixels in the Bayer pattern [57, 58], as shown in Figure 5. A variant of the Bayer pattern, known as CFA 2.0 (Figure 6), was introduced in [59]. Recently, a new approach to debayering [60] was introduced based on pansharpening ideas. A thorough comparative study was performed using benchmark datasets, and it was found that the pansharpening-based debayering approach holds good promise.

Figure 5.

Standard Bayer pattern.

Figure 6.

RGBW (aka CFA2.0) pattern.


3. Experiments

3.1. Data

Two hyperspectral image datasets were used in our experiments. One was from the Air Force (AF) and the other was one of NASA's AVIRIS images. The AF image (Figure 7) has 124 bands (461–901 nm). The AVIRIS image (Figure 8) has 213 bands (380–2500 nm). Both are natural scenes. The AF image size is 267 × 342 × 124 and the AVIRIS image size is 300 × 300 × 213.

Figure 7.

Sample band of AF data.

Figure 8.

Sample band of AVIRIS data.

The down-sampled image was used as the low-resolution hyperspectral image that needs to be improved. We picked the R, G, and B bands from the original high-resolution hyperspectral image for color mapping. The bicubic method in the following plots was implemented by up-sampling the low-resolution image using bicubic interpolation. The results of the bicubic method were used as the baseline for the comparison study.
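The simulation protocol can be sketched as follows. We use SciPy's cubic spline `zoom` as a stand-in for bicubic interpolation, and the cube size and scale factor below are illustrative, not the actual experimental settings:

```python
import numpy as np
from scipy.ndimage import zoom

# Sketch of the evaluation protocol: down-sample the ground truth to make the
# LR input, then up-sample it back to form the bicubic-style baseline.
rng = np.random.default_rng(5)
hr = rng.random((60, 60, 8))                      # "ground-truth" HS cube (synthetic)
factor = 3

lr = zoom(hr, (1 / factor, 1 / factor, 1), order=3)   # simulated LR cube
bicubic = zoom(lr, (factor, factor, 1), order=3)      # baseline reconstruction
rmse = np.sqrt(np.mean((hr - bicubic) ** 2))          # baseline error vs. ground truth
```

Any pansharpening result can then be scored against `hr` with the metrics of Section 3.2 and compared with this baseline.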

3.2. Performance metrics

Similar to [28], five performance metrics are included here.

Time: The computational time in seconds. This metric is machine dependent and varies between runs; however, it gives a relative measure of the computational complexity of different algorithms.

Root mean-squared error (RMSE) [28]: Given two matrices X and X̂, the RMSE is calculated using

RMSE(X, X̂) = ‖X − X̂‖_F / √(total number of pixels).   (12)

The ideal value of RMSE is 0, corresponding to perfect reconstruction. To show the performance for each band, we also used RMSE(λ), which is the RMSE value between X(λ) and X̂(λ) for each band λ.

Cross-correlation (CC) [28]: It is defined as

CC(X, X̂) = (1/m_λ) Σ_{i=1}^{m_λ} CCS(X_i, X̂_i)   (13)

where m_λ is the number of bands in the hyperspectral image and CCS is the cross-correlation for a single-band image, given by

CCS(A, B) = Σ_{j=1}^{n} (A_j − μ_A)(B_j − μ_B) / √( Σ_{j=1}^{n} (A_j − μ_A)² · Σ_{j=1}^{n} (B_j − μ_B)² ).   (14)

The ideal value of CC is 1. We also used CC(λ) = CCS(X(λ), X̂(λ)), which is the CC value between X(λ) and X̂(λ) for each band, to evaluate the performance of different algorithms.

Spectral Angle Mapper (SAM) [28]: It is defined as

SAM(X, X̂) = (1/n) Σ_{j=1}^{n} SAM(x_j, x̂_j)   (15)

where, for two vectors a, b ∈ R^(m_λ),

SAM(a, b) = arccos( ⟨a, b⟩ / (‖a‖ ‖b‖) ).   (16)

⟨a, b⟩ is the inner product of the two vectors and ‖·‖ denotes the l2 norm of a vector. The ideal value of SAM is 0.

Erreur relative globale adimensionnelle de synthèse (ERGAS) [28]: It is defined as

ERGAS(X, X̂) = (100/d) √( (1/m_λ) Σ_{k=1}^{m_λ} (RMSE_k / μ_k)² )   (17)

where d is the ratio between the linear resolutions of the pan and HS images, and μ_k is the sample mean of the kth band of X. The ideal value of ERGAS is 0.
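For concreteness, the four metrics can be transcribed into NumPy as follows. The cube layout (rows, cols, bands) and the function names are our own conventions, not from a particular toolbox:

```python
import numpy as np

def rmse(X, Xh):
    # Eq. (12): Frobenius norm over the cube, normalized by the pixel count.
    return np.sqrt(np.mean((X - Xh) ** 2))

def cc(X, Xh):
    # Eqs. (13)-(14): mean of per-band cross-correlations.
    vals = []
    for k in range(X.shape[2]):
        a, b = X[..., k].ravel(), Xh[..., k].ravel()
        vals.append(np.corrcoef(a, b)[0, 1])
    return float(np.mean(vals))

def sam(X, Xh):
    # Eqs. (15)-(16): mean spectral angle over all pixels, in radians.
    a = X.reshape(-1, X.shape[2])
    b = Xh.reshape(-1, X.shape[2])
    cosang = np.sum(a * b, axis=1) / (np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1))
    return float(np.mean(np.arccos(np.clip(cosang, -1.0, 1.0))))

def ergas(X, Xh, d):
    # Eq. (17): d is the PAN/HS linear resolution ratio.
    terms = [(rmse(X[..., k], Xh[..., k]) / X[..., k].mean()) ** 2
             for k in range(X.shape[2])]
    return 100.0 / d * np.sqrt(np.mean(terms))
```

For identical cubes these return the ideal values stated above (RMSE 0, CC 1, SAM 0, ERGAS 0).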

3.3. Advantages of the local mapping approach as compared to the global mapping method

We first compared global color mapping (Section 2.1) with local color mapping (Section 2.3) and the bicubic interpolation method. Figures 9 and 10 show the RMSE between the ground-truth hyperspectral images and the super-resolution images produced by the different methods. We can see that local color mapping is always better than global color mapping. In the AF image, for the lower-number bands where the R, G, and B bands reside, both global and local color mapping outperform the bicubic method (see Figure 9). However, for the higher-number bands, the bicubic method is better; the reason is that the spectral correlation between the higher-number bands and the R, G, B bands is weak. For the AVIRIS image, the local color mapping results shown in Figure 10 are better than both the global color mapping and bicubic results across almost all spectral bands.

Figure 9.

RMSE comparison for AF dataset.

Figure 10.

RMSE comparison for AVIRIS dataset.

3.4. Advantages of hybrid color mapping

Figures 11 and 12 show the performance of hybrid color mapping as well as RGBW (R, G, B, and white bands) color mapping described in Section 2. W refers to the white band, that is, an image with all 1s. We can see that adding a white band improves the performance across all bands. Moreover, the hybrid method, which adds bands from the bicubic-interpolated higher-number bands, performed the best in all of the bands. Here, all methods used local patches; the patch size is 4 × 4 and there is no overlap between the patches.

Figure 11.

RGB color mapping versus hybrid color mapping for the AF dataset. W stands for the white band. All methods are local. RGBW means a white band is added to the R, G, B bands.

Figure 12.

RGB color mapping versus hybrid color mapping for the AVIRIS dataset. W stands for the white band. All methods are local. RGBW means a white band is added to the R, G, B bands.

3.5. Comparison with state-of-the-art algorithms

Now, we include a comparison with a state-of-the-art pan-sharpening algorithm [61] and a single-image SR algorithm [45]. Both algorithms are recent and compared favorably with their counterparts in their categories. Figures 13 and 14 show the performance of variational wavelet pan-sharpening (VWP) [61], bicubic interpolation, single-image SR [45], and our local hybrid color mapping. We used the red color band from the ground-truth image as the pan image. For the NASA data, we observe that in some bands VWP is better than the bicubic and single-image SR methods. However, for bands far away from the reference band, the error is large. In addition, the other methods are always worse than our hybrid color mapping method. The reason that VWP [61] did not perform well in this study is perhaps the lack of a pan image whose spectrum extends to the high-wavelength regions; if the pan image extended to higher-wavelength regions, we believe VWP would perform well. We are grateful to the authors of [45] for sharing their source codes with us. The reason that the single-image SR method [45] did not perform well is that it was designed for color images, not for hyperspectral images.

Figure 13.

Hybrid color mapping versus variational wavelet pan-sharpening (VWP), bicubic, and single-image SR [45] for AF dataset. Scale factor is 3.

Figure 14.

Hybrid color mapping versus variational wavelet pan-sharpening (VWP), bicubic, and single-image SR [45] for the AVIRIS dataset. Scale factor is 3.

3.6. Pixel clustering enhancement using the SR images

The goal of our research is not only to improve the visual quality of hyperspectral images by enhancing the spatial resolution but also to enhance the pixel clustering performance. Figures 15–18 show clustering results using end members extracted from the ground-truth AVIRIS hyperspectral image. We used the K-means end-member extraction technique to determine the clusters. It should be emphasized that we are not performing land-cover classification, which usually requires land-cover spectral signatures as well as atmospheric compensation; the physical meaning of each cluster is not our concern. Comparing Figures 15 and 16, one can see that hybrid color mapping is significantly better than the bicubic method in terms of the fine details of the images. Moreover, Figure 16 is much closer to the ground-truth image in Figure 17. The images also show that hybrid color mapping produces much better clustering than the bicubic method, as shown in Figure 18.

Figure 15.

Bicubic pixel classification.

Figure 16.

Hybrid color mapping pixel classification.

Figure 17.

Ground-truth pixel classification.

Figure 18.

Pixel classification accuracy.
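The clustering comparison of Section 3.6 can be sketched as follows: cluster the ground-truth pixels with k-means, label each reconstruction's pixels by the nearest ground-truth centroid, and measure agreement. The pure-NumPy k-means below is a stand-in for a library implementation, and all sizes and data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(6)

def kmeans(X, k, iters=20):
    # Minimal k-means: random init, alternate assignment and centroid update.
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return centers, labels

pixels_gt = rng.random((300, 10))               # ground-truth spectra (synthetic)
centers, labels_gt = kmeans(pixels_gt, 4)

# A "reconstruction" = ground truth plus small error; label by nearest centroid.
pixels_rec = pixels_gt + 0.01 * rng.standard_normal(pixels_gt.shape)
labels_rec = np.argmin(((pixels_rec[:, None] - centers[None]) ** 2).sum(-1), axis=1)
accuracy = np.mean(labels_rec == labels_gt)
```

A better reconstruction perturbs the spectra less, flips fewer labels, and therefore scores a higher agreement, which is what Figure 18 reports for the bicubic and HCM outputs.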

3.7. Additional studies and observations

From all of the above studies, we have the following observations:

  • In the AVIRIS image, our local hybrid color mapping algorithm performed consistently well in all bands. That is, our method is better than bicubic and other methods in all bands.

  • In the AF image, our local hybrid color mapping algorithm also performed better than the others across all bands. However, the performance in the lower-number bands is better than that in the higher-number bands, because the correlation between the lower-number and higher-number bands is weak.

  • We were curious about what would happen if bands other than R, G, B were chosen as the reference image in generating the transformation T. Figures 19 and 20 show the comparison. Figure 19 compares the RMSEs using only R, G, B with the case of using bands 20, 50, and 100 for the AF image. Figure 20 compares the RMSEs using only RGB with the case of using bands 20, 90, and 170 for the AVIRIS image. We observe that if we pick bands from the low-, middle-, and high-number bands, assuming they are available, the overall performance can be much better than that of using the R, G, B bands only. This observation shows that we should utilize not only the R, G, B bands but also other bands, such as infrared and thermal bands, if they are available.

  • We investigated the impact of adding regularization to the solution for T. In our simulations, we used λ = 10⁻⁵ v, where v is the maximum singular value of CCᵀ in Eq. (5) or C_hC_hᵀ in Eq. (8). From Figures 21 and 22, it can be seen that regularization can further enhance the performance of our algorithm.

  • We also investigated the impact of different scaling factors in the down-sampling process. As can be seen in the supplemental materials section, the results indicate that the bicubic method performs well at low scaling factors such as 1.5, where spatial correlation is high. However, in our applications, we are interested in airborne and satellite images, where spatial correlation is weak.
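The regularization setting described in the observations above can be computed directly; the matrix sizes here are illustrative:

```python
import numpy as np

# lam = 1e-5 * v, with v the largest singular value of C C^T.
rng = np.random.default_rng(7)
C = rng.random((4, 300))                    # stacked (hybrid) color pixels
G = C @ C.T
v = np.linalg.svd(G, compute_uv=False)[0]   # largest singular value of C C^T
lam = 1e-5 * v
```

Scaling λ by v keeps the regularization strength proportional to the magnitude of CCᵀ, so the same constant 10⁻⁵ works across images of different dynamic ranges.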

Figure 19.

Local color mapping with RGBW bands versus local color mapping with low-, middle-, and high-number bands for AF dataset.

Figure 20.

Local color mapping with RGBW bands versus local color mapping with low-, middle-, and high-number bands for AVIRIS dataset.

Figure 21.

Regularization improves the stability as well as the performance. Here, local hybrid color mapping has been used for AF dataset.

Figure 22.

Regularization improves the stability as well as the performance. Here, local hybrid color mapping has been used for AVIRIS dataset.

3.8. Comparison of sparsity-based approach with other methods

Here, we focus on a comparison with Groups 1–3 algorithms in the literature. We also compare the performance of different variants of HCM.

3.8.1. Comparison with Group 1 methods

Group 1 methods (Bayesian naive, Bayesian sparse, CNMF) require the PSF. In general, Group 1 methods performed better than the methods in Groups 2 and 3. In Section 2.4, we also incorporated the PSF into our HCM algorithm. Based on the results in Table 1, we can see that HCM (L2 norm) yielded better results than the Group 1 methods for the AF data. For the AVIRIS data, Bayesian sparse performed the best.

AF dataset:

Group  Method                 Time (s)   RMSE    CC      SAM     ERGAS
1      CNMF [40]              12.52      0.5992  0.9922  1.4351  1.7229
1      Bayes Naive [38]       0.58       0.4357  0.9881  1.2141  1.6588
1      Bayes Sparse [39]      208.82     0.4133  0.9900  1.2395  1.5529
2      SFIM [36]              0.99+      0.7132  0.9849  1.4936  2.2087
2      MTF-GLP [32]           1.38+      0.8177  0.9831  1.6095  2.4541
2      MTF-GLP-HPM [33]       1.40+      0.8050  0.9836  1.5460  2.4214
2      GS [30]                1.05+      2.1783  0.8579  2.4421  7.0807
2      GSA [31]               1.21+      0.7435  0.9876  1.5156  2.1764
2      PCA [29]               2.37+      2.3816  0.8382  2.6355  7.7176
2      GFPCA [37]             1.17+      0.6482  0.9862  1.5382  2.0607
2      Hysure [34, 35]        117.06+    0.8717  0.9806  1.7882  2.6294
3      PAP-ADMM [47]          2144.00    0.4408  0.9885  1.1657  1.6476
3      Super-Resolution [45]  279.18     0.5232  0.9839  1.3215  1.9584
3      Bicubic [46]           0.04       0.5852  0.9807  1.3554  2.1560
Ours   L2 Norm                3.70+      0.3889  0.9965  1.1928  1.0986
Ours   L0 Norm                33.36+     0.4288  0.9958  1.3677  1.2095
Ours   L1 Norm                1293.94+   0.3917  0.9965  1.2259  1.1064

AVIRIS dataset:

Group  Method                 Time (s)   RMSE     CC      SAM     ERGAS
1      CNMF [40]              23.75      32.2868  0.9456  0.9590  2.1225
1      Bayes Naive [38]       0.86       67.2879  0.9474  0.8136  2.1078
1      Bayes Sparse [39]      235.50     51.7010  0.9619  0.7635  1.8657
2      SFIM [36]              1.56+      62.0233  0.9491  0.9180  2.0404
2      MTF-GLP [32]           2.25+      55.6040  0.9545  0.9103  1.9703
2      MTF-GLP-HPM [33]       2.23+      55.6410  0.9545  0.9054  1.9718
2      GS [30]                1.83+      54.8851  0.9560  0.9349  1.9524
2      GSA [31]               1.98+      32.2331  0.9702  0.8521  1.6525
2      PCA [29]               2.98+      48.9125  0.9609  0.9173  1.8605
2      GFPCA [37]             2.17+      62.5283  0.9386  1.1736  2.2566
2      Hysure [34, 35]        62.47+     38.3131  0.9598  1.0181  1.8586
3      PAP-ADMM [47]          3368.00    66.2481  0.9531  0.7848  1.9783
3      Super-Resolution [45]  1329.59    86.7154  0.9263  0.9970  2.4110
3      Bicubic [46]           0.10       92.2143  0.9118  1.0369  2.5728
Ours   L2 Norm                4.47       32.2094  0.9553  0.9453  1.9101
Ours   L0 Norm                81.11      32.1875  0.9552  0.9362  1.9109
Ours   L1 Norm                6.22       32.1875  0.9552  0.9362  1.9109

Table 1.

Comparison of our methods with various pansharpening methods on AF and AVIRIS.

+: These methods involve PAP-ADMM, but we did not include PAP-ADMM's runtime in order to highlight the differences.


Bold numbers indicate results from the best algorithms.


3.8.2. Comparison with Group 2 methods

Only the GSA method in Group 2 performed better than the others for the AVIRIS data. The other methods in Group 2 did not perform well compared to HCM and the Group 1 methods. However, Group 2 methods are generally more computationally efficient, which may be advantageous in some real-time applications.

3.8.3. Comparison with Group 3 methods

Group 3 methods do not require the PSF or high-resolution color images. Consequently, their performance is generally poor compared to the others, which is understandable.

3.8.4. Comparison among the HCM variants

From Table 1, we can see that the L2 version of HCM performed better than the other variants. This can also be seen from Figures 23–26. However, one key advantage of the L1 and L0 variants is that the models are simpler: the coefficients of the L0 variant are clustered, and the coefficients of the L1 variant are much fewer than those of the L2 variant. This can be confirmed by inspecting Figures 27 and 28. We would like to mention that another advantage of the sparsity formulation is that it can handle noisy measurements in the hyperspectral images [54].

Figure 23.

RMSE comparison of the three variants of HCM (L2, L1, and L0) for the AF dataset.

Figure 24.

CC comparison of the three variants of HCM (L2, L1, and L0) for the AF dataset.

Figure 25.

RMSE comparison of the three variants of HCM (L2, L1, and L0) for the AVIRIS dataset.

Figure 26.

CC comparison of the three variants of HCM (L2, L1, and L0) for the AVIRIS dataset.

Figure 27.

Coefficients in the model of the HCM variants for the AF dataset.

Figure 28.

Coefficients in the model of the HCM variants for the AVIRIS dataset.


4. Conclusions

In this chapter, we review the various pansharpening algorithms for hyperspectral images. We then focus on one recent algorithm known as hybrid color mapping. Several variants are described. The performance of HCM is thoroughly compared with other methods in the literature.

One future direction is to investigate the performance of different pansharpening algorithms in the presence of noise. Another direction is to apply the pansharpened images to different applications such as target detection [11–13], border monitoring [24, 25], and anomaly detection [14–22, 62].

References

  1. 1. Zhou J, Chen H, Ayhan B, Kwan C. A high performance algorithm to improve the spatial resolution of HyspIRI images. In: NASA HyspIRI Science and Applications Workshop. Washington DC: NASA Jet Propulsion Laboratory; 2012 Oct
  2. 2. Ayhan B, Zhou J, Kwan C. High performance and accurate change detection system for HyspIRI missions. In: NASA HyspIRI Science Symposium, Greenbelt. Maryland: NASA Jet Propulsion Laboratory; 2012 May 16
  3. 3. Kwan C, Yin J, Zhou J, Chen H, Ayhan B. Fast parallel processing tools for future HyspIRI data Processing. In: NASA HyspIRI Science Symposium. Greenbelt. Maryland: NASA Jet Propulsion Laboratory; 2013 April
  4. 4. Ayhan B, Kwan C. Fast target detection framework for onboard processing of multispectral and hyperspectral images. In: HyspIRI Science Symposium. Greenbelt. Maryland: NASA Jet Propulsion Laboratory; 2015 Jun 3
  5. 5. CRISM [Internet]. Available from:http://crism.jhuapl.edu/index.php[Accessed: Sep 6, 2017]
  6. 6. Zhou J, Kwan C, Ayhan B. A high performance missing pixel reconstruction algorithm for hyperspectral images. In: 2nd Int. Conf. on Applied and Theoretical Information Systems Research. Taipei: Academy of Taiwan Information Systems Research; 2012 Dec 27
  7. 7. Nguyen D, Tran T, Kwan C, Ayhan B. Endmember extraction in hyperspectral images using l-1 minimization and linear complementary programming. In Proceedings of SPIE. Orlando: SPIE; 2010 Apr 23 (Vol. 7695, pp. 76951-76951)
  8. 8. Kwan C, Ayhan B, Chen G, Wang J, Ji B, Chang CI. A novel approach for spectral unmixing, classification, and concentration estimation of chemical and biological agents. IEEE Transactions on Geoscience and Remote Sensing. 2006 Feb;44(2):409-419
  9. 9. Kwan C, Schmera G, Smulko JM, Kish LB, Heszler P, Granqvist CG. Advanced agent identification with fluctuation-enhanced sensing. IEEE Sensors Journal. 2008 Jun;8(6):706-713
  10. 10. Kish LB, Chang HC, King MD, Kwan C, Jensen JO, Schmera G, Smulko J, Gingl Z, Granqvist CG. Fluctuation-enhanced sensing for biological agent detection and identification. IEEE Transactions on Nanotechnology. 2011 Nov;10(6):1238-1242
  11. 11. Ayhan B, Kwan C. On the use of radiance domain for burn scar detection under varying atmospheric illumination conditions and viewing geometry. Signal, Image and Video Processing. 2017 May 1;11(4):605-612
  12. 12. Nguyen D, Ayhan B, Kwan C. A comparative study of several supervised target detection algorithms for hyperspectral images. In: IEEE Ubiquitous Computing Electronics & Mobile Communication Conference, New York City: IEEE; October 2017
  13. 13. Zhou J, Kwan C, Ayhan B. Improved target detection for hyperspectral images using hybrid in-scene calibration. Journal of Applied Remote Sensing. 2017 Aug;11(3):035010
  14. 14. Zhou J, Ayhan B, Kwan C, Eismann M. New and fast algorithms for anomaly and change detection in hyperspectral images. In: International Symposium on Spectral Sensing Research. Springfield: Missouri State University; 2010 Jul
  15. 15. Kwon H, Nasrabadi NM. Kernel RX-algorithm: A nonlinear anomaly detector for hyperspectral imagery. IEEE Transactions on Geoscience and Remote Sensing. 2005 Feb;43(2):388-397
  16. 16. Wang W, Li S, Qi H, Ayhan B, Kwan C, Vance S. Identify anomaly component by sparsity and low rank. In: Proceedings of IEEE WHISPERS. Tokyo: IEEE; 2015 Jun. p. 1-4
17. Li S, Wang W, Qi H, Ayhan B, Kwan C, Vance S. Low-rank tensor decomposition based anomaly detection for hyperspectral imagery. In: 2015 IEEE International Conference on Image Processing (ICIP). Quebec City: IEEE; 2015 Sep 27. p. 4525-4529
18. Qu Y, Guo R, Wang W, Qi H, Ayhan B, Kwan C, Vance S. Anomaly detection in hyperspectral images through spectral unmixing and low rank decomposition. In: 2016 IEEE International Geoscience and Remote Sensing Symposium (IGARSS). Beijing: IEEE; 2016 Jul 10. p. 1855-1858
19. Qu Y, Qi H, Ayhan B, Kwan C, Kidd R. Does multispectral/hyperspectral pansharpening improve the performance of anomaly detection? In: 2017 IEEE International Geoscience and Remote Sensing Symposium (IGARSS). Fort Worth: IEEE; 2017 Jul. p. 6130-6133
20. Ayhan B, Kwan C, Li X, Trang A. Airborne detection of land mines using mid-wave infrared (MWIR) and laser-illuminated near-infrared images with the RXD hyperspectral anomaly detection method. In: Fourth International Workshop on Pattern Recognition in Remote Sensing. Hong Kong: International Association of Pattern Recognition; 2006 Aug 26
21. He L, Luo J, Qi H, Kwan C. A comparative study of several unsupervised endmember extraction algorithms to anomaly detection in hyperspectral images. In: International Symposium on Spectral Sensing Research. Missouri: Missouri State University; 2010 Jul
22. Zhou J, Kwan C. Fast anomaly detection algorithms for hyperspectral images. Journal of Multidisciplinary Engineering Science and Technology. 2015 Sep;2(9):2521-2525
23. Dao M, Kwan C, Ayhan B, Tran TD. Burn scar detection using cloudy MODIS images via low-rank and sparsity-based models. In: 2016 IEEE Global Conference on Signal and Information Processing (GlobalSIP). Washington DC: IEEE; 2016 Dec 7. p. 177-181
24. Dao M, Kwan C, Koperski K, Marchisio G. A joint sparsity approach to tunnel activity monitoring using high resolution satellite images. In: IEEE Ubiquitous Computing, Electronics & Mobile Communication Conference. New York City: IEEE; 2017
25. Perez D, Banerjee D, Kwan C, Dao M, Shen Y, Koperski K, Marchisio G, Li J. Deep learning for effective detection of excavated soil related to illegal tunnel activities. In: IEEE Ubiquitous Computing, Electronics & Mobile Communication Conference. New York City: IEEE; 2017
26. Ayhan B, Dao M, Kwan C, Chen HM, Bell JF, Kidd R. A novel utilization of image registration techniques to process Mastcam images in Mars rover with applications to image fusion, pixel clustering, and anomaly detection. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing. 2017 Jul;(99):1-12
27. Dao M, Kwan C, Ayhan B, Bell JF. Enhancing Mastcam images for Mars rover mission. In: International Symposium on Neural Networks. Cham: Springer; 2017 Jun 21. p. 197-206
28. Loncan L, de Almeida LB, Bioucas-Dias JM, Briottet X, Chanussot J, Dobigeon N, Fabre S, Liao W, Licciardi GA, Simoes M, Tourneret JY. Hyperspectral pansharpening: A review. IEEE Geoscience and Remote Sensing Magazine. 2015 Sep;3(3):27-46
29. Chavez P, Sides SC, Anderson JA. Comparison of three different methods to merge multiresolution and multispectral data – Landsat TM and SPOT panchromatic. Photogrammetric Engineering and Remote Sensing. 1991 Mar;57(3):295-303
30. Laben CA, Brower BV, inventors; Eastman Kodak Company, assignee. Process for enhancing the spatial resolution of multispectral imagery using pan-sharpening. United States patent US 6,011,875. 2000 Jan 4
31. Aiazzi B, Baronti S, Selva M. Improving component substitution pansharpening through multivariate regression of MS + Pan data. IEEE Transactions on Geoscience and Remote Sensing. 2007 Oct;45(10):3230-3239
32. Aiazzi B, Alparone L, Baronti S, Garzelli A, Selva M. MTF-tailored multiscale fusion of high-resolution MS and Pan imagery. Photogrammetric Engineering & Remote Sensing. 2006 May 1;72(5):591-596
33. Vivone G, Restaino R, Dalla Mura M, Licciardi G, Chanussot J. Contrast and error-based fusion schemes for multispectral image pansharpening. IEEE Geoscience and Remote Sensing Letters. 2014 May;11(5):930-934
34. Simões M, Bioucas-Dias J, Almeida LB, Chanussot J. A convex formulation for hyperspectral image superresolution via subspace-based regularization. IEEE Transactions on Geoscience and Remote Sensing. 2015 Jun;53(6):3373-3388
35. Simoes M, Bioucas-Dias J, Almeida LB, Chanussot J. Hyperspectral image superresolution: An edge-preserving convex formulation. In: 2014 IEEE International Conference on Image Processing (ICIP). Paris: IEEE; 2014 Oct 27. p. 4166-4170
36. Liu JG. Smoothing filter-based intensity modulation: A spectral preserve image fusion technique for improving spatial details. International Journal of Remote Sensing. 2000 Jan 1;21(18):3461-3472
37. Liao W, Huang X, Van Coillie F, Gautama S, Pižurica A, Philips W, Liu H, Zhu T, Shimoni M, Moser G, Tuia D. Processing of multiresolution thermal hyperspectral and digital color data: Outcome of the 2014 IEEE GRSS data fusion contest. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing. 2015 Jun;8(6):2984-2996
38. Hardie RC, Eismann MT, Wilson GL. MAP estimation for hyperspectral image resolution enhancement using an auxiliary sensor. IEEE Transactions on Image Processing. 2004 Sep;13(9):1174-1184
39. Wei Q, Bioucas-Dias J, Dobigeon N, Tourneret JY. Hyperspectral and multispectral image fusion based on a sparse representation. IEEE Transactions on Geoscience and Remote Sensing. 2015 Jul;53(7):3658-3668
40. Yokoya N, Yairi T, Iwasaki A. Coupled nonnegative matrix factorization unmixing for hyperspectral and multispectral data fusion. IEEE Transactions on Geoscience and Remote Sensing. 2012 Feb;50(2):528-537
41. Licciardi GA, Khan MM, Chanussot J, Montanvert A, Condat L, Jutten C. Fusion of hyperspectral and panchromatic images using multiresolution analysis and nonlinear PCA band reduction. EURASIP Journal on Advances in Signal Processing. 2012 Dec 1;2012(1):207
42. Huang W, Xiao L, Wei Z, Liu H, Tang S. A new pan-sharpening method with deep neural networks. IEEE Geoscience and Remote Sensing Letters. 2015 May;12(5):1037-1041
43. Masi G, Cozzolino D, Verdoliva L, Scarpa G. Pansharpening by convolutional neural networks. Remote Sensing. 2016 Jul 14;8(7):594
44. Qu Y, Qi H, Kwan C. Deep learning based pansharpening algorithm for hyperspectral and multispectral images. Submitted to Computer Vision and Pattern Recognition (CVPR) Conference. 2017
45. Yan Q, Xu Y, Yang X, Nguyen TQ. Single image superresolution based on gradient profile sharpness. IEEE Transactions on Image Processing. 2015 Oct;24(10):3187-3202
46. Keys R. Cubic convolution interpolation for digital image processing. IEEE Transactions on Acoustics, Speech, and Signal Processing. 1981 Dec;29(6):1153-1160
47. Chan SH, Wang X, Elgendy OA. Plug-and-Play ADMM for image restoration: Fixed-point convergence and applications. IEEE Transactions on Computational Imaging. 2017 Mar;3(1):84-98
48. Zhou J, Kwan C, Budavari B. Hyperspectral image super-resolution: A hybrid color mapping approach. Journal of Applied Remote Sensing. 2016 Jul 1;10(3):035024
49. Kwan C, Budavari B, Bovik AC, Marchisio G. Blind quality assessment of fused WorldView-3 images by using the combinations of pansharpening and hypersharpening paradigms. IEEE Geoscience and Remote Sensing Letters. 2017 Aug
50. Kwan C, Budavari B, Gao F. A hybrid color mapping approach to fusing MODIS and Landsat images for forward prediction. Submitted to MDPI Journal of Remote Sensing. 2017
51. Kwan C, Budavari B, Dao M, Ayhan B, Bell JF. Pansharpening of Mastcam images. In: 2017 IEEE International Geoscience and Remote Sensing Symposium (IGARSS). Fort Worth: IEEE; 2017 Jul. p. 5117-5120
52. Kwan C, Ayhan B, Budavari B. Fusion of THEMIS and TES for accurate Mars surface characterization. In: 2017 IEEE International Geoscience and Remote Sensing Symposium (IGARSS). Fort Worth: IEEE; 2017 Jul. p. 3381-3384
53. Kwan C, Choi JH, Chan S, Zhou J, Budavari B. Resolution enhancement for hyperspectral images: A super-resolution and fusion approach. In: 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). New Orleans: IEEE; 2017 Mar 5. p. 6180-6184
54. Kwan C, Budavari B, Dao M, Zhou J. New sparsity based pansharpening algorithm for hyperspectral images. In: IEEE Ubiquitous Computing, Electronics & Mobile Communication Conference. New York City: IEEE; 2017
55. Tropp JA. Greed is good: Algorithmic results for sparse approximation. IEEE Transactions on Information Theory. 2004 Oct;50(10):2231-2242
56. Yang J, Zhang Y. Alternating direction algorithms for ℓ1-problems in compressive sensing. SIAM Journal on Scientific Computing. 2011 Feb 3;33(1):250-278
57. Bayer BE, inventor; Eastman Kodak Company, assignee. Color imaging array. United States patent US 3,971,065. 1976 Jul 20
58. Losson O, Macaire L, Yang Y. Comparison of color demosaicing methods. Advances in Imaging and Electron Physics. 2010 Dec 31;162:173-265
59. Compton JT, Hamilton Jr JF, DeWeese TE, inventors; OmniVision Technologies, Inc., assignee. Image sensor with improved light sensitivity. United States patent US 8,194,296. 2012 Jun 5
60. Kwan C, Chou B, Kwan LM, Budavari B. Debayering RGBW color filter arrays: A pansharpening approach. In: IEEE Ubiquitous Computing, Electronics & Mobile Communication Conference. New York City: IEEE; 2017
61. Möller M, Wittman T, Bertozzi AL, Burger M. A variational approach for sharpening high dimensional images. SIAM Journal on Imaging Sciences. 2012 Jan 24;5(1):150-178
62. Zhou J, Kwan C, Ayhan B, Eismann MT. A novel cluster kernel RX algorithm for anomaly and change detection using hyperspectral images. IEEE Transactions on Geoscience and Remote Sensing. 2016 Nov;54(11):6497-6504
