Open access peer-reviewed chapter

Automatic Adaptive Lossy Compression of Multichannel Remote Sensing Images

Written By

Vladimir Lukin, Alexander Zemliachenko, Ruslan Kozhemiakin, Sergey Abramov, Mikhail Uss, Victoriya Abramova, Nikolay Ponomarenko, Benoit Vozel and Kacem Chehdi

Submitted: March 11th, 2016 Reviewed: July 18th, 2016 Published: November 23rd, 2016

DOI: 10.5772/64944


Abstract

In this chapter, we consider lossy compression of multichannel images acquired by remote sensing systems. Two main features of such data are taken into account. First, the images contain inherent noise that can be of different intensity and type. Second, there can be substantial correlation between component images. These features can be exploited in 3D compression, which is demonstrated to be more efficient than component-wise compression. The benefit is a considerably higher compression ratio attained for the same or even smaller introduced distortions. It is shown that important performance parameters of lossy compression can be rather easily and accurately predicted.

Keywords

  • adaptation
  • automation
  • lossy compression
  • multichannel
  • remote sensing
  • image processing

1. Introduction

Remote sensing (RS) is an application area where compression of images acquired on-board an aircraft or a spacecraft is a very important task [1]. Its importance stems from the continuing trends toward improved sensor spatial resolution, more frequent observation of sensed terrains, a larger number of exploited channels (e.g., in multi- and, especially, hyperspectral sensing), etc. [2]. Meanwhile, the communication channel bandwidth and the time for data transfer can be limited [1, 3, 4]. On-board data-processing facilities can be restricted as well. The possibilities of lossless image compression are also often limited [4]. Even the best existing methods of lossless compression applied to hyperspectral data, fully exploiting the interband correlation inherent in such images, provide a compression ratio (CR) of about 4.5 [4, 5], and this is often not enough. Thus, there is a need for efficient methods of lossy compression of acquired multichannel images.

There are several peculiarities of lossy compression applied to multichannel remote sensing images. First, if it is performed on-board, full or partial automation is required [1, 6]. Second, lossy compression is reasonable and useful only if the introduced losses do not have an essential impact on the value of the compressed data, i.e., if the accuracy and reliability of information extracted from compressed images are approximately at the same level as from original (uncompressed, or losslessly compressed) data. In this sense, the introduced losses should be smaller than, or in the worst case comparable to, the original image distortions due to noise [7]. This means that image-processing (compression) methods should adapt to the noise characteristics. Meanwhile, the noise in images acquired by modern multichannel RS sensors is not additive and has a more complicated nature [8–11]. Thus, either blind estimation of its characteristics or the use of available a priori information is needed. Third, adaptation to other specific properties of subband images is desired. Here, we mean that images in different channels might have considerably different dynamic ranges, signal-to-noise ratios, and interchannel correlation factors [8, 12, 13].

All these factors influence the efficiency of lossy compression and open prospects for its improvement. Meanwhile, all or some of the aforementioned peculiarities of multichannel RS images are often ignored in the design of lossy compression techniques.

On the one hand, it is well understood that high interchannel correlation should be exploited for a sparser representation of the data and for reaching a higher CR than in component-wise compression [14–16]. On the other hand, there are many different ways to realize this. Different transforms can be used [17–20]. Component image grouping can be organized in different manners [15, 21, 22], and so far there are no strict rules on the best way to do this, nor on the maximal benefit achievable compared to component-wise compression in terms of CR under the condition of the same or smaller introduced distortions.

Noise characteristics and the different dynamic ranges of data in component images are also often not taken into account in lossy compression. Little attention has been paid to these aspects in the design of lossy compression techniques for the considered application, although it is clear that they are important and restrict the applicability of methods designed for other types of multidimensional data [3, 20, 23].

The requirements for lossy compression of multichannel images and their priority have to be taken into consideration as well. The main requirements [1, 3, 20] are the following. First, the introduced distortions should not negatively influence the efficiency of solving further tasks of multichannel image processing such as classification, object detection, visual inspection, etc. Only under this condition do the compressed data remain of practically the same value as the original images. This means that the introduced distortions should be smaller than, or of the same order as, the noise in each component (channel) image. Second, there can be a necessity to provide a CR not smaller than some limit value, or a desire to provide as large a CR as possible. Third, lossy compression and the operations associated with it (preliminary analysis of data, some transformations and/or normalizations, etc.) have to be quite simple, especially if one deals with lossy compression on-board. Fourth, there can be recommendations or restrictions concerning the standardization of lossy compression or its mathematical basis. Currently, there are no standards for lossy compression of multichannel RS images, although special efforts are being put toward creating one [3]. In addition, it is understood that most of the aforementioned requirements can be met on the basis of 2D or 3D orthogonal transforms under the condition of proper preparation of multichannel images for compression [20].

In this chapter, we focus on the aspects of automation and adaptation of lossy compression applied to multichannel image processing. First, we show that the noise is signal dependent, with the signal-dependent component either of the same order as the signal-independent (additive) one or dominant [6, 8, 9]. Second, we show how this property can be taken into account at the lossy compression stage by applying a proper variance stabilizing transform (VST) in a component-wise manner [20, 24]. Third, we analyze the peculiarities of lossy compression in the neighborhood of the so-called optimal operation point (OOP), where the introduced losses, characterized by the mean square error, are of the same order as the equivalent noise variance [25]. Fourth, we demonstrate that there is a quite strict relation between OOP existence and the CR in it, on the one hand, and some statistical parameters of noisy images, on the other [25, 26]. Moreover, there are quite easy methods to provide a desired CR by exploiting these statistics [27]. Fifth, we discuss and compare component-wise and 3D compression. Special attention is paid to the advantages of the latter approach [28, 29], and more discussion of group size is provided.


2. Image and noise models and their parameters

While 10–20 years ago it was usually assumed that noise is additive in all components of multichannel remote sensing data [30], studies carried out by different researchers [9, 10] indicate that the following image/noise model is more adequate:

I_kij^noisy = I_kij^true + n_kij(I_kij^true),  i = 1, …, I,  j = 1, …, J,  k = 1, …, K    (1)

Here, I_kij^noisy denotes the ijth sample of the kth component of the considered multichannel image, and n_kij is the ijth value of the signal-dependent noise in the kth component image. To indicate that the noise is signal dependent, we use the notation n_kij(I_kij^true), where I_kij^true is the true value of the kijth voxel, I and J define the data size, and K denotes the number of components. For multi- and hyperspectral images, model (1) transforms to

I_kij^noisy = I_kij^true + N_kij^SI + N_kij^SD,    (2)

where N_kij^SD and N_kij^SI denote the signal-dependent (SD) and signal-independent (SI) noise components. The SI component is usually associated with dark and electronic noise and is assumed to be zero mean, white, and Gaussian. The situation with the SD component is more complicated since it is associated with wave power estimation by sensors and with system calibration. For photon-counting detectors it can be assumed that this noise component is also zero mean and white, and that its variance is proportional to I_kij^true. Thus, one gets the following model for the noise variance:

σ_kij^2 = σ_k^2 + γ I_kij^true,    (3)

where σ_k^2 denotes the SI noise variance and γ is the SD noise proportionality factor. Then, it becomes possible to determine the input MSE for each component image as

MSE_k^inp = Σ_{i=1}^{I} Σ_{j=1}^{J} (I_kij^noisy − I_kij^true)^2 / (IJ),  k = 1, …, K

and the input PSNR

PSNR_k^inp = 10 log_10(D_k^2 / MSE_k^inp),  k = 1, …, K,    (4)

where D_k denotes the image dynamic range.
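The noise model (3) and the input MSE/PSNR definitions can be sketched in Python with NumPy. This is a minimal illustration on a synthetic component image; the image size and noise parameters are illustrative assumptions, not sensor values.

```python
import numpy as np

rng = np.random.default_rng(0)

def add_mixed_noise(img_true, sigma2_si, gamma):
    # Per-pixel noise variance follows Eq. (3): sigma^2 = sigma_SI^2 + gamma * I_true
    var_map = sigma2_si + gamma * img_true
    return img_true + rng.normal(0.0, np.sqrt(var_map))

def input_mse(img_noisy, img_true):
    # Input MSE of a component image (mean squared noisy-vs-true difference)
    return float(np.mean((img_noisy - img_true) ** 2))

def input_psnr(img_noisy, img_true):
    # Input PSNR with dynamic range D = I_max - I_min, Eq. (4)
    d = img_true.max() - img_true.min()
    return 10.0 * np.log10(d ** 2 / input_mse(img_noisy, img_true))

# Toy example: a flat-histogram 256x256 "component image" in [50, 250]
true_img = rng.uniform(50.0, 250.0, size=(256, 256))
noisy_img = add_mixed_noise(true_img, sigma2_si=25.0, gamma=0.5)
```

With these illustrative parameters the expected input MSE is close to σ_SI^2 + γ × (image mean), i.e., about 100.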

One can estimate the equivalent noise variance for the SD component as

σ_eq.SD^2(k) = Σ_{i=1}^{I_Im} Σ_{j=1}^{J_Im} γ(k) I_kij^true / (I_Im J_Im) ≈ Σ_{i=1}^{I_Im} Σ_{j=1}^{J_Im} γ̂(k) I_kij^noisy / (I_Im J_Im) = γ̂(k) I_mean(k),    (5)

where I_Im, J_Im denote the image size, γ̂(k) is the estimate of the SD noise component parameter (assumed accurate enough), and I_mean(k) is the image mean. If an estimate σ̂_k^2 of the SI component variance is also available, then one can obtain an estimate of the input MSE as

MŜE_k^inp ≈ σ_eq.SD^2(k) + σ̂_k^2,    (6)

where σ̂_k^2 denotes the SI noise variance estimate (assumed accurate enough) for the kth component of the multichannel image.
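Equations (5) and (6) reduce to two one-line estimators. The sketch below demonstrates them on synthetic data with known noise parameters (illustrative values; in practice γ̂(k) and σ̂_k^2 would come from a blind estimator).

```python
import numpy as np

def equivalent_sd_variance(img_noisy, gamma_hat):
    # Eq. (5): the true image mean is approximated by the noisy image mean
    return gamma_hat * float(img_noisy.mean())

def predicted_input_mse(img_noisy, gamma_hat, sigma2_si_hat):
    # Eq. (6): input MSE estimate = SD equivalent variance + SI variance estimate
    return equivalent_sd_variance(img_noisy, gamma_hat) + sigma2_si_hat

# Synthetic check against the model of Eq. (3)
rng = np.random.default_rng(1)
true_img = rng.uniform(50.0, 250.0, size=(128, 128))
gamma, sigma2_si = 0.5, 25.0
noisy_img = true_img + rng.normal(0.0, np.sqrt(sigma2_si + gamma * true_img))
mse_hat = predicted_input_mse(noisy_img, gamma_hat=gamma, sigma2_si_hat=sigma2_si)
```

For these parameters, mse_hat should land near σ_SI^2 + γ × I_mean ≈ 100, matching the true input MSE of the simulation.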

It is important to know how large the relative contribution of the SD noise component to the input MSE is. To get an idea of this, the values σ_eq.SD^2(k) have been derived and graphically compared to σ̂_k^2 [25]. The plots for the AVIRIS [31] (224 subbands in the optical visible and infrared ranges) and Hyperion [32] (242 subbands in the same ranges; for some very noisy subbands the estimates have not been obtained) sensors are presented in Figure 1 in logarithmic scale (since these estimates vary over very wide limits). The values of σ_eq.SD^2(k) for subbands for which negative estimates of γ̂(k) have been obtained by the method [10] are assigned unit values (zero values in logarithmic scale).

Figure 1.

Variance estimation results for Hyperion dataset EO1H2010262004157110KP (a) and for AVIRIS images Lunar Lake (b), Cuprite (c), and Moffett Field (d).

Analysis of the data presented in Figure 1 shows the following. For Hyperion data (Figure 1a), in most subbands of the visible and near-infrared ranges (subbands with indices from 13 to 61), σ_eq.SD^2(k) is larger than σ̂_k^2, i.e., the SD component contribution prevails. In the infrared range (subbands with indices from 78 to 230; Figure 1a), there are approximately equal percentages of subbands where the influence of the SD or the SI component is dominant. According to our experiments, similar conclusions can be drawn for other real-life images acquired by the Hyperion sensor.

The results for three widely known test datasets acquired by the AVIRIS sensor are given in Figure 1(b)–(d). Their analysis allows concluding the following. All three dependences of the same type (for instance, σ_eq.SD^2(k)) are very similar to each other. Thus, if hyperspectral images are acquired during the same session, one can assume that the noise characteristics do not change. In addition, σ_eq.SD^2(k) is larger than σ̂_k^2 for most images acquired by the visible-range AVIRIS spectrometer (spectrometer A, indices 1, …, 32). The same conclusion holds for most subbands of the second AVIRIS spectrometer (B, indices 33, …, 96). The contributions of the considered noise components are comparable for the third spectrometer's images (C, indices 97, …, 160). SI noise dominates for most subband images acquired by the fourth AVIRIS spectrometer (D, indices 161, …, 224). Thus, the contributions of the noise components depend upon the wavelength and the sensor used in a hyperspectral system. But in any case, the assumption of purely additive noise is not valid. Moreover, for hyperspectral imaging, there is a tendency toward an increasing relative contribution of the SD component [33].

Figure 2.

Dependence of cross-correlation factor R on k for the 166th subband image.

One more important property of multichannel RS images is that their signal components are often cross-correlated. Meanwhile, the cross-correlation factor also depends upon the noise intensity in both images and decreases if the noise is intensive in one or both component images. Keeping this in mind, we have chosen for analysis the subband image with k = 166, which corresponds to the far-infrared range and is acquired by the fourth spectrometer of AVIRIS. This image is quite noisy, and its input PSNR is less than 30 dB (the dynamic range of this image is small, and this is the second reason for the low input PSNR). The dependence of the cross-correlation factor R on k is presented in Figure 2.

The factor R(166) = 1, and for this subband image the signal-independent noise component prevails, with an input MSE of about 11. But we are more interested in R(k) for other subbands. Analysis of the data presented in Figure 2 shows that R(k) varies over a rather wide range. On average, the values of R(k) are largest for the subband images acquired by the fourth spectrometer of the AVIRIS imager, for which k > 160. Meanwhile, cross-correlation factors are large enough for subbands relating to other ranges as well.
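The cross-correlation factor used here is the ordinary Pearson correlation between two subband images. A minimal sketch on a synthetic three-subband cube (the data are fabricated for illustration; band 1 is a noisier copy of band 0, band 2 is independent):

```python
import numpy as np

def cross_corr_factor(ref, other):
    # Pearson correlation between two (flattened) subband images
    a = ref.ravel() - ref.mean()
    b = other.ravel() - other.mean()
    return float(a @ b / np.sqrt((a @ a) * (b @ b)))

# Synthetic "subbands": correlation should rank them 0 > 1 > 2
rng = np.random.default_rng(2)
base = rng.normal(size=(64, 64))
cube = np.stack([base,
                 base + 0.3 * rng.normal(size=base.shape),  # noisier copy
                 rng.normal(size=base.shape)])              # independent band
r = [cross_corr_factor(cube[0], cube[k]) for k in range(3)]
```

This reproduces the behavior noted in the text: R = 1 for a band against itself, R decreases as noise is added, and R is near zero for unrelated content.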

Although the cross-correlation of images in multichannel data is often high, there can also be substantial variation in the dynamic range D_k, usually defined as D_k = I_k^max − I_k^min, where I_k^max and I_k^min are the maximal and minimal values in the kth subband image, respectively. For hyperspectral data, the values D_k and D_{k+1}, i.e., for neighboring subbands, are usually quite close. As follows from the analysis of noise components in Figure 1, neighboring channels commonly have quite close input MSEs (equal to the noise variance σ_k^2 if the noise is purely additive). Thus, the input PSNR values determined as PSNR_k^inp = 10 log_10(D_k^2 / MSE_k^inp) are usually close to each other for neighboring images of hyperspectral and multispectral data.


3. Considered performance criteria and peculiarities of lossy compression of noisy images

After lossy compression of a multichannel image, one obtains {I_kij^c, i = 1, …, I, j = 1, …, J, k = 1, …, K}. If one deals with lossy compression of a noise-free image, then the quality of the compressed image is worse for a larger compression ratio (smaller bpp, larger quantization step, or larger scaling factor for DCT-based coders). The reason is that more distortions are introduced for larger CR.

Meanwhile, many researchers [34–36] have stressed that there are peculiarities in the lossy compression of noisy images. Lossy compression leads to a specific noise-removal effect that can be large enough under certain conditions. Due to this, it might be possible that the MSE for a compressed image,

MSE_k^c = Σ_{i=1}^{I} Σ_{j=1}^{J} (I_kij^c − I_kij^true)^2 / (IJ),  k = 1, …, K,    (7)

is less than MSE_k^inp, and MSE_k^c can attain a minimum for some value of a parameter that characterizes compression for a given method. This can be the quantization step (QS), scaling factor (SF), or bits per pixel (bpp), depending on the coder used. Such a parameter value is then associated with the so-called optimal operation point (OOP). Figure 3 presents dependences of

PSNR_k^c = 10 log_10(D_k^2 / MSE_k^c),  k = 1, …, K,    (8)

Figure 3.

Dependences PSNRc(QS) for the coder AGU and test images Airfield, Aerial, and Frisco corrupted by AWGN with noise variance equal to 100.

on QS for the lossy DCT-based coder AGU [37] applied to three standard grayscale noisy RS test images: Airfield, Aerial, and Frisco. All three images were corrupted by additive white Gaussian noise (AWGN) with variance 100. Note that the test image Frisco has a simpler structure, while the test images Aerial and Airfield contain more details. This is the reason why the denoising effect of lossy compression is considerably greater for the image Frisco and why the dependence for it has an obvious global maximum. This is the OOP according to the metric PSNR^c, which coincides with the OOP according to the metric MSE^c (see Eq. (8)). For the test image Aerial, the OOP is not so "obvious," although it exists. Finally, for the test image Airfield, formally there is no OOP, but the dependence PSNR^c(QS) has a local maximum. Note that in all cases the maxima occur at QS_OOP ≈ 4σ. This is the choice recommended for the coder AGU [24] and for a more complex coder, ADCT [38]. This recommendation allows compressing noisy images with the aforementioned DCT-based coders component-wise in one iteration, under the assumption that the noise characteristics of the component images are known or preestimated with appropriate accuracy. Note that there are many modern methods for blind estimation of the parameters of additive noise [39, 40] and signal-dependent noise [41–43]. The availability of these techniques gives rise to fully automatic compression based on noise parameter estimation [20, 44].
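The denoising effect behind the OOP can be illustrated with a toy blockwise DCT "coder": uniform quantization of 8 × 8 DCT coefficients, with no entropy coding. This is a sketch of the distortion mechanism only, not the AGU coder; the test scene and noise level are fabricated for illustration. On a simple-structure scene with AWGN (σ = 10), the MSE against the true image near QS = 4σ falls below the MSE obtained with a very small QS (which essentially keeps the noise).

```python
import numpy as np

def dct_matrix(n=8):
    # Orthonormal DCT-II transform matrix
    k = np.arange(n)
    c = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    c *= np.sqrt(2.0 / n)
    c[0] /= np.sqrt(2.0)
    return c

def toy_compress(img, qs, n=8):
    # Blockwise 8x8 DCT, uniform quantization with step qs, inverse DCT.
    # Assumes the image size is divisible by n; mimics only the distortion
    # side of a DCT coder (no entropy coding).
    c = dct_matrix(n)
    out = np.empty_like(img, dtype=float)
    for i in range(0, img.shape[0], n):
        for j in range(0, img.shape[1], n):
            coef = c @ img[i:i + n, j:j + n] @ c.T
            out[i:i + n, j:j + n] = c.T @ (qs * np.round(coef / qs)) @ c
    return out

# Simple-structure scene plus AWGN with variance 100 (sigma = 10)
rng = np.random.default_rng(3)
x, y = np.meshgrid(np.linspace(0, 4, 128), np.linspace(0, 4, 128))
true_img = 100.0 + 60.0 * np.sin(x) * np.cos(y)
noisy = true_img + rng.normal(0.0, 10.0, true_img.shape)

# MSE against the TRUE image for several quantization steps
mse_c = {qs: float(np.mean((toy_compress(noisy, qs) - true_img) ** 2))
         for qs in (5, 10, 20, 40, 80)}
```

At QS = 5 almost all noisy coefficients survive quantization, so MSE^c stays near the input MSE of 100; at QS = 4σ = 40 most noise-only coefficients are zeroed, pulling MSE^c below the input MSE.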

For the recommended QS_OOP ≈ 4σ (or QS_OOP ≈ 4σ_equiv with σ_equiv^2 = MSE^inp, used if the noise is signal dependent and a VST is not applied before compression), it can be interesting to study such parameters as PSNR_k^OOP and δPSNR_k^OOP, determined as

δPSNR_k^OOP = PSNR_k^OOP − PSNR_k^inp,    (9)

where a positive δPSNR_k^OOP means that the OOP exists according to the corresponding metric. Such a study has been carried out recently [26]. It has been established that δPSNR_k^OOP can vary over rather wide limits, from about −3 to 5–6 dB. Negative values show that the OOP does not exist; this happens if a compressed image has a rather complex structure and/or the noise is not intensive. On the contrary, positive values indicate that there is an OOP; this takes place for images of quite simple structure and/or with quite intensive noise. It has also been shown in [26] that δPSNR_k^OOP can be quite accurately predicted before compression using different statistics of DCT coefficients determined for a limited number of 8 × 8 pixel blocks.

Let us consider the dependences of CR on QS for the same noisy test images as in Figure 3. These dependences are presented in Figure 4. The first observation is that lossy compression with the recommended QS leads to substantially different compression ratios for different images. Recall that the recommended QS equals 40 for σ^2 = 100 and 56 for σ^2 = 200. Images with simpler structure and/or more intensive noise are compressed with larger CR. For QS = 40 (σ^2 = 100), one has a CR of about 17 for the image Frisco and about 7 for the two other test images. If the noise intensity is greater (QS = 56, σ^2 = 200), larger CR values are attained: about 26 for Frisco and 14 for the other two images. Thus, noisier images are compressed in the OOP with larger CR. This means that image complexity and noise intensity should be taken into account in practice. Some ways to do this are described in the next section.

Figure 4.

Dependences of CR on QS for three test images of different complexity for two values of noise variance σ2 (100 and 200).


4. Efficiency for 3D compression

4.1. Main dependences and benefits

As mentioned above, compression of multichannel RS images can be carried out component-wise or using variants of a 3D approach. The former case has certain benefits. First, it is easier to handle the data: for example, QS or bpp can be set individually for each component image. Second, part of the operations can be performed in parallel: for example, orthogonal transforms and quantization of coefficients can be carried out separately for each component image, so this part of the processing can be parallelized. In the latter case, 3D compression can be applied to the multichannel image as a whole [14] or to a set of component image groups [15, 22]. Each variant has its own positive features and drawbacks. If groups are used, it is easier to parallelize computations (since processing can be partly performed separately in each group) and to adjust the compression parameters.

Figure 5.

Noise-free (a) and noisy (b) three-channel test image in pseudocolor representation.

Let us analyze some peculiarities of 3D compression for a rather simple three-channel test image (presented in Figure 5a). This image is considered noise free; it has been composed from three visible-range channels of Landsat RS data associated with the red, green, and blue components for visualization. The noisy image, with artificially added AWGN having the same variance of 130 in all components, is shown in Figure 5(b). The noise is well seen in quasihomogeneous regions.

The plots of MSE_k^c(QS) are presented in Figure 6(a). The notation 2D relates to two-dimensional, i.e., component-wise, compression. For all three components the plots almost coincide, and therefore we present the averaged dependence. In turn, the notation 3D concerns 3D compression using the 3D version of the AGU coder [15]. Again, the dependence averaged over all three components is given.

There are several interesting observations for these plots. If QS is rather small, e.g., less than 2σ, the dependences MSE_k^c(QS) practically coincide, i.e., there is almost no difference between 2D (component-wise) and 3D (volumetric) compression. For larger QS, differences start to appear, and they become quite large for QS about 4σ, i.e., where the OOP can be observed. First, the OOP is observed (see Figure 6a) for both 2D and 3D compression, but MSE_k^c(QS) in the OOP for 3D compression is considerably (by almost two times) smaller. This means that the noise-filtering effect of 3D compression is substantially better than that of 2D compression. This can be noticed in Figure 7, which presents images compressed in the OOP by the 2D and 3D AGU coder versions. Second, the OOP in the case of 3D compression is observed under the same conditions as for 2D compression. More examples confirming this can be found in the paper [29].

Figure 6.

Averaged dependences MSEc(QS) (a) and CR(QS) (b) for 2D and 3D compression.

Figure 7.

Images compressed in OOP for 2D (a) and 3D (b) compression.

Consider now the plots of CR(QS) presented in Figure 6(b). For QS less than 2σ, there are almost no benefits of 3D compression. However, for larger QS, the benefits become obvious: the CR provided by 3D compression turns out to be almost twice as large as for component-wise processing. Why does this happen? And can we predict CR and the situations when 3D compression might be beneficial compared to component-wise coding?

4.2. Prediction of compression parameters

There are two main compression parameters for which prediction is desirable for compression in the OOP neighborhood, namely, δPSNR_k^OOP and CR. An approach to predicting δPSNR_k^OOP for component-wise compression has been proposed recently [26]. Its essence is the following. Suppose that one has a parameter able to jointly characterize the complexity of the image to be compressed and the noise intensity in it. Suppose also that this (input) parameter can be calculated easily (quickly, considerably faster than the compression itself) and that it is tightly connected to the output (predicted) parameter (indicator) that characterizes compression from a desired viewpoint. This connection is expressed as an analytical dependence allowing one to determine (predict) the output parameter easily and quickly. Then, it becomes possible to estimate the input parameter for a considered image, to use it as the argument for calculating the output parameter, and to make a decision based on this prediction [26].

Having described the general strategy of prediction, let us give some details. First of all, there are many parameters that can be used as inputs [45–47]. Under the condition that the noise parameters (variance) are known in advance or preestimated with appropriate accuracy, statistical parameters of the family P_ασ can be used. These are the mean probabilities that the absolute values of DCT coefficients calculated in N_bl blocks of size 8 × 8 pixels are less than a threshold ασ, where α is a parameter (in our experiments it was equal to 0.5, 1.0, 1.5, or 2.0). The input parameter P_ασ is indirectly connected with the number of zeroed DCT coefficients in image filtering [45], which influences the denoising efficiency of lossy compression applied to noisy images. There is also the parameter P_0q, the mean probability that DCT coefficients calculated in N_bl blocks of size 8 × 8 pixels are equal to zero after quantization with the used QS.
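The statistics P_ασ and P_0q can be estimated from a small number of randomly placed 8 × 8 blocks, which is what makes them cheap input parameters. A sketch follows; the choice to skip the DC coefficient and the block count are our assumptions for illustration, not the exact procedure of [26, 27].

```python
import numpy as np

def dct_matrix(n=8):
    # Orthonormal DCT-II transform matrix
    k = np.arange(n)
    c = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    c *= np.sqrt(2.0 / n)
    c[0] /= np.sqrt(2.0)
    return c

def block_dct_stats(img, sigma, qs, alpha=2.0, n=8, n_blocks=300, seed=0):
    # P_{alpha*sigma}: mean probability that |AC DCT coefficient| < alpha*sigma
    # P_{0q}:          mean probability that a coefficient quantizes to zero
    rng = np.random.default_rng(seed)
    c = dct_matrix(n)
    below = zeroed = total = 0
    for _ in range(n_blocks):
        i = int(rng.integers(0, img.shape[0] - n + 1))
        j = int(rng.integers(0, img.shape[1] - n + 1))
        ac = np.abs((c @ img[i:i + n, j:j + n] @ c.T).ravel()[1:])  # skip DC
        below += int(np.count_nonzero(ac < alpha * sigma))
        zeroed += int(np.count_nonzero(np.round(ac / qs) == 0))
        total += ac.size
    return below / total, zeroed / total

# Sanity check on pure AWGN (sigma = 10): the DCT of white Gaussian noise is
# white Gaussian with the same variance, so P_{2sigma} should be near 0.954
rng = np.random.default_rng(5)
noise_img = rng.normal(0.0, 10.0, (256, 256))
p_2sigma, p_0q = block_dct_stats(noise_img, sigma=10.0, qs=40.0)
```

With QS = 4σ, a coefficient quantizes to zero exactly when its magnitude is below 2σ, so P_0q and P_2σ coincide on a noise-only image; on a real image, content-carrying coefficients pull both values down.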

Obtaining the dependence between the output and input parameters is a special stage performed in advance (offline). This stage presumes getting a scatterplot where the horizontal axis corresponds to the input parameter and the vertical axis relates to the predicted output parameter. Each scatterplot point corresponds to a test image corrupted by AWGN with a certain variance and compressed in a specified way. An example of such a scatterplot is shown in Figure 8, where P_2σ serves as the input parameter and δPSNR^OOP as the output parameter.

Having such a scatterplot, curve fitting is applied to obtain the desired dependence. At this substage, several subtasks should be solved. They can, in general, be treated as providing a good fit and include the choice of proper type and parameters of the approximating functions, accounting for restrictions, etc. Different criteria of fitting quality can be used [48], where R^2 (goodness of fit, which has to approach unity for a good fit) is one of the commonly employed parameters. The example for 2D image compression presented in Figure 8 shows that the scatterplot points are not spread a lot, and it can be assumed that the dependence is a smooth function. Then, polynomials of the fourth and fifth orders and some other functions provide appropriate results (the fitted polynomial expression is presented in Figure 8). The performance of prediction for different input parameters should be analyzed and compared, since considerably different values of R^2 can be produced both potentially and in practice [27]. Some analysis has already been carried out [27], but this study is far from complete.
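The offline fitting stage amounts to ordinary least-squares polynomial fitting plus the R^2 criterion. A minimal sketch on a synthetic scatterplot (the trend and spread below are fabricated for illustration, not the real δPSNR^OOP data of Figure 8):

```python
import numpy as np

def fit_predictor(p, y, order=4):
    # Least-squares polynomial fit plus the goodness-of-fit criterion R^2
    coeffs = np.polyfit(p, y, order)
    y_hat = np.polyval(coeffs, p)
    ss_res = float(np.sum((y - y_hat) ** 2))
    ss_tot = float(np.sum((y - np.mean(y)) ** 2))
    return coeffs, 1.0 - ss_res / ss_tot

# Synthetic scatterplot: a smooth monotone trend with small spread
rng = np.random.default_rng(4)
p = rng.uniform(0.5, 1.0, 60)
delta_psnr = -3.0 + 32.0 * (p - 0.5) ** 2 + 0.2 * rng.normal(size=60)
coeffs, r2 = fit_predictor(p, delta_psnr)
```

At prediction time, only np.polyval(coeffs, p_measured) needs to run, which is why the approach is cheap enough for on-board use.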

Figure 8.

Scatterplot of δPSNROOP on P2σ and the fitted fourth-order polynomial.

Figure 9.

Scatterplots of CR vs P1σ (a) and P0q (b) and the fitted curves.

A similar strategy has been applied to the prediction of CR for 2D compression. The first attempt to predict CR for lossy compression of noisy images in the OOP for the AGU and ADCT [38] coders was made in 2015 [26] using two input parameters, P_2σ and P_2.7σ. A more thorough study has been carried out later [27]. Below we present two scatterplots from the aforementioned paper (Figure 9). As can be seen, both scatterplots have small spread and, according to their visual inspection, CR tends to increase as the input parameters P_1σ or P_0q (considered here as examples) become larger. Fitting is excellent in both cases, although the results for P_0q are slightly better. The tight connection of CR with P_0q is easily understood: a larger P_0q shows that there are more zeros in the sequence to be coded, and this in turn [49] leads to a larger CR.

Figure 10.

Dependences of P_0q on QS for 2D and 3D compression.

Thus, we can expect that the benefits of 3D compression compared to 2D stem from there being more zeros after the 3D DCT (better decorrelation of the data) than in component-wise compression. To check this hypothesis, we have determined P_0q for 3D compression and for 2D component-wise compression of the test image in Figure 5(b). The results are given in Figure 10 as dependences of P_0q on QS for 3D AGU (notation 3D) and for two components, red and green (notations 2D-R and 2D-G, respectively). As can be seen, the curves for the "red" and "green" channels practically coincide. In 3D compression, more zeros are observed for any QS. For the optimal QS = 56, one has P_0q of about 0.87 for 2D compression; the predicted CR is about 13 (see Figure 9b), and this practically coincides with the actually attained CR (Figure 6b). In turn, for 3D compression, P_0q is about 0.92; the predicted CR is over 20 (see Figure 9b), and this is in good agreement with the actually attained CR (Figure 6b). Certainly, a more thorough study is needed. However, we can expect that the prediction of CR using P_0q can work for 3D DCT-based compression as well.

4.3. Experimental data

The observations described above have also been verified for two types of multichannel images. The first type is Landsat TM data [50]. Different variants of uniting eight images of the same resolution into groups for further 3D compression have been considered. It has been shown that there are benefits in CR (it increases substantially for the same level of introduced distortions) only if the images combined into a group are highly correlated and have similar dynamic ranges [50]. In that case, there is an increase in the percentage of zeros P_0q for the 3D coder compared to P_0q for the component images within the group. This increase can serve as an indicator of the expedience of applying 3D compression. Meanwhile, there are component images (e.g., in channel nine, wavelength 1360–1390 nm) for which separate compression is expedient, since adding them to any group does not improve the compression performance.

The second type of analyzed data is hyperspectral images acquired by the Hyperion sensor (the dataset EO1H1800252002116110KZ). Hyperion produces bad-quality (very noisy) data in some bands (for example, in subbands with indices 1–12). These component images are usually discarded in analysis, and we have not processed them either.

Hyperspectral data can be compressed with or without utilizing a VST to take into account the signal-dependent nature of the noise. Below we consider data obtained for the procedure that employs a VST for both 2D and 3D compression. In both cases, after determining the parameters of the noise in all subbands (if needed), the generalized Anscombe transform and/or normalization is carried out [20]. Note that the original data are presented as 16-bit values, and this is taken into account in CR calculation and prediction.
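For the mixed signal-dependent/signal-independent model of Eq. (3), one common form of the forward generalized Anscombe transform can be sketched as below; after it, the noise standard deviation becomes approximately 1, so a single QS can serve all subbands. The particular intensity and noise parameters in the check are illustrative assumptions.

```python
import numpy as np

def generalized_anscombe(img, gamma, sigma2_si):
    # Forward generalized Anscombe VST for noise variance sigma2_si + gamma*I:
    #   f(I) = (2/gamma) * sqrt(gamma*I + (3/8)*gamma^2 + sigma2_si)
    # After the transform the noise standard deviation is approximately 1.
    arg = gamma * img + 0.375 * gamma ** 2 + sigma2_si
    return (2.0 / gamma) * np.sqrt(np.maximum(arg, 0.0))

# Monte Carlo check of variance stabilization at a fixed true intensity
rng = np.random.default_rng(6)
i_true, gamma, sigma2_si = 200.0, 2.0, 4.0
noisy = i_true + rng.normal(0.0, np.sqrt(sigma2_si + gamma * i_true), 200000)
stabilized_std = float(generalized_anscombe(noisy, gamma, sigma2_si).std())
```

The design point is that the QS ≈ 4σ rule then reduces to the fixed setting QS ≈ 4 in the transformed domain for every subband, regardless of its original noise parameters.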

We have considered four variants of compressing the data. The first variant is component-wise compression. The second is to divide the hyperspectral image into two groups: the first group includes subbands with indices from 13 to 57, while the second contains subband images with indices from 83 to 224. The third variant is to use groups of eight subbands. The fourth is to apply 16-channel groups. The subbands left over in both ranges formed groups of smaller size. The CR for all subbands of a group is assumed to be the same since all its images are compressed jointly.
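The grouping logic above (fixed-size consecutive groups per spectral range, with leftovers forming a smaller tail group) can be sketched in a few lines. The index ranges are the ones named in the text; the helper itself is our illustrative construction.

```python
def make_groups(indices, group_size):
    # Consecutive groups of a given size; a shorter tail group absorbs leftovers
    return [indices[i:i + group_size]
            for i in range(0, len(indices), group_size)]

# The two Hyperion spectral ranges used in the text
vnir = list(range(13, 58))    # subbands 13..57
swir = list(range(83, 225))   # subbands 83..224
groups8 = make_groups(vnir, 8) + make_groups(swir, 8)
groups16 = make_groups(vnir, 16) + make_groups(swir, 16)
```

Grouping is restarted at the range boundary on purpose: uniting subbands across the 57/83 gap would mix poorly correlated data, which, as noted above, removes the CR benefit of 3D compression.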

The obtained results are presented in Figure 11. Their analysis reveals several interesting facts. First, the CR for component-wise compression is, on average, substantially smaller than for any of the 3D compression variants. If component-wise compression is applied, the CRs for neighboring subbands are close to each other, although the total range of CR variation is rather wide, from about 4 to about 27. In general, there is correlation between the CR values for 2D and 3D compression: if the CR for 2D compression is larger, the CR for the 3D compression variants is usually larger too. However, there are a few exceptions when the CR for a particular subband image compressed separately is larger than for 3D compression. This happens for subband images with low input SNR and low correlation with the data in neighboring subbands [50].

It is difficult to determine from visual inspection of the plots (see Figure 11) which variant of 3D compression is preferable. A more thorough analysis has shown that the CRs for groups of 8 subbands (18.34 and 12.72) and 16 subbands (20.81 and 14.65) are quite close. The CRs for the case of using only two large groups of unequal size are slightly smaller (17.43 and 13.00).

We have also determined the percentage of zeros for 3D compression in groups of 8 and 16 subbands. The results are presented in Figure 12. As can be seen, there is tight correlation between the CR for a group and the corresponding percentage. This allows expecting that it is possible to predict the CR for 3D compression in groups by analyzing P_0q for these groups. Note that P_0q varies from 0.3 (30%) for subbands compressed with small CR to almost 0.9 (90%) for subbands compressed with very large CR.

Figure 11.

CR values for subbands of hyperspectral Hyperion data for component-wise and 3D lossy compression: 8 channels (blue), 16 channels (yellow), all channels in a group (violet), and component-wise (red).

Figure 12.

Percentages of zero values for quantized DCT coefficients for hyperspectral data compression in groups of size 8 and 16 channels.

Examples of real-life images before and after compression for particular subbands can be found in [22]. If noise intensity is high and the noise is visible, lossy compression provides a noticeable filtering effect; if the noise is invisible, the original and compressed images look almost the same.

5. Conclusions

The task of lossy compression of multichannel remote sensing images has been considered. It is shown that this type of data has several peculiarities to be taken into account in compression: the signal-dependent nature of the noise, wide limits of variation of dynamic range and SNR in subband images, and substantial correlation of data in neighboring channels. Lossy compression should be carried out in an automatic manner, especially if it has to be performed on-board. It then has to adapt to noise properties, where the simplest adaptation mechanism is to set QS proportional to the noise standard deviation (evaluated before or after VST, depending on whether the transform is applied). A good decision in compression of noisy images is to operate in the neighborhood of the optimal operation point. It is shown that an OOP exists for both component-wise and 3D compression, where the latter approach is preferable since it provides better denoising and a considerably larger CR. Parameters of compression can be predicted rather easily and with quite high accuracy before compression is executed. This allows adapting compression to image and noise properties and deciding whether compression performance meets requirements.

Meanwhile, several tasks remain to be solved in the future. The main one is adaptive grouping; another is adjusting QS to provide a desired CR.

References

  1. Christophe E. Hyperspectral Data Compression Tradeoff in Optical Remote Sensing. In: Prasad S., Bruce L.M., Chanussot J., editors. Advances in Signal Processing and Exploitation Techniques. Springer, Berlin Heidelberg; 2011. pp. 9–29.
  2. Schowengerdt R. Remote Sensing: Models and Methods for Image Processing. 3rd ed. Academic Press, Orlando, USA; 2006. p. 560.
  3. Blanes I., Magli E., Serra-Sagrista J. A tutorial on image compression for optical space imaging systems. IEEE Geoscience and Remote Sensing Magazine. 2014;2(3):8–26.
  4. Yu G., Vladimirova T., Sweeting M.N. Image compression systems on board satellites. Acta Astronautica. 2009;64(9–10):988–1005.
  5. Magli E., Olmo G., Quacchio E. Optimized onboard lossless and near-lossless compression of hyperspectral data using CALIC. IEEE Geoscience and Remote Sensing Letters. 2004;1(1):21–25.
  6. Lukin V., Abramov S., Ponomarenko N., Krivenko S., Uss M., Vozel B., Chehdi K., Egiazarian K., Astola J. Approaches to automatic data processing in hyperspectral remote sensing. Telecommunications and Radio Engineering. 2014;73(13):1125–1139.
  7. Aiazzi B., Alparone L., Barducci A., Baronti S., Pippi I. Estimating noise and information of multispectral imagery. Journal of Optical Engineering. 2002;41:656–668.
  8. Abramov S., Uss M., Abramova V., Lukin V., Vozel B., Chehdi K. On noise properties in hyperspectral images. In: Proceedings of IGARSS; Milan, Italy; July 2015. pp. 3501–3504. (see http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7325672)
  9. Meola J., Eismann M.T., Moses R.L., Ash J.N. Modeling and estimation of signal-dependent noise in hyperspectral imagery. Applied Optics. 2011;50(21):3829–3846.
  10. Uss M.L., Vozel B., Lukin V., Chehdi K. Image informative maps for component-wise estimating parameters of signal-dependent noise. Journal of Electronic Imaging. 2013;22(1). DOI: 10.1117/1.JEI.22.1.013019 (see http://electronicimaging.spiedigitallibrary.org/article.aspx?articleid=1568204)
  11. Uss M., Vozel B., Lukin V., Chehdi K. Maximum likelihood estimation of spatially correlated signal-dependent noise in hyperspectral images. Optical Engineering. 2012;51(11). DOI: 10.1117/1.OE.51.11.111712 (see http://opticalengineering.spiedigitallibrary.org/issue.aspx?journalid=92&issueid=24230)
  12. Lukin V., Ponomarenko N., Fevralev D., Vozel B., Chehdi K., Kurekin A. Classification of pre-filtered multichannel remote sensing images. In: Escalante-Ramirez B., editor. Remote Sensing—Advanced Techniques and Platforms. In-Tech, Austria; 2012. pp. 75–98.
  13. Zhong P., Wang R. Multiple-spectral-band CRFs for denoising junk bands of hyperspectral imagery. IEEE Transactions on Geoscience and Remote Sensing. 2013;51(4):2269–2275.
  14. Khelifi F., Bouridane A., Kurugollu F. Joined spectral trees for scalable SPIHT-based multispectral image compression. IEEE Transactions on Multimedia. 2008;10(3):316–329.
  15. Ponomarenko N., Zriakhov M., Lukin V., Kaarna A. Improved grouping and noise cancellation for automatic lossy compression of AVIRIS images. In: Blanc-Talon J., Bone D., Philips W., editors. Proceedings of ACIVS; Australia; 2010. LNCS-6475, Part II. Springer, Heidelberg. pp. 261–271.
  16. Valsesia D., Magli E. A novel rate control algorithm for onboard predictive coding of multispectral and hyperspectral images. IEEE Transactions on Geoscience and Remote Sensing. 2014;52(10):6341–6355.
  17. Shoba L.L., Mohan V., Venkataramani Y. Landsat image compression using lifting scheme. In: Proceedings of International Conference on Communication and Signal Processing; India; April 2014. pp. 1963–1968.
  18. Thayammal S., Silvathy D. Multispectral band image compression using adaptive wavelet transform—tetrolet transform. In: Proceedings of 2014 International Conference on Electronics and Communication Systems; Coimbatore, India; February 2014. pp. 1–5. DOI: 10.1109/ECS.2014.6892610
  19. Wang L., Jiao L., Bai J., Wu J. Hyperspectral image compression based on 3D reversible integer lapped transform. Electronics Letters. 2010;46(24):1601–1602. DOI: 10.1049/el.2010.1788
  20. Zemliachenko A.N., Kozhemiakin R.A., Uss M.L., Abramov S.K., Ponomarenko N.N., Lukin V.V., Vozel B., Chehdi K. Lossy compression of hyperspectral images based on noise parameters estimation and variance stabilizing transform. Journal of Applied Remote Sensing. 2014;8(1):25. DOI: 10.1117/1.JRS.8.083571
  21. Shinoda K., Murakami Y., Yamaguchi M., Ohyama N. Multispectral image compression for spectral and color reproduction based on lossy to lossless coding. In: Proc. SPIE Image Processing: Algorithms and Systems VIII; SPIE 75320H; February 2010. DOI: 10.1117/12.838843
  22. Zemliachenko A.N., Abramov S.K., Lukin V.V., Vozel B., Chehdi K. Prediction of optimal operation point existence and parameters in lossy compression of noisy images. In: Proceedings of SPIE, Vol. 9244, Image and Signal Processing for Remote Sensing XX; SPIE 92440H; October 15, 2014. DOI: 10.1117/12.2065947
  23. Aiazzi B., Alparone L., Baronti S., Lastri C., Selva M. Spectral distortion in lossy compression of hyperspectral data. Journal of Electrical and Computer Engineering. 2012;20(12):850637. DOI: 10.1155/2012/850637
  24. Zemliachenko A.N., Kozhemiakin R.A., Uss M.L., Abramov S.K., Lukin V.V., Vozel B., Chehdi K. VST-based lossy compression of hyperspectral data for new generation sensors. In: Proceedings of SPIE Symposium on Remote Sensing; Dresden, Germany; September 2013. SPIE Vol. 8892; p. 12. DOI: 10.1117/12.2028415
  25. Zemliachenko A., Abramov S., Lukin V., Vozel B., Chehdi K. Compression ratio prediction in lossy compression of noisy images. In: Proceedings of IGARSS; Milan, Italy; July 2015. pp. 3497–3500.
  26. Zemliachenko A.N., Abramov S.K., Lukin V.V., Vozel B., Chehdi K. Lossy compression of noisy remote sensing images with prediction of optimal operation point existence and parameters. Journal of Applied Remote Sensing. 2015;9(1):095066. DOI: 10.1117/1.JRS.9.095066
  27. Zemliachenko A., Kozhemiakin R., Vozel B., Lukin V. Prediction of compression ratio in lossy compression of noisy images. In: Modern Problems of Radio Engineering, Telecommunications and Computer Science (TCSET); Lviv-Slavske, Ukraine; February 2016. pp. 693–697.
  28. Kozhemiakin R., Abramov S., Lukin V., Djurović I., Vozel B. Peculiarities of 3D compression of noisy multichannel images. In: Proceedings of MECO; Budva, Montenegro; June 2015. pp. 331–334.
  29. Lukin V., Abramov S., Kozhemiakin R., Vozel B., Djurovic B., Djurovic I. Optimal operation point in 3D DCT-based lossy compression of color and multichannel remote sensing images. Telecommunications and Radio Engineering. 2015;20:1803–1821.
  30. Christophe E., Léger D., Mailhes C. Quality criteria benchmark for hyperspectral imagery. IEEE Transactions on Geoscience and Remote Sensing. 2005;43(9):2103–2114.
  31. Green R.O., Eastwood M.L., Sarture C.M., Chrien T.G., Aronsson M., Chippendale B.J., et al. Imaging spectroscopy and the airborne visible/infrared imaging spectrometer (AVIRIS). Remote Sensing of Environment. 1998;65:227–248.
  32. Pearlman J.S., Barry P.S., Segal C.C., Shepanski J., Beiso D., Carman S.L. Hyperion, a space-based imaging spectrometer. IEEE Transactions on Geoscience and Remote Sensing. 2003;41(6):1160–1173. DOI: 10.1109/TGRS.2003.815018
  33. Gao L., Du Q., Zhang B., Yang W., Wu Y. A comparative study on linear regression-based noise estimation for hyperspectral imagery. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing. 2013;6(2):488–498.
  34. Al-Chaykh O.K., Mersereau R.M. Lossy compression of noisy images. IEEE Transactions on Image Processing. 1998;7(12):1641–1652.
  35. Bekhtin Y.S. Adaptive wavelet codec for noisy image compression. In: Proceedings of the 9th East-West Design and Test Symposium; Sevastopol, Ukraine; September 2011. pp. 184–188.
  36. Lukin V., Bataeva E. Challenges in pre-processing multichannel remote sensing terrain images. In: Djurovic I., editor. Importance of GEO Initiatives and Montenegrin Capacities in This Area. The Montenegrin Academy of Sciences and Arts; The Section for Natural Sciences Book No 16, Book No 119; 2012. pp. 63–76.
  37. Ponomarenko N.N., Lukin V.V., Egiazarian K., Astola J. DCT based high quality image compression. In: Proceedings of 14th Scandinavian Conference on Image Analysis; Joensuu, Finland; 2005. pp. 1177–1185.
  38. Ponomarenko N., Lukin V., Egiazarian K., Astola J. ADCT: a new high quality DCT based coder for lossy image compression. In: CD ROM Proceedings of LNLA; Switzerland; August 2008. p. 6.
  39. Liu C., Szeliski R., Kang S.B., Zitnick C.L., Freeman W.T. Automatic estimation and removal of noise from a single image. IEEE Transactions on Pattern Analysis and Machine Intelligence. 2008;30(2):299–314.
  40. Vozel B., Abramov S., Chehdi K., Lukin V., Ponomarenko N., Uss M., Astola J. Blind methods for noise evaluation in multi-component images. In: Wiley-ISTE Multivariate Image Processing; France; 2009. pp. 263–295.
  41. Abramov S., Zabrodina V., Lukin V., Vozel B., Chehdi K., Astola J. Methods for blind estimation of the variance of mixed noise and their performance analysis. In: Awrejcewicz J., editor. Numerical Analysis—Theory and Applications. In-Tech, Austria; 2011. pp. 49–70. ISBN 978-953-307-389-7
  42. Anfinsen S.N., Doulgeris A.P., Eltoft T. Estimation of the equivalent number of looks in polarimetric synthetic aperture radar imagery. IEEE Transactions on Geoscience and Remote Sensing. 2009;47(11):3795–3809.
  43. Colom M., Lebrun M., Buades A., Morel J.M. A non-parametric approach for the estimation of intensity-frequency dependent noise. In: IEEE International Conference on Image Processing (ICIP); Paris, France; 27–30 October 2014. pp. 4261–4265. DOI: 10.1109/ICIP.2014.7025865
  44. Lukin V., Abramov S., Ponomarenko N., Uss M., Zriakhov M., Vozel B., Chehdi K., Astola J. Methods and automatic procedures for processing images based on blind evaluation of noise type and characteristics. SPIE Journal on Advances in Remote Sensing. 2011;5(1):27/053502. DOI: 10.1117/1.3539768
  45. Abramov S., Krivenko S., Roenko A., Lukin V., Djurovic I., Chobanu M. Prediction of filtering efficiency for DCT-based image denoising. In: Proceedings of MECO; Budva, Montenegro; June 2013. pp. 97–100.
  46. Kozhemiakin R.A., Zemliachenko A.N., Lukin V.V., Abramov S.K., Vozel B. An approach to prediction and providing of compression ratio for DCT-based coder applied to remote sensing images. Ukrainian Journal of Earth Remote Sensing. 2016;(9):22–29. (see http://www.irbis-nbuv.gov.ua/cgi-bin/irbis_nbuv/cgiirbis_64.exe?I21DBN=LINK&P21DBN=UJRN&Z21ID=&S21REF=10&S21CNR=20&S21STN=1&S21FMT=ASP_meta&C21COM=S&2_S21P03=FILA=&2_S21STR=ukjdzz_2016_9_5)
  47. Rubel O.S., Kozhemiakin R.O., Krivenko S.S., Lukin V.V. A method for predicting denoising efficiency for color images. In: Proceedings of 2015 IEEE 35th International Conference on Electronics and Nanotechnology (ELNANO); Kiev, Ukraine; April 2015. pp. 304–309.
  48. Cameron A.C., Windmeijer F.A.G. An R-squared measure of goodness of fit for some common nonlinear regression models. Journal of Econometrics. 1997;77(2):329–342.
  49. Rissanen J. Modeling by shortest data description. Automatica. 1978;14(5):465–471. DOI: 10.1016/0005-1098(78)90005-5
  50. Kozhemiakin R., Abramov S., Lukin V., Djurović B., Djurović I., Vozel B. Lossy compression of Landsat multispectral images. In: Proceedings of MECO; Bar, Montenegro; June 2016. pp. 104–107.
