Open access peer-reviewed chapter

Lossy Compression of Remote Sensing Images with Controllable Distortions

Written By

Vladimir Lukin, Alexander Zemliachenko, Sergey Krivenko, Benoit Vozel and Kacem Chehdi

Submitted: 23 August 2018 Reviewed: 02 November 2018 Published: 01 December 2018

DOI: 10.5772/intechopen.82361

From the Edited Volume

Satellite Information Classification and Interpretation

Edited by Rustam B. Rustamov


Abstract

In this chapter, approaches to providing a desired quality of remote sensing images compressed in a lossy manner are considered. It is shown that, under certain conditions, this can be done automatically and quickly using prediction of coder performance parameters. The main parameters (metrics) are the mean square error (MSE) or peak signal-to-noise ratio (PSNR) of introduced losses (distortions), although prediction of other important metrics is also possible. Having such a prediction, it becomes possible to set the quantization step of a coder so as to provide distortions of a desired level or less, without compression/decompression iterations, for a single-channel image. It is shown that this approach can also be exploited in three-dimensional (3D) compression of multichannel images to produce a larger compression ratio (CR) for the same or smaller introduced distortions than component-wise compression of multichannel data. The proposed methods are verified for test and real-life images.

Keywords

  • lossy compression
  • remote sensing
  • image processing
  • performance prediction

1. Introduction

A huge amount of data is provided nowadays by existing remote sensing (RS) sensors, both spaceborne and airborne [1, 2]. The data volume is especially large if images are hyperspectral (i.e., having hundreds of sub-band images) and/or of high resolution. Note that both tendencies (to create and exploit multichannel systems as well as to produce high resolution data) are typical for recent years. The volume of acquired data additionally increases due to more frequent observations of sensed terrains [2]; it has become a usual practice to monitor a territory quite often, e.g., each week.

The obtained RS data have to be transferred, stored and/or disseminated. For each of these operations, data compression can be desirable [1, 3, 4]. Meanwhile, there are several obstacles that can prevent efficient execution of these operations. Concerning data transfer: the bandwidth of the communication channel used to transfer data can be limited, the time for transfer can be restricted, and the time and power available for compression can be limited as well [1, 3]. The same applies to data dissemination, although the limitations are usually less strict compared to downlink data transfer. Memory for RS data storage can be a problem too, despite the rapid development of new facilities in recent years [2].

Therefore, it is often desired to compress RS images [4, 5]. As is known, there are lossless and lossy image compression techniques [1]. The limits attainable by lossless compression have practically been reached [1]. The compression ratio (CR) of the existing methods rarely reaches 5, even for compressing hyperspectral data when inter-band correlation is exploited to the full extent [4]. However, larger CR values are often required. Then, lossy compression of acquired RS data has to be applied.

The main peculiarity of lossy compression is that it introduces losses (distortions, degradations) into RS images. Then, it is useful only under the condition that the introduced losses do not noticeably degrade the goals the acquired RS data are intended for (terrain classification and/or parameter estimation, specific object detection, etc.). One assumption is that the introduced losses have to be of the same level as or smaller than the degradations due to noise in the original data [6]. Therefore, noise characteristics have to be taken into consideration and, thus, they should be known in advance or pre-estimated [7, 8, 9, 10, 11]. This also means that it is necessary to be able to control the introduced distortions and/or to provide a desired level of losses. Moreover, this often should be done automatically, e.g., in on-board compression [3, 12].

A slightly different assumption is possible if compressed images are subject to visual inspection and analysis. Then, the introduced distortions should be such that they do not degrade image visual quality [13]. In this case, one has to take into account both specific properties of component images, e.g., variations of their dynamic range [7, 14, 15], and peculiarities of the human vision system (HVS).

Finally, one more assumption is that the introduced distortions should be such that they do not have a (noticeable) negative impact on classification accuracy or on the performance of other operations of RS data processing at final stages. Note that the classification accuracy reduction is connected with metrics characterizing the introduced distortions [16].

Thus, the introduced distortions should be controlled for all aforementioned strategies. Here, by "controlled" we mean several aspects. First, distortions have to be measured, estimated or predicted to ensure that they are not larger than an allowed threshold according to a certain metric (criterion) [17, 18]. Second, introduced distortions can be accurately measured only if compression and decompression have already been done. Then, if the distortion level has to be changed, the coder parameters have to be changed and the metric has to be calculated again after the next iteration of compression/decompression [18]. This is often impractical, especially on-board. Then, it is more reasonable to talk about distortion estimation or prediction without compression and decompression, while approximately providing a desired quality of compressed data.

Certainly, CR can be important as well. Then, an appropriate compromise has to be provided between CR and introduced losses. Note that CR also depends upon the used coder and the way data redundancy is exploited. In this sense, it is worth incorporating the inter-channel correlation inherent to multichannel RS data, which can be done in different ways [19, 20, 21]. It is possible to apply different transforms [11, 22, 23, 24] or to carry out different groupings of component images [11, 25, 26].

Lossy compression of images that takes into account noise type [27] and characteristics has received considerable attention [28, 29, 30]. The possible existence of an optimal operation point (OOP) and its prediction have been claimed and studied [13, 18]. Problems of CR prediction and of providing a desired CR for coders based on the discrete cosine transform (DCT) have been considered [18, 31]. Meanwhile, the problems of predicting compressed image quality and providing a desired quality have not been thoroughly analyzed yet.

Some work has been done in this direction. In particular, an approach to quality prediction for wavelet-based compression of remote sensing images has been put forward [32]. Prediction of the mean square error (MSE) of introduced losses for JPEG has been done [33]. However, control and prediction of metric values for more advanced coders such as AGU [34] and ADCT [35], which outperform JPEG considerably [36], were not developed until the last 2 years. Since providing a desired metric value using iterative (multiple) compression/decompression requires considerable time and resources [36], it was decided to design a new approach without iterations [37]. Later, this approach has been further advanced [38, 39, 40], mainly for single-component (grayscale) images in 8-bit representation and taking into account the possible presence of noise.

In this chapter, we consider the application of the designed approach to RS images, including multichannel data, keeping in mind the following: (1) the dynamic range of component images in multichannel data varies within wide limits and 16-bit representation is often used for them; (2) in many component images of multichannel (e.g., hyperspectral) data, the input peak signal-to-noise ratio (PSNR) is high and the noise influence is negligible; (3) there is an essential correlation of the signal component in neighboring sub-band images of multichannel images. We show that, by taking these properties into account, it is possible to carry out efficient compression of multichannel RS data with controllable quality.

2. Peculiarities of RS image lossy compression

To understand the problem of lossy compression, some preliminaries are needed.

First, lossy compression introduces distortions due to which a decompressed image differs from the corresponding original one (subject to compression). These distortions are introduced at the stage of quantization of the coefficients of a used orthogonal transform: wavelet, DCT or some other [34, 35, 41]. If DCT serves as the basis of lossy compression, the quantization step (QS) or scaling factor (SF) serves as the parameter that controls compression (PCC). A larger QS or SF leads, in general, to greater introduced distortions and a larger CR [34, 35], but the MSE of introduced losses and the attained CR values considerably depend upon the complexity of a compressed image and the presence of noise.

Figure 1 presents three images: noise-free image Frisco of low complexity, the same image corrupted with additive white Gaussian noise with zero mean and variance 100, and noise-free image Airfield of quite high complexity (it contains a lot of edges and fine details).

Figure 1.

Noise-free and noisy (σ = 10) test images Frisco and the test image Airfield.

Figure 2 shows the dependences of the mean square error $MSE_{out}$ between the original and compressed images on QS for the case when the advanced DCT (ADCT) coder [42] is applied. It is clearly seen that smaller distortions are introduced if an image is noise-free and has a simpler structure. The values of $MSE_{out}(QS)$ for the same QS can differ by several times; thus, QS itself does not determine $MSE_{out}(QS)$.

Figure 2.

Dependences MSE vs QS for the noise-free and noisy images Frisco and the noise-free image Airfield.

The dependences $CR(QS)$ for the same images are presented in Figure 3. It is seen that the simple-structure noise-free image Frisco is compressed in the best way, whilst the complex-structure image Airfield is compressed with the smallest CR. The reason is that the percentage of DCT coefficients that are assigned zero values after quantization increases if the image complexity is lower, the noise intensity is less, and QS is larger [31, 43]. Thus, the rate/distortion curve is individual for each particular image, and QS has to be adapted to the image and noise properties to provide a desired compromise or to satisfy imposed requirements.

Figure 3.

Dependences CR vs QS for the noise-free and noisy images Frisco and the noise-free image Airfield.

We have already mentioned that compression of noisy images has several peculiarities. Suppose that an acquired (noisy) image in the k-th component is represented as [8, 10]

$$I_{kij}^{noisy} = I_{kij}^{true} + n_{kij}\left(I_{kij}^{true}\right), \quad i=1,\ldots,I,\; j=1,\ldots,J,\; k=1,\ldots,K \tag{1}$$

where $I_{kij}^{noisy}$ is the ij-th sample of the k-th component image, $n_{kij}(I_{kij}^{true})$ is the ij-th value of the noise in the k-th component image, supposed dependent on $I_{kij}^{true}$, the true value for the kij-th voxel, I and J define the image size, and K is the number of components. One can determine the input MSE for each component image as

$$MSE_k^{inp} = \sum_{i=1}^{I}\sum_{j=1}^{J}\left(I_{kij}^{noisy} - I_{kij}^{true}\right)^2 / (IJ), \quad k=1,\ldots,K \tag{2}$$

and, respectively, input PSNR

$$PSNR_k^{inp} = 10\log_{10}\left(D_k^2 / MSE_k^{inp}\right), \quad k=1,\ldots,K, \tag{3}$$

where $D_k$ is the image dynamic range, assumed individual for each component image ($D_k = I_k^{max} - I_k^{min}$, where $I_k^{max}$ and $I_k^{min}$ are the maximal and minimal values in the k-th image, respectively).
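For clarity, a minimal sketch of how Eqs. (2) and (3) can be computed for a multichannel cube is given below. It assumes a noise-free reference is available (which is only the case for simulated test data); the array names and the noise model parameters are purely illustrative.

```python
import numpy as np

def input_mse_psnr(noisy, true):
    """Per-component input MSE (Eq. 2) and input PSNR (Eq. 3).

    noisy, true: arrays of shape (K, I, J); `true` is the noise-free
    reference, available only for simulated test data."""
    diff = noisy.astype(np.float64) - true.astype(np.float64)
    mse_inp = np.mean(diff ** 2, axis=(1, 2))                 # Eq. (2)
    d_k = noisy.max(axis=(1, 2)) - noisy.min(axis=(1, 2))     # D_k, per-band dynamic range
    psnr_inp = 10.0 * np.log10(d_k ** 2 / mse_inp)            # Eq. (3)
    return mse_inp, psnr_inp

# Illustrative use with simulated signal-dependent noise as in Eq. (1)
rng = np.random.default_rng(0)
true = rng.uniform(100.0, 900.0, size=(4, 64, 64))            # hypothetical 4-band cube
noisy = true + rng.normal(size=true.shape) * np.sqrt(0.5 * true)  # noise variance grows with intensity
print(input_mse_psnr(noisy, true)[1])                         # input PSNR per band, dB
```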

Earlier analysis [7, 44] has shown that $MSE_k^{inp}$ and $PSNR_k^{inp}$, $k=1,\ldots,K$, vary within very wide limits for such typical examples of multichannel RS data as images provided by the hyperspectral sensors AVIRIS [45] and Hyperion [46]. For more than 80% of the component images, the input PSNR exceeds 40 dB. This means that, most probably [42], OOPs for these component images do not exist, i.e., $MSE_k^c = \sum_{i=1}^{I}\sum_{j=1}^{J}\left(I_{kij}^c - I_{kij}^{true}\right)^2 / (IJ)$ steadily increases if QS becomes larger (here $\{I_{kij}^c,\; i=1,\ldots,I,\; j=1,\ldots,J,\; k=1,\ldots,K\}$ denotes the compressed image in the k-th channel; an OOP exists for the k-th component image if $MSE_k^c(QS)$ has a minimum).

If so, i.e., if the quality of the compressed noisy image steadily decreases with QS growth, there should be some reasonable strategy to carry out compression for such an image or a group of images with similar properties. Here, it is worth recalling the following. The analysis done in the paper [16] has shown that lossy compression has practically no negative impact on image classification accuracy if the metric PSNR-HVS-M [47] is not less than 42–44 dB.

The metric PSNR-HVS-M ($PSNR\text{-}HVS\text{-}M_k^c = 10\log_{10}\left(D_k^2 / MSE_k^{HVSM}\right)$, $k=1,\ldots,K$, where $MSE_k^{HVSM}$ is the MSE calculated taking into consideration specific features of the human vision system (HVS)) takes into account two important peculiarities of human vision: lower sensitivity to degradations in high spatial frequencies and the masking effect of textures. One can be surprised that a visual quality metric has been used in this analysis. This can be explained by the fact that the required values of PSNR-HVS-M > 42 dB mean that the quality of a compressed image is such that the introduced distortions are invisible. According to PSNR, this happens if $PSNR_k^c$ exceeds 35–37 dB [48].

Thus, we need to provide a desired (controlled) quality of compressed images. This should be done quickly (desirably, without iterative compression/decompression), rather accurately, and while producing a large CR. We expect that a CR increase can be gained due to the grouping of component images.

3. An approach to providing controlled losses

Let us start by considering lossy compression of a single-channel noise-free image in 8-bit representation. After compression, one obtains a compressed image whose quality becomes worse for a larger CR or smaller bpp, which takes place for a larger QS or SF if a DCT-based coder is applied. Let us see how this happens for JPEG with uniform quantization of DCT coefficients. Suppose that the image to be compressed is divided into $N = IJ/64$ non-overlapping blocks of size 8 × 8 pixels. Then, in each block, we have the DCT coefficients $D_n(k,l)$, $n=1,\ldots,N$, $k=0,\ldots,7$, $l=0,\ldots,7$. After quantization, we have $D_n^q(k,l)$. Then, the MSE of losses can be determined as

$$MSE = \frac{1}{N}\sum_{n=1}^{N} MSE_n = \frac{1}{64N}\sum_{n=1}^{N}\sum_{k=0}^{7}\sum_{l=0}^{7}\left(\Delta D_n(k,l)\right)^2 \tag{4}$$

where

$$D_n^q(k,l) = \left[D_n(k,l)/QS\right], \quad k=0,\ldots,7,\; l=0,\ldots,7,$$
$$\Delta D_n(k,l) = QS \times D_n^q(k,l) - D_n(k,l), \quad k=0,\ldots,7,\; l=0,\ldots,7,$$

and $[\cdot]$ denotes rounding-off to the nearest integer, and n denotes the block index.
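A minimal sketch of this computation is given below. It assumes an orthonormal 2D DCT (so that, by Parseval's relation, the error energy in the DCT domain equals that in the spatial domain) and a single quantization step for all coefficients; the full JPEG pipeline (per-frequency quantization tables, entropy coding) is not reproduced, and the function name is illustrative.

```python
import numpy as np
from scipy.fft import dctn

def quantization_mse(image, qs):
    """MSE of losses caused by uniform quantization of 8x8 block DCT
    coefficients, Eq. (4); with an orthonormal DCT the DCT-domain error
    energy equals the spatial-domain one."""
    img = np.asarray(image, dtype=np.float64)
    h, w = (img.shape[0] // 8) * 8, (img.shape[1] // 8) * 8   # crop to a multiple of 8
    err_sq, n_coeff = 0.0, 0
    for i in range(0, h, 8):
        for j in range(0, w, 8):
            d = dctn(img[i:i + 8, j:j + 8], norm='ortho')     # D_n(k, l)
            dq = np.round(d / qs)                             # D_n^q(k, l) = [D_n(k, l) / QS]
            err_sq += np.sum((qs * dq - d) ** 2)              # squared quantization errors
            n_coeff += 64
    return err_sq / n_coeff                                   # Eq. (4)
```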

A usual assumption concerning the distribution of quantization errors is that it is uniform or close to uniform. Then, the MSE is about $QS^2/12$. This is true for quite small QS (see the data in Figure 2) but, for larger QS, the MSE becomes smaller than $QS^2/12$. The main reason is that the distributions of alternating current (AC) DCT coefficients differ a lot depending upon the image. Figure 4 presents these distributions using the same scale for the three considered images (Figure 1). Obviously, these distributions differ from Gaussian and from Laplacian (assumed in the paper [33]) as well. For the simple-structure image, the distribution is quite narrow and it has heavy tails. If noise is present, the distribution "widens" and becomes closer to Gaussian.

Figure 4.

Distributions of AC DCT coefficients for the noise-free image Frisco (a), the noise-free image Airfield (b) and the noisy image Airfield (c), all in the same limits from −200 to 200.

It is seen from the analysis of the distribution in Figure 4a that if QS is about 10, most of the AC DCT coefficients become zeros after quantization. Thus, we have decided to analyze the quantization errors in more detail. Histograms of these errors for four cases are given in Figure 5. The histogram in Figure 5a shows that the error distribution is close to uniform for the noise-free image Airfield that has a wide distribution of AC DCT coefficients (Figure 4c). The distribution is also practically uniform for the noisy image Frisco (noise standard deviation equal to 5, Figure 5d). Then, the MSE of introduced losses is really close to $QS^2/12$ (see the data in Figure 2). In the other cases (Figure 5b and c), the distributions differ considerably from uniform. This happens for the noise-free image Frisco. Thus, the MSE of introduced losses is less than $QS^2/12$.

Figure 5.

Examples of histograms of quantization error for AC DCT coefficients (see comments under each histogram).

Hence, $MSE \approx QS^2/12$ can be treated as the upper limit of introduced losses. Note that this is valid not only for JPEG but also for the coders AGU and ADCT [38, 39, 40]. This means that, having a desired (threshold) $MSE_{des}$, it is possible to easily calculate QS as $QS = \sqrt{12\,MSE_{des}}$. A question is: when is the approximation $MSE \approx QS^2/12$ valid? Note that if the MSE is smaller than $QS^2/12$, one can benefit from using a larger QS and providing a larger CR. Clearly, if a desired $PSNR_{des}$ has to be provided, it has to be recalculated to $MSE_{des}$ taking into account the dynamic range of a given image as $MSE_{des} = D^2 / 10^{PSNR_{des}/10}$.
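A small sketch of this recalculation (the function name is illustrative; D = 255 corresponds to 8-bit data):

```python
import numpy as np

def desired_qs(psnr_des, dynamic_range=255.0):
    """Starting quantization step for a desired PSNR, using MSE ~ QS^2/12
    as the upper limit of the introduced losses."""
    mse_des = dynamic_range ** 2 / 10.0 ** (psnr_des / 10.0)  # MSE_des = D^2 / 10^(PSNR_des/10)
    return np.sqrt(12.0 * mse_des), mse_des

qs, mse_des = desired_qs(34.5)   # about 16.6 and 23.1 for 8-bit data
```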

Our idea [38, 39, 40] is that MSE can be predicted in one of two ways.

The first way is determined as

$$MSE_{pred} = \frac{1}{R}\sum_{r=1}^{R} MSE_r = \frac{1}{64R}\sum_{r=1}^{R}\sum_{k=0}^{7}\sum_{l=0}^{7}\left(\Delta D_r(k,l)\right)^2 \tag{5}$$
$$\Delta D_r(k,l) = QS \times D_r^q(k,l) - D_r(k,l), \quad k=0,\ldots,7,\; l=0,\ldots,7,\; r=1,\ldots,R \tag{6}$$

where R is the number of analyzed blocks (R ≪ N) and C is a correcting factor used for a given coder. In other words, we employ statistics of DCT coefficients calculated in a limited number R of analyzed blocks of size 8 × 8 pixels. According to our studies [38, 40], it is enough to have R of about 500, where the analyzed blocks are randomly distributed over the area of the image to be compressed, to have a sufficiently accurate prediction. Taking into account that the number of 8 × 8 pixel blocks in compressed images usually exceeds several thousand, prediction turns out to be much faster than even compression by JPEG. Certainly, prediction is much faster than compression by AGU (which uses 32 × 32 blocks, efficient coding and deblocking after decompression) and, especially, ADCT (which exploits partition scheme optimization).
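A possible implementation of the prediction (5)–(6) over R randomly placed blocks might look as follows; it is a sketch under the assumption that the coder-specific correcting factor C equals 1, and the function name is illustrative.

```python
import numpy as np
from scipy.fft import dctn

def predict_mse(image, qs, r=500, seed=None):
    """Predicted MSE, Eqs. (5)-(6), from R randomly placed 8x8 blocks;
    the coder-specific correcting factor C is taken equal to 1 here."""
    rng = np.random.default_rng(seed)
    img = np.asarray(image, dtype=np.float64)
    rows = rng.integers(0, img.shape[0] - 7, size=r)          # random top-left corners
    cols = rng.integers(0, img.shape[1] - 7, size=r)
    err_sq = 0.0
    for i, j in zip(rows, cols):
        d = dctn(img[i:i + 8, j:j + 8], norm='ortho')
        err_sq += np.sum((qs * np.round(d / qs) - d) ** 2)    # Eq. (6)
    return err_sq / (64.0 * r)                                # Eq. (5)
```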

Expressions (5) and (6) allow predicting the MSE for a given QS. But they do not allow direct setting of QS. One has to apply an iterative procedure that starts from $QS = \sqrt{12\,MSE_{des}}$. If the predicted $MSE_{pred}$ (5) occurs to be considerably (e.g., by 15–20% or more) smaller than $MSE_{des}$, then a larger QS has to be tried, calculating (6) for all analyzed blocks and (5) again. Since the already calculated DCT coefficients are available, the procedure is quite fast.
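A sketch of such an iterative adjustment, reusing the DCT coefficients already computed for the R analyzed blocks, is given below; the 5% step and the 15% tolerance are illustrative choices consistent with the text, not prescribed values.

```python
import numpy as np

def adjust_qs(dct_blocks, mse_des, step=1.05, tol=0.15, max_iter=100):
    """Enlarge QS while the predicted MSE, Eq. (5), stays well below MSE_des.
    `dct_blocks` holds the already computed DCT coefficients of the R
    analyzed blocks, shape (R, 8, 8), so each iteration is cheap."""
    qs = np.sqrt(12.0 * mse_des)
    for _ in range(max_iter):
        delta = qs * np.round(dct_blocks / qs) - dct_blocks   # Eq. (6) for all blocks
        mse_pred = float(np.mean(delta ** 2))                 # Eq. (5)
        if mse_pred >= (1.0 - tol) * mse_des:                 # no longer "considerably smaller"
            break
        qs *= step
    return qs
```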

The second way is the following. Suppose that the predicted MSE can be presented as

$$MSE_{pred} = \frac{QS^2}{12} f_0(X) \tag{7}$$

where $f_0(X)$ is a function of one or two parameters X that can be easily and quickly calculated from the DCT coefficients determined in the analyzed blocks. Then, one has to find such parameter(s) and the function. To solve this task, we have exploited our earlier experience in predicting filtering efficiency [49] and compression ratio [18] by simple analysis of DCT statistics in 8 × 8 pixel blocks and regression analysis [50, 51].

The prediction strategy is the following. We suppose that there is an input parameter (or a few parameters) that can characterize a compressed image. It is also assumed that the output (predicted) parameter (MSE, PSNR, CR, or another metric) is strictly connected with this (these) input parameter(s). This connection (the prediction approximation) is available at the moment prediction has to be carried out, i.e., in our case, the function $f_0(X)$ has been obtained in advance (off-line). Then, one has to calculate the input parameter(s) for a given QS and insert it (them) into $f_0(X)$.

It has been shown in [52] that a good parameter integrally characterizing an image (its complexity) is the probability $P_0$ that AC DCT coefficients after quantization become equal to zero (this parameter can also be treated as the probability that AC DCT coefficient absolute values are smaller than QS/2). It is obvious that $P_0$ can be very easily calculated. Keeping these properties of $P_0$ in mind, we have obtained scatter plots of $12\,MSE/QS^2$ vs. $P_0$ to estimate $f_0(P_0)$. A wide set of test noise-free images has been used that included standard optical images, test RS images and test medical images (this was done to understand whether the image nature (origin) influences the performance of lossy compression; in fact, very similar results have been obtained for test images of different origin; the main factor is image complexity). Each point of the scatter plot corresponds to one test image compressed with some QS, where the horizontal coordinate is $P_0$ determined for this case and the vertical coordinate is the corresponding value of $12\,MSE/QS^2$.
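$P_0$ can be computed from the same R analyzed blocks used for Eqs. (5) and (6); a minimal sketch (the function name is illustrative):

```python
import numpy as np

def zero_probability(dct_blocks, qs):
    """P0: the fraction of AC DCT coefficients whose absolute values are
    smaller than QS/2, i.e., that are quantized to zero. `dct_blocks` has
    shape (R, 8, 8); the DC coefficient (0, 0) is excluded."""
    ac_mask = np.ones((8, 8), dtype=bool)
    ac_mask[0, 0] = False                        # mask out the DC term
    ac = dct_blocks[:, ac_mask]                  # shape (R, 63)
    return float(np.mean(np.abs(ac) < qs / 2.0))
```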

Figure 6 presents the scatter plots obtained for the AGU and ADCT coders with examples of fitted curves. The main and very important observation is that the scatter plots behave in a compact manner, i.e., points that have approximately the same arguments have close values of $12\,MSE/QS^2$. Another observation is that the scatter plots for the two considered coders behave in a very similar manner, i.e., there is a tendency of monotonic decrease of $12\,MSE/QS^2$ as $P_0$ increases. Finally, the scatter plots confirm that, in many practical situations, $MSE \approx QS^2/12$. At least, this is true for $P_0 < 0.6$.

Figure 6.

Scatter plots for AGU (a) and ADCT (b) coders.

It is worth recalling here that $P_0 < 0.6$ corresponds to rather small QS. To prove this, Figure 7 presents the scatter plot from [48] and the fitted curve. As can be seen, for $P_0 < 0.6$, CR does not exceed 5. If $P_0 \ge 0.6$, there is a tendency of reduction of $f_0(P_0)$. The scatter plot points are placed not so compactly here. Thus, prediction using only $f_0(P_0)$ becomes less accurate. Nevertheless, the following prediction procedure can be proposed (a code sketch implementing these steps is given after the list):

  1. Determine $QS = \sqrt{12\,MSE_{des}}$, obtain the AC DCT coefficients for the analyzed blocks and calculate $P_0$ for this QS.

  2. If $P_0 < 0.6$, use $QS = \sqrt{12\,MSE_{des}}$ and stop the procedure.

  3. Otherwise, increase QS by about 5%, calculate $P_0$ and compare $(QS^2/12)\,f_0(P_0)$ to $MSE_{des}$; if $(QS^2/12)\,f_0(P_0)$ reaches $MSE_{des}$, then stop; otherwise, continue until this condition is satisfied.
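A sketch of this three-step procedure is given below. It reuses zero_probability from the earlier sketch, and f0 stands for the curve $f_0(P_0)$ fitted off-line for the chosen coder (a hypothetical callable here); the stopping test interprets "reaches" as the predicted MSE becoming greater than or equal to $MSE_{des}$.

```python
import numpy as np

def set_qs(dct_blocks, mse_des, f0, step=1.05, max_iter=200):
    """Steps 1-3 of the QS-setting procedure. `dct_blocks`: DCT coefficients
    of the R analyzed 8x8 blocks; `f0`: the curve f0(P0) fitted off-line."""
    qs = np.sqrt(12.0 * mse_des)                       # step 1
    if zero_probability(dct_blocks, qs) < 0.6:         # step 2
        return qs
    for _ in range(max_iter):                          # step 3
        qs *= step                                     # increase QS by about 5%
        p0 = zero_probability(dct_blocks, qs)
        if qs ** 2 / 12.0 * f0(p0) >= mse_des:         # predicted MSE reaches MSE_des
            return qs
    return qs
```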

Figure 7.

The scatter plot of CR vs. $P_0$ and the fitted curve for the coder AGU.

As can be seen, all the operations are very easy and fast since they are performed for a limited number of AC DCT coefficients. Moreover, using the same parameter, it is possible to predict both MSE and CR. Then, it is easy to find a proper compromise depending upon the priority of requirements and imposed restrictions.

One question is what curves to fit and what criteria of fitting quality to use. There are different approaches, but we employed the goodness-of-fit R² and RMSE [50] as the two main criteria (the former has to be maximized and the latter minimized for a given scatter plot). Without going into details, we can state the following. For each scatter plot, there are usually several functions able to provide approximately the same R² and RMSE. Sums of two exponentials (see an example in Figure 7), low-order polynomials, Fourier series, and power functions are good candidates to be tested. Using the corresponding tools of Matlab or Excel, it is possible to quickly find an optimal or, at least, appropriately good solution.
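As an illustration of such fitting, the sketch below fits a sum of two exponentials to synthetic scatter-plot data and reports R² and RMSE; scipy's curve_fit is used here instead of the Matlab/Excel tools mentioned above, and the data and initial parameters are purely illustrative, not the actual scatter plots of Figure 6.

```python
import numpy as np
from scipy.optimize import curve_fit

def two_exp(p0, a, b, c, d):
    """Sum of two exponentials used as a candidate approximation of f0(P0)."""
    return a * np.exp(b * p0) + c * np.exp(d * p0)

# Synthetic stand-in for the scatter-plot points (P0, 12*MSE/QS^2);
# the real points would come from compressing a set of test images.
rng = np.random.default_rng(1)
p0_data = np.linspace(0.05, 0.95, 60)
ratio_data = two_exp(p0_data, 0.2, -3.0, 0.85, -0.1) + 0.02 * rng.normal(size=p0_data.size)

params, _ = curve_fit(two_exp, p0_data, ratio_data, p0=(0.5, -1.0, 0.5, -0.5), maxfev=20000)
fitted = two_exp(p0_data, *params)
ss_res = float(np.sum((ratio_data - fitted) ** 2))
ss_tot = float(np.sum((ratio_data - ratio_data.mean()) ** 2))
print("R^2 =", 1.0 - ss_res / ss_tot, "RMSE =", float(np.sqrt(ss_res / p0_data.size)))
```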

4. Peculiarities of compression

4.1 Visual quality metrics

We have already mentioned that it is often desirable to predict visual quality metrics. To check whether or not this is possible, the scatter plot of $MSE_{HVSM}/(QS^2/12)$ vs. $P_0$ was obtained (Figure 8). As can be seen, this ratio is about 0.05 for small $P_0$ (this happens for small QS and/or complex-structure images), i.e., PSNR-HVS-M is larger than PSNR by about 13 dB. This means that the introduced losses are masked well by the image content and, most probably, they cannot be noticed visually. The difference between PSNR-HVS-M and PSNR decreases to 5–7 dB for $P_0 > 0.5$, i.e., typical conditions of lossy compression. The scatter plot and the fitted curve show that $MSE_{HVSM}$ can be predicted well for a given QS. In other words, visual metrics can be predicted too using the proposed approach. Again, the sum of two exponentials (just this case is presented in Figure 8) can serve well as the approximation curve with a quite small number of fitted parameters.

Figure 8.

The scatter plot of $MSE_{HVSM}/(QS^2/12)$ vs. $P_0$ and the fitted curve, AGU coder.

4.2 Experimental data for component-wise compression

Let us present the results of applying the proposed approach to real-life hyperspectral data. Images of the Hyperion sensor dataset EO1H1800252002116110KZ have been compressed. The Hyperion sensor produces data of poor quality (very noisy) in the sub-bands with indices k = 1,…,12 and k = 58,…,76. The images in these sub-bands are often discarded in analysis, so we have not compressed them.

Then, two approaches to compression have been compared. Both presume component-wise compression. The first one has been proposed earlier [11]. Images are compressed after applying a variance stabilizing transform that takes into account the signal-dependent noise properties and converts this noise to additive noise with variance approximately equal to unity. Then, the recommended QS = 3.5 is used (this notation is used in the figures below). The inverse transform is applied component-wise after decompression. For the proposed method, the component images have been transformed to the interval from 0 to 255. Then, for each of them, the AGU coder has been applied with QS = 17, which approximately corresponds to $PSNR_{des} = 34.5$ dB ($MSE_{des} \approx 24 \approx 17 \times 17/12$). The notation QS = 17 is used for the corresponding data.
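For clarity, the correspondence between QS = 17 and $PSNR_{des} = 34.5$ dB can be checked directly for the 8-bit range D = 255:

$$MSE_{des} = \frac{255^2}{10^{34.5/10}} \approx 23.1, \qquad QS = \sqrt{12 \times 23.1} \approx 16.6 \approx 17, \qquad \frac{17^2}{12} \approx 24.1.$$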

The obtained PSNR values calculated between the compressed and original component images are presented in Figure 9. As can be seen, the PSNR for the method [11] in most sub-bands turns out to be considerably larger than the $PSNR_{des}$ set by us. Only in some sub-bands (indices 165–185), where the input PSNR is quite small, are the determined PSNR values about 40 dB (i.e., the introduced losses are invisible in the decompressed images). For the proposed approach, the PSNR of the introduced losses is considerably smaller but, for all sub-band images, the PSNR still exceeds 35 dB. As follows from the analysis of the data in Figure 10, the CR for all sub-bands exceeds 5 (a more detailed study shows that $P_0 > 0.6$ in all cases). Thus, the MSE is smaller than $QS^2/12$ (see the data in Figure 6) and the provided PSNR is larger than expected.

Figure 9.

PSNR for component-wise compression by the method ([11], QS = 3.5), the proposed component-wise approach (QS = 17), and the proposed 3D compression method (QS = 17, bl = 4).

Figure 10.

CR for component-wise compression by the method ([11], QS = 3.5), the proposed approach (QS = 17), and the proposed 3D compression method (QS = 17, bl = 4).

The main observation for the data in Figure 10 is that the CR for the proposed method is several times larger than for the prototype method for almost all sub-bands, except the bands with small input PSNR. Thus, we have gained an essential benefit in terms of CR while the introduced distortions remained invisible.

We do not present examples of original and compressed component images because visually they are identical. Note that setting a larger $PSNR_{des}$ leads to a larger PSNR of introduced losses and a smaller CR for each component image, respectively. By setting a larger $PSNR_{des}$, one can ensure that the classification accuracy does not become worse.

4.3 3D compression

Consider now the possibilities of 3D compression in groups. There are many different options [11]. We have analyzed one of the simplest ones, where the component images have been transformed to the 8-bit representation limits, then combined into 4-band groups, and then compressed by the 3D version of the AGU coder. After decompression, the images have to be "stretched" back to their original limits.
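A minimal sketch of this data preparation (grouping and 8-bit scaling, keeping the parameters needed to stretch the data back) is given below; the 3D AGU coder itself is not included, and the function names are illustrative.

```python
import numpy as np

def to_8bit_groups(cube, group_size=4):
    """Split a hyperspectral cube of shape (K, I, J) into groups of
    `group_size` bands, scaling each band to [0, 255] and keeping the
    per-band limits needed to stretch the data back after decompression."""
    groups, limits = [], []
    for start in range(0, cube.shape[0], group_size):
        group = cube[start:start + group_size].astype(np.float64)
        lo = group.min(axis=(1, 2), keepdims=True)
        hi = group.max(axis=(1, 2), keepdims=True)
        groups.append((group - lo) / np.maximum(hi - lo, 1e-12) * 255.0)
        limits.append((lo, hi))
    return groups, limits

def from_8bit_groups(groups, limits):
    """Stretch decompressed groups back to their original per-band limits."""
    bands = [g / 255.0 * (hi - lo) + lo for g, (lo, hi) in zip(groups, limits)]
    return np.concatenate(bands, axis=0)
```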

As in Section 4.2, we have employed QS = 17. For convenience of comparison, the obtained data are also presented in Figures 9 and 10; for 3D compression, they are denoted as QS = 17, bl = 4. The CR values for the 3D case are shown as the same for all components of the same group. As can be seen, the CR values for 3D compression are about two times larger than for the proposed component-wise compression. This is an obvious advantage of 3D compression. Meanwhile, there are also very interesting observations stemming from the analysis of the data for PSNR (Figure 9). As can be seen, there are many sub-bands for which the PSNR for 3D compression is considerably larger (and the introduced losses are considerably smaller) than for component-wise compression. The PSNR values are almost the same if sub-bands with small input PSNR are compressed. This is one more positive feature of 3D compression that should be studied in more detail in the future.

5. Conclusions

We have considered the task of lossy compression of RS images with controllable quality characterized by traditional metrics. It is shown that MSE and PSNR can be predicted for DCT-based coders and, due to this, it is possible to provide a desired MSE or PSNR without compression/decompression iterations quite quickly and accurately. Applied to compressing RS images without visible distortions, this approach allows providing a CR considerably larger than for the approach based on taking noise properties into account.

Moreover, it is demonstrated that prediction of some visual quality metrics is also possible. It is also shown that 3D compression of images collected into groups provides considerably better results. However, additional studies are needed to predict distortion parameters in this case. Examples for real-life data such as hyperspectral images are presented.

Acknowledgments

This research has been partly supported by the Project M/29–2018 of the Ukrainian-French program "Dnipro" and STCU Project No. 6386.

References

  1. 1. Blanes I, Magli E, Serra-Sagrista J. A tutorial on image compression for optical space imaging systems. IEEE Geoscience and Remote Sensing Magazine. 2014:8-26
  2. 2. Schowengerdt R. Remote Sensing: Models and Methods for Image Processing. 3rd ed. Academic Press; 2006. 560 p
  3. 3. Christophe E. Hyperspectral data compression tradeoff in optical remote sensing. In: Prasad S, Bruce LM, Chanussot J, editors. Advances in Signal Processing and Exploitation Techniques. 8th ed. Springer; 2011. pp. 9-29
  4. 4. Yu G, Vladimirova T, Sweeting MN. Image compression systems on board satellites. In: Acta Astronautica. 2009. pp. 988-1005
  5. 5. Magli E, Olmo G, Quacchio E. Optimized onboard lossless and near-lossless compression of hyperspectral data using CALIC. IEEE Geoscience and Remote Sensing Letters. 2004:21-25
  6. 6. Aiazzi B, Alparone L, Baronti S, Lastri C, Selva M. Spectral distortion in Lossy compression of hyperspectral data. Journal of Electrical and Computer Engineering. 2012;2012:850637. DOI: 10.1155/2012/850637
  7. 7. Abramov S, Uss M, Abramova V, Lukin V, Vozel B, Chehdi K. On noise properties in hyperspectral images. In: Proceedings of IGARSS; July 2015; Milan, Italy. 2015. pp. 3501-3504
  8. 8. Meola J, Eismann MT, Moses RL, Ash JN. Modeling and estimation of signal-dependent noise in hyperspectral imagery. Applied Optics. 2011:3829-3846
  9. 9. Uss ML, Vozel B, Lukin V, Chehdi K. Image informative maps for component-wise estimating parameters of signal-dependent noise. Journal of Electronic Imaging. 2013;22(1). DOI: 10.1117/1.JEI.22.1.013019
  10. 10. Uss M, Vozel B, Lukin V, Chehdi K. Maximum likelihood estimation of spatially correlated signal-dependent noise in hyperspectral images. Optical Engineering. 2012;51(11). DOI: 10.1117/1.OE.51.11.111712
  11. 11. Zemliachenko AN, Kozhemiakin RA, Uss ML, Abramov SK, Ponomarenko NN, Lukin VV, et al. Lossy compression of hyperspectral images based on noise parameters estimation and variance stabilizing transform. Journal of Applied Remote Sensing. 2014;8(1):25. DOI: 10.1117/1.JRS.8.083571
  12. 12. Lukin V, Abramov S, Ponomarenko N, Krivenko S, Uss M, Vozel B, et al. Approaches to automatic data processing in hyperspectral remote sensing. Telecommunications and Radio Engineering. 2014;73(13):1125-1139
  13. 13. Lukin V, Abramov S, Kozhemiakin R, Vozel B, Djurovic B, Djurovic I. Optimal operation point in 3D DCT-based lossy compression of color and multichannel remote sensing images. Telecommunications and Radio Engineering. 2015;20:1803-1821
  14. 14. Zhong P, Wang R. Multiple-spectral-band CRFs for denoising junk bands of hyperspectral imagery. IEEE Transactions on Geoscience and Remote Sensing. 2013:2269-2275
  15. 15. Lukin V, Bataeva E. Challenges in pre-processing multichannel remote sensing terrain images. In: Djurovic I, editor. Importance of GEO Initiatives and Montenegrin Capacities in this Area. The Section for Natural Sciences Book No. 16 Ed. The Montenegrin Academy of Sciences and Arts Book. No 119. 2012. pp. 63-76
  16. 16. Popov MA, Stankevich SA, Lischenko LP, Lukin VV, Ponomarenko NN. Processing of hyperspectral imagery for contamination detection in urban areas. In: Proceedings of NATO Workshop on Environmental Security and Ecoterrorism; NATO Science for Peace and Security Series C; Springer Science+Business Media B.V. 2011. pp. 147-156
  17. 17. Christophe E, L’eger D, Mailhes C. Quality criteria benchmark for hyperspectral imagery. IEEE Transactions on Geoscience and Remote Sensing. 2005:2103-2114
  18. 18. Zemliachenko A, Abramov S, Lukin V, Vozel B, Chehdi K. Compression ratio prediction in lossy compression of noisy images. In: Proceedings of IGARSS; July 2015; Milan, Italy. 2015. pp. 3497-3500
  19. 19. Christophe E, Mailhes C, Duhamel P. Hyperspectral image compression: Adapting SPIHT and EZW to anisotropic 3-D wavelet coding. IEEE Transactions on Image Processing. 2008;17(12):2334-2346
  20. 20. Khelifi F, Bouridane A, Kurugollu F. Joined spectral trees for scalable SPIHT-based multispectral image compression. IEEE Transactions on Multimedia. 2008;10(3):316-329
  21. 21. Valsesia D, Magli E. A novel rate control algorithm for onboard predictive coding of multispectral and hyperspectral images. IEEE Transactions on Geoscience and Remote Sensing. 2014;52(10):6341-6355
  22. 22. Thayammal S, Silvathy D. Multispectral band image compression using adaptive wavelet transform-tetrolet transform. In: Proceedings of 2014 International Conference on Electronics and Communication Systems; February 2014; Coimbatore, India. 2014. pp. 1-5. DOI: 10.1109/ECS.2014.6892610
  23. 23. Shoba LL, Mohan V, Venkataramani Y. Landsat image compression using lifting scheme. In: Proceedings of International Conference on Communication and Signal Processing; April 2014; India. 2014. pp. 1963-1968
  24. 24. Wang L, Jiao L, Bai J, Wu J. Hyperspectral image compression based on 3D reversible integer lapped transform. Electronic Letters. 2010;46(24):1601-1602. DOI: 10.1049/el.2010.1788
  25. 25. Ponomarenko N, Zriakhov M, Lukin V, Kaarna A. Improved grouping and noise cancellation for automatic lossy compression of AVIRIS images. In: Proceedings of ACIVS; Australia; LNCS-6475, Part II. Heidelberg: Springer; 2010. pp. 261-271
  26. 26. Shinoda K, Murakami Y, Yamaguchi M, Ohyama N. Multispectral image compression for spectral and color reproduction based on lossy to lossless coding. In: Proceedings of the SPIE; Image Processing: Algorithms and Systems VIII; February 2010; SPIE 75320H. 2010. DOI: 10.1117/12.838843
  27. 27. Vozel B, Chehdi K, Klaine L, Lukin VV, Abramov SK. Noise identification and estimation of its statistical parameters by using unsupervized variational classification. In: Proceedings of ICASSP; Toulouse, France; vol. II. 2006. pp. 841-844
  28. 28. Bekhtin Yu S. Adaptive wavelet codec for noisy image compression. In: Proceedings of the 9th East-West Design and Test Symp.; Sept., 2011; Sevastopol, Ukraine. 2011. pp. 184-188
  29. 29. Al-Chaykh OK, Mersereau RM. Lossy compression of noisy images. IEEE Transactions on Image Processing. 1998;7(12):1641-1652
  30. 30. Kozhemiakin R, Abramov S, Lukin V, Djurović I, Vozel B. Peculiarities of 3D compression of noisy multichannel images. In: Proceedings of MECO; June 2015; Budva, Montenegro. 2015. pp. 331-334
  31. 31. Kozhemiakin RA, Zemliachenko AN, Lukin VV, Abramov SK, Vozel B. An approach to prediction and providing of compression ratio for DCT-based coder applied to remote sensing images. Ukrainian Journal of Earth Remote Sensing. 2016;8:22-29
  32. 32. Jiang H, Yang K, Liu T, Zhang Y. Quality prediction of DWT-based compression for remote sensing image using multiscale and multilevel differences assessment metric. Mathematical Problems in Engineering. 2014;2014:15 Article ID 593213
  33. 33. Minguillon J, Pujol J. JPEG standard uniform quantization error modeling with applications to sequential and progressive operation modes. Electronic Imaging. 2001;10(2):475-485
  34. 34. Ponomarenko NN, Lukin VV, Egiazarian K, Astola J. DCT based high quality image compression. In: Proceedings of 14th Scandinavian Conference on Image Analysis; Joensuu, Finland. 2005. pp. 1177-1185
  35. 35. Ponomarenko N, Lukin V, Egiazarian K, Astola J. ADCT: A new high quality DCT based coder for lossy image compression. In: CD ROM Proceedings of LNLA; August 2008; Switzerland. 2008. p. 6
  36. 36. Zemliachenko A, Ponomarenko N, Lukin V, Egiazarian K, Astola J. Still image/video frame lossy compression providing a desired visual quality. Multidimensional Systems and Signal Processing. June 2015:22. DOI: 10.1007/s11045-015-0333-8
  37. 37. Kozhemiakin R, Lukin V, Vozel B. Image quality prediction for DCT-based compression. In: Proceedings of CADSM 2017; Ukraine. February 2017. pp. 225-228. DOI: 10.1109/CADSM.2017.7916121
  38. 38. Vozel B, Kozhemiakin R, Abramov S, Lukin V, Chehdi K. Output MSE and PSNR prediction in DCT-based lossy compression of remote sensing images. In: Proceedings of the SPIE. 10427, Image and Signal Processing for Remote Sensing XXIII; Warsaw, Poland. September 2017. p. 11
  39. 39. Krivenko S, Zriakhov M, Lukin V, Vozel B. MSE prediction in DCT-based lossy compression of noise-free and noisy remote sensing images. In: Proceedings of TCSET; Lviv-Slavske, Ukraine. February 2018. p. 6. DOI: 10.1109/TCSET.2018.8336338
  40. 40. Krivenko S, Lukin V, Vozel B. MSE and PSNR prediction for ADCT coder applied to lossy image compression. In: Proceedings of The 9th IEEE International Conference on Dependable Systems, Services and Technologies DESSERT’2018; Kiev, Ukraine. May 2018. p. 6. DOI: 10.1109/DESSERT.2018.8409205
  41. 41. Taubman D, Marcellin M. JPEG2000 Image Compression Fundamentals, Standards and Practice. 1st ed. Springer; 2002. DOI: 10.1007/978-1-4615-0799-4
  42. 42. Zemliachenko AN, Abramov SK, Lukin VV, Vozel B, Chehdi K. Lossy compression of noisy remote sensing images with prediction of optimal operation point existence and parameters. Journal of Applied Remote Sensing. 2015;9(1):095066. DOI: 10.1117/1.JRS.9.095066
  43. 43. Rissanen J. Modeling by shortest data description. Automatica. 1978;14(5):465-471. DOI: 10.1016/0005-1098(78)90005-5
  44. 44. Rubel O, Zemliachenko A, Abramov S, Krivenko S, Kozhemiakin R, Lukin V, et al. Processing of multichannel remote-sensing images with prediction of performance parameters, chapter 13. In: Environmental Applications of Remote Sensing. Intech; June 2016. pp. 373-416
  45. 45. Green RO, Eastwood ML, Sarture CM, Chrien TG, Aronsson M, Chippendale BJ, et al. Imaging spectroscopy and the airborne visible/infrared imaging spectrometer (AVIRIS). Remote Sensing of Environment. 1998;65:227-248
  46. 46. Pearlman JS, Barry PS, Segal CC, Shepanski J, Beiso D, Carman SL. Hyperion, a space-based imaging spectrometer. IEEE Transactions on Geoscience and Remote Sensing. 2003:1160-1173. DOI: 10.1109/TGRS.2003.815018
  47. 47. Ponomarenko N, Silvestri F, Egiazarian K, Carli M, Astola J, Lukin V. On between-coefficient contrast masking of DCT basis functions. In: CD-ROM Proceedings of VPQM; USA. 2007. 4 p
  48. 48. Lukin V, Ponomarenko N, Egiazarian K, Astola J. Analysis of HVS-metrics’ properties using color image database TID2013. In: Proceedings of ACIVS; Italy. 2015. pp. 613-624
  49. 49. Abramov S, Krivenko S, Roenko A, Lukin V, Djurovic I, Chobanu M. Prediction of filtering efficiency for DCT-based image denoising. In: Proceedings of MECO; June 2013; Budva, Montenegro. 2013. pp. 97-100
  50. 50. Cameron AC, Windmeijer FAG. An R-squared measure of goodness of fit for some common nonlinear regression models. Journal of Econometrics. 1997;77:329-342
  51. 51. Rubel O, Abramov S, Lukin V, Egiazarian K, Vozel B, Pogrebnyak A. Is texture denoising efficiency predictable. International Journal on Pattern Recognition and Artificial Intelligence. 2018;32. DOI: 10.1142/S0218001418600054
  52. 52. Zemliachenko A, Abramov S, Lukin V, Vozel B, Chehdi K. Improved compression ratio prediction in DCT-based lossy compression of remote sensing images. In: Proceedings of IGARSS; Beijing, China. 2016. 4 p. DOI: 10.1109/IGARSS.2016.7730817
