Open access peer-reviewed chapter

The Discrete Quincunx Wavelet Packet Transform

Written By

Abdesselam Bassou

Submitted: 23 June 2020 Reviewed: 09 November 2020 Published: 15 December 2020

DOI: 10.5772/intechopen.94970

From the Edited Volume

Wavelet Theory

Edited by Somayeh Mohammady


Abstract

This chapter presents an efficient compression algorithm based on the quincunx wavelet packet transform that can be applied to any image of size 128×128 or bigger. To this end, the images of three gray-scale image databases are divided into sub-images of size 128×128; each sub-image is passed through the wavelet transform and a bit-level encoder, and is finally compressed at a fixed bit rate. The quality of the reconstructed image is evaluated using several parameters at a given bit rate. In order to improve this quality in terms of the evaluation parameters, an exhaustive search was carried out to find the best packet decomposition base. Two versions of the proposed compression scheme were implemented; the optimal version decreases the effect of block boundary artifacts (caused by the image division process) by 27.70% for a natural image. This optimal version was compared with the JPEG standard using the quality evaluation parameters and visual observation. The proposed compression scheme turns out to be competitive with the JPEG standard; for a satellite image at a bit rate of 0.50 bpp, it achieves a peak signal-to-noise ratio gain of 0.88 dB over JPEG.

Keywords

  • quincunx wavelet transform
  • wavelet packet
  • quality evaluation parameters
  • reduction factor
  • JPEG standard

1. Introduction

A wavelet is a small wave that can serve as a building block for many physical signals: a time and/or space variation of a phenomenon can be written as a sum of multiple wavelets. As examples, the wavelet transform has been applied to an electro-cardiogram (ECG) signal in order to extract the QRS complex [1] (time variation), to a video sequence in order to embed a hidden watermark [2] (time and space variation) and to a 2D image in order to reduce its size (compression) [3, 4] (space variation). In this chapter, one considers the application of the wavelet to 2D image compression.

An image is one of the most important sources of information; it provides a visual comprehension of a phenomenon. An image can be of several natures (medical, natural, textural or satellite), each nature being characterized by its own amount of detail. For a digital image, the larger the amount of detail, the larger the file size in bytes; this motivates the use of an image compression process.

In other words, consider a gray-scale image of size 512×512 with a bit rate of 8 bits per pixel ($R_c = 8$ bpp); its file size is $512 \times 512 \times 8$ bits (256 Kbytes). Compressing this image reduces its file size (without changing the image dimensions); for example, to reduce the file size by a factor of 10 (25.6 Kbytes), one has to use a bit rate of $R_c = \frac{25.6 \times 1024 \times 8}{512 \times 512} = 0.8$ bpp.
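This arithmetic may be sketched in a few lines of Python (an illustrative snippet, not part of the compression scheme itself):

```python
# Relationship between file size and bit rate for a gray-scale image,
# following the worked example above (512x512 pixels, 8 bpp original).
N, M = 512, 512                            # image dimensions in pixels
original_bytes = N * M * 8 // 8            # 262144 bytes = 256 Kbytes

target_bytes = original_bytes / 10         # reduce the file size by a factor of 10
Rc = target_bytes * 8 / (N * M)            # resulting bit rate in bpp
print(f"{target_bytes / 1024:.1f} Kbytes -> Rc = {Rc:.2f} bpp")  # 25.6 Kbytes -> 0.80 bpp
```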

Because they conserve all details of the image after decompression, lossless¹ compression algorithms such as Run Length Coding (RLE), Lempel-Ziv-Welch (LZW) and Huffman coding [5, 6] are by far the ideal methods. However, such algorithms do not provide a significant reduction of the image file size, and lossy² compression algorithms may therefore be more appropriate.

The best known lossy compression algorithm is the JPEG standard (Joint Photographic Experts Group) [7]; like most lossy algorithms, it is based on a discrete transform (the Discrete Cosine Transform, DCT, in this case). The Discrete Wavelet Transform (DWT) and the Quincunx Wavelet Transform (QWT) are two other discrete transforms found in the literature [8, 9]; they apply a progressive transformation to the image followed by an encoding process (such as Embedded Zerotree Wavelet, EZW, or Set Partitioning In Hierarchical Trees, SPIHT [10]) that gives the image a bit-level representation.

This chapter proposes a QWT-based compression algorithm that can be applied to any image of size 128×128 or bigger. The following structure is adopted: in Section 2, the discrete wavelet transform is introduced and the progressive representation of an image is presented. Section 3 is dedicated to the quincunx wavelet transform, its extension to wavelet packets (PQWT) and the encoding process employing the SPIHT algorithm. The PQWT-based compression algorithm is presented in Section 4, and the results and discussion in Section 5.


2. Discrete wavelet transform

2.1 Definition

Like the discrete sine and cosine transforms, the DWT represents a digital signal (such as an image) as a sum of projections onto orthogonal functions; these functions are called "wavelets". Several wavelets are described in the literature; among them, one finds the dyadic Daubechies family (whose scaling and wavelet functions are represented in Figure 1 for four examples [8]).

Figure 1.

Four examples of dyadic Daubechies wavelets (scaling and wavelet functions).

In order to improve the JPEG compression performance (in terms of the evaluation parameters presented in Section 4), researchers have proposed the JPEG 2000 compression algorithm, based on a wavelet called CDF 9/7 (Cohen-Daubechies-Feauveau 9-tap/7-tap) [11, 12]. Its scaling and wavelet functions, and its decomposition and reconstruction low-pass and high-pass filters, are shown in Figure 2.

Figure 2.

Wavelet of Cohen-Daubechies-Feauveau 9-tap/7-tap. (a) Scaling and wavelet functions, (b) decomposition and reconstruction filters.

2.2 Wavelet decomposition

As mentioned above, a wavelet applies a progressive transformation to the image. This process (called filter bank analysis) is realized by passing an image with coefficients $a_0(k)$, at time $k$, through a decomposition low-pass filter $\bar{h}_0$, a decomposition high-pass filter $\bar{h}_1$ and a decimation function (↓2). As a result of the level 1 decomposition, one obtains an approximation image with coefficients $a_1(k)$ and a detail image with coefficients $d_1(k)$. The same process is applied, at level $j$, to the approximation $a_{j-1}(k)$ to get an approximation $a_j(k)$ and a detail $d_j(k)$. Figure 3 shows a wavelet 3-level decomposition.
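As an illustration of the analysis step, the following minimal Python sketch implements one level of the filter bank with the Haar wavelet (chosen here only for brevity; the chapter itself uses the CDF 9/7 and quincunx filters):

```python
import numpy as np

def haar_analysis(x):
    """One analysis level: low-pass/high-pass filtering followed by
    decimation by 2, i.e. a_{j-1}(k) -> (a_j(k), d_j(k))."""
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail coefficients
    return a, d

def multilevel(x, levels):
    """J-level decomposition: the analysis is re-applied on the approximation."""
    details = []
    a = np.asarray(x, dtype=float)
    for _ in range(levels):
        a, d = haar_analysis(a)
        details.append(d)
    return a, details   # a_J and [d_1, ..., d_J]

a3, details = multilevel(np.arange(16.0), 3)   # 3-level decomposition, as in Figure 3
```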

Figure 3.

Wavelet 3-level decomposition.

2.3 Wavelet reconstruction

The reconstruction process (called filter bank synthesis) follows the inverse order of the decomposition process: at level $j$ and time $k$, the approximation $a_j(k)$ and detail $d_j(k)$ are upsampled (↑2) and passed, respectively, through the reconstruction low-pass filter $h_0$ and the reconstruction high-pass filter $h_1$ to generate an approximation image with coefficients $a_{j-1}(k)$. Figure 4 shows a wavelet 3-level reconstruction.

Figure 4.

Wavelet 3-level reconstruction.

A perfect reconstruction satisfies the following criteria:

$$\bar{H}_0(f)\,H_0(f) + \bar{H}_1(f)\,H_1(f) = 2 \qquad (1)$$
$$\bar{H}_0(f + 1/2)\,H_0(f) + \bar{H}_1(f + 1/2)\,H_1(f) = 0 \qquad (2)$$

where $f$ is a normalized frequency and $\bar{H}_i(f)$ and $H_i(f)$ ($i = 0, 1$) are, respectively, the Fourier transforms of the impulse responses $\bar{h}_i(k)$ and $h_i(k)$.
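These criteria can be checked numerically. The sketch below, which assumes the PyWavelets package, performs one analysis/synthesis round trip with the biorthogonal 'bior4.4' wavelet (commonly identified with the CDF 9/7 pair) and verifies that the input is recovered up to floating-point error:

```python
import numpy as np
import pywt  # PyWavelets

rng = np.random.default_rng(0)
x = rng.standard_normal(512)

cA, cD = pywt.dwt(x, 'bior4.4')       # analysis, as in Figure 3 (one level)
xr = pywt.idwt(cA, cD, 'bior4.4')     # synthesis, as in Figure 4 (one level)

# Perfect reconstruction (Eqs. (1)-(2)) implies x == xr up to round-off.
print(np.max(np.abs(x - xr)))         # on the order of 1e-15
```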

Figure 5 illustrates the result of applying a CDF 9/7 3-level decomposition to the gray-scale image 'Lena'.

Figure 5.

3-level decomposition employing CDF 9/7. (a) Original ‘Lena’ image, (b) decomposed ‘Lena’ image.


3. Quincunx wavelet transform

3.1 Definition

The decomposition and reconstruction processes using the QWT remain the same as for the DWT; however, there are some differences:

  • The diamond McClellan transform [13] is applied to map a 1-D design onto the quincunx structure.

  • The decimation factor is 2 overall (one sample out of two, on the quincunx lattice) instead of 2 per direction as in the separable dyadic case.

The 2D quincunx refinement and wavelet filters are given respectively by:

$$H_\lambda(e^{j\boldsymbol{\omega}}) = \sqrt{2}\,\frac{\left(2 + \cos\omega_1 + \cos\omega_2\right)^{\lambda/2}}{\sqrt{\left(2 + \cos\omega_1 + \cos\omega_2\right)^{\lambda} + \left(2 - \cos\omega_1 - \cos\omega_2\right)^{\lambda}}} \qquad (3)$$
$$G_\lambda(z) = z_1^{-1}\, H_\lambda(-z^{-1}) \qquad (4)$$

where $\boldsymbol{\omega} = (\omega_1, \omega_2)$ is the 2D pulsation, $z = e^{j\boldsymbol{\omega}}$ is the discrete Fourier transform variable and $\lambda$ is the filter order. All simulations in this chapter were performed with $\lambda = 5$.
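Eq. (3) can be evaluated directly on a frequency grid. The following sketch computes $H_\lambda$ for $\lambda = 5$ and checks the orthogonality identity $|H_\lambda(\boldsymbol{\omega})|^2 + |H_\lambda(\boldsymbol{\omega} + (\pi,\pi))|^2 = 2$ that follows from the construction:

```python
import numpy as np

def H_quincunx(w1, w2, lam=5.0):
    """Quincunx refinement filter of Eq. (3) on a 2D frequency grid."""
    A = (2 + np.cos(w1) + np.cos(w2)) ** lam
    B = (2 - np.cos(w1) - np.cos(w2)) ** lam
    return np.sqrt(2.0) * np.sqrt(A) / np.sqrt(A + B)

w = np.linspace(-np.pi, np.pi, 257)
W1, W2 = np.meshgrid(w, w)

H = H_quincunx(W1, W2)
H_shift = H_quincunx(W1 + np.pi, W2 + np.pi)   # quincunx modulation by (pi, pi)

# |H|^2 + |H(. + (pi, pi))|^2 = 2 everywhere, up to floating-point error
print(np.max(np.abs(H**2 + H_shift**2 - 2.0)))
```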

The QWT 6-level decomposition of image ‘Lena’ is given in Figure 6.

Figure 6.

6-level decomposition of image ‘Lena’ employing QWT.

3.2 Quincunx wavelet packet transform

The Wavelet Packet Transform (WP) [14] consists in generalizing the decomposition to all parts of the decomposed image (approximations and details) under the following condition: a detail image is decomposed only if its entropy decreases after decomposition. The literature has shown that this technique is more efficient on textural images.

Employing the packet transform on the QWT (PQWT) implies that only the dyadic parts of the QWT decomposition are candidates for further splitting, which decreases the analysis time.
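The splitting rule may be sketched as follows, using the Shannon entropy as cost function (the chapter does not fix a particular entropy measure, so this choice, like the `decompose` callback, is an assumption of the sketch):

```python
import numpy as np

def shannon_entropy(c, eps=1e-12):
    """Shannon entropy of a sub-band's normalized energy distribution."""
    p = np.abs(np.ravel(c)) ** 2
    p = p / (p.sum() + eps)
    return float(-np.sum(p * np.log2(p + eps)))

def should_split(subband, decompose):
    """Packet rule: decompose a sub-band only if the total entropy of its
    children is lower than its own. `decompose` is any one-level 2D
    transform returning the list of child sub-bands (assumed provided)."""
    children = decompose(subband)
    return sum(shannon_entropy(ch) for ch in children) < shannon_entropy(subband)
```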

Figure 7 shows the entropy-based PQWT 6-level decomposition of image ‘Lena’.

Figure 7.

6-level decomposition of image ‘Lena’ employing PQWT.

3.3 Set partitioning in hierarchical trees encoder

In order to compress an image employing a wavelet-based transform, an encoding step is used to give the image a bit-level representation. This chapter employs the SPIHT encoder for this purpose. Figure 8 summarizes the relationship between decomposition levels. The authors of [15] proposed a modified version of SPIHT for the wavelet packet transform; this version is adopted for the PQW transform.

Figure 8.

SPIHT algorithm (L denotes low filtering and H denotes high filtering).


4. Proposed QWT/PQWT-based compression algorithm

4.1 Compression scheme

The JPEG standard is based on dividing an image into sub-images of size 8×8, then applying the DCT to each sub-image. In the proposed approach, one adopts the 8-level PQWT as the transform algorithm and a sub-image size of $m \times m = 128 \times 128$ for the dividing process. An example of the dividing process is given in Figure 9.

Figure 9.

Example of dividing image 'Lena' into four sub-images of size $m \times m$.
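A minimal sketch of the dividing step (and of the inverse gathering used at reconstruction) is given below; it assumes image sides that are multiples of $m$, as is the case for the 512×512 and 256×256 database images:

```python
import numpy as np

def split_into_subimages(img, m=128):
    """Divide an image into non-overlapping m x m sub-images (raster order)."""
    rows, cols = img.shape
    return [img[i:i + m, j:j + m]
            for i in range(0, rows, m)
            for j in range(0, cols, m)]

def gather_subimages(subs, rows, cols, m=128):
    """Gather (reconstructed) sub-images back into a full image."""
    out = np.empty((rows, cols), dtype=subs[0].dtype)
    k = 0
    for i in range(0, rows, m):
        for j in range(0, cols, m):
            out[i:i + m, j:j + m] = subs[k]
            k += 1
    return out

subs = split_into_subimages(np.zeros((512, 512)), m=128)
assert len(subs) == 16   # a 512x512 image yields 16 sub-images
```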

The proposed compression scheme is summarized in Figure 10. It consists in applying, to each sub-image $I_l$ ($l = 1, 2, \ldots$) constituting the original image, the QW or PQW transform and then the SPIHT algorithm with respect to a compression bit rate. The resulting bit streams are gathered to construct the compressed image.

Figure 10.

The proposed compression scheme.

In order to test the proposed compression algorithm, three gray-scale image databases were employed. The first database consists of 60 images (20 satellite, 20 natural and 20 medical images) of size 512×512 [16], the second consists of 114 textural images of size 512×512 [17] and the third consists of the 168 fingerprint images of size 256×256 of the Shivang Patel database. Each database image is divided into sub-images of size 128×128 (16 sub-images for a 512×512 image and 4 sub-images for a 256×256 image).

Considering the packet quincunx wavelet transform, one has tested the 260 possible 8-level decompositions (called decomposition bases) on the sub-images, in order to select the optimal packet decomposition base (in terms of the evaluation parameters). The performance of the PQWT is compared with that of the QWT and of the entropy-based PQWT decomposition base (called the proper base).

4.2 Reconstruction scheme

The proposed reconstruction scheme is shown in Figure 11. The compressed image is divided into bit streams according to the number of sub-images. Each bit stream is decoded and transformed using the inverse QW or inverse PQW transform, to finally obtain the reconstructed sub-image $\hat{I}_l$ ($l = 1, 2, \ldots$). All sub-images are then gathered to construct the reconstructed image.

Figure 11.

The proposed reconstruction scheme.

4.3 Evaluation parameters of compressed image quality

Choosing a compression bit rate $R_c < 8$ bpp for a gray-scale image leads to a degradation of the original image. This degradation can be measured using quality evaluation parameters for the compressed image. In this chapter, three evaluation parameters are adopted [10]:

  • Peak Signal to Noise Ratio: the PSNR parameter is given by $\mathrm{PSNR} = 10 \times \log_{10}\frac{(2^R - 1)^2}{\mathrm{MSE}}$, where $\mathrm{MSE} = \frac{1}{N \times M}\sum_{i=1}^{N}\sum_{j=1}^{M}\left(I(i,j) - \hat{I}(i,j)\right)^2$ is the Mean Square Error, $I$ is the original image, $\hat{I}$ is the reconstructed image and $R$ designates the resolution (in bits) of the gray-scale image.

  • Mean Structural SIMilarity index: the MSSIM index is the average, over all local windows, of the product of three functions: $\mathrm{MSSIM}(I, \hat{I}) = \frac{1}{M}\sum_{i=1}^{M} l(I_i, \hat{I}_i)\, c(I_i, \hat{I}_i)\, s(I_i, \hat{I}_i)$, where $l$, $c$ and $s$ are the luminance, contrast and structure comparison functions and $M$ is the number of local windows.

  • Visual Information Fidelity: the VIF parameter is a ratio of conditional mutual information measured over all decomposition parts of the image.
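As an example, the PSNR of the list above can be computed with a few lines of numpy (the MSSIM can be obtained, e.g., from `skimage.metrics.structural_similarity`):

```python
import numpy as np

def psnr(I, I_hat, R=8):
    """Peak signal-to-noise ratio (dB) of an R-bit gray-scale image."""
    err = I.astype(float) - I_hat.astype(float)
    mse = np.mean(err ** 2)                    # MSE over the N x M pixels
    return 10.0 * np.log10((2 ** R - 1) ** 2 / mse)
```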


5. Results and discussion

The main purpose of this study is to establish a compression strategy using the packet quincunx wavelet transform, whatever the type or the size of the image. One has therefore begun by applying to the 20 satellite images an exhaustive search among the 260 PQWT decomposition bases. To evaluate the compression quality employing the PQWT (for a given bit rate), the relative errors ($\varepsilon_{\mathrm{PSNR}}$, $\varepsilon_{\mathrm{VIF}}$ and $\varepsilon_{\mathrm{MSSIM}}$) are used to distinguish between the different performance curves. These parameters are expressed as follows:

$$\varepsilon_X = \frac{m_X^{\mathrm{QWT}} - m_X^{\mathrm{PQWT}}}{m_X^{\mathrm{QWT}}} \times 100\% \qquad (5)$$

where $X$ designates an evaluation parameter (PSNR, VIF or MSSIM) and $m_X$ is the average of $X$ over all database images. A negative value of $\varepsilon_X$ means that the PQWT outperforms the QWT.

Figure 12 illustrates the best 17 decomposition bases, i.e. those that achieve the minimum values of $\varepsilon_X$. Base 0 refers to the QWT decomposition.

Figure 12.

Best 17 decomposition bases.

The second step consists in applying these 17 decomposition bases to the other databases, and then evaluating the compressed image quality using the relative errors given in Eq. (5). Table 1 shows, for each database, the top 10 decomposition bases; five bases (3, 6, 7, 8 and 11) are common to all databases.

Decomposition base(s)    Number of databases (out of 5) whose top 10 include the base(s)
3, 6, 7, 8, 11           5 (common to all five databases)
1, 2, 9                  4
4, 14                    3
5, 13                    2
12, 15, 16               1
10, 17                   0

Table 1.

Top 10 decomposition bases of each database.

The evaluation curves in terms of relative error are shown in Figure 13. Each subfigure compares the evaluation curves of the top 10 decomposition bases, in addition to the PQWT proper decomposition. It can be observed that the proper decomposition achieves lower performance for the satellite, textural and fingerprint images, with an average visual information fidelity (VIF) degradation of 8% measured for the fingerprint images. On the other hand, the proper decomposition curves are competitive with the other decomposition curves for the natural and medical images, especially for $\varepsilon_{\mathrm{PSNR}}$ at low bit rates.

Figure 13.

The evaluation curves vs. bit rate in terms of relative error. (a) Satellite images, (b) natural images, (c) medical images, (d) textural images, (e) fingerprint images.

Considering the performance curves of the fingerprint images (Figure 13.e), negative values of the relative errors are observed in the low bit rate region, which means that the chosen decomposition bases achieve better performance than base 0 (QWT).

Regarding the five common decomposition bases (3, 6, 7, 8 and 11 in Table 1), the curves of Figure 13 show that decomposition base 3 achieves slightly better performance; therefore, this decomposition base is adopted in the rest of the chapter.

In order to illustrate the compression effect on the database images, one has chosen from each database the image that achieves the minimum $\varepsilon_{\mathrm{PSNR}}$. The chosen images are given in Figure 14.

Figure 14.

Original images from the databases achieving the minimum $\varepsilon_{\mathrm{PSNR}}$. (a) Satellite image, (b) natural image, (c) medical image, (d) textural image, (e) fingerprint image.

Table 2 gives the performance, in terms of peak SNR, of the five adopted images for two bit rates (0.25 and 2.00 bpp). These results show a slight superiority of decomposition base 3 over base 0 and the proper decomposition, except for the medical image, where a difference of 9.35 dB is observed between base 3 and base 0 at a bit rate of 2.00 bpp.

Table 2.

Performance, in terms of peak SNR, of the five adopted images.

As in the JPEG compression scheme, the image division into sub-images causes block boundary artifacts; these artifacts are visible at low bit rates. This phenomenon is clearest for the natural, medical and fingerprint images at a bit rate of 0.25 bpp.

To remedy the problem of block boundary artifacts, one proposes to add two processes to the compression scheme (as shown in Figure 15):

  • Adjacent sub-images $I_1$ and $I_2$ overlap and have $d$ common pixels (an example of image division with overlapping is given in Figure 16),

  • Each sub-image is weighted by a 2D Gaussian window defined by the sub-image size $m$ and the minimum amplitude $a$ (as shown in Figure 17).

Figure 15.

Proposed compression scheme employing overlapped sub-images.

Figure 16.

Example of dividing image ‘Lena’ with overlapping.

Figure 17.

2D Gaussian window. (a) Gaussian window with parameters $m$ and $a$. (b) 2D Gaussian window with $m = 128$ and $a = 0.834$.
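The chapter defines the window only through the size $m$ and the minimum amplitude $a$; one plausible construction, used in the sketch below, derives the Gaussian standard deviation from $a$ so that the 1D profile decays exactly to $a$ at the sub-image border:

```python
import numpy as np

def gaussian_window_2d(m=128, a=0.834):
    """Separable 2D Gaussian weighting window of size m x m.
    The standard deviation is chosen so that the 1D profile equals a
    at the edges (an assumption of this sketch)."""
    n = np.arange(m) - (m - 1) / 2                     # centered positions
    sigma = ((m - 1) / 2) / np.sqrt(-2.0 * np.log(a))  # makes g(edge) = a
    g = np.exp(-n ** 2 / (2.0 * sigma ** 2))
    return np.outer(g, g)                              # 2D window, max = 1

w = gaussian_window_2d()                 # parameters of Figure 17(b)
print(w.max(), w[0, w.shape[1] // 2])    # 1.0 at the center, ~a at an edge midpoint
```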

To avoid the pixel redundancy caused by the overlapping, a pixel is given a weight of ½ when it belongs to two overlapped sub-images, and ¼ when it belongs to four overlapped sub-images. Therefore, as summarized in Figure 18, the effective size $M$ of a sub-image may be expressed as follows:

  • In case of two overlaps, the size of the sub-image equals $M = (m-d)^2 \cdot 1 + 2 \cdot d(m-d) \cdot \tfrac{1}{2} + d^2 \cdot \tfrac{1}{4} = \left(m - \tfrac{d}{2}\right)^2$,

  • In case of three overlaps, the size of the sub-image equals $M = (m-2d)(m-d) \cdot 1 + 2 \cdot d(m-d) \cdot \tfrac{1}{2} + d(m-2d) \cdot \tfrac{1}{2} + 2 \cdot d^2 \cdot \tfrac{1}{4} = (m-d)\left(m - \tfrac{d}{2}\right)$,

  • In case of four overlaps, the size of the sub-image equals $M = (m-2d)^2 \cdot 1 + 4 \cdot d(m-2d) \cdot \tfrac{1}{2} + 4 \cdot d^2 \cdot \tfrac{1}{4} = (m-d)^2$.

Figure 18.

Computation of pixel sizes for all possible overlapping cases.

These new sub-image sizes make it possible to define a bit rate for each sub-image according to the number of overlaps. In other words, the bit rate of an overlapped sub-image equals:

$$R'_c = R_c\,\frac{M}{m^2} \le R_c \qquad (6)$$

where $R_c$ denotes the bit rate without overlapping and $F_r = \frac{M}{m^2}$ is called the reduction factor of the bit rate. The bit rate of the overall image is the average of the sub-image bit rates.
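The effective sizes listed above and the corresponding reduction factor can be checked with a short sketch:

```python
def reduction_factor(m=128, d=6, overlaps=4):
    """Reduction factor Fr = M / m**2, with the effective sub-image size M
    taken from the three overlapping cases enumerated above."""
    if overlaps == 2:        # corner sub-image, two overlapped sides
        M = (m - d / 2) ** 2
    elif overlaps == 3:      # border sub-image, three overlapped sides
        M = (m - d) * (m - d / 2)
    elif overlaps == 4:      # interior sub-image, four overlapped sides
        M = (m - d) ** 2
    else:
        raise ValueError("overlaps must be 2, 3 or 4")
    return M / m ** 2

Rc = 0.5                                        # bit rate without overlapping (bpp)
Fr = reduction_factor(m=128, d=6, overlaps=4)
print(Fr, Rc * Fr)   # Fr ~ 0.908, above the 0.9 threshold discussed below
```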

In order to demonstrate the effect of the number of overlapping pixels ($d$) on the bit rates, Figure 19 plots the reduction factor curves for $m = 128$. It is clear from these curves that a stronger reduction of the bit rate (a smaller $F_r$) leads to lower compression performance; therefore, a reduction factor threshold of $F_r \ge 0.9$ has to be respected.

Figure 19.

Reduction factor of bit rate vs. overlapping pixels.

The proposed reconstruction scheme, given in Figure 20, divides the output of the inverse QWT or the inverse PQWT by the same 2D Gaussian windows defined in the compression process. The reconstructed sub-images are overlapped in the same manner as in the compression process to construct the reconstructed image.

Figure 20.

Proposed reconstruction scheme.

The evaluation curves in terms of the PSNR and VIF parameters are shown in Figure 21. Each subfigure compares the evaluation curves of the PQWT with decomposition base 3 and with the proper decomposition, in addition to the JPEG compression standard. It can be observed that:

  • in case of satellite and fingerprint images (Figure 21.a and Figure 21.e), the proposed PQWT compression scheme presents better performance than JPEG,

  • in case of natural and textural images (Figure 21.b and Figure 21.d), JPEG standard outperforms the proposed compression scheme,

  • in case of the medical image (Figure 21.c), both schemes present nearly the same performance.

Figure 21.

The evaluation curves vs. bit rate in terms of PSNR and VIF. (a) Satellite image, (b) natural image, (c) medical image, (d) textural image, (e) fingerprint image.

Table 3 gives the performance, in terms of the PSNR and VIF parameters, of the five adopted images at a bit rate of 0.54 bpp for the textural image and 0.50 bpp for the others. To compare with the proposed scheme (PQWT with the $a$ and $d$ parameters), two other schemes are involved: the PQWT with $a = 1$ and $d = 0$ (i.e., the first proposed scheme) and the JPEG standard.

Table 3.

Performance, in terms of the PSNR and VIF parameters, of the five adopted images.

To fix the $a$ and $d$ parameters of PQWT – base 3 and PQWT – proper decomposition, an exhaustive search (with respect to the reduction factor threshold) has been performed to maximize the PSNR parameter. It can be observed in Table 3 that the $a$ and $d$ parameters differ from one image to another; therefore, these parameters have to be included in the compressed file, as well as the size of the overall image.

The obtained results show a slight superiority of PQWT – base 3 over JPEG and PQWT – proper decomposition, except for the natural and medical images, where the JPEG standard is slightly better.

Figure 22 compares the visual quality of the five adopted compressed images, where details of size 128×128 from the original, PQWT – base 3 and JPEG images are magnified. It can be observed that the PQWT compressed images exhibit weaker block boundary artifacts than the JPEG images (especially for the satellite and textural images) and preserve the continuity of their detail shapes.

Figure 22.

Magnified detail from the five adopted images. (a) Satellite image – 0.50 bpp, (b) natural image – 0.50 bpp, (c) medical image – 0.50 bpp, (d) textural image – 0.54 bpp, (e) fingerprint image – 0.50 bpp.

For a deeper study of the block boundary artifacts of the two proposed compression schemes employing PQWT – base 3, one has focused only on the overlapping regions in the five adopted images; bands of 8 pixels per line and column around these regions are extracted in order to measure the effect of block boundary artifacts in terms of PSNR. Denoting by $\mathrm{PSNR}_1$ and $\mathrm{PSNR}_2$ the PSNRs measured on the extracted regions with, respectively, the first and second proposed schemes, the average block boundary artifact effect is measured by

$$E_{bba} = \frac{1}{R}\sum_{j=1}^{R} \frac{\mathrm{PSNR}_2(j) - \mathrm{PSNR}_1(j)}{\mathrm{PSNR}_1(j)} \times 100\% \qquad (7)$$

where $R$ values of the bit rate $R_c$, from 0.1 bpp up to 8.1 bpp, are employed to evaluate the PSNRs.

Table 4 gives the values of $E_{bba}$ for the five adopted images. From these results, it can be concluded that, in comparison with the first compression scheme, the second proposed compression scheme reduces the effect of block boundary artifacts significantly, by 27.70% and 22.87% for, respectively, the natural and medical images. However, only a tiny reduction of $E_{bba}$ is observed for the fingerprint image, which means that further processing of the sub-image boundaries, e.g. using local 2D filters [18], is necessary for such an image.

Image            Satellite   Natural   Medical   Textural   Fingerprint
$E_{bba}$ [%]    2.19        27.70     22.87     9.41       0.63

Table 4.

Average block boundary artifact effect, in terms of PSNR, for the five adopted images.


6. Conclusion

This chapter has introduced an image compression scheme that employs the quincunx wavelet decomposition improved by wavelet packets. This approach makes it possible to act on both the approximation and detail parts of the image decomposition.

With the concept of image division into sub-images (employed in the JPEG standard compression algorithm), block boundary artifacts appear, especially at low compression bit rates. To overcome this problem, the sub-images are weighted by a 2D Gaussian window and overlapped with respect to the reduction factor of the compression bit rate. This means that, in addition to the overall image size, two parameters have to be included in the compressed file: the minimum amplitude of the 2D window and the number of overlapped pixels.

To position the proposed compression algorithm against a standard, its performance was compared, in terms of the evaluation parameters, with that of the JPEG standard. The main improvement is the capacity of the proposed scheme to provide better visual image quality (continuity of detail shapes). This means that, on the one hand, it is possible to reduce image file sizes without reducing visual quality and, on the other hand, to increase the storage capacity of photographic devices.

As a result, this compression technique makes it possible to create low-capacity benchmarks and databases whatever the image nature (satellite, medical, natural or textural).

In this work, one has focused on gray-scale images in order to present the proposed compression scheme. Future work should investigate its efficiency on video and color image compression.


Conflict of interest

The author declares that there is no conflict of interest.


Nomenclature

ECG: electrocardiogram
RLE: Run Length Coding
LZW: Lempel-Ziv-Welch
JPEG: Joint Photographic Experts Group
DCT: Discrete Cosine Transform
DWT: Discrete Wavelet Transform
QWT: Quincunx Wavelet Transform
EZW: Embedded Zerotree Wavelet
SPIHT: Set Partitioning In Hierarchical Trees
CDF: Cohen-Daubechies-Feauveau
WP: Wavelet Packet
PQWT: Packet-based Quincunx Wavelet Transform
IQWT: Inverse Quincunx Wavelet Transform
IPQWT: Inverse Packet-based Quincunx Wavelet Transform
PSNR: Peak Signal to Noise Ratio
MSSIM: Mean Structural SIMilarity index
VIF: Visual Information Fidelity

References

  1. Barmase S, Das S, Mukhopadhyay S. Wavelet transform-based analysis of QRS complex in ECG signals. 2013; arXiv preprint arXiv:1311.6460
  2. Al-Enizi F, Al-Asmari A. DWT-based data hiding technique for videos ownership protection. In: Baleanu D, editor. Wavelet Transform and Complexity. IntechOpen; 2019. DOI: 10.5772/intechopen.84963
  3. Kanagaraj H, Muneeswaran V. Image compression using Haar discrete wavelet transform. In: Proceedings of the 2020 5th International Conference on Devices, Circuits and Systems (ICDCS'20); 5–6 March 2020; Coimbatore, India. p. 271–274. DOI: 10.1109/ICDCS48716.2020.243596
  4. Patlayenko M, Osharovska O, Pyliavskyi V, Solodka V. Wavelet feature family for image compression. In: Proceedings of the 27th National Conference with International Participation (TELECOM'19); 30–31 October 2019; Sofia, Bulgaria. p. 16–18. DOI: 10.1109/TELECOM48729.2019.8994877
  5. Ziv J, Lempel A. A universal algorithm for sequential data compression. IEEE Transactions on Information Theory. 1977;23(3):337–343. DOI: 10.1109/TIT.1977.1055714
  6. Huffman D. A method for the construction of minimum-redundancy codes. Proceedings of the IRE. 1952;40(9):1098–1101. DOI: 10.1109/JRPROC.1952.273898
  7. JPEG website. Available from: http://jpeg.org/ [Accessed: 2020-04-10]
  8. Daubechies I. Ten Lectures on Wavelets. Society for Industrial and Applied Mathematics; 1992. 357 p. DOI: 10.1137/1.9781611970104
  9. Feilner M, Van De Ville D, Unser M. An orthogonal family of quincunx wavelets with continuously adjustable order. IEEE Transactions on Image Processing. 2005;14(4):499–510. DOI: 10.1109/TIP.2005.843754
  10. Beladgham M. Construction d'une technique d'aide au diagnostic en imagerie médicale: Application à la compression d'images [thesis]. Tlemcen, Algeria: Abou Bakr Belkaid University; 2012
  11. Cohen A, Daubechies I, Feauveau JC. Biorthogonal bases of compactly supported wavelets. Communications on Pure and Applied Mathematics. 1992;45:485–560. DOI: 10.1002/cpa.3160450502
  12. Taubman D, Marcellin M. JPEG2000: Image Compression Fundamentals, Standards and Practice. Springer; 2002. 777 p. DOI: 10.1007/978-1-4615-0799-4
  13. Shapiro J. Adaptive McClellan transformations for quincunx filter banks. IEEE Transactions on Signal Processing. 1994;42(3):642–648. DOI: 10.1109/78.277856
  14. Linderhed A. Adaptive image compression with wavelet packets and empirical mode decomposition [thesis]. Linköping, Sweden: Linköping Studies in Science and Technology; 2004
  15. Sprljan N, Grgic S, Mrak M, Grgic M. Modified SPIHT algorithm for wavelet packet image coding. In: Proceedings of the IEEE International Symposium on Video/Image Processing and Multimedia Communications (VIPromCom'02); 16–19 June 2002; Zadar, Croatia. p. 189–194. DOI: 10.1109/VIPROM.2002.1026653
  16. Allaoui CE, Bassou A, Benyahia I, Khelifi M. Selection of compression test images using variance-based statistical method. Indonesian Journal of Electrical Engineering and Computer Science. 2019;16(1):243–258. DOI: 10.11591/ijeecs.v16.i1.pp243-258
  17. Brodatz P. Textures: A Photographic Album for Artists and Designers. New York: Dover Publications; 1966
  18. Lim JS. Two-Dimensional Signal and Image Processing. Prentice-Hall International; 1990. 694 p

Notes

  1. The term "lossless" refers to the conservation of all details in the image after reconstruction, which means that the original and reconstructed images are identical.
  2. The term "lossy" refers to the loss of details in the image after reconstruction, by quantization or truncation, which means that the original image differs from the reconstructed one.
