Signal models are a cornerstone of contemporary signal- and image-processing methodology. In this chapter, a particular signal modelling method, called synthesis sparse representation, is studied; it has proven effective for many signals, such as natural images, and has been used successfully in a wide range of applications. In this kind of signal modelling, the signal is represented with respect to a dictionary. The choice of dictionary plays an important role in the success of the entire model. One main approach to dictionary design is based on machine learning, which provides a simple and expressive structure for designing adaptable and efficient dictionaries. This chapter focuses on a direct application of the sparse representation, namely image compression. Two image codecs based on adaptive sparse representation over a trained dictionary are introduced. Experimental results show that the presented methods outperform the existing image coding standards, such as JPEG and JPEG2000.
- image compression
- dictionary learning
- sparse representation
Signal models are fundamental tools for efficient processing of the signals of interest, including audio recordings, natural images, video clips, and medical scans, to name just a few. A signal model formulates a mathematical description of the family of signals of interest in a way that faithfully captures their behaviour. Designing accurate signal models, which efficiently capture useful characteristics of the signals, has been a crucial aim of the signal-processing field for many years, and a variety of mathematical forms have been proposed. Sparsity-based modelling has been used in many applications in which each signal is represented as a linear combination of elements of an underlying set, called a dictionary.
Finding appropriate dictionaries with good predictive power for the various signal classes of interest, especially natural images, and high compactness ability has been an active field of research during the past decades. Early attempts at designing dictionaries were based on building the model from harmonic analysis of the signal classes and extracting suitable mathematical functions, resulting in a fixed, off-the-shelf dictionary called an analytic dictionary.
A significantly different approach to sparse modelling, originally introduced by Olshausen and Field , consists of learning the dictionary from some training data; the resulting dictionary is called a trained dictionary.
The analysis and synthesis sparse signal modellings have led to the design of effective algorithms for many image-processing applications, such as compression [2, 3, 4, 5, 6, 7, 8] and solving inverse problems [9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22]. A straightforward application of sparse signal modelling in the field of image processing has been image compression, because the modelling provides a compact representation of the signal. This chapter presents the theoretical and practical aspects of using dictionary-based sparse signal modelling for image compression. We address one important question: how can an image be represented efficiently over a trained dictionary in order to improve compression performance? Based on this concept, we present two novel image compression methods. Experimental results show that the introduced methods outperform the existing image coding standards, such as JPEG and JPEG2000.
The remainder of the chapter is organized as follows. It starts by introducing the concept of sparsity in signal processing. The use of sparse representation modelling with respect to trained dictionaries is then presented. Next, a brief description of a well-known and effective dictionary learning algorithm, introduced by Aharon et al. , is given. The end of the chapter is devoted to introducing two generic image compression schemes. Finally, we conclude the chapter.
2. Sparsity-based signal models
One of the well-known methods for designing signal models is linear approximation. In this modelling technique, given a set of basis vectors {d_i}, a signal x is represented as a linear combination of the basis,

x = Σ_i α_i d_i,  (1)

where the set {α_i} consists of the representation coefficients. The signal approximation Eq. (1) can be reformulated in matrix form as

x = D α,  (2)

where α is the coefficient vector. The matrix D is called the dictionary, and its columns constitute the dictionary atoms.
With the right choice of dictionary, the coefficient vector is expected to be sparse, in the sense that its sorted coefficients decay rapidly. Motivated by this idea, the design of efficient complete dictionaries, in which the number of atoms equals the signal dimension, was an active area of research during the last decades of the twentieth century. The well-known Fourier transform , used in the JPEG compression standard , and the wavelet transform , used in the JPEG2000 compression standard , are results of this work, and both uniformly sparsify smooth signals. However, this signal modelling loses its optimality for image signals, due to the curve singularities (elongated edges) present in these signals . For instance, images encoded by the JPEG2000 standard suffer from ringing (smoothing) artefacts near edges.
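The rapid coefficient decay for smooth signals can be illustrated with a small sketch (illustrative only; the orthonormal DCT-II construction and the test signal are our own choices, not material from the cited standards):

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix: a classic complete dictionary (n atoms
    for an n-dimensional signal); its rows act as analysis atoms."""
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0, :] /= np.sqrt(2)
    return C * np.sqrt(2 / n)

n = 64
D = dct_matrix(n)
x = np.cos(2 * np.pi * np.arange(n) / n) + 0.5     # a smooth test signal
coeffs = D @ x                                     # analysis coefficients
mags = np.sort(np.abs(coeffs))[::-1]               # sorted magnitudes
# For a smooth signal the sorted coefficients decay rapidly: the 4
# largest of 64 coefficients capture essentially all of the energy.
top4_energy = np.sum(mags[:4] ** 2) / np.sum(mags ** 2)
```

Because the transform is orthonormal, total energy is preserved, so `top4_energy` directly measures how compressible the signal is in this dictionary.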
In an attempt to overcome this weakness, the design of more general over-complete dictionaries, which have more atoms than the signal dimension, has been investigated over the past decades and is still intensely ongoing. These dictionaries have a greater descriptive ability for representing a wide range of signals of interest, in comparison with invertible complete dictionaries.
2.1 Sparse modelling using over-complete dictionaries
Compared to the complete case, representation with over-complete dictionaries must be defined more carefully. There are two distinct paths for representing a signal using an over-complete dictionary: the analysis path and the synthesis path. The analysis path represents the signal through the forward transform

α = D^T x,  (3)

where the coefficient vector α is obtained via the inner products of the signal x and the atoms of the dictionary D. This method has the advantage of providing a simple and efficient way to achieve a sparse representation over the dictionary. In this case, every signal has a unique representation as a linear combination of the dictionary atoms.
Increasing sparsity, in order to obtain a well-defined representation, requires departing from this linear representation towards a more flexible, non-linear one. Each signal is represented using a different set of atoms from a pool, called the dictionary, in order to achieve the best sparsity. Thus, the approximation process becomes

x ≈ D α,  (4)

where the coefficient vector α is obtained via a non-linear approach, in contrast to the linear approach of the analysis path. This signal modelling approach, called synthesis sparse representation, recovers the coefficients by solving

min_α ψ(α)  subject to  ||x − D α||_2 ≤ ε,  (5)

where ε is the prior knowledge about the noise level. The penalty function ψ(·) is defined in a way that is tolerant to the large coefficients and aggressively penalizes the small coefficients . The usual choice for this function is the ℓ_p norm with 0 ≤ p ≤ 1. Of specific interest is the case p = 0, i.e., ψ(α) = ||α||_0, which counts the number of non-zeros in the representation. For this case, the problem Eq. (5) becomes

min_α ||α||_0  subject to  ||x − D α||_2 ≤ ε.  (6)
This problem, known to be NP-hard in general, can be efficiently approximated based on the idea of iterative greedy pursuit. Among the earliest and still most effective of these methods is orthogonal matching pursuit (OMP) .
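The greedy pursuit idea can be sketched in a few lines (a toy dictionary of our own choosing; production OMP implementations add Cholesky updates and residual-based stopping rules):

```python
import numpy as np

def omp(D, x, sparsity):
    """Greedy OMP sketch: repeatedly pick the atom most correlated with
    the current residual, then re-fit all chosen atoms by least squares."""
    residual = x.astype(float)
    support = []
    alpha = np.zeros(D.shape[1])
    for _ in range(sparsity):
        j = int(np.argmax(np.abs(D.T @ residual)))   # best-matching atom
        if j not in support:
            support.append(j)
        # orthogonal projection of x onto the span of the chosen atoms
        coeffs, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ coeffs
    alpha[support] = coeffs
    return alpha

# Toy over-complete dictionary: 4 unit-norm atoms in R^3.
D = np.array([[1.0, 0.0, 0.0, 1 / np.sqrt(3)],
              [0.0, 1.0, 0.0, 1 / np.sqrt(3)],
              [0.0, 0.0, 1.0, 1 / np.sqrt(3)]])
x = np.array([2.0, 1.0, 0.0])
alpha = omp(D, x, sparsity=2)   # recovers the 2-sparse representation
```

On this toy problem, OMP selects the two canonical atoms and reconstructs x exactly in two iterations.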
The formulation Eq. (6) has gained great success beyond the statistics and signal-processing communities and has been extensively employed in different signal-processing algorithms. In image-processing applications, since natural images are too large to be modelled whole, the image is partitioned into blocks and the sparse modelling is performed on the set of image blocks {x_i}, each of size √n × √n pixels, where √n is an integer value and x_i is the lexicographically stacked representation of the i-th image patch.
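The block partitioning and lexicographic stacking can be sketched as follows (the helper name and the 4 × 4 toy image are illustrative):

```python
import numpy as np

def extract_patches(img, p):
    """Split an image into non-overlapping p-x-p blocks, stacking each
    block lexicographically (row-major) into a column vector of length p*p."""
    h, w = img.shape
    cols = [img[i:i + p, j:j + p].reshape(-1)
            for i in range(0, h - p + 1, p)
            for j in range(0, w - p + 1, p)]
    return np.stack(cols, axis=1)   # one column per patch

# A 4x4 toy image yields 4 patches of 4 pixels each.
Y = extract_patches(np.arange(16.0).reshape(4, 4), p=2)
```

Each column of `Y` is then a candidate signal x_i for the sparse coding problem of Eq. (6).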
2.2 Dictionary choice
In the discussion so far, it has been assumed that the dictionaries of the analysis and synthesis models are known. Choosing the dictionary carefully is an important and involved task, in which substantial research has been invested. Based on the analysis and synthesis models, the scientific community has developed two main routes for designing dictionaries .
The first one is the analytic approach, in which the dictionary is derived from a mathematical model of the signal class.
Analytic over-complete dictionary design proceeds by analysing the behaviour of the representation coefficients over the target signal class and establishing their decay rate. The curvelet , contourlet , and bandlet  transforms are some of the analytic dictionaries that provide comprehensive frameworks for handling multi-dimensional signals.
Finding ever more compact sparse representations has been a major driving force behind the continued development of more efficient dictionaries. The synthesis formulation of the sparse representation paved the way to the design of an efficient dictionary, called a trained dictionary.
In this approach, a dictionary is trained for the sparse representation of small patches collected from a number of training signals. The desire to train dictionaries efficiently has led to the development of several algorithms [1, 23, 32, 33, 34]. The earlier works on dictionary learning mostly focused on statistical methods. Given a set of training image patches, these methods find a dictionary which maximizes either the likelihood of the training data  or the posterior probability of the dictionary . These formulations lead to optimization problems that are solved in an expectation-maximization fashion, alternating between estimation of the sparse representations and of the dictionary using gradient descent or similar methods.
In the neuroscience area, Olshausen and Field  proposed a significantly different approach to designing the dictionary from training data, based on modelling the receptive fields of simple cells in the mammalian primary visual cortex. In another line of work, the K-SVD algorithm introduced by Aharon et al.  has become one of the best-known dictionary learning methods. Given a set of examples Y = [y_1, …, y_N], the goal of the K-SVD algorithm is to find the best possible dictionary for the sparse representation of the training set through the optimization problem of Eq. (7):

min_{D, X} ||Y − D X||_F²  subject to  ||x_i||_0 ≤ T_0 for all i,  (7)

where T_0 is a fixed small value and X is a matrix of size m × N (m atoms, N training samples) consisting of the representation coefficient vectors x_i of the training samples. The minimization in Eq. (7) is performed iteratively. First, starting from an initial dictionary, the algorithm finds the best coefficient matrix X that it can. With the dictionary fixed, the penalty posed in Eq. (7) reduces to a set of sparse representation operations of the form of Eq. (6), like the ones seen in Section 2.1. The OMP algorithm  is used for the near-optimal calculation of the coefficient matrix X.
At the next stage, the columns of the dictionary are updated sequentially, and the relevant coefficients in the coefficient matrix are changed simultaneously. One column is updated at a time, and the update of a column is based on the singular value decomposition (SVD) of a residual data matrix, computed only over the training samples that use this atom. The K-SVD algorithm includes a mechanism to control and rescale the ℓ2-norm of the dictionary elements; without such a mechanism, the norms of the atoms could grow arbitrarily. For more details, refer to .
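The SVD-based atom update can be sketched as follows (a simplified single-atom update; the full K-SVD loop, atom re-seeding for unused atoms, and the exact conventions of the cited algorithm are omitted):

```python
import numpy as np

def ksvd_atom_update(D, X, Y, j):
    """Update atom j of D (and row j of X) from the residual restricted
    to the training samples that currently use this atom."""
    users = np.nonzero(X[j, :])[0]
    if users.size == 0:
        return D, X                 # unused atom: left as-is in this sketch
    # residual of the selected samples with atom j's contribution removed
    E = Y[:, users] - D @ X[:, users] + np.outer(D[:, j], X[j, users])
    U, s, Vt = np.linalg.svd(E, full_matrices=False)
    D[:, j] = U[:, 0]               # best rank-1 fit; unit-norm by construction
    X[j, users] = s[0] * Vt[0, :]   # rescaled coefficients for the users
    return D, X

rng = np.random.default_rng(1)
Y = rng.standard_normal((16, 40))                       # toy training patches
D = rng.standard_normal((16, 8))
D /= np.linalg.norm(D, axis=0)                          # unit-norm atoms
X = rng.standard_normal((8, 40)) * (rng.random((8, 40)) < 0.2)  # sparse codes
before = np.linalg.norm(Y - D @ X)
D, X = ksvd_atom_update(D, X, Y, j=3)
after = np.linalg.norm(Y - D @ X)
```

By the Eckart–Young theorem, the rank-1 SVD fit is optimal for the restricted residual, so the total representation error never increases after an atom update.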
In image-processing applications, since natural images are too large for learning a full dictionary over them, the dictionary is instead learned on a set of natural image patches, each of size √n × √n pixels (with √n an integer), lexicographically stacked into vectors.
2.3 Analysis versus synthesis
As mentioned before, and as also outlined in , some of the most important elements of effective dictionary design are localization, multi-resolution, and adaptivity. Modern dictionaries typically provide localization in both the analysis and synthesis routes. However, the multi-resolution property is usually better supported by analytic structures, whereas adaptivity is mostly found in synthetic methods.
The most important advantage of analytic dictionaries is their easy and fast implementation. On the other hand, the main advantage of trained dictionaries is their ability to provide a much higher degree of specificity to the particular signal properties, allowing them to produce better results in many practical applications such as image compression, feature extraction, content-based image retrieval, and others. However, some drawbacks arise from the very compactness introduced by the synthesis scheme. Since only a few atoms represent each signal, the importance of each individual atom is large. Consequently, any wrong choice of one atom can lead to additional erroneous atoms being selected as compensation, deviating further from the desired reconstruction. This weakness usually appears in the ℓ0-norm-based non-convex optimization problem of Eq. (6). Convex relaxation approaches, from the ℓ0 to the ℓ1 norm, are more stable for the sparse representation, at the expense of computational complexity. In the analysis formulation, however, all atoms take an equal part in describing the signal, minimizing the dependence on each individual one and stabilizing the recovery process.
3. Image compression
Reducing the cost of storing or transmitting image signals with negligible degradation in quality is the main goal of a lossy image compression algorithm. Such an algorithm tries to remove the redundancies among the image data in different ways, such as transferring the image into a transform domain with compressible coefficients. In the transform domain, a few significant coefficients capture a large part of the image information. A typical lossy image compression algorithm encodes these significant transform coefficients to reduce the requirements for image storage or transmission. The analysis and synthesis sparse modellings are two powerful tools for transforming the image into a compressible domain. The JPEG  and JPEG2000  standards are the results of using the analysis sparse representation of the image with analytic dictionaries, namely the discrete cosine transform (DCT) and the discrete wavelet transform (DWT), respectively. Since sparse representation over analytic dictionaries is typically over-simplistic, it fails to represent highly textured images efficiently . Due to this weakness of analysis sparse modelling, an extensive body of literature has recently focused on various applications of synthesis sparse signal modelling via a trained dictionary. In this way, performance can be significantly improved for image compression, by benefiting from the sparse representation of the image over a dictionary specifically adapted to it.
In order to overcome the limitations of the traditional sparse representation approaches over a trained dictionary, several schemes have been proposed in the literature [37, 38, 39]. In , the authors train a set of dictionaries; to improve compression performance, the sparse representation is obtained by choosing an optimal dictionary among them. In , a set of dictionaries is trained in a tree structure. At each level of the tree, a dictionary is learned by generating an image residual, obtained as the difference between the original image and the image recovered using the dictionary trained at the previous level of the tree. Sparse representation is performed by selecting one atom from each dictionary at each tree level, with the total number of atoms determined by the sparsity level. The authors in  introduce the concept of multi-sample sparse representation (MSR) and incorporate it into the dictionary learning process. Each image patch is encoded with a certain sparsity level; to this end, multiple neighbouring image patches are considered during the sparse representation in order to explore different sparsity levels. Based on this concept, an MSR-based image coding approach is proposed in .
Using a trained dictionary, each image is represented over the dictionary. The coefficient vectors can be obtained via different algorithms, such as basis pursuit, matching pursuit techniques, and other schemes . These conventional approaches usually assume a constant sparsity level, i.e. a fixed number of dictionary atoms is used to represent all the image patches, regardless of their differing characteristics. This leads to weak image compression performance. In this section, an adaptive sparse representation approach is presented. It is based on the fact that the visual significance, called visual saliency, of each image patch varies with its location within the image . In other words, the human visual system (HVS) usually focuses on some parts of the image (salient regions), while other parts have a lower level of visual interest. Therefore, designing an adaptive sparse representation scheme that considers HVS characteristics plays an important role in designing an efficient image compression algorithm.
3.1 Dictionary learning-based image codec
Figure 1 presents the block diagram of the dictionary learning-based image coding (DLC) framework . The DLC framework has four main parts: pre-processing, dictionary learning, adaptive sparse representation, and entropy coding. First, at the pre-processing step, the input image is divided into non-overlapping image patches, each of which is vectorized by lexicographic stacking. As in other coding algorithms, the mean values of the image patches (DC components) and the AC components are encoded separately. The AC components are represented with respect to a trained dictionary learned by the K-SVD dictionary learning algorithm . As in other dictionary-based image compression methods, the trained dictionary must be shared between the encoder and the decoder. The OMP algorithm is used for the sparse representation step in an adaptive manner, using the visual saliency information. Incorporating this adaptive sparse representation step into the image coding framework aims to further reduce the reconstruction errors. Finally, the DC elements and the representation coefficient vectors of the image patches are entropy coded. In the following sections, the adaptive sparse representation and entropy coding steps are detailed. We omit the details of the decoder; however, it should be noted that the decoder can retrieve the image by a mirror application of the above steps.
3.1.1 Adaptive synthesis sparse representation
The sparse representation step has a strong impact on how effectively the image patches are encoded and thus on the rate-distortion performance. The graph-based visual saliency (GBVS) model, proposed in , has the capability to extract the saliency map of an image, i.e. the locations within an image that have high visual interest to a human observer. By incorporating this model into the sparse representation step, we build an image coding scheme that compresses the image more efficiently than traditional coders. Please refer to  for the details of the GBVS model. The saliency map of the image is exploited to determine the visual significance of the image patches, in order to allocate a different sparsity level to each patch according to its visual significance to the HVS.
It should be noted that we normalize the saliency map values to the range [0, 1]. The intensity of each pixel of this normalized saliency map therefore stands for the probability that the pixel belongs to a salient region. Some examples of saliency maps are shown in Figure 2 . In the second row, brighter regions represent the salient locations to which a human observer pays more attention, while the darker areas represent the less salient regions.
To obtain the saliency value of each image patch, the saliency map is first partitioned into non-overlapping blocks of the same size as the image patches. Then, the average of the saliency values of the pixels belonging to each block is taken as the saliency value s_i of the corresponding i-th patch. Since the saliency value of each image patch varies with its position within the image, assigning a different sparsity level to each block according to its saliency value leads to an improvement in the rate-distortion performance.
Consider the sparsity level of the i-th image patch as:

k_i = λ s_i,  (8)

where s_i is the saliency value of the i-th patch and λ is a scaling factor. Given the target sparsity level T, we aim to allocate a different sparsity level to each image patch. It should be noted that the average of the sparsity levels of all image patches must be equal (or slightly inferior) to the target sparsity level T. The factor λ can be easily obtained via:

λ = N T / Σ_{i=1}^{N} s_i,  (9)

where {s_i} denotes the set of saliency values and N is the number of patches. As a result, a set of sparsity levels {k_i} is obtained via Eq. (8). Based on these newly obtained sparsity levels, the sparse representation of each patch x_i over the dictionary D is achieved by:

min_{α_i} ||x_i − D α_i||_2²  subject to  ||α_i||_0 ≤ k_i.  (10)
The OMP method in  is used to solve this problem. By assigning a different sparsity level to each block, a more effective sparse representation of the image is obtained.
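The saliency-proportional allocation described above can be sketched in a few lines (the flooring rule and the minimum of one atom per patch are assumptions of this sketch, not details from the chapter):

```python
def allocate_sparsity(saliency, target):
    """Assign each patch a sparsity level proportional to its saliency,
    scaled so the average stays at (or slightly below) the target level.
    int() floors each level; the floor of 1 atom per patch is an
    assumption of this sketch."""
    scale = target * len(saliency) / sum(saliency)   # the scaling factor
    return [max(1, int(scale * s)) for s in saliency]

saliency = [0.9, 0.7, 0.2, 0.1, 0.6]   # normalized per-patch saliency values
levels = allocate_sparsity(saliency, target=4)
# Salient patches receive many atoms; flat patches receive few.
```

With these toy values, the salient first patch gets 7 atoms while the flattest patch gets 1, and the mean level (3.6) stays below the target of 4.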
3.1.2 Quantization step and entropy coding
A differential pulse-code modulation (DPCM) scheme is used to quantize the DC elements. Let r_i denote the residual between the DC values of two neighbouring patches, r_i = dc_i − dc_{i−1} (here we set dc_0 as 0). Instead of encoding the DC values directly, the residuals between subsequent DC elements are quantized and encoded. The quantization is achieved via r̂_i = round(r_i / q), where q is a constant value. A dead-zone quantizer is used for quantization of the residuals .
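A toy version of this DC pipeline might look as follows (the open-loop prediction from the raw previous DC value and the zero bin of width 2q are simplifying assumptions; a closed-loop DPCM would predict from reconstructed values to avoid drift):

```python
def deadzone_quantize(r, q):
    """Uniform scalar quantizer with a widened zero bin (dead zone):
    residuals with magnitude below q map to symbol 0."""
    if abs(r) < q:
        return 0
    sign = 1 if r > 0 else -1
    return sign * int(abs(r) / q)

def dpcm_dc_encode(dc_values, q):
    """DPCM sketch for DC components: quantize the difference between
    each DC value and its predecessor (the first predecessor is 0)."""
    prev, symbols = 0, []
    for dc in dc_values:
        symbols.append(deadzone_quantize(dc - prev, q))
        prev = dc
    return symbols

symbols = dpcm_dc_encode([100, 104, 103, 90], q=2)
```

Small fluctuations (the 104 → 103 step) fall into the dead zone and cost nothing to encode, which is exactly why the dead zone helps at low bit-rates.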
In order to further remove redundancy, a Huffman coder is used to entropy-code the quantized DC values. During the entropy coding step, pre-defined code-word tables, built offline, are used. The same procedure is employed for quantizing and encoding the nonzero representation coefficients.
A large part of the bit-stream generated by the above scheme is occupied by encoding the indices of the representation coefficients, since the non-zero coefficients have a random structure. To encode this random pattern efficiently, a quad-tree splitting algorithm is employed. First, each coefficient vector is divided into two equal parts. Next, a binary evaluation is performed on each part: if the part contains at least one nonzero coefficient, the encoder outputs 1; otherwise, it outputs 0. Each part containing at least one nonzero coefficient is then divided into two further parts and the binary evaluation is repeated. This process continues until the maximum depth of partitioning is reached. In comparison with other methods, such as fixed-length coding , this splitting algorithm encodes the indices of the nonzero coefficients more efficiently.
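The recursive significance coding can be sketched as follows (binary splits on a toy support mask; the exact bit ordering and depth limit follow our own conventions here, not necessarily those of the cited codec):

```python
def encode_support(mask):
    """Emit significance bits of a recursive binary split of the
    nonzero-coefficient mask: 1 if a part holds any nonzero, else 0.
    Parts that reach size 1 need no further splitting; their own
    significance bit already locates the coefficient."""
    bits = []
    def visit(lo, hi):
        if hi - lo <= 1:
            return
        mid = (lo + hi) // 2
        for a, b in ((lo, mid), (mid, hi)):
            has = any(mask[a:b])
            bits.append(1 if has else 0)
            if has:
                visit(a, b)
    visit(0, len(mask))
    return bits

bits = encode_support([0, 0, 1, 0, 0, 0, 0, 1])
```

All-zero parts are dismissed with a single 0 bit, so sparse, clustered supports cost far fewer bits than a fixed-length index list.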
3.2 Down-sampling-based codec
In a basic image compression approach of this kind, a down-sampling step is employed at the encoder side as an additional lossy operation; at the decoder side, an up-scaling algorithm is then employed to reconstruct the high-resolution (HR) image. In this technique, the high-frequency details within the image are removed by the down-sampling operator before the lossy quantization step. Thus, the low-frequency information within the image, which contains most of the image energy, is encoded at a higher bit-rate. This bit allocation scheme leads to better rate-distortion performance at low bit-rates. However, although the down-sampling operator improves the rate-distortion performance at low bit-rates, it eliminates the high-frequency (HF) details, which must be reproduced in order to recover a high-quality image at higher bit-rates.
In the DLC encoder presented in the previous section, quantization and sparse representation are the main compression tools. In this section, the down-sampling operator is incorporated into the DLC codec as another lossy operator, to further remove the redundancies within an image. Instead of encoding the whole image, its low-resolution (LR) version is encoded. At the decoder side, an up-scaling algorithm is carefully designed to recover the high-frequency details. As discussed before, this strategy enhances coding efficiency at low bit-rates. However, designing an efficient up-scaling algorithm, by which the high-frequency information can be correctly recovered, is a crucial step at the decoder side; a poor up-scaling algorithm leads to weak rate-distortion performance at high bit-rates. The up-scaling scheme presented in this section recovers the HR image by encoding and sending the residual image, i.e. the difference between the original image and the up-scaled one, as side information. The encoder generates the residual image and encodes it via a sparse representation over a trained dictionary. The decoder recovers the final image by combining the LR decoded image and the reconstructed residual. The two dictionaries used for the sparse representation of the LR and residual images are trained by a bi-level dictionary learning algorithm. Furthermore, an image analyser is designed and incorporated into the encoder; its goal is to provide an adaptive sparse representation that assigns a higher bit-rate to the salient parts of the image. Experimental results illustrate that this down-sampling-based image compression scheme based on sparse representation (DCSR) achieves better rate-distortion performance than the conventional codecs, such as JPEG and JPEG2000, and the DLC codec described in Section 3.1.
The block diagram of the DCSR encoder is shown in Figure 3 . This framework mainly consists of five parts: down-sampling, up-scaling, sparse representation, quantization, and entropy coding . An image analyser is also introduced into the core of the codec. At the pre-processing step, the down-sampling operator uses a blurring convolution kernel and a simple decimator by three to produce the LR image, with only 1/9 of the pixels of the original image. At the up-scaling step, the down-sampled image is restored back to its original resolution. The goal of the up-scaling step is an accurate algorithm that improves the rate-distortion performance at higher bit-rates. A joint sparse representation method is employed to solve this ill-posed, complex inverse problem; it aims to restore high-frequency details sufficient for reconstruction of the HR image.
Assume the LR image y_l is obtained via y_l = S y, where S is the down-sampling operator and y denotes the HR image. After image recovery via the up-scaling algorithm, the relationship between the HR and reconstructed images is described as y = ỹ + r, where ỹ is the up-scaled image and r is the residual image. The LR image y_l and the residual image r are separately partitioned into non-overlapping image patches of the same size. Let x_i^l and x_j^r denote the vectorized image blocks in y_l and r, respectively, where N_l and N_r are the numbers of patches in y_l and r. Moreover, note that N_r = 9 N_l, since the LR image contains only 1/9 of the pixels of the original.
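These relationships can be made concrete with trivial stand-ins for the blur kernel and the learned up-scaler (everything in this sketch is illustrative):

```python
import numpy as np

# Sketch of the DCSR signal relationships: the encoder keeps a 1/9-size
# LR image, and the residual carries whatever the up-scaler misses.
y = np.arange(81, dtype=float).reshape(9, 9)   # stand-in HR image
y_lr = y[::3, ::3]                             # decimation by 3: 1/9 of the pixels
y_up = np.kron(y_lr, np.ones((3, 3)))          # naive up-scaling stand-in
r = y - y_up                                   # residual image (side information)
y_hat = y_up + r                               # decoder: up-scaled + residual
```

With a lossless residual the reconstruction is exact by construction; in the actual codec, quantizing the LR image and the residual is where the rate-distortion trade-off enters.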
At the sparse representation step, the LR image patches are sparsely represented over an over-complete dictionary D_l using the OMP algorithm. An image analyser is carefully designed to exploit the visual saliency information at this step; its goal is to further reduce the reconstruction errors in the areas of the image that have higher visual interest to the HVS. The same procedure is also applied for the sparse representation of the patches of the residual image r, using a different over-complete dictionary D_r. The two dictionaries D_l and D_r are trained offline using a bi-level dictionary learning algorithm, and it is assumed that they are shared between the encoder and the decoder. Finally, the obtained sparse coefficient vectors of the LR and residual patches are quantized and entropy coded in order to obtain the bit-stream.
At the decoder side, the image is reconstructed by a mirror application of the above steps. First, the reconstructed LR image is up-scaled by the up-scaling operator to restore an image of the size of the original. Then, the final restored image is obtained via ŷ = ỹ + r̂, where r̂ denotes the decoded residual image.
3.2.1 Image analyser for adaptive sparse representation
The main goal of the image analyser is to enhance the performance of the DCSR encoder. It reduces the reconstruction error in those parts of the image that have higher visual interest to a human observer. To this end, the image analyser extracts the salient regions within the image according to their visual significance to the HVS. It then allocates higher rates to the salient regions by assigning different sparsity levels to the image patches located within the salient parts, in both the LR and residual images. As mentioned before, the GBVS model  is used to obtain the salient regions within an image. Some examples of saliency maps are shown in Figure 2 .
For finding the sparsity levels of the image patches in the LR image, the obtained saliency map is resized to the size of the LR image. Then, it is divided into blocks, and the saliency value s_i of the i-th block is obtained by taking the average of the saliency values of the pixels within that block. The goal is to assign a different sparsity level to each block. Note that the average of the sparsity levels assigned to all blocks within the image should be equal (or slightly inferior) to the target sparsity level T. Given the target sparsity level T and the set of saliency values {s_i}, the sparsity level of the i-th block is obtained by:

k_i = s_i N_l T / Σ_{j=1}^{N_l} s_j,  (11)

where N_l is the number of blocks in the LR image. Based on this strategy, different sparsity levels are allocated for the sparse representation of the image patches in the LR image. Given the obtained sparsity levels {k_i}, the sparse representation of each patch x_i^l in the LR image over the dictionary D_l is achieved by :

min_{α_i^l} ||x_i^l − D_l α_i^l||_2²  subject to  ||α_i^l||_0 ≤ k_i.  (12)

The same procedure is applied to obtain the sparsity levels for the image patches in the residual image. The set of saliency values is obtained using the saliency map of the original image, as described before. The sparse representation of each patch x_j^r in the residual image with respect to the dictionary D_r is achieved by:

min_{α_j^r} ||x_j^r − D_r α_j^r||_2²  subject to  ||α_j^r||_0 ≤ k_j^r.  (13)
The OMP method is used to solve these problems. At low bit-rates, DCSR encodes only the LR image. At higher bit-rates, DCSR also encodes the residual image and sends it as side information in order to improve coding efficiency. In this case, the patches within the residual and LR images are represented over the corresponding dictionaries with the same target sparsity level T.
3.2.2 Joint dictionary-mapping learning for up-scaling
The goal of the up-scaling algorithm is to recover the high-frequency information eliminated during the down-sampling operation. One well-known up-scaling approach is to learn a function which maps LR image patches to HR image patches. This mapping function can be obtained using two training databases of LR and HR image patches. Instead of finding the direct mapping function between the LR and HR patches, the authors in  show that the mapping function can be learned efficiently in the transform domain. In this approach, the LR and HR patches are first transferred into the representation space over trained dictionaries, and then the relationship between the representation coefficients is learned.
Let two sets Y_l and Y_h consist of training LR and corresponding HR image patches, respectively. Suppose D_l and D_h denote the trained dictionaries for the sparse representation of the LR and HR patches. These two dictionaries are trained jointly such that the sparse representations of all training LR and HR patches are mapped to each other via a linear function M. The dictionaries and the mapping matrix M are obtained by solving the following minimization problem:

min_{D_l, D_h, M, A_l, A_h} ||Y_l − D_l A_l||_F² + ||Y_h − D_h A_h||_F² + γ ||A_h − M A_l||_F² + λ_M ||M||_F²  subject to  ||α_i^l||_0 ≤ T_0 and ||α_i^h||_0 ≤ T_0 for all i,  (14)

where A_l and A_h represent the corresponding sparse representation matrices, γ and λ_M denote regularization parameters, and T_0 is the sparsity level. As proposed in , the minimization problem Eq. (14) is solved iteratively with respect to one variable at a time, while all the other variables are held fixed.
After the training process, up-scaling is performed using the dictionary pair and the mapping matrix. First, the LR image is divided into overlapping blocks. Then, the sparse representation vector of each LR patch is obtained with respect to the trained LR dictionary. While one sparsity-inducing norm is used in the minimization problem Eq. (14) to train the dictionaries, the up-scaling algorithm switches to a different norm in order to improve the rate-distortion performance. The sparse representation of an LR patch is thus obtained by solving the following minimization problem:
where the scalar weight denotes the regularization parameter. In the next step, the HR representation is derived by applying the mapping matrix, and the HR patch is restored by synthesizing it over the HR dictionary. In the last step, the up-scaled image is obtained by placing the recovered patches back onto the image grid and averaging over the overlapped pixels. For more details about solving the above optimization problems, please refer to .
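The patch-wise up-scaling step can be sketched as follows, assuming an $\ell_1$-regularized coding step solved by iterative soft-thresholding (ISTA); the function names, the solver choice and the default parameters are illustrative, not the chapter's:

```python
import numpy as np

def ista(D, y, lam, n_iter=200):
    """Solve min_a 0.5*||y - D a||^2 + lam*||a||_1 by iterative soft-thresholding."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - y)           # gradient of the quadratic term
        a = a - grad / L
        a = np.sign(a) * np.maximum(np.abs(a) - lam / L, 0.0)  # soft threshold
    return a

def upscale_patch(y_l, D_l, D_h, W, lam=0.01):
    """Map an LR patch to an HR patch: sparse-code over D_l, map the
    coefficients through W, then synthesize over D_h."""
    a_l = ista(D_l, y_l, lam)
    a_h = W @ a_l      # linear coefficient mapping learned jointly with the dictionaries
    return D_h @ a_h
```

The full image is recovered by running `upscale_patch` over every overlapping block and averaging the overlapped pixels.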
3.2.3 Bi-level dictionary learning algorithm for image compression
The up-scaling algorithm described in the previous section leads to a bi-level dictionary learning algorithm, which is used for image compression at the DCSR encoder. The trained dictionary and the up-scaling operator estimate a large part of the signal energy; the remaining energy is captured by representing the residual over a second trained dictionary. This residual is encoded as side information to improve the quality of the restored image at higher bit-rates. The bi-level dictionary learning algorithm consists of the following steps. First, the training images are down-sampled by a factor of 3. Second, these LR images are used to train a dictionary with the K-SVD algorithm . Third, the LR training images are represented with respect to this trained dictionary. Fourth, the reconstructed LR images are up-scaled by the algorithm described in the previous section. Fifth, the up-scaled images are used to create the training residual images, which are subsequently used to train the second dictionary with the K-SVD algorithm.
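The five steps above can be sketched as a training loop. The four callables below stand in for the chapter's components (factor-3 down-sampling, K-SVD training, sparse coding and the joint dictionary-mapping up-scaler), which are not re-implemented here:

```python
import numpy as np

def bilevel_train(images, downsample, train_dictionary, sparse_code, upscale):
    """Bi-level dictionary learning sketch: returns the LR dictionary D1 and
    the residual dictionary D2, given stand-in implementations of the four
    building blocks."""
    lr = [downsample(x) for x in images]              # step 1: down-sample by 3
    D1 = train_dictionary(lr)                         # step 2: K-SVD on LR data
    lr_rec = [sparse_code(x, D1) for x in lr]         # step 3: represent over D1
    up = [upscale(x) for x in lr_rec]                 # step 4: up-scale the reconstructions
    residuals = [x - u for x, u in zip(images, up)]   # step 5a: residual training images
    D2 = train_dictionary(residuals)                  # step 5b: K-SVD on the residuals
    return D1, D2
```

Any K-SVD and up-scaling implementations with these interfaces can be plugged in.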
3.2.4 Quantization and entropy coding
Differential pulse-code modulation (DPCM) is applied to the non-zero elements of the sparse representation vectors, and a dead-zone quantizer is used for quantization. Classical Huffman coding is then employed to entropy-code the quantized sparse coefficients. The code-word tables needed by the Huffman encoder are constructed offline and stored at both the encoder and decoder sides.
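A minimal sketch of the quantization stage is given below; the exact bin layout, dead-zone width and scan order of the chapter's codec are not specified here, so these are illustrative choices:

```python
import numpy as np

def deadzone_quantize(x, step):
    """Dead-zone scalar quantizer: magnitudes below `step` fall into the zero bin."""
    return (np.sign(x) * np.floor(np.abs(x) / step)).astype(int)

def deadzone_dequantize(q, step):
    """Mid-point reconstruction of each non-zero bin; the zero bin maps to 0."""
    return np.sign(q) * (np.abs(q) + 0.5) * step * (q != 0)

def dpcm_encode(values):
    """DPCM: transmit the first value, then successive differences."""
    values = np.asarray(values)
    return np.concatenate(([values[0]], np.diff(values))) if values.size else values

def dpcm_decode(diffs):
    """Invert DPCM by a running sum."""
    return np.cumsum(diffs)
```

In the codec, the DPCM stream of quantized non-zero coefficients is what the Huffman encoder consumes.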
The locations of the non-zero coefficients in the sparse representation vectors follow a random pattern, so encoding their indices occupies a large part of the output bit-stream. To encode these indices efficiently, the quad-tree splitting algorithm explained in Section 3.1.2 is employed.
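The idea behind quad-tree splitting can be sketched on a binary significance mask (1 where a coefficient is non-zero). The encoding below is a generic recursive scheme, not necessarily the exact variant of Section 3.1.2: an all-zero block costs a single bit, otherwise one bit plus the codes of its four quadrants.

```python
import numpy as np

def quadtree_encode(mask):
    """Encode a square binary significance mask (side = power of two) by
    recursive quad splitting: emit 0 for an all-zero block, otherwise 1
    followed by the four sub-codes; single pixels are leaves."""
    bits = []
    def split(r, c, size):
        block = mask[r:r + size, c:c + size]
        if not block.any():
            bits.append(0)                 # all-insignificant block: one bit
        elif size == 1:
            bits.append(1)                 # significant single pixel
        else:
            bits.append(1)
            h = size // 2
            for dr, dc in ((0, 0), (0, h), (h, 0), (h, h)):
                split(r + dr, c + dc, h)   # recurse into the four quadrants
    split(0, 0, mask.shape[0])
    return bits

def quadtree_decode(bits, n):
    """Rebuild the mask from the bit stream produced by quadtree_encode."""
    mask = np.zeros((n, n), dtype=int)
    it = iter(bits)
    def split(r, c, size):
        if next(it) == 0:
            return                         # all-zero block, nothing to set
        if size == 1:
            mask[r, c] = 1
            return
        h = size // 2
        for dr, dc in ((0, 0), (0, h), (h, 0), (h, h)):
            split(r + dr, c + dc, h)
    split(0, 0, n)
    return mask
```

For sparse masks, large empty regions collapse into single bits, which is what makes the scheme efficient for random non-zero patterns.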
3.3 Experimental results
In this section, the rate-distortion performance of the DCSR codec is examined through a suite of experiments on a set of 8-bit grey-scale standard images. All evaluated images are resized so that the LR image contains only 1/9 of the total pixels of the original image. The well-known peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) index are chosen as quality measures. The rate-distortion performance is compared with the JPEG and JPEG2000 standards, as well as with the DLC codec presented in Section 3.1.
A fixed block size is used for encoding the LR and residual images, and two dictionaries are trained using the bi-level dictionary learning algorithm presented in Section 3.2.3. The images of the CVG-Granada dataset1 are used for training: all training images are partitioned into patches, from which 12,000 are randomly selected. One hundred epochs, each processing the 12,000 training vectors, are run during the training process. For up-scaling, the dictionary pair and the mapping matrix are trained using the algorithm introduced in Section 3.2.2. The three regularization parameters are empirically set to 0.1, 0.1, and 0.01, respectively.
Next, the rate-distortion graphs for the test images are presented in Figures 4 and 5, together with the results for the baseline algorithms, i.e. JPEG,2 JPEG2000,3 and the DLC algorithm . As these figures show, the DLC and DCSR codecs provide better performance, in terms of both PSNR and SSIM, than the JPEG and JPEG2000 standards. Interestingly, as Figure 4 demonstrates, the DCSR algorithm enhances image quality over the JPEG2000 codec; the enhancement depends on the statistics of the image, so different images show different gains. Note that SSIM reflects the perceived quality of the compressed image, and the SSIM curves for DCSR and JPEG2000 are very close to each other. For better visualization, the rate-distortion curve for JPEG2000 has been removed from Figure 5. Because the trained dictionary captures image structure better and represents contours more accurately, the DCSR codec outperforms codecs built on analytic dictionaries such as the wavelet transform basis. Moreover, an improvement of about 0.2 dB with respect to the DLC algorithm is observed, which is marginally larger for simpler images.
Several compressed images are shown in Figure 6 for visual comparison. All images are coded at the same bit-rate (0.15 bpp) using the JPEG and DCSR algorithms. As can be seen, the JPEG standard fails to reconstruct images compressed at very low bit-rates, whereas the DCSR algorithm preserves more image detail. It is more effective in reconstructing both smooth areas and complex regions, including texture and edges, leading to visually much more pleasing results.
In summary, down-sampling-based coding discards the high-frequency details of the image before the lossy quantization is performed; hence, a larger number of bits is assigned to the low-frequency information, which carries most of the energy in the image. This bit-assignment strategy yields higher performance for images coded at low bit-rates. At higher bit-rates, coding efficiency is further improved by encoding the residual image as side information.
A brief summary of the signal modelling methodology was given in the first part of this chapter. We continued with the explicit and straightforward formulation of sparse representation, which is well suited to compression tasks, and focused on synthesis-based signal modelling, since image compression based on analysis-based sparse signal modelling is not yet mature. An adaptive sparse representation over a trained over-complete dictionary was presented for compressing images: given the saliency map of the image to be encoded, each image patch is represented as a linear combination of atoms selected from an over-complete trained dictionary according to its sparsity level. Finally, a down-sampling operation was incorporated into the codec in order to further improve compression performance. The experimental results demonstrated that the presented image compression frameworks outperform image coding standards, such as JPEG and JPEG2000, which use analytic dictionaries.
Olshausen BA, Field DJ. Sparse coding with an overcomplete basis set: A strategy employed by V1? Vision Research. 1997; 37(23):3311-3325
Mallat S. A Wavelet Tour of Signal Processing. 3rd ed. Academic Press; 2008
Pennec EL, Mallat S. Bandelet image approximation and compression. SIAM Multiscale Modeling & Simulation. 2005; 4(3):992-1039
Villegas OOV, Elias RP, Villela PR, Sanchez VGC, Salazar AM. Edging out the competition: Lossy image coding with wavelets and contourlets. IEEE Potentials. 2008; 27(2):39-44
Wang S, Shu Z, Zhang L, Liu G, Gan L. Iterative image coding with overcomplete curvelet transform. In: Proceedings of IEEE Congress on Image and Signal Processing (CISP), vol. 1; May 2008; Hainan, China. pp. 666-670
Beferull-Lozano B, Ortega A. Coding techniques for oversampled steerable transforms. In: Proceedings of Asilomar Conference on Signals, Systems, and Computers, vol. 2; October 1999; Pacific Grove, CA. pp. 1198-1202
Akbari A, Mandache D, Trocan M, Granado B. Adaptive saliency-based compressive sensing image reconstruction. In: Proceedings of IEEE International Conference on Multimedia Expo Workshops (ICMEW); July 2016; Seattle, WA. pp. 1-6
Akbari A, Trocan M. Downsampling based image coding using dual dictionary learning and sparse representations. In: Proceedings of IEEE International Workshop on Multimedia Signal Processing (MMSP); August 2018. pp. 1-5
Akbari A, Trocan M. Sparse recovery-based error concealment for multiview images. In: Proceedings of IEEE International Workshop on Computational Intelligence for Multimedia Understanding (IWCIM); October 2015; Prague, Czech Republic. pp. 1-5
Akbari A, Trocan M, Granado B. Synthesis sparse modeling: Application to image compression and image error concealment. In: Proceedings of Signal Processing with Adaptive Sparse Structured Representations Workshop (SPARS); June 2017; Lisbon, Portugal
Mandache D, Akbari A, Trocan M, Granado B. Image compressed sensing recovery using intra-block prediction. In: Proceedings of IEEE International Conference on Telecommunications Forum (TELFOR); November 2015; Belgrade, Serbia. pp. 748-751
Akbari A, Trocan M, Granado B. Image compression using adaptive sparse representations over trained dictionaries. In: Proceedings of IEEE International Workshop on Multimedia Signal Processing (MMSP); September 2016; Montreal, Canada. pp. 1-6
Akbari A, Trocan M, Granado B. Image error concealment using sparse representations over a trained dictionary. In: Proceedings of IEEE Picture Coding Symposium (PCS); December 2016; Nuremberg, Germany. pp. 1-5
Akbari A, Trocan M, Granado B. Residual based compressed sensing recovery using sparse representations over a trained dictionary. In: Proceedings of International ITG Conference on Systems, Communications and Coding (SCC); February 2017; Hamburg, Germany. pp. 1-6
Akbari A, Trocan M, Granado B. Sparse recovery-based error concealment. IEEE Transactions on Multimedia. 2017; 19(6):1339-1350
Akbari A, Trocan M, Granado B. Joint-domain dictionary learning-based error concealment using common space mapping. In: Proceedings of IEEE International Conference on Digital Signal Processing (DSP); August 2017. pp. 1-5
Akbari A, Trocan M, Granado B. Image error concealment based on joint sparse representation and non-local similarity. In: Proceedings of IEEE Global Conference on Signal and Information Processing (GlobalSIP); November 2017
Akbari A, Trocan M, Sanei S, Granado B. Joint sparse learning with nonlocal and local image priors for image error concealment. IEEE Transactions on Circuits and Systems for Video Technology. 2019
Akbari A, Trocan M, Granado B. Image compressed sensed recovery using saliency-based adaptive sensing and residual reconstruction. In: Compressed Sensing: Methods, Theory and Applications. New York: Nova Science Publishers; 2018
Akbari A, Trevisi M, Trocan M, Carmona-Galán R. Compressive imaging using RIP-compliant CMOS imager architecture and Landweber reconstruction. IEEE Transactions on Circuits and Systems for Video Technology. 2019
Akbari A, Trocan M. Robust image reconstruction for block-based compressed sensing using a binary measurement matrix. In: 2018 25th IEEE International Conference on Image Processing (ICIP); October 2018. pp. 1832-1836
Akbari A, Trevisi M, Trocan M. Adaptive compressed sensing image reconstruction using binary measurement matrices. In: 2018 25th IEEE International Conference on Electronics, Circuits and Systems (ICECS); December 2018. pp. 659-660
Aharon M, Elad M, Bruckstein A. K-SVD: An algorithm for designing overcomplete dictionaries for sparse representation. IEEE Transactions on Signal Processing. 2006; 54(11):4311-4322
Ahmed N, Natarajan T, Rao KR. Discrete cosine transform. IEEE Transactions on Computers. 1974; C-23(1):90-93
Wallace GK. The JPEG still picture compression standard. IEEE Transactions on Consumer Electronics. 1992; 38(1):xviii-xxxiv
Skodras A, Christopoulos C, Ebrahimi T. The JPEG 2000 still image compression standard. IEEE Signal Processing Magazine. 2001; 18(5):36-58
Rubinstein R, Bruckstein AM, Elad M. Dictionaries for sparse representation modeling. Proceedings of the IEEE. 2010; 98(6):1045-1057
Pati YC, Rezaiifar R, Krishnaprasad PS. Orthogonal matching pursuit: Recursive function approximation with applications to wavelet decomposition. In: Proceedings of Asilomar Conference on Signals, Systems and Computers; November 1993; Pacific Grove, CA. pp. 40-44
Candès E, Demanet L, Donoho D, Ying L. Fast discrete curvelet transforms. SIAM Multiscale Modeling & Simulation. 2006; 5(3):861-899
Do MN, Vetterli M. Contourlets: A new directional multiresolution image representation. In: Proceedings of Asilomar Conference on Signals, Systems and Computers, vol. 1; November 2002; Pacific Grove, CA. pp. 497-501
Pennec EL, Mallat S. Sparse geometric image representations with bandelets. IEEE Transactions on Image Processing. 2005; 14(4):423-438
Engan K, Skretting K, Husoy JH. Family of iterative LS-based dictionary learning algorithms, ILS-DLA, for sparse signal representation. Digital Signal Processing. 2007; 17(1):32-49
Vidal R, Ma Y, Sastry S. Generalized principal component analysis (GPCA). IEEE Transactions on Pattern Analysis and Machine Intelligence. 2005; 27(12):1945-1959
Sulam J, Ophir B, Zibulevsky M, Elad M. Trainlets: Dictionary learning in high dimensions. IEEE Transactions on Signal Processing. 2016; 64(12):3180-3193
Lewicki MS, Sejnowski TJ. Learning overcomplete representations. Neural Computation. 2000; 12(2):337-365
Kreutz-Delgado K, Murray JF, Rao BD, Engan K, Lee TW, Sejnowski TJ. Dictionary learning algorithms for sparse representation. Neural Computation. 2003; 15(2):349-396
Gurumoorthy KS, Rajwade A, Banerjee A, Rangarajan A. A method for compact image representation using sparse matrix and tensor projections onto exemplar orthonormal bases. IEEE Transactions on Image Processing. 2010; 19(2):322-334
Mazaheri JA, Guillemot C, Labit C. Learning a tree-structured dictionary for efficient image representation with adaptive sparse coding. In: Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP); May 2013; Vancouver, Canada. pp. 1320-1324
Sun Y, Tao X, Li Y, Lu J. Dictionary learning for image coding based on multisample sparse representation. IEEE Transactions on Circuits and Systems for Video Technology. 2014; 24(11):2004-2010
Peyre G. A review of adaptive image representations. IEEE Journal of Selected Topics in Signal Processing. 2011; 5(5):896-911
Borji A, Cheng MM, Jiang H, Li J. Salient object detection: A benchmark. IEEE Transactions on Image Processing. 2015; 24(12):5706-5722
Harel J, Koch C, Perona P. Graph-based visual saliency. In: Proceedings of Neural Information Processing Systems (NIPS); 2006. pp. 545-552
Skretting K, Engan K. Image compression using learned dictionaries by RLS-DLA and compared with K-SVD. In: Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP); May 2011; Prague, Czech Republic. pp. 1517-1520
Wang S, Zhang L, Liang Y, Pan Q. Semi-coupled dictionary learning with applications to image super-resolution and photo-sketch synthesis. In: Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR); June 2012; Providence, RI. pp. 2216-2223
Huang DA, Wang YCF. Coupled dictionary and feature space learning with applications to cross-domain image synthesis and recognition. In: Proceedings of IEEE Conference on Computer Vision (ICCV); December 2013; Sydney, Australia. pp. 2496-2503