## 1. Introduction

Information hiding concerns the process of embedding information or data elements into music, video, and images [1]. The concept of information hiding was proposed to solve problems related to intellectual property protection, which depends directly on the background work of a team for its realization and whose distribution must be controlled, and to the protection of highly confidential personal information such as agreements, contracts, or other information that must be secured. The information is sent inside a file that appears, at first sight, totally innocent; such a file is known as a carrier or host file (also called a host image or cover image). These files carry the embedded information and can be any digital file, such as an audio, video, or image file. For intellectual property protection and for the secure delivery of information over unprotected media, various techniques have been implemented, giving rise to the sciences of encryption, watermarking, and steganography. In the first technique, the message makes no sense unless acquired by the intended recipient, to whom the message is addressed and who alone can interpret each of the written symbols within it. Watermarking is related to protecting the authenticity of media messages that are exposed and susceptible to cloning or replication; with watermarks, their authenticity is guaranteed. Finally, steganography is the technique used to hide information in a seemingly innocent means to be sent over an unsecured medium or channel. A common example is the wireless medium, which is fully exposed, so anyone with enough malice can access the information without any problem. For the goals of steganography to be met, there are three considerations: (i) high embedding capacity, (ii) high stego-image quality, and (iii) full recovery of the inserted information.
An image with inserted information is called a stego-image; the information must be inserted in areas that are statistically and perceptually hard to locate on demand, even with stego-tools such as analyzers. The stego-image should be, in theory and in practice, identical to the host image. The embedded information can only be removed by a receiver that holds the primary key; otherwise, the information cannot be extracted. The stego-image is then sent to the intended recipient. The information contained in the host image is just a distraction for the receiver, so its full recovery is of little interest, but the host image must have a minimum quality, because any misplaced edge, contour, color, or pixel can raise suspicion and make the hidden information susceptible to extraction without authorization from the transmitter. Most importantly, the hidden information must be fully recoverable. In today's advanced and modern world, steganography is vital as a support tool for copyright protection, whose authentication processes allow the distribution and legal use of different materials. A steganographic technique is usually evaluated in terms of visual quality and embedding capacity; in other words, an ideal steganographic scheme should have a large embedding capacity and excellent stego-object visual quality.

One of the main objectives of steganography is not to sacrifice the image quality of the carrier into which the data are inserted, while likewise keeping the retrieved data traceable, because any visual disturbance may give reason to inspect the content through histograms or specialized software.

The more reasonable way to deal with this trade-off situation is probably to strike a balance between the two [2, 3, 4].

The techniques implemented in steganography have evolved as the needs related to security level and insertion capacity have increased, owing to the growing digital distribution of files over public networks. Several techniques have been proposed for steganography, including spatial-domain techniques, where the most representative algorithm is the modification of the least significant bit (LSB) of the pixel; from this algorithm, optimizations such as the LSBO have emerged, among others [5].
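As an illustration of the spatial-domain approach, the sketch below embeds a bit stream into the least significant bit of each pixel. The function names and the NumPy-based implementation are our own illustrative choices; this is the plain LSB method, not the optimized LSBO variant cited above.

```python
import numpy as np

def lsb_embed(cover: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Embed a flat bit array into the least significant bit of each pixel."""
    stego = cover.flatten().copy()
    if bits.size > stego.size:
        raise ValueError("payload larger than cover capacity")
    # Clear the LSB (& 0xFE) and write the payload bit in its place.
    stego[:bits.size] = (stego[:bits.size] & 0xFE) | bits
    return stego.reshape(cover.shape)

def lsb_extract(stego: np.ndarray, n_bits: int) -> np.ndarray:
    """Recover the first n_bits embedded bits."""
    return stego.flatten()[:n_bits] & 1

cover = np.array([[200, 201], [202, 203]], dtype=np.uint8)
payload = np.array([1, 0, 1, 1], dtype=np.uint8)
stego = lsb_embed(cover, payload)
recovered = lsb_extract(stego, 4)
```

Because only the last bit of each pixel changes, no pixel value moves by more than 1, which is why LSB embedding is visually imperceptible yet statistically detectable.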

Subsequently, frequency-domain techniques for image processing have also emerged, such as the discrete Fourier transform (DFT), the discrete cosine transform (DCT), and the discrete wavelet transform (DWT).

More recently, adaptive methods combine the qualities of the spatial domain and the frequency domain. These latest adjustments have yielded good results; however, few authors consider all three criteria for steganographic algorithms. In adaptive methods, statistical variations of the image are considered through the variance, standard deviation, covariance, etc.

These statistical variations detected in images, in conjunction with image filtering using DWT techniques, are useful and important for implementing noise detectors within the image. These noisy areas are imperceptible to the human eye, so the eye does not see that information outside the original image has been inserted. The human eye is less sensitive to certain imperfections contained in processed images; this characteristic is due to the textures contained in the images, which appear as changes in intensity. These abrupt changes in intensity are contained in the high-frequency band, which is obtained from processing the image with the DWT [6].

In applications where the frequency domain is involved, depending on the nature of the image used as host, the image can be altered significantly when the digital filters applied act as noise detectors. Judging by whether human vision sensitivity is considered in the design of the embedding algorithm, we can categorize the schemes into three types: (1) high embedding capacity schemes with acceptable image quality [2-4], (2) high image quality schemes with a moderate embedding capacity [2-4], and (3) high embedding efficiency schemes with a slight distortion [7,8]. This chapter explains how to reduce the effects of the filters on images using the scale factor proposed in this research. This scale factor works in the wavelet domain. By applying the scaling factor, the energy generated by the host image is preserved to approximate the original image, eliminating any visual disturbance. This chapter is divided into the following sections: Section 2. Materials and methods, Section 3. Theory and calculations, Section 4. Discussion and Section 5. Conclusions.

## 2. Materials and methods

To analyze structures of different sizes within images, the shifting window of the Fourier transform (Gabor transform [9]) is not suitable because it is subject to a fixed window size. Moreover, the calculation is complicated, because the DFT produces complex numbers, unlike the DWT [9], which operates only with real numbers. For the analysis of images, it is necessary to use time-frequency windows of different durations. Instead of choosing a fixed window (Gabor transform), with an analysis window g(t-u) of constant size, a resizable window is chosen, that is, a wavelet: a function Ψ ∈ L²(ℝ) with an average equal to zero:

The DWT can decompose an image into different resolutions, each of which progressively decreases in size. This presents a certain analogy with the human eye, which extracts, at each level, the information it finds interesting. For example, consider a brick wall. If we observe it from a considerably large distance, we see a global structure. As we approach the wall, we can observe the successive characteristic details: the divisions between the bricks and the structure of each brick that define and detail the whole structure, while the global view loses resolution. Similarly, the DWT extracts information between successive resolutions. Assume the function satisfies ||Ψ|| = 1 and is centered on a neighborhood of t = 0. If the values of u and s in the function change, a displacement of the time-frequency sampling window is obtained:

The DWT can be written as a convolution product:

where

The DWT can find localized structures in images through a zooming procedure that progressively reduces the scale parameter. Generally, singularities and irregular structures often contain essential information about the image and refer to places whose values could be substituted but which, at first glance, are not identifiable. In an image, intensity discontinuities indicate the presence of edges. It can be proven that the local regularity of a signal is characterized by the decay of the amplitude of the DWT across scales. Thus, singularities and edges are detected by following the local maximum values of the DWT to finer detail scales. This image of singularities becomes more detailed as the scale decreases [10]. To characterize the singular structures, it is necessary to quantify the local regularity of the image. For example, an image of n × m pixels generates additional images successively in blocks. All contours, large and small, are present in the original image and require no change of resolution to locate them. The issue is in identifying broad contours using conventional operators. The operator could be scaled up, but it is more efficient to scale the image, because using an operator for large contours on a high-resolution image is very costly from the point of view of computational efficiency. Therefore, sub-band frequency coding is used. This coding performs a sub-band decomposition of an image or signal into band-limited components (band-pass filters), which gives a redundancy-free representation of the image; this makes it possible to reconstruct the original image without error. Given a band-limited image *x(n,m)* which satisfies [11]:

It is possible to split the image to make a uniform sampling

where *f*_{N} is the Nyquist frequency. For the analysis of the frequency division at intervals within the DWT, a range of frequencies can be used; for this particular case, we employ the value N/2 × M/2, where N and M represent the length and width of the image, as shown in Figure 1.

Two-channel coding is then performed by sub-band filtering.

This results in four sub-images at the output of the processing. The image resolution after applying the DWT is divided into four frequency sub-bands called Low-Low (LL), Low-High (LH), High-Low (HL), and High-High (HH); see Figure 2. The names are based on the type of filtering applied to the rows and columns; they are obtained from Eqs. 6 and 7.
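The four-sub-band split described above can be sketched with a one-level 2-D Haar transform. The averaging normalization and the row-then-column filtering order below are illustrative choices (sub-band naming conventions vary between texts), not a reproduction of Eqs. 6 and 7.

```python
import numpy as np

def haar_dwt2(img: np.ndarray):
    """One-level 2-D Haar DWT: returns the LL, LH, HL, HH sub-bands.
    Rows are filtered first (low-pass = pair average, high-pass = pair
    difference), then the same filters are applied down the columns."""
    x = img.astype(float)
    # low-pass / high-pass along rows
    lo = (x[:, 0::2] + x[:, 1::2]) / 2.0
    hi = (x[:, 0::2] - x[:, 1::2]) / 2.0
    # then along columns
    LL = (lo[0::2, :] + lo[1::2, :]) / 2.0
    LH = (lo[0::2, :] - lo[1::2, :]) / 2.0
    HL = (hi[0::2, :] + hi[1::2, :]) / 2.0
    HH = (hi[0::2, :] - hi[1::2, :]) / 2.0
    return LL, LH, HL, HH

img = np.arange(16, dtype=float).reshape(4, 4)
LL, LH, HL, HH = haar_dwt2(img)
# each sub-band is N/2 x M/2, matching the sampling described above
```

Note that LL is a half-resolution copy of the image, while LH, HL, and HH carry the high-frequency detail in which abrupt intensity changes concentrate.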

Each sub-band is a copy of the original image but at a different frequency level, which provides a certain amount of energy [8, 9] (Figure 1). To describe a DWT, it is enough to define a discrete impulse response for a low-pass filter, *f(x)*, called the scaling function. From *f(x)*, we can also calculate the mother wavelet *ψ(x)*. If the scaling vector has a finite number of nonzero terms, then *f(x)* and *ψ(x)* generate wavelets with compact support. The scaling vector is such that [11],

where *2l* represents the displacement of the sample image every 2 pixels. There exists a scaling function for which,

which can be constructed as a sum, from Eq. 11, that is a half-scale copy of the image; using *f(x)*, the set must be orthonormal with respect to translations [11],

and

where

Having determined *f(x)* and

and from this, we obtain the mother wavelet,

from which the set of orthonormal wavelets is derived, and the scaling factor is obtained for setting the pixel to the new value produced by the low-pass and band-pass filtering [11],

We restrict ourselves to basis functions obtained by means of binary scale changes *2*^{j} and dyadic translations of the mother wavelet, where a dyadic translation corresponds to a shift k/2^{j}, a value equal to a scale factor that is a multiple of a binary value (2, 4, 8, ...) and, therefore, of the size of the wavelet, thereby obtaining the image adjustment. When working with images, the most important features for pattern recognition are the edges of the structures. An edge can be defined as the set of points where the image has sharp transitions in intensity. However, not all variations of intensity can be defined as edges. Several edge-detection algorithms for images, such as the Canny algorithm [13], are equivalent to detecting the modulus maxima of a two-dimensional dyadic DWT. For detecting image edges or irregularities, an irregularity detector based on Lipschitz exponents is applied [12]. Thus, if *f* has a singularity at a point *v*, this means that it is not differentiable at *v*, and the Lipschitz exponent at this point characterizes the singular behavior.

The Lipschitz regularity of edge points is derived from the maximum decay of the DWT across scales. In addition, the image can be approximately reconstructed from these modulus maxima without visual degradation. In two dimensions, we try to detect contours. For detection, the problem is the presence of noise: if we define the boundary from turning points, they will appear across the whole surface because of noise. The application of the Lipschitz exponent values can find and ratify the highlighted areas, such as the edges obtained in the filtering step; Lipschitz regularity defines the upper limit with non-integer exponents. Thus, the DWT is a powerful tool to measure the minimum local regularity of functions. However, it is not possible to analyze the regularity of f at a particular point v, which will decrease from


where

The discrete wavelet reconstruction can be computed by inverting the decomposition procedure, beginning at the lowest resolution level in the hierarchy. To apply the proposed steganographic algorithm to the LH sub-band, it is necessary to use a scaling factor that works with 24-bit RGB color images, or with the Luminance, Chromatic blue, Chromatic red (YCbCr) or Hue, Saturation, Value (HSV) color models [14]; this scaling factor is closely related to the energy conservation applied in wavelet theory. For RGB color images, we propose the following scaling factor,

where *j* is directly dependent on the number of bits that integrates the image.
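The closed form of the proposed scaling factor is given by the equation above and is not reproduced here; purely as a schematic of where such a factor acts, the sketch below attenuates the LH coefficients before embedding and restores them after extraction. The 1/j form is an assumed placeholder, not the chapter's actual expression.

```python
import numpy as np

# Placeholder assumption: a simple 1/j attenuation stands in for the
# chapter's scaling-factor equation, only to show where the factor acts.
def scale_subband(LH: np.ndarray, j: float) -> np.ndarray:
    return LH / j          # attenuate LH coefficients before embedding

def unscale_subband(LH_scaled: np.ndarray, j: float) -> np.ndarray:
    return LH_scaled * j   # restore the sub-band energy after extraction

LH = np.array([[4.0, -2.0], [0.5, 1.0]])
restored = unscale_subband(scale_subband(LH, 10), 10)
```

The point of the schematic is that the scale/unscale pair is lossless on the coefficients, which is how the host-image energy can be preserved across the embedding step.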

The proposed steganographic method works in the wavelet domain and analyzes the wavelet coefficients at different decomposition scales to estimate the simple variance field and distinguish areas where the pixels are considered noisy [14]. We propose the following criterion: if the standard deviation of the current wavelet-coefficient kernel is smaller than the threshold defined for the global standard deviation, then the respective area of the host image is considered noisy, and the hidden information can be inserted in that area. Otherwise, the pixels of the area are considered free of noise and it is not possible to insert data of the hidden image there. The criterion provides good invisibility for the hidden data and results in edge and fine-detail preservation in the stego-image [14]. The standard deviation is computed using the following,

where

The threshold *σ*_{g} is used to select the pixel occupying the position where the data are hidden. This is done through [14],

where
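A minimal sketch of the standard-deviation criterion above, assuming a square sliding kernel and the global standard deviation of the coefficients as the threshold σ_g; the kernel size, the border handling, and the plain double loop are illustrative choices rather than the chapter's exact implementation.

```python
import numpy as np

def noisy_area_mask(coeffs: np.ndarray, k: int = 3, sigma_g: float = None) -> np.ndarray:
    """Mark coefficient positions whose local (kernel) standard deviation
    is below the global threshold sigma_g; those positions are treated
    as noisy and eligible for embedding, per the criterion above."""
    h, w = coeffs.shape
    r = k // 2
    if sigma_g is None:
        sigma_g = coeffs.std()          # global standard deviation as threshold
    mask = np.zeros((h, w), dtype=bool)
    for i in range(r, h - r):           # borders are left unmarked
        for j in range(r, w - r):
            local = coeffs[i - r:i + r + 1, j - r:j + r + 1]
            mask[i, j] = local.std() < sigma_g
    return mask

# Flat regions read as "noisy" under this criterion; the lone spike at
# (0, 0) raises the local deviation of its neighborhood above sigma_g.
coeffs = np.zeros((6, 6))
coeffs[0, 0] = 100.0
mask = noisy_area_mask(coeffs, k=3)
```

Positions where the mask is true are the candidate locations for hiding data; the rest are left untouched to preserve edges and fine detail.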

## 3. Theory and calculations

In the optimization and evaluation of algorithms in digital image processing, the peak signal-to-noise ratio (PSNR) is the criterion most frequently used to evaluate image quality [2],

where

The Normalized Color Deviation (NCD) is used for the quantification of the perceptual color error [2],

Here, *(l,m)* denotes a pixel position of the image, and

The hiding capacity (HC) dictates the number of bits inserted in the host image [2,14],
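The PSNR and hiding-capacity measures above can be sketched as follows; the 8-bit peak value and the bits-per-position parameter are illustrative assumptions, and the eligibility mask stands for whatever set of positions the embedding criterion selects.

```python
import numpy as np

def psnr(cover: np.ndarray, stego: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB between cover and stego images."""
    mse = np.mean((cover.astype(float) - stego.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def hiding_capacity(eligible_mask: np.ndarray, bits_per_position: int = 1) -> int:
    """Number of payload bits the host can carry: eligible embedding
    positions times bits embedded per position (illustrative definition)."""
    return int(eligible_mask.sum()) * bits_per_position

a = np.full((8, 8), 100, dtype=np.uint8)
b = a.copy()
b[0, 0] += 5                      # single-pixel perturbation
quality = psnr(a, b)
hc = hiding_capacity(np.ones((8, 8), dtype=bool))   # every position eligible
```

A higher PSNR means the stego-image is closer to the cover, while HC grows with the number of positions the noise criterion marks as usable, which is exactly the trade-off discussed in the introduction.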

We incorporate other color spaces, such as YCbCr and HSV, into the proposed scheme to ensure that the visual artifacts appearing in the stego-image are imperceptible and that, with the proposed scaling factor, the difference between the cover and stego-image is indistinguishable to the human visual system. To verify and quantify the results obtained with the proposed steganographic algorithm, the results were compared with other spatial-domain methods, namely 4-3 LSB and the optimization of that algorithm.

Table 1 shows the performance results in terms of PSNR, MAE (Mean Absolute Error), CORR (Correlation), Q, NCD, and HC for the value j = 2^{8} in the scaling factor, using the 320 × 320 RGB color image “Mandrill” [15] as the host image and “Lena” [15] as the hidden image. Table 1 shows that the best result is obtained by the steganographic method proposed here when the scale-factor adjustment is applied in the wavelet transform; the scale factor in this example has a value of j = 2^{8}. In Table 2, we can see the results for the recovered image “Lena”, where the image quality index (Q) is close to 1, indicating that the recovered image is very close to the one originally inserted.

In Table 3, we show the results obtained with the proposed steganographic algorithm using a scaling factor with j = 10. We note that the PSNR improves by more than 1 dB while the Q index of the recovered image is preserved. The PSNR value is enhanced for each of the color spaces, so it may be said that the energy distribution (the inserted data) within the image is homogeneous; thus, when examined by a stego-analyzer, the histogram retains a uniform distribution, removing any suspicion that the image is a carrier of information. Table 3 also shows that the HSV color model offers better stego-image quality than the RGB and YCbCr models; nevertheless, insertion capacity is lost. The RGB model offers good results without sacrificing insertion capacity.

Figure 4 presents the error images and the visual results corresponding to Table 3.

## 4. Discussion

The results of applying the steganographic algorithm alone show visual defects, which may raise suspicion that the image carries added information. However, by applying the scaling factor, better visual and quantitative results are obtained for the stego-image. Significantly, a steganographic algorithm must meet the following criteria: total recovery of the embedded information, good quality of the cover image and the recovered image, and high insertion capability. Finally, in this work, the proposed method yielded better results, with the best result obtained using the scaling factor *j = 10*: PSNR = 41.3900 dB, NCD = 2.8906e-4, MAE = 0.6401, HC = 0,068e3 Kb, and Q = 99.99% for the cover image. We can also observe that, by applying the scale factor, the wavelet contraction and expansion is set as close as possible to the original contour of the image, so the data are inserted into the noisiest areas of the image, imperceptibly to the human eye.

## 5. Conclusions

The energy contribution of RGB, HSV, and YCbCr color-model images is altered in each sub-matrix of the wavelet decomposition when the steganographic algorithm is applied. From Eqs. 5, 6, and 14, we propose the use of the scaling factor for adjusting images filtered with the DWT. This adjustment is made to each pixel of the image to achieve the three objectives of steganographic algorithms. For steganographic applications, the digital filter helps to locate areas suitable for inserting information without it becoming visible to the human eye. The energy contribution of this filter in each sub-matrix of the wavelet decomposition is generally altered when a steganographic algorithm is applied. It is known that the value of