Open access peer-reviewed chapter

Watermarking-Based Image Authentication System in the Discrete Wavelet Transform Domain

By Clara Cruz Ramos, Rogelio Reyes Reyes, Mariko Nakano Miyatake and Héctor Manuel Pérez Meana

Submitted: November 15th 2010. Reviewed: April 9th 2011. Published: August 29th 2011.

DOI: 10.5772/21571

1. Introduction

Nowadays, digital images and video are gradually replacing their conventional analog counterparts. This is quite understandable, because digital formats are easy to edit, modify, and exploit. Digital images and videos can be readily shared via computer networks and conveniently processed for queries in databases. Also, digital storage does not age or degrade with usage. On the other hand, thanks to powerful editing programs, it is very easy even for an amateur to maliciously modify digital media and create "perfect" forgeries; tampering with analog tapes and images is usually much more complicated. Tools such as digital watermarks help us establish the authenticity and integrity of digital media and can prove vital whenever questions are raised about the origin of an image and its content.

A digital watermarking technique embeds an invisible signal in a form imperceptible to the human audio/visual system, statistically undetectable, and resistant to lossy compression and common signal processing operations. Existing methods for content authentication of digital images can be classified into two groups: watermarking-based techniques (Hsu & Wu, 1999) and digital-signature-based techniques (Friedman, 1993). Several authors have described digital image authentication systems (Wong, 1998; Holliman & Memon, 2006; Wong & Memon, 2001; Celik et al., 2002; Monzoy et al., 2007; Cruz et al., 2008; Cruz et al., 2009; Hernandez et al., 2000; Lin & Chang, 2001; Maeno, 2006; Hu & Chen, 2007; Zhou et al., 2004; Lu & Liao, 2003), and they fall into three categories: complete authentication, robust authentication, and content authentication (Liu & Steinebach, 2006). Complete authentication refers to techniques that consider the whole piece of multimedia data and do not allow any manipulation (Yeung & Mintzer, 1997; Wu & Liu, 1998). Because the non-manipulable data are like generic messages, many existing message authentication techniques can be applied directly. For instance, digital signatures can be placed in the LSB of uncompressed data, or in the header of compressed data. Manipulations will then be detected because the hash value of the altered content bits will not match the information in the embedded digital signature.

We define robust authentication as a technique that treats altered multimedia data as authentic if the manipulation is imperceptible. For example, authentication techniques that tolerate lossy compression up to an allowable level of quality loss, and reject other manipulations such as tampering, belong to this category.

Content authentication techniques are designed to authenticate multimedia content in a semantic level even though manipulations may be perceptible. Such manipulations may include filtering, color manipulation, geometric distortion, etc. We distinguish these manipulations from lossy compression because these perceptible changes may be considered as acceptable to some observers but may be unacceptable to others.

A common objective for authentication is to reject the crop-and-replacement process that may change the meaning of data. Many robust watermarking techniques in the literature are designed to be robust to all manipulations for copyright protection purposes. They usually fail to reject the crop-and-replacement process, so they are not suitable for robust authentication or content authentication.

An authentication system can be considered as effective if it satisfies the following requirements:

  1. Sensitivity: The authenticator is sensitive to malicious manipulations such as crop-and-replacement.

  2. Robustness: The authenticator is robust to acceptable manipulations such as lossy compression, or other content-preserving manipulations.

  3. Security: The embedded information bits cannot be forged or manipulated. For instance, if the embedded watermarks are independent of the content, then an attacker can copy watermarks from one multimedia data to another.

  4. Portability: Watermarks have better portability than digital signatures because authentication can be conducted directly from the received content alone.

  5. Identification of manipulated areas: Users may need partial information. The authenticator should be able to detect the location of altered areas and verify the remaining areas as authentic.

Security issues aside, watermarking capacity is determined by invisibility and robustness requirements. The three dimensions are shown in Figure 1: once one parameter is fixed, the other two are inversely proportional. For instance, a specific application may determine how many message bits are needed. Once the embedded amount is decided, there is always a trade-off between visual quality and robustness that must be considered. Robustness refers to the extraction of embedded bits with an error probability equal to or approaching zero. Watermark imperceptibility (invisibility) represents the quality of the watermarked image with respect to the original one. In general, if we want to make the watermark more robust against attacks, a longer codeword or larger codeword amplitudes are necessary to provide better error resilience; however, visual quality degradation cannot then be avoided. Another scenario is that, for a given visual quality, there is a trade-off between the amount of embedded message information and robustness. For instance, the fewer message bits are embedded, the more redundant the codeword can be, and therefore the better its error correction capability against noise.

It is difficult for an authenticator to know the purpose of a manipulation. A practical approach is to design the authenticator around the manipulation method. In this work, we design an authenticator that accepts format transformation and lossy compression (JPEG). The authenticator rejects replacement manipulations because they are frequently used for attacks. Our authenticator does not aim to reject or accept other manipulation methods in absolute terms, because whether they are acceptable or not depends on the application.

Figure 1.

Parameters of watermarking: Robustness, information quantity of embedded message and invisibility.

2. Previous techniques for robust authentication and content authentication

In Paquet, Ward & Pitas, 2003, a novel watermarking scheme to ensure the authenticity of digital images is presented. Their authentication technique is able to detect malicious tampering of images even if they have been incidentally distorted by common image processing operations. Image protection is achieved by inserting a secret author's identification key in the wavelet coefficients through their selective quantization. The system uses characteristics of the human visual system to maximize the embedding energy while keeping good perceptual transparency, and develops an image-dependent method to evaluate, in the wavelet domain, the optimal quantization step allowing the tamper proofing of the image. The nature of multiresolution discrete wavelet decomposition allows the spatial and frequency localization of image tampering. Experimental results show that the system can detect unauthorized modification of images.

Kundur & Hatzinakos (Kundur & Hatzinakos, 1999) presented a fragile watermarking technique for the tamper proofing of still images. A watermark is embedded in the discrete wavelet domain by quantization of the corresponding wavelet coefficients. The Haar wavelet is used for the image decomposition, and a pseudo-random binary sequence is generated from a secret identification key. Rounding the DWT coefficients to even or odd quantization steps embeds the zeros or ones of the watermark. The embedding locations are stored in a coefficient selection key, ckey. In addition, an image-dependent quantization key, qkey, is introduced to improve security against forgery and to monitor specific changes to the image.

Along the same line, a digital image authentication procedure that allows the detection of malicious modifications while remaining robust to incidental distortion introduced by compression is presented in Yu et al., 2000. A binary watermark is embedded in the wavelet transform domain. The insertion is again done by even or odd quantization of selected wavelet coefficients. To increase the robustness of the scheme to image processing operations, the authors propose to make the embedded watermark more robust by rounding the mean value of weighted magnitudes of wavelet coefficients to quantization levels specified by a predetermined function Q(x,q). The same function is also used in the blind detection process to retrieve the watermark privately by reversed quantization. To distinguish malicious tampering from incidental distortion, the amount of modification of the wavelet coefficients introduced by incidental versus malicious tampering is modeled as Gaussian distributions with small versus large variance. The probability of watermark detection error due to incidental alterations is shown to be smaller than that due to malicious tampering, because incidental alterations produce a comparatively smaller variance difference with the embedded marks. The authors argue that this grants a certain degree of robustness to the system and show that their method is able to authenticate JPEG-compressed images without any access to the original unmarked image. However, the degree of image compression allowed by the detection procedure is not stated, and the selection procedure for the quantization parameters is not explained either.

In this work we develop a content authentication technique based on imperceptible digital watermarking, robust to malicious and incidental attacks, that embeds a digital signature as the watermark. A digital signature is a set of features extracted from an image; conventionally these features are stored as a separate file, which is used later for authentication. To avoid the extra bandwidth needed to transmit the signature in the conventional way, after extracting the digital signature we apply the discrete wavelet transform (DWT) to the image and embed the watermark in the lowest-frequency subband, so that the insertion is imperceptible to the human visual system and robust to common image processing such as JPEG compression and noise contamination. The proposed system extracts the watermark in fully blind detection mode: it does not have access to the original host signal, and the signature to compare against is re-derived from the watermarked signal itself, which increases the security of the system.

In the security community, an integrity service is unambiguously defined as one which ensures that the sent and received data are identical. This binary definition is of course also applicable to images; however, it is too strict and not well adapted to this type of digital document. Indeed, in real-life situations images are transformed: their pixel values are modified, but not their semantic meaning. In other words, the problem of image authentication is posed at the level of image content, for example when modifications of the document may change its meaning or visually degrade it. In order to provide an authentication service for still images, it is important to distinguish between malicious manipulations, which consist of changing the content of the original image (captions, faces, etc.), and manipulations related to the usage of an image, such as format conversion, compression, and noise.

Unfortunately this distinction is not always clear; it partly depends on the type of image and its usage. Indeed, the integrity criteria for an artistic masterpiece and for a medical image are not the same. In the first case, JPEG compression will not affect the perception of the image, whereas in the second case it may discard some of the fine details and render the image totally useless. In the latter case, the strict definition of integrity is required. We applied the proposed algorithm to grayscale and color non-medical images.

3. Proposed watermarking algorithm

Figure 2(a) shows a general block diagram of the watermark insertion: the original image is divided into non-overlapping blocks, a digital signature is extracted from each block, and that signature is then inserted as a watermark in the same block; finally, all the watermarked blocks form the watermarked image. Figure 2(b) shows a general block diagram of the watermark extraction process from a watermarked block, where it can be seen that it is not necessary to know the original image to extract the digital signature. Finally, in the verification process we compare the extracted watermark with the digital signature to determine whether the image has been modified or not.

Figure 2.

(a) Watermark insertion system; (b) watermark extraction system.

3.1. Digital signature generation

The algorithm used to extract the digital signature was proposed in Fridrich, 1999, and used by Chen et al., 2001. The goal of this algorithm is to extract bits from image blocks so that all similarly looking blocks, whether watermarked or attacked, produce almost the same bit sequence of length N. The method is based on the observation that if a low-frequency DCT coefficient of an image is small in absolute value, it cannot be made large without causing visible changes to the image. Similarly, if the absolute value of a low-frequency coefficient is large, we cannot change it to a small value without influencing the image significantly. To make the procedure key-dependent, DCT modes are replaced with low-frequency, DC-free (i.e., zero-mean) random smooth patterns generated from a secret key (with DCT coefficients equivalent to projections onto the patterns). For each image, we calculate a threshold Th so that on average 50% of the projections have absolute value larger than Th and 50% have absolute value less than Th. This maximizes the information content of the extracted N bits.

Given an image I, we divide it into blocks of 16x16 pixels (for large images, larger block sizes could be used), as shown in Figure 3. Using a secret key K (a number uniquely associated with an author, movie distributor, or digital camera), we generate N random matrices with entries uniformly distributed in the interval [0, 1]. Then, a low-pass filter is repeatedly applied to each random matrix to obtain N random smooth patterns. All patterns are then made DC-free by subtracting the mean value from each pattern. Considering the block and the pattern as vectors, the image block B is projected on each pattern Pi, 1 ≤ i ≤ N, and the absolute value of each projection is compared with a threshold Th to obtain N bits bi:

$$ b_i = \begin{cases} 0 & \text{if } |B \cdot P_i| < Th \\ 1 & \text{if } |B \cdot P_i| \geq Th \end{cases} \qquad (1) $$

Since the patterns Pi have zero mean, the projections do not depend on the mean gray value of the block, only on the variations within the block itself. The distribution of the projections is image dependent, so the threshold should be adjusted accordingly so that approximately half the bits bi are zeros and half are ones. This guarantees the highest information content of the extracted N-tuple. This adaptive choice of the threshold becomes important for image operations that significantly change the distribution of projections, such as contrast adjustment.
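To make the procedure concrete, the following is a minimal Python sketch of this bit-extraction step, assuming NumPy and SciPy are available; the filter size, the number of smoothing passes, and the use of the global median as the adaptive threshold Th are illustrative assumptions, not the exact parameters of Fridrich, 1999:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def smooth_patterns(key, n_bits=16, block=16, n_smooth=3):
    # Generate N random matrices from the secret key K, low-pass filter
    # them repeatedly, and subtract the mean so each pattern is DC-free.
    # (Filter size and number of passes are assumptions of this sketch.)
    rng = np.random.default_rng(key)
    patterns = []
    for _ in range(n_bits):
        p = rng.uniform(0.0, 1.0, (block, block))
        for _ in range(n_smooth):
            p = uniform_filter(p, size=3)   # repeated low-pass filtering
        patterns.append(p - p.mean())       # make the pattern DC-free
    return patterns

def extract_signature(image, key, n_bits=16, block=16):
    # Project every 16x16 block onto each pattern and threshold the
    # absolute projections at their global median, so that about half
    # of all extracted bits come out as ones (the adaptive threshold Th).
    pats = smooth_patterns(key, n_bits, block)
    h, w = image.shape
    projs = []
    for r in range(0, h - h % block, block):
        for c in range(0, w - w % block, block):
            B = image[r:r + block, c:c + block].astype(float)
            projs.append([abs(np.sum(B * p)) for p in pats])
    projs = np.array(projs)                 # shape: (num_blocks, n_bits)
    th = np.median(projs)                   # adaptive threshold Th
    return (projs >= th).astype(np.uint8)   # b_i = 1 iff |B.P_i| >= Th
```

For a 512x512 image divided into 16x16 blocks, extract_signature returns one 16-bit row per block, matching the setup used in the experiments of Section 4.1.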

Figure 3.

Digital signature extraction process.

3.2. Wavelet transform for image signals

The two-dimensional DWT decomposes the approximation coefficients at level j into four components: the approximation at level j + 1, and the details in three orientations (horizontal, vertical, and diagonal).

Figure 4 describes the basic decomposition steps for images.

Figure 4.

Subband decomposition using 2D-DWT.

The subbands labeled LH1, HL1, and HH1 represent the finest-scale wavelet coefficients. In the present work, the wavelet transform is realized with Daubechies wavelets of order 2. Using these wavelets, the image is decomposed into four subbands: LL1, LH1, HL1 and HH1.
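For reference, a one-level decomposition with order-2 Daubechies wavelets can be sketched with the PyWavelets package; the subband names below follow this chapter's convention (PyWavelets itself labels the details horizontal, vertical, and diagonal):

```python
import pywt  # PyWavelets

def decompose(image):
    # One-level 2D DWT with the Daubechies wavelet of order 2 ('db2').
    # Returns the approximation subband LL1 and the three detail
    # subbands, here labeled (LH1, HL1, HH1) as in Figure 4.
    LL1, (LH1, HL1, HH1) = pywt.dwt2(image, 'db2')
    return LL1, (LH1, HL1, HH1)

def reconstruct(LL1, details):
    # Inverse transform, used after the watermark is embedded in LL1.
    return pywt.idwt2((LL1, details), 'db2')
```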

3.3. Watermark embedding algorithm

Because we want the embedded watermark to be imperceptible to the human visual system (HVS) and robust to common image processing such as JPEG compression and noise contamination, we implement the algorithm proposed by Inoue et al., 2000. In this method, information is embedded in the lowest-frequency components of the image signal by means of a controlled quantization process. The data is then extracted using both the quantization step-size and the mean amplitude of the lowest-frequency components, without access to the original image.

Once the digital signature has been extracted, we apply the discrete wavelet transform (DWT) to embed the watermark: the subband LL1(i,j) is divided into small subblocks Bk of size bx × by, and the mean Mk of the wavelet coefficients of each Bk is calculated. A quantization step-size Q = 5, called the embedding intensity, is used. The watermark information is embedded into the subblock Bk by modifying the quantization value q and adding δMk to the wavelet coefficients of Bk, as described in detail in Inoue et al., 2000. Finally, we construct the watermarked image using the inverse wavelet transform.

Figure 5 illustrates the embedding process for the data wk = 0 or 1 into a subblock Bk when bx = by = 2 and Q = 5.
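A minimal sketch of this embedding rule is given below, assuming the parity convention implied by the extraction rule of equation (2) (even quantization index encodes wk = 0, odd encodes wk = 1) and shifting each subblock mean to the centre of the selected quantization bin; the exact δ computation of Inoue et al., 2000 may differ:

```python
import numpy as np

def embed_bits(LL1, bits, Q=5, bx=2, by=2):
    # Embed one bit per bx-by subblock of LL1 by quantizing the subblock
    # mean with step Q. The parity of the quantization index S carries
    # the bit, matching the blind extraction rule of Eq. (2).
    marked = LL1.copy()
    h, w = marked.shape
    k = 0
    for r in range(0, h - h % by, by):
        for c in range(0, w - w % bx, bx):
            if k >= len(bits):
                return marked
            Bk = marked[r:r + by, c:c + bx]
            Mk = Bk.mean()
            S = int(np.floor(Mk / Q))
            if S % 2 != bits[k]:
                S += 1                      # move to a bin of the right parity
            delta = (S + 0.5) * Q - Mk      # shift the mean to the bin centre
            Bk += delta                     # delta added to every coefficient
            k += 1
    return marked
```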

Figure 5.

Watermark insertion process.

3.4. Watermark extracting algorithm

We can extract the embedded data w by using the parameters n (decomposition level), bx, by, Q and LM'. Let I' be the watermarked image. We decompose I' at scale 1 and obtain the lowest-frequency components LL1'(i,j). Then we divide LL1'(i,j) into subblocks Bk of size bx × by, compute the mean Mk' of each Bk, and find the quantization value S from

$$ S = \mathrm{int}\!\left[ \frac{M_k'}{Q} \right] \qquad (2) $$

Then, we extract the embedded binary data wk as follows: if S is an even number, then wk = 0; otherwise wk = 1.
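The corresponding blind extraction can be sketched as follows, reusing the same parameters bx, by and Q as the embedding sketch above:

```python
import numpy as np

def extract_bits(LL1_marked, n_bits, Q=5, bx=2, by=2):
    # Blind extraction: recompute each subblock mean M'_k, quantize it
    # with the same step Q (Eq. 2), and read the bit from the parity of S.
    h, w = LL1_marked.shape
    bits = []
    for r in range(0, h - h % by, by):
        for c in range(0, w - w % bx, bx):
            if len(bits) == n_bits:
                return np.array(bits, dtype=np.uint8)
            Mk = LL1_marked[r:r + by, c:c + bx].mean()
            S = int(np.floor(Mk / Q))
            bits.append(S % 2)              # even -> w_k = 0, odd -> w_k = 1
    return np.array(bits, dtype=np.uint8)
```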

3.5. Authentication process

After the watermark wk and the digital signature sequences are extracted from the watermarked image I', we apply a threshold (Thv) to decide, using an XOR operation, whether the block has been tampered with or not, as expressed in equation (3):

$$ \text{block} = \begin{cases} \text{authentic} & \text{if } \sum_{i=1}^{N} \left( \tilde{w}_i \oplus \tilde{b}_i \right) \leq Th_v \\ \text{modified} & \text{if } \sum_{i=1}^{N} \left( \tilde{w}_i \oplus \tilde{b}_i \right) > Th_v \end{cases} \qquad (3) $$

The threshold Thv was determined through trial and error; the resulting value was Thv = 4, meaning that a block is considered authentic if at least 12 of the 16 extracted signature bits match the embedded watermark bits, and modified otherwise. However, even when a block is flagged as modified, the mismatch between the extracted 16-bit signature and the original one is not necessarily caused by an intentional modification, which is why we propose the following verification process.
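The per-block decision of equation (3) then reduces to counting mismatching bits, as in this short sketch:

```python
import numpy as np

def block_is_authentic(w_bits, s_bits, Th_v=4):
    # Eq. (3): XOR the extracted watermark bits against the recomputed
    # signature bits and count mismatches; at most Th_v = 4 of the 16
    # bits may differ (i.e. at least 12 must match) for the block to be
    # declared authentic.
    mismatches = int(np.count_nonzero(np.bitwise_xor(w_bits, s_bits)))
    return mismatches <= Th_v
```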

3.6. Verification process

After the watermark wk is extracted from the watermarked image I', we compare it with the digital signature extracted from I'. If some blocks differ, we build a "difference image" (Idif).

According to an evaluation carried out using 200 images, the authentication process led to the following conclusion: when error blocks appear in regions that were not intentionally modified, they appear in isolation, as shown in Figures 6(a,b); in intentionally modified images, however, error blocks are detected in concentrated form, as shown in Figures 6(c,d). Thus, isolated error blocks indicate an authentic region, while concentrated error blocks indicate a non-authentic one. To establish a criterion for deciding whether the change in a block is intentional or unintentional, we define the following rule:

If there are more than three consecutive error blocks in a region of Idif, the image was intentionally modified; otherwise, the change was caused by common signal processing such as JPEG compression or noise. Applying the concept of 8-neighbor connectivity between error blocks helps us identify which regions were intentionally modified and which were not. This criterion is represented mathematically by equation (4):

$$ \text{region} = \begin{cases} \text{Authentic} & \text{if } \#\tilde{B} \leq 3 \\ \text{Tampered} & \text{if } \#\tilde{B} > 3 \end{cases} \qquad (4) $$

where B̃ represents an error block; if there are more than three connected error blocks in a region, it has been intentionally modified.
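A sketch of this verification rule, using SciPy's connected-component labeling with an 8-neighbour structuring element over the map of error blocks (one entry per 16x16 block), could look like this:

```python
import numpy as np
from scipy.ndimage import label

def tampered_regions(error_map):
    # Eq. (4): label the error blocks by 8-neighbour connectivity; any
    # connected group of more than three error blocks is flagged as an
    # intentionally tampered region, while isolated error blocks are
    # attributed to incidental processing (JPEG compression, noise).
    eight = np.ones((3, 3), dtype=int)        # 8-connectivity structure
    labels, n = label(error_map, structure=eight)
    tampered = np.zeros(error_map.shape, dtype=bool)
    for i in range(1, n + 1):
        component = (labels == i)
        if component.sum() > 3:               # more than 3 connected blocks
            tampered |= component
    return tampered
```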

Figure 6.

(a, b) Non-intentionally modified images; (c, d) intentionally modified images.

4. Experimental results

4.1. Digital signature robustness

To evaluate the robustness of the bit extraction procedure, we subjected the test image "Barbara" (512x512 pixels, 256 gray levels) to various image processing operations available in specialized commercial image manipulation software (we used Photoshop). The test image had 1024 blocks of 16x16 pixels. We extracted N = 16 bits from each block of the original image and of each manipulated image, and calculated the average number of errors over all 1024 blocks. The results are shown in Table 1.

Image name | Shine (%) | Average number of error bits
Barb_100 | 10 | 0
Barb_200 | 20 | 0
Barb_300 | 30 | 0
Barb_400 | 40 | 0
Barb_500 | 50 | 0

Image name | Contrast (%) | Average number of error bits
Barb_10 | 10 | 0
Barb_20 | 20 | 0
Barb_30 | 30 | 0
Barb_40 | 40 | 0
Barb_50 | 50 | 0
Equalization | – | 0.015
JPEG compression

Quality factor | Average number of error bits
20 | 2.15
25 | 1.98
30 | 1.78
35 | 1.69
40 | 1.51
45 | 1.35
50 | 1.30
55 | 1.22
60 | 1.11
65 | 1.08
70 | 0.98
75 | 0.91
80 | 0.75
85 | 0.67
90 | 0.50
95 | 0.36
100 | 0.07
Impulsive noise

Intensity | PSNR (dB) | Average number of error bits
0.0010 | 35.8258 | 0.58
0.0020 | 32.3782 | 0.98
0.0030 | 30.8060 | 1.16
0.0040 | 29.6593 | 1.23
0.0050 | 28.6105 | 1.87
0.0060 | 27.8483 | 2.01
0.0070 | 27.1725 | 2.22
0.0080 | 26.5314 | 2.53
0.0090 | 26.0121 | 2.85
0.0100 | 25.6005 | 2.92

Table 1.

Average number of erroneous bits (out of 16) recovered after various image processing operations.

4.2. Semi-fragile watermark system performance

In order to confirm that the proposed digital watermarking system is effective, we carried out numerical experiments with attacks such as JPEG compression, impulsive and Gaussian noise, and photomontage. Experimental results show that the algorithm is capable of determining whether the image has been altered. The algorithm was evaluated using 200 standard images: 8 and 24 bits per pixel (bpp) grayscale and color images of 512x512 and 128x128 pixels, some of which are shown in Figure 7. Another advantage of this algorithm is that the size and texture of the image do not affect the correct operation of the system.

Figure 7.

Some images used in the experimental process.

4.2.1. Watermarked image quality

In our system we use the peak signal-to-noise ratio (PSNR) to measure the degradation of image quality caused by watermarking; this value is given by (5),

$$ \mathrm{PSNR}_{\mathrm{dB}} = 10 \log_{10} \frac{255^2}{\delta_q^2} \qquad (5) $$

where δq² is the mean square of the difference between the original image and the watermarked one.
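Equation (5) translates directly into code; a small sketch for 8-bit images:

```python
import numpy as np

def psnr_db(original, watermarked):
    # Eq. (5): delta_q^2 is the mean squared difference between the
    # original and the watermarked image; 255 is the peak value of an
    # 8-bit image.
    dq2 = np.mean((original.astype(float) - watermarked.astype(float)) ** 2)
    return 10.0 * np.log10(255.0 ** 2 / dq2)
```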

Figure 8 shows some examples of original images (grayscale and color) together with their respective watermarked versions and PSNR values; the watermarked images are perceptually very similar to the originals. Table 2 shows PSNR values for several grayscale and color images: the average PSNR is about 45 dB for grayscale images and about 50 dB for color images, so we can conclude that the degradation introduced by watermarking is not perceptible.

Figure 8.

Watermarked image quality.

Grayscale watermarked image | PSNR (dB) | Color watermarked image | PSNR (dB)
Barbara | 45.059944 | Plane | 49.806491
Boat | 44.966452 | Mountain | 49.848597
Bridge | 45.007931 | Lake | 49.810409
Camera | 45.056509 | Chiles | 49.853756
Chiles | 45.003896 | People | 49.848703
Goldhill | 44.959921 | Lena | 49.815038
Lena | 44.958274 | Home | 50.342928
Baboon | 45.041577 | Girl | 49.568472
Bird | 44.962013 | – | –

Table 2.

PSNR values of some grayscale and color watermarked test images.

4.2.2. Robustness against JPEG compression

The authenticator is sometimes expected to pass only those images that are compressed by JPEG up to a certain compression ratio or quality factor (fc). For example, if an image is JPEG compressed below image quality 75 (The MathWorks, 2008), the image is acceptable; if it is compressed more heavily, it fails the test. The argument for failing highly compressed images is that such images usually have poor quality and should not be considered authentic. To satisfy this need, we calculate the increase in the number of "different" signature bits after compression (error blocks). The number of error blocks increases as the image is compressed more heavily. We can set a threshold on this change to reject images that have too many error blocks.

If the error blocks are isolated, we apply equation (4) to determine whether those blocks are the result of JPEG compression; if they are concentrated, an intentional attack has occurred. We call this process "verification", and it helps us to differentiate between intentional and non-intentional attacks.

Figure 9 shows the results of authenticating JPEG-compressed watermarked images with quality factors higher than 75, together with the corresponding verified images. Compressed images with quality factors higher than 75 have their error blocks (white blocks) isolated; consequently, after the verification process they are considered not attacked.

Figure 9.

Tampered regions detection of the JPEG compressed images.

Table 3 shows the compression ratios at which JPEG-compressed watermarked images are considered authentic by the system. Grayscale watermarked images were considered authentic when their JPEG quality factor was higher than 75, and color compressed images when their quality factor was higher than 70.

4.2.3. Robustness against impulsive and Gaussian noise

We contaminated the watermarked images with different levels of impulsive and Gaussian noise to simulate communication channel noise. Tables 4 and 5 show the highest density and variance values of impulsive and Gaussian noise in grayscale and color images before the system considers the detected error blocks as intentional tampering. These results indicate that the system is robust against impulsive noise, since it tolerates a density of 0.002, which produces an average PSNR of 32 dB between the watermarked image and its contaminated version; a similar situation occurs with Gaussian noise, where the highest variance the system accepts before considering the contaminated watermarked image intentionally modified is 0.00011.

Watermarked image | Quality factor | Error blocks | Original size / compressed size | Bits/pixel

Grayscale watermarked images:
Boat | 80 | 5 | 257k / 38.7k | 1.20
Bridge | 75 | 18 | 65k / 14.6k | 1.79
Camera | 80 | 14 | 65k / 10.3k | 1.26
Chiles | 75 | 18 | 257k / 31.4k | 0.97
Lena | 75 | 4 | 65k / 10.5k | 1.29
Baboon | 80 | 24 | 257k / 72.1k | 2.24
Bird | 80 | 11 | 65k / 7.56k | 0.93

Color watermarked images:
Plane | 70 | 13 | 193k / 11k | 0.45
Home | 70 | 5 | 193k / 9k | 0.37
Girl | 75 | 17 | 193k / 10k | 0.41
Chiles | 65 | 17 | 193k / 14k | 0.58
Lake | 70 | 14 | 193k / 14k | 0.58
Lena | 65 | 2 | 193k / 11k | 0.45
Mountain | 75 | 12 | 193k / 12k | 0.49
People | 70 | 11 | 193k / 9k | 0.37

Table 3.

Compression ratios of some JPEG-compressed images considered authentic.

Grayscale watermarked image | Density | Error blocks | PSNR (dB)
Barbara | 0.002 | 27 | 32.6765
Boat | 0.002 | 30 | 32.9256
Bridge | 0.002 | 17 | 32.1300
Camera | 0.002 | 36 | 32.4250
Chiles | 0.002 | 26 | 32.0939
Goldhill | 0.002 | 11 | 32.3100
Lena | 0.002 | 14 | 32.0448
Baboon | 0.002 | 24 | 32.7793
Bird | 0.0009 | 18 | 35.7967

Color watermarked image | Density | Error blocks | PSNR (dB)
Plane | 0.0016 | 9 | 33.0454
Home | 0.0016 | 15 | 33.5461
Girl | 0.0015 | 18 | 32.4259
Chiles | 0.0024 | 8 | 31.4738
Lake | 0.0018 | 7 | 32.0180
Lena | 0.0025 | 12 | 31.1327
Mountain | 0.0015 | 18 | 33.0446
People | 0.0015 | 26 | 32.2085

Table 4.

Resistance test against impulsive noise for grayscale and color watermarked images.

Grayscale watermarked image | Variance | Error blocks | PSNR (dB)
Barbara | 0.00011 | 48 | 39.5594
Boat | 0.0001 | 36 | 40.0032
Bridge | 0.00014 | 22 | 38.8350
Camera | 0.00011 | 29 | 39.5884
Chiles | 0.00011 | 37 | 39.5761

Color watermarked image | Variance | Error blocks | PSNR (dB)
Plane | 0.00031 | 14 | 35.2396
Home | 0.00027 | 15 | 35.5625
Girl | 0.00027 | 20 | 35.9495
Chiles | 0.00033 | 21 | 35.1859
Lake | 0.00027 | 10 | 35.5499
Lena | 0.00031 | 16 | 35.2449
Mountain | 0.00027 | 24 | 35.5431
People | 0.00027 | 23 | 35.8596

Table 5.

Resistance test against Gaussian noise for grayscale and color watermarked images.

4.2.4. Robustness against photomontage

An important aspect of our system is its ability to localize tampered regions in the image. For that reason, we tampered the previously watermarked Bird and Lake images (using Photoshop) and evaluated the ability of our system to detect the changes. The detection ability is excellent (Figure 10): the system correctly identified which groups of blocks were modified intentionally and which were not, based on the criterion explained in Section 3.6. Figures 10(a) to 10(d) show the results for grayscale images, and Figures 10(e) to 10(g) the results for color images. Figures 10(c) and 10(g) show, as white blocks, the tampering detected by our system; comparing 10(a) vs. 10(b) and 10(e) vs. 10(f), where the first of each pair is the watermarked image and the second the tampered watermarked image, we can see that the localization is correct. Finally, Figure 10(d) shows that the verification works well, since it eliminates the isolated error blocks caused by image processing.

Figure 10.

Authentication and verification of tampered watermarked grayscale and color images.

5. Conclusion

With the widespread transition from analog to digital technologies and the higher capacity of storage devices and data communication channels, multimedia content has become a part of our daily lives. Digital data is now commonly used in many areas such as education, entertainment, journalism, law enforcement, finance, health services, and national defense. The low cost of reproduction, storage, and distribution has added a further dimension to the complexity of the problem. In a number of applications, multimedia content needs to be protected for several reasons. Watermarking is one of a group of complementary technologies identified by content providers to protect multimedia data.

In this work we have developed a robust digital signature algorithm used within a semi-fragile watermarking scheme for image authentication. The main advantage of this combination, besides the robustness of the digital signature and the imperceptibility of the watermark, is that no additional bandwidth is needed to transmit the digital signature, since it is embedded in the host image as a watermark. In addition to the extraction and authentication processes, we propose a verification process that helps to differentiate intentional from non-intentional modifications by applying the concept of 8-neighbor connectivity between error blocks.

Numerical experiments show that this algorithm is robust to JPEG lossy compression; the lowest acceptable JPEG quality factor is 75 for grayscale images and 70 for color images. In the case of impulsive noise, the verification system determines that a watermarked image has only non-intentional modifications if the noise density is less than 0.002, which produces an average PSNR of 32 dB between the watermarked image and its contaminated version; a similar case occurs with Gaussian noise, where the highest variance the system accepts before considering the contaminated watermarked image intentionally modified is 0.00011.

An important characteristic of this system, besides its robustness against common signal processing, is its capacity to detect the exact locations that were intentionally tampered with. Several watermarking systems using digital signatures have been reported, but they are robust neither to JPEG compression nor to modifications caused by common signal processing.

Finally, it is important to mention that the watermarked images generated by the proposed algorithm are secure, because the embedded watermarks depend on the content of the image itself.

Acknowledgments

This work was supported by the National Polytechnic Institute of México.

© 2011 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution-NonCommercial-ShareAlike-3.0 License, which permits use, distribution and reproduction for non-commercial purposes, provided the original is properly cited and derivative works building on this content are distributed under the same license.
