## Abstract

Many proposed image watermarking techniques are sensitive to geometric distortions such as rotation, scaling and translation. Even a slight geometric distortion can disable the watermark decoder. In this paper, a geometrically invariant blind image watermarking scheme is designed by utilizing invariant Tchebichef moments. The detailed construction of the invariant Tchebichef moments is described. The watermark is generated independently of the original image and inserted into the perceptually significant invariant Tchebichef moments of the original image. The watermark decoder extracts the watermark blindly using Independent Component Analysis (ICA). The computational aspects of the proposed watermarking are also discussed in detail. Experimental results demonstrate that the proposed technique is robust against geometric distortions and the other attacks performed by the popular watermark benchmark Stirmark, such as filtering, image compression, additive noise and random geometric distortions.

©2007 Optical Society of America

## 1. Introduction

Digital watermarking is an effective means of resolving copyright protection and information security problems by embedding watermark information into the protected digital media. For a watermark to be useful, it must be robust against image processing operations and the various possible attacks by pirates^{[1]}.

In watermarking applications, robustness against common signal processing and geometric attacks is essential. Geometric distortions inevitably damage the synchronization that is crucial for correct watermark detection: even a minor geometric manipulation of the watermarked image can dramatically reduce the detector's ability to find the watermark. It is well known that many existing image watermarking algorithms are vulnerable to geometric attacks ^{[2]}. Despite the efforts and progress made in this direction, the watermark's robustness against geometric distortions has not been well addressed and remains an open problem. The effect of geometric attacks can be better understood by making an analogy between a watermarking system and a communication system: the principle of synchronization between encoder and decoder in a communication system holds in a watermarking system as well. Synchronization is a serious problem for any watermarking scheme, especially for blind watermarking techniques.

Some research has been done to address the watermark's vulnerability to geometric distortions, and various methods have been proposed ^{[3,4,5]}, which can be summarized and categorized as follows. The first category is based on distortion inversion. A registration pattern is inserted into the host signal along with the watermark ^{[6]}, or the watermark is designed with a special structure ^{[7]}, so that at the detection stage the geometric distortions involved can be identified and measured; the distortions can then be removed by an inversion process. The essence of this approach is the incorporation of two watermarks, one for the data payload and the other for distortion detection. The second category is based on image normalization ^{[8]}. An image can be normalized to a certain position, orientation and size, which is invariant to image translation, rotation and scaling. The host image is normalized prior to watermark insertion, and the watermarked image is de-normalized back to its original appearance afterwards. At the detector, the test image must undergo the same normalization before watermark detection. An outstanding disadvantage of this approach is that the image undergoes the transformation twice in the watermarking process, which inevitably causes extra quality degradation on top of the watermark-induced distortion. The third category is based on the invariance properties of certain image features; different features have different invariance properties. Image features that have been used for invariant watermarking include Fourier-Mellin transform domain coefficients^{[9]}, geometric moment invariants and Zernike moments^{[10]}.

Owing to their ability to represent global features, moments have found extensive application in image processing. Geometric moments of digital images are used to capture global features for pattern recognition, image analysis and object classification. Moments of orthogonal basis functions, such as the Legendre and Zernike polynomials introduced by Teague ^{[11]}, can be used to represent an image by a set of mutually independent descriptors with a minimal amount of information redundancy^{[12]}. However, these moments present several problems, the most important being their inaccuracy due to quantization errors induced by the discrete approximation of the continuous integrals and of the coordinate space. These approximations also impose a very large computational burden. Tchebichef moments were proposed by Mukundan et al.^{[13]} to address these problems. They are discrete orthogonal moments and present a number of advantages over moments of continuous orthogonal bases^{[14]}. Mukundan's study showed that the implementation of Tchebichef moments does not involve any numerical approximation, since the basis set is orthogonal in the discrete domain of the image coordinate space. This property makes Tchebichef moments superior to the conventional continuous orthogonal moments in terms of preserving the analytical properties needed to keep information redundancy in a moment set minimal.

Recently, blind source separation by ICA has received considerable attention because of its potential applications in signal processing, such as speech recognition, medical signal processing and telecommunications. ICA is a general-purpose statistical technique that finds a linear transformation of a given set of observed data such that the resulting variables are as statistically independent as possible.

In this paper, a novel geometric invariant blind image watermarking scheme is proposed with the introduction of invariant Tchebichef moments. The construction of the invariant Tchebichef moments is described in detail. The watermark is generated randomly, independently of the original image, and embedded by modifying perceptually significant invariant Tchebichef moments of the original image. ICA is utilized by the watermark detector to extract the watermark blindly. The accuracy of the watermark detector depends on the secret key used in the embedding process and on the statistical independence between the original image and the watermark. The computational aspects of the proposed watermarking are also discussed in detail. Experimental results demonstrate that the proposed watermarking is robust against the various attacks performed by Stirmark.

## 2. Methodology of Invariant Tchebichef Moments Construction

#### 2.1 The Definition of Tchebichef Moments

The discrete Tchebichef moments of order *p* + *q* of an image intensity function *f*(*x*,*y*) are defined as ^{[13]}:

$${T}_{pq}=\frac{1}{\rho \left(p,N\right)\rho \left(q,N\right)}\sum _{x=0}^{N-1}\sum _{y=0}^{N-1}{t}_{p}\left(x\right){t}_{q}\left(y\right)f\left(x,y\right),\phantom{\rule{1em}{0ex}}p,q=0,1,\cdots ,N-1\phantom{\rule{2em}{0ex}}(1)$$

Where, $\rho \left(n,N\right)=\sum _{x=0}^{N-1}{\left\{{t}_{n}\left(x\right)\right\}}^{2}=\frac{N\left({N}^{2}-1\right)\left({N}^{2}-{2}^{2}\right)\cdots \left({N}^{2}-{n}^{2}\right)}{2n+1}=\left(2n\right)!\left(\begin{array}{c}N+n\\ 2n+1\end{array}\right).$

The discrete Tchebichef polynomials are expressed as:

$${t}_{n}\left(x\right)=n!\sum _{k=0}^{n}{\left(-1\right)}^{n-k}\left(\begin{array}{c}N-1-k\\ n-k\end{array}\right)\left(\begin{array}{c}n+k\\ n\end{array}\right)\left(\begin{array}{c}x\\ k\end{array}\right)\phantom{\rule{2em}{0ex}}(2)$$

The inverse moment transform can be defined as:

$$f\left(x,y\right)=\sum _{p=0}^{N-1}\sum _{q=0}^{N-1}{T}_{pq}{t}_{p}\left(x\right){t}_{q}\left(y\right)\phantom{\rule{2em}{0ex}}(3)$$

#### 2.2 Invariant Tchebichef Moments Construction

In order to construct the geometric invariant Tchebichef moments, the set of weighted Tchebichef polynomials {*t*̃* _{n}*(*x*)} can be written as:

Where, ${c}_{k,n,N}={\left(-1\right)}^{n-k}\frac{n!}{k!}\left(\begin{array}{c}N-1-k\\ n-k\end{array}\right)\left(\begin{array}{c}n+k\\ n\end{array}\right).$

The explicit expressions of the weighted Tchebichef polynomials up to order 3 are as follows:

$$\phantom{\rule{7.0em}{0ex}}{\tilde{t}}_{1}\left(x,N\right)=\frac{1-N}{N}+\frac{2}{N}x$$

$$\phantom{\rule{3.6em}{0ex}}{\tilde{t}}_{2}\left(x,N\right)=\frac{{N}^{2}-3N+2}{{N}^{2}}+\frac{6\left(1-N\right)}{{N}^{2}}x+\frac{6}{{N}^{2}}{x}^{2}$$

$$\phantom{\rule{0.2em}{0ex}}{\tilde{t}}_{3}\left(x,N\right)=\frac{-{N}^{3}+{6N}^{2}-11N+6}{{N}^{3}}+\frac{{12N}^{2}-30N+22}{{N}^{3}}x+\frac{30\left(1-N\right)}{{N}^{3}}{x}^{2}+\frac{20}{{N}^{3}}{x}^{3}$$
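As a numerical check, the weighted polynomials can be evaluated from the explicit expression for *t _{n}*(*x*) given in ref. [13] and compared against the closed forms above. A minimal sketch, assuming the weighting *β*(*n*,*N*) = *N ^{n}*, which also verifies the squared norm *ρ*(*n*,*N*):

```python
from math import comb, factorial

def t_poly(n, x, N):
    """Discrete Tchebichef polynomial t_n(x) of ref. [13]:
    t_n(x) = n! * sum_k (-1)^(n-k) C(N-1-k, n-k) C(n+k, n) C(x, k)."""
    return factorial(n) * sum(
        (-1) ** (n - k) * comb(N - 1 - k, n - k) * comb(n + k, n) * comb(x, k)
        for k in range(n + 1))

def t_tilde(n, x, N):
    """Weighted polynomial with beta(n, N) = N**n."""
    return t_poly(n, x, N) / N ** n

N = 8
for x in range(N):
    # explicit expressions for orders 1..3 quoted in the text
    e1 = (1 - N) / N + 2 * x / N
    e2 = ((N * N - 3 * N + 2) + 6 * (1 - N) * x + 6 * x * x) / N ** 2
    e3 = ((-N ** 3 + 6 * N ** 2 - 11 * N + 6)
          + (12 * N ** 2 - 30 * N + 22) * x
          + 30 * (1 - N) * x ** 2 + 20 * x ** 3) / N ** 3
    assert abs(t_tilde(1, x, N) - e1) < 1e-9
    assert abs(t_tilde(2, x, N) - e2) < 1e-9
    assert abs(t_tilde(3, x, N) - e3) < 1e-9

# squared norm rho(n, N) agrees with the closed form (2n)! * C(N+n, 2n+1)
for n in range(4):
    assert sum(t_poly(n, x, N) ** 2 for x in range(N)) == \
        factorial(2 * n) * comb(N + n, 2 * n + 1)
print("ok")
```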

The explicit expressions of the coefficients {*c _{k,n,N}*} up to order 3 are as follows:

$$\phantom{\rule{12.0em}{0ex}}{c}_{\mathrm{0,1},N}=\frac{1-N}{N}\phantom{\rule{1.8em}{0ex}}{c}_{\mathrm{1,1},N}=\frac{2}{N}$$

$$\phantom{\rule{7.2em}{0ex}}{c}_{\mathrm{0,2},N}=\frac{{N}^{2}-3N+2}{{N}^{2}}{\phantom{\rule{1.2em}{0ex}}c}_{\mathrm{1,2}.N}=\frac{6\left(1-N\right)}{{N}^{2}}{\phantom{\rule{1.2em}{0ex}}c}_{\mathrm{2,2},N}=\frac{6}{{N}^{2}}$$

$${c}_{\mathrm{0,3},N}=\frac{{-N}^{3}+{6N}^{2}-11N+6}{{N}^{3}}\phantom{\rule{.3em}{0ex}}{c}_{\mathrm{1,3},N}=\frac{{12N}^{2}-30N+22}{{N}^{3}}{\phantom{\rule{.3em}{0ex}}c}_{\mathrm{2,3},N}=\frac{30\left(1-N\right)}{{N}^{3}}\phantom{\rule{.3em}{0ex}}{c}_{\mathrm{3,3},N}=\frac{20}{{N}^{3}}$$

According to Eq.(1) and Eq.(4), the Tchebichef moments can be written as:

Where, $\tilde{\rho}\left(n,N\right)=\frac{\rho \left(n,N\right)}{{\beta \left(n,N\right)}^{2}}=\frac{N\left(1-\frac{1}{{N}^{2}}\right)\left(1-\frac{{2}^{2}}{{N}^{2}}\right)\cdots \left(1-\frac{{n}^{2}}{{N}^{2}}\right)}{2n+1},$
*β*(*n,N*) is a suitable constant which is independent of *x* and the simplest choice is *β*(*n,N*)=*N ^{n}*.

If the geometric moments of order *p* + *q* of an image *f*(*x*,*y*) are expressed as^{[15]}:

$${m}_{pq}=\sum _{x=0}^{N-1}\sum _{y=0}^{N-1}{x}^{p}{y}^{q}f\left(x,y\right)$$

Then the Tchebichef moments can be simplified as:

Note that Eq. (9) transforms the non-orthogonal geometric moments into the orthogonal Tchebichef moments. It can be seen that a Tchebichef moment depends on the geometric moments up to the same order. The explicit expressions of the Tchebichef moments in terms of geometric moments up to order 2 (for *β*(*n,N*) = *N ^{n}*) are as follows:

$${T}_{20}=\frac{{30m}_{20}+30\left(1-N\right){m}_{10}+5\left(1-N\right)\left(2-N\right){m}_{00}}{\left({N}^{2}-1\right)\left({N}^{2}-2\right)}\phantom{\rule{.2em}{0ex}}{T}_{02}=\frac{{30m}_{02}+30\left(1-N\right){m}_{01}+5\left(1-N\right)\left(2-N\right){m}_{00}}{\left({N}^{2}-1\right)\left({N}^{2}-2\right)}$$

$$\phantom{\rule{5.2em}{0ex}}{T}_{11}=\frac{{36m}_{11}+18\left(1-N\right)\left({m}_{10}+{m}_{01}\right)+9\left(1-{N}^{2}\right){m}_{00}}{{\left({N}^{2}-1\right)}^{2}}$$

In order to construct geometric invariant Tchebichef moments, the center of the image *f*(*x*,*y*) is translated to its centroid (*x*̄,*y*̄) to obtain *f*(*x*+*x*̄,*y*+*y*̄), where $\overline{x}=\frac{{m}_{10}}{{m}_{00}}$ and $\overline{y}=\frac{{m}_{01}}{{m}_{00}}$. The translated image is then rotated by an angle *θ* to obtain *f*(*x*̄+*x*cos*θ*-*y*sin*θ*,*y*̄+*x*sin*θ*+*y*cos*θ*), where
$\theta =\frac{1}{2}{\mathrm{tan}}^{-1}\frac{2{\mu }_{11}}{{\mu }_{20}-{\mu }_{02}}$ and *μ _{nm}* is the central moment defined as^{[15]}:

$${\mu }_{nm}=\sum _{x=0}^{N-1}\sum _{y=0}^{N-1}{\left(x-\overline{x}\right)}^{n}{\left(y-\overline{y}\right)}^{m}f\left(x,y\right)$$

The value of *θ* is limited to -45° ≤ *θ* ≤ 45° in the experiments; an angle in the range of 0° to 360° can be mapped into this interval by examining the signs of the central moments. Finally, the image is scaled by a factor *α* to obtain the normalized image *g*(*x*,*y*), where $\alpha ={\left(\frac{N}{{2m}_{00}}\right)}^{\frac{1}{2}}.$ Notice that the image normalized as above does not necessarily fall inside the domain [0,*N*-1]×[0,*N*-1] required by the Tchebichef moments, so the center of the normalized image *g*(*x*,*y*) is translated to $\left(\frac{N}{2},\frac{N}{2}\right).$ The relationship between *g*(*x*,*y*) and *f*(*x*,*y*) is then:

The geometric moments of the normalized image *g*(*x*,*y*) are computed as:

Note that the geometric moments {$\stackrel{\rightharpoonup}{m}$* _{nm}*} are rotation, scale and translation invariant ^{[15]}. A new set of Tchebichef moments can be formed by replacing the regular geometric moments {*m _{nm}*} with their invariant counterparts {$\stackrel{\rightharpoonup}{m}$* _{nm}*} according to Eq.(9); the resulting set inherits the rotation, scale and translation invariance. So the invariant Tchebichef moments *T*̄* _{pq}* can be expressed as:

According to Eq.(13), the invariant Tchebichef moments can also be written in terms of *f*(*x*,*y*):
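The normalization parameters of this section (centroid, orientation *θ* and scale *α*) can be computed directly from the geometric moments. A minimal illustrative sketch, using `arctan2` to evaluate the ½ tan⁻¹ expression and verifying that *θ* and *α* are unchanged by a translation of the image:

```python
import numpy as np

def normalization_params(f):
    """Centroid (xbar, ybar), orientation theta and scale alpha of image f[x, y]."""
    N = f.shape[0]
    x = np.arange(N)
    X, Y = np.meshgrid(x, x, indexing="ij")   # X[i, j] = i, Y[i, j] = j
    m00 = f.sum()
    xbar = (X * f).sum() / m00
    ybar = (Y * f).sum() / m00
    mu11 = (f * (X - xbar) * (Y - ybar)).sum()
    mu20 = (f * (X - xbar) ** 2).sum()
    mu02 = (f * (Y - ybar) ** 2).sum()
    theta = 0.5 * np.arctan2(2 * mu11, mu20 - mu02)
    alpha = np.sqrt(N / (2 * m00))
    return xbar, ybar, theta, alpha

f = np.zeros((32, 32)); f[8:20, 10:14] = 1.0      # a small rectangle
g = np.zeros((32, 32)); g[12:24, 14:18] = 1.0     # the same rectangle, translated
_, _, th_f, al_f = normalization_params(f)
xb, yb, th_g, al_g = normalization_params(g)
# central moments are translation invariant, so theta and alpha are unchanged
assert abs(th_f - th_g) < 1e-9 and abs(al_f - al_g) < 1e-9
print("ok")
```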

## 3. Methodology of Geometric Invariant Watermarking

#### 3.1 Watermark Embedding

In this section, the proposed geometric invariant watermarking scheme based on invariant Tchebichef moments is described. The watermark is imperceptibly embedded into the invariant Tchebichef moments of the original image, and the embedding intensity is determined by a Just Noticeable Difference model that incorporates the characteristics of the Human Visual System. The embedding steps of the proposed watermarking system are as follows:

**Step 1:** Create and preprocess the watermark. The watermark is generated independently of the original image and has the same size as the original image. Preprocessing the watermark before embedding can enhance its security.

**Step 2:** Embed the watermark. A set of low-order invariant Tchebichef moments *T _{o}*={*T _{o,m1,n1}*, ⋯, *T _{o,mk,nk}*} is constructed from the original image *f*(*x*,*y*), and a corresponding set *T _{w}*={*T _{w,m1,n1}*, ⋯, *T _{w,mk,nk}*} from the watermark *W*. The watermark is embedded by adjusting the invariant Tchebichef moments of *f*(*x*,*y*) with:

Then a set of adjusted invariant Tchebichef moments *T*̂ = {*T*̂* _{m1,n1}*, ⋯, *T*̂* _{mk,nk}*} is obtained. The parameter *a* is the embedding intensity of the watermark, which incorporates the characteristics of the human visual system.

**Step 3:** Perform the inverse Tchebichef transform to retrieve the watermarked image.

#### 3.2 Watermark Detection

The ICA process at the core of the watermark detector is accomplished by the FastICA algorithm ^{[16]}. Assume that *n* observed linear mixtures *x*_{1}, ⋯, *x _{n}* of *n* independent components can be expressed as:

Without loss of generality, both the mixture variables and the independent components can be assumed to be zero-mean, so the model can be written as *X* = *AS*, where ** S** is the original source vector and ** A** is the mixing matrix. Both ** A** and ** S** are unknown; ** X** represents all the observed data, and both ** A** and ** S** must be estimated from ** X**. If the separating matrix ** L** is known, the source signals can be estimated easily by ** S**=** LX**. It is helpful to perform preprocessing such as centering and whitening to simplify the ICA process. The proposed watermark detector proceeds as follows:

**Step1:** Preprocess the test image by centering and whitening. The observed variable *x* is centered by subtracting the mean vector *m*=*E*{*x*} from it, which makes *x* a zero-mean variable; this preprocessing simplifies the ICA algorithm. After estimating the mixing matrix ** A** from the centered data, the estimation can be completed by adding the mean vector of the original source signal back to the centered estimates of the source data. The second preprocessing step is to whiten the observed variables: the variable *x* is transformed linearly so that the new variable *x*̃ is white, i.e., its components are uncorrelated and their variances equal unity. In other words, the covariance matrix of *x*̃ equals the identity matrix: *E*[*x*̃*x*̃^{T}]=*I*. Whitening can be computed by the eigenvalue decomposition of the covariance matrix *E*[*xx*^{T}]=*EDE*^{T}, where ** E** is the orthogonal matrix of eigenvectors of *E*[*xx*^{T}] and ** D**=*diag*(*d*_{1}, ⋯, *d _{n}*) is the diagonal matrix of its eigenvalues. Note that *E*[*xx*^{T}] can be estimated in a standard way from the available samples of *x*.

**Step2:** Perform ICA on the centered and whitened signal, that is, find the separating matrix ** L**:

- 1. Choose an initial (e.g., random) weight vector *L*. Let *L*^{+} = *E*{*yG*(*L*^{T}*y*)}-*E*{*G*′(*L*^{T}*y*)}*L* and *L*=*L*^{+}/∥*L*^{+}∥, where *y* is the centered and whitened data, *E*(·) is the expectation, and *G*(·) is a non-linear function. The following choices of *G*(·) have proved very useful: G_{1}(*u*) = tanh(*a*_{1}*u*), ${G}_{2}\left(u\right)=u\phantom{\rule{.2em}{0ex}}\mathrm{exp}(-\frac{{u}^{2}}{2}).$
- 2. If the difference between successive iterates is less than the threshold, that is, ∣*L*^{+}-*L*∣ < *ε*, the process has converged and the cycle is terminated; otherwise, repeat the update of item 1 until the result converges. The threshold *ε* can be defined by the user; *ε* = 10^{-6} is used in our experiments. If the result has still not converged after 3000 cycles, the process is forced to terminate and it is concluded that there is no independent component in the test image. If there are multiple watermarks in the test image, each extracted watermark must be subtracted before extracting the next one.

**Step3:** Retrieve the watermark with the secret key used in the watermark embedding process.
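The detector's iteration can be illustrated with a minimal one-unit FastICA sketch (our own illustrative implementation, not the FastICA package itself), using *G*(*u*) = tanh(*u*) together with the centering and whitening of Step1:

```python
import numpy as np

rng = np.random.default_rng(0)

def whiten(X):
    """Center X (rows = variables) and whiten via eigen-decomposition of E[xx^T]."""
    Xc = X - X.mean(axis=1, keepdims=True)
    d, E = np.linalg.eigh(Xc @ Xc.T / Xc.shape[1])
    return (E @ np.diag(d ** -0.5) @ E.T) @ Xc

def fastica_one_unit(Y, eps=1e-6, max_iter=3000):
    """One-unit fixed point: L+ = E{y G(L^T y)} - E{G'(L^T y)} L, then normalize."""
    L = rng.standard_normal(Y.shape[0])
    L /= np.linalg.norm(L)
    for _ in range(max_iter):
        u = np.tanh(L @ Y)
        L_new = (Y * u).mean(axis=1) - (1 - u ** 2).mean() * L
        L_new /= np.linalg.norm(L_new)
        if min(np.linalg.norm(L_new - L), np.linalg.norm(L_new + L)) < eps:
            return L_new                   # converged (up to sign)
        L = L_new
    return L                               # forced termination after max_iter cycles

# demo: recover one of two independent non-Gaussian sources from a mixture
S = np.vstack([rng.uniform(-1, 1, 8000), np.sign(rng.standard_normal(8000))])
A = np.array([[1.0, 0.5], [0.3, 1.0]])     # unknown mixing matrix
Y = whiten(A @ S)
est = fastica_one_unit(Y) @ Y
corr = max(abs(np.corrcoef(est, S[0])[0, 1]), abs(np.corrcoef(est, S[1])[0, 1]))
assert corr > 0.9                          # matches one source up to sign/scale
print("ok")
```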

## 4. Computation Aspects of Watermarking System

In this section, the computational aspects of the proposed watermarking system are discussed. In the watermark embedding process, the Tchebichef moments must be computed, and since the invariant Tchebichef moments are the Tchebichef moments of the normalized image, most of the computation time is spent on Tchebichef moment computation. During watermark detection, ICA is performed by the fast FastICA algorithm.

The symmetry property can be used to considerably reduce the time required for computing the Tchebichef moments. The weighted Tchebichef polynomials satisfy the symmetry property:

$${\tilde{t}}_{n}\left(N-1-x\right)={\left(-1\right)}^{n}{\tilde{t}}_{n}\left(x\right)$$

This relation suggests subdividing the domain of an *N*×*N* image (when *N* is even) into four equal parts and performing the computation of the polynomials only in the first quadrant, where $0\le x,y\le (\frac{N}{2}-1).$ If *N* is odd, the image can be zero-padded to make *N* even. So the expression for the Tchebichef moments can be modified with the help of Eq.(18) as:

And the reconstruction formula can be modified to:

The matrix form of representation is very useful especially for fast computation and implementation in MATLAB. In matrix form the Tchebichef moment can be computed using:

Where, ** T** is the Tchebichef moment matrix, $T={\left\{{T}_{i,j}\right\}}_{i,j=0}^{N-1,N-1},$ ** F** is the image matrix, $F={\left\{f\left(i,j\right)\right\}}_{i,j=0}^{N-1,N-1},$ and ** C** is the Tchebichef polynomial matrix, $C={\left\{{t}_{i}\left(j\right)\right\}}_{i,j=0}^{N-1,N-1}.$

The inverse moment transform is:

Set *β*(*n*,*N*)=*ρ*(*n*,*N*) to further simplify the computation. The direct calculation of a two-dimensional Tchebichef moment of order *p* + *q* from Eq. (1) requires $\frac{1}{6}N\left(p+q+2\right)\left(p+q+1\right)\left(4\left(p+q\right)+3N-9\right)$ multiplications and $\frac{1}{6}\left(p+q+2\right)\left(p+q+1\right)\left(2N\left(p+q\right)+{3N}^{2}-6N-9\right)$ additions. Since the term *ρ*̃(*n,N*) needs to be calculated at most once per moment, a Tchebichef moment of order *p* + *q* then requires $\frac{1}{24}\left(p+q+2\right)\left(p+q+1\right)\left({\left(p+q\right)}^{2}+7\left(p+q\right)+24\right)$ multiplications and $\frac{1}{3}{(p+q)}^{3}+{(p+q)}^{2}+\frac{2}{3}(p+q)$ additions.
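The symmetry property and the matrix form can be verified numerically. The sketch below builds the polynomial matrix ** C** under the orthonormal weighting *β*(*n*,*N*) = √*ρ*(*n*,*N*) (a choice that makes ** C** orthogonal), then checks the symmetry relation, a standard matrix-form moment computation ** T** = ** CFC**^{T}, and the exact inverse ** F** = ** C**^{T}** TC**:

```python
import numpy as np
from math import comb, factorial

def tcheb_matrix(N):
    """C[n, x] = t_n(x) / sqrt(rho(n, N)) -- an orthogonal polynomial matrix."""
    C = np.empty((N, N))
    for n in range(N):
        rho = factorial(2 * n) * comb(N + n, 2 * n + 1)
        for x in range(N):
            t = factorial(n) * sum(
                (-1) ** (n - k) * comb(N - 1 - k, n - k) * comb(n + k, n) * comb(x, k)
                for k in range(n + 1))
            C[n, x] = t / np.sqrt(rho)
    return C

N = 8
C = tcheb_matrix(N)
assert np.allclose(C @ C.T, np.eye(N))            # orthonormal rows
for n in range(N):                                 # symmetry: t_n(N-1-x) = (-1)^n t_n(x)
    assert np.allclose(C[n, ::-1], (-1) ** n * C[n])

F = np.random.default_rng(1).random((N, N))        # arbitrary image matrix
T = C @ F @ C.T                                    # forward moment transform
assert np.allclose(C.T @ T @ C, F)                 # exact reconstruction
print("ok")
```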

## 5. Experimental Results

This section presents the experimental results for the proposed watermarking scheme. In our experiments, all the attacks are performed by the popular watermark benchmark Stirmark. The constraint is set to *P _{F}* ≤ 10^{-6}, and experiments have been done on many images, including the 256×256 Lena image. The watermark is produced randomly with the same size as the original image, so that it is independent of the original image. Experimental results show that the proposed watermarking technique based on invariant Tchebichef moments can extract the watermark with good robustness without using the original image. Fig.1 gives the original image, the watermark and the watermarked image. The Normalized Correlation (NC) is used to quantify the similarity between the original watermark *w* and the extracted watermark *w*′, and the Peak Signal to Noise Ratio (PSNR) is used to express the difference between the watermarked image *z*(*x*,*y*) and the original image *f*(*x*,*y*). The higher the NC, the more similar the extracted watermark is to the original watermark. The definitions of NC and PSNR are given below:
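Common forms of these two measures can be implemented as follows (the exact normalization of NC may differ slightly from the paper's definition):

```python
import numpy as np

def nc(w, w_prime):
    """Normalized correlation between the original and extracted watermarks."""
    return float((w * w_prime).sum() /
                 np.sqrt((w ** 2).sum() * (w_prime ** 2).sum()))

def psnr(f, z, peak=255.0):
    """Peak signal-to-noise ratio (dB) between original f and watermarked z."""
    mse = ((f.astype(float) - z.astype(float)) ** 2).mean()
    return float(10 * np.log10(peak ** 2 / mse))

w = np.sign(np.random.default_rng(0).standard_normal((256, 256)))
assert np.isclose(nc(w, w), 1.0)                  # identical watermarks -> NC = 1
f = np.full((8, 8), 100.0)
assert np.isclose(psnr(f, f + 5.0), 10 * np.log10(255 ** 2 / 25))
print("ok")
```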

#### 5.1 Robustness against additive noise

Generally, the lower order Tchebichef moments of an image capture its low spatial frequency features, while the higher order moments capture the high frequency features. Since noise can be seen as a high frequency feature of the image, reconstructing an image from just its lower order moments can remove additive noise.

The model of an image *f*(*x*,*y*) degraded by additive random noise is given by:

Where, *v*(*x*,*y*) represents signal-independent additive random noise uncorrelated with the image. Sources of additive random noise include electronic circuit noise, amplitude quantization noise and faulty switching during the imaging process. In the experiments, Gaussian, salt-and-pepper and speckle noise are added to test the robustness of the proposed watermarking against additive noise. Gaussian noise follows the distribution:

$$p\left(z\right)=\frac{1}{\sqrt{2\pi }\sigma }\mathrm{exp}\left(-\frac{{\left(z-{\mu }^{\prime }\right)}^{2}}{2{\sigma }^{2}}\right)$$

Where, *μ*′ and *σ*^{2} are the mean and variance, respectively. Salt-and-pepper noise affects the image pixels with impulsive noise. The model of speckle noise is given as:

Where, *λ* is uniformly distributed random noise with mean *μ*′ and variance *σ*^{2}. The experimental results listed in Table 1 show that even when the watermarked images are degraded by Gaussian, salt-and-pepper or speckle noise, the detector can still achieve a high NC.
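The three noise types can be generated as follows; the parameter choices are illustrative, and the speckle model follows *z* = *f* + *f*·*λ* with *λ* uniform and zero mean:

```python
import numpy as np

rng = np.random.default_rng(0)

def add_gaussian(img, mu=0.0, sigma=10.0):
    """Additive Gaussian noise with mean mu and variance sigma**2."""
    return img + rng.normal(mu, sigma, img.shape)

def add_salt_pepper(img, density=0.05, low=0.0, high=255.0):
    """Impulsive noise: a fraction `density` of pixels forced to low/high."""
    out = img.copy()
    mask = rng.random(img.shape)
    out[mask < density / 2] = low
    out[mask > 1 - density / 2] = high
    return out

def add_speckle(img, sigma=0.1):
    """Multiplicative speckle: z = f + f * lambda, lambda uniform with zero mean."""
    lam = rng.uniform(-np.sqrt(3) * sigma, np.sqrt(3) * sigma, img.shape)
    return img + img * lam

img = np.full((256, 256), 128.0)
corrupted = add_salt_pepper(img, density=0.05)
frac = np.mean((corrupted == 0.0) | (corrupted == 255.0))
assert 0.03 < frac < 0.07                 # roughly 5% of pixels hit
assert abs(add_speckle(img).mean() - 128.0) < 2.0   # zero-mean multiplicative noise
print("ok")
```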

#### 5.2 Robustness against JPEG compression

JPEG compression attenuates the high frequency components of an image. Tchebichef moments belong to the class of discrete orthogonal moments, so their implementation involves no numerical approximation; moreover, Tchebichef polynomials do not require coordinate space transformations. Since the image can be reconstructed from a partial set of lower order Tchebichef moments, the proposed watermarking technique has good robustness against JPEG compression. Experiments were performed to examine the robustness of the proposed technique to JPEG compression produced by Stirmark with quality factor Q ranging from 90 down to 40. The experimental results listed in Table 2 show good robustness against JPEG compression.

#### 5.3 Robustness against geometric distortions

Experiments were also done to test the robustness of the proposed watermarking against the geometric attacks performed by Stirmark. Fig.2(a) shows the NC under scaling attacks as a function of the scaling factor, and Fig.2(b) gives the NC under rotation attacks as a function of the rotation angle.

#### 5.4 Robustness against other attacks performed by Stirmark

Experiments were also performed to test the robustness of the proposed watermarking against other attacks performed by Stirmark. Fig.3(a) shows the NC after low-pass filtering as a function of the cut-off frequency; the number on the abscissa is the cut-off frequency expressed as a multiple of the maximum frequency of the host image. Fig.3(b) shows the NC after median filtering as a function of the filter length, given on the abscissa. Table 3 lists the NC between the original watermark and the watermark extracted from test images attacked by Stirmark.

Experimental results listed in Table 4 are compared with those of ref.^{[17]} and two commercial watermarking techniques.

Experimental results are also compared with those of ref.^{[18]}, which adopted ICA in the spatial domain. Fig. 4 shows the comparisons under scaling and rotation attacks.

## 6. Conclusions

Tchebichef moments are discrete orthogonal moments and present a number of advantages over moments of continuous orthogonal bases. The implementation of Tchebichef moments does not involve any numerical approximation, since the basis set is orthogonal in the discrete domain of the image coordinate space. A new geometric invariant blind image watermarking method based on algebraic invariants has been proposed with the introduction of the invariant Tchebichef moments. The watermark is generated randomly, independently of the original image, and embedded by modifying the invariant Tchebichef moments of the original image. ICA is utilized by the watermark detector, which can extract the watermarks blindly rather than merely detect them. The mathematical description of the invariant moments has been given and simulations have been performed. The proposed approach withstands rotation, scaling and translation by being invariant to these transformations, which avoids an exhaustive search for the embedded watermark in a complicated multi-dimensional space. The computational aspects of the proposed watermarking system have also been described. Experimental results show that the proposed image watermarking technique is robust against attacks performed by Stirmark, such as geometric distortions, filtering, additive noise and JPEG compression.

## Acknowledgments

This work is partly supported by the National Natural Science Foundation of China (Grant No. 60502027) and the Foundation of the State Key Laboratory of Networking and Switching Technology.

## References and Links

**1. **S. Voloshynovskiy, S. Pereira, T. Pun, J. J. Eggers, and J. K. Su, "Attacks on digital watermarks: classification, estimation-based attacks and benchmarks," IEEE Communications Magazine **8**, 2–10 (2001)

**2. **Choong-Hoon Lee and Heung-Kyu Lee, “Geometric attack resistant watermarking in wavelet transform domain,” OPTICS EXPRESS **13**, 1307–1321 (2005) [CrossRef] [PubMed]

**3. **M. Alghoniemy and A. H. Tewfik, “Geometric invariants in image watermarking,” IEEE Transactions on Image Processing **13**, 145–153 (2004) [CrossRef] [PubMed]

**4. **Y. Xin, S. Liao, and M. Pawlak, “Geometrically robust image watermarking via pseudo-Zernike moments,” IEEE Canadian Conference on Electrical and Computer Engineering **2**, 939–942 (2004)

**5. **Y. Xin, S. Liao, and M. Pawlak, “A multibit geometrically robust image watermark based on Zernike moments,” International Conference on Pattern Recognition **IV**, 861–864 (2004)

**6. **S. Pereira and T. Pun, “Robust template matching for affine resistant image watermarks,” IEEE Transations on Image Processing **9**, 1123–1129 (2000) [CrossRef]

**7. **M. Kutter, “Performance improvement of spread spectrum based image watermarking schemes through M-ary modulation,” Lecture Notes in Computer Science **1728**, 238–250 (1999)

**8. **P. Dong and N. P. Galatsanos, "Affine transform resistant watermarking based on image normalization," IEEE Int. Conf. Image Processing, 480–492 (2002)

**9. **J. O’Ruanaidh and T. Pun, “Rotation, scale and translation invariant spread spectrum digital image watermarking,” Signal Processing **66**, 303–317 (1998) [CrossRef]

**10. **H. S. Kim and H. K. Lee, "Invariant image watermark using Zernike moments," IEEE Trans. Circuits and Systems for Video Technology **13**, 766–775 (2003) [CrossRef]

**11. **M. R. Teague, “Image analysis via the general theory of moments,” J. Opt. Soc. Amer. **70**, 920–930 (1980) [CrossRef]

**12. **Chao Kan and Mandyam D. Srinath, “Invariant character recognition with Zernike and orthogonal Fourier-Mellin moments,” Pattern Recognition **35**, 143–154 (2002) [CrossRef]

**13. **R. Mukundan, S. H. Ong, and P. A. Lee, "Image analysis by Tchebichef moments," IEEE Trans. Image Processing **10**, 1357–1364 (2001) [CrossRef]

**14. **R. Mukundan, “Some computational aspects of discrete orthonormal moments,” IEEE Transactions on Image Processing **13**, 1055–1059 (2004) [CrossRef] [PubMed]

**15. **M. K. Hu, “Visual pattern recognition by moment invariants,” IRE Trans. Information Theory **IT-8**, 179–187 (1962)

**16. **A. Hyvarinen and E. Oja, "Independent component analysis: a tutorial," Notes for International Joint Conference on Neural Networks (IJCNN'99), Washington D.C., http://www.cis.hut.fi/projects/ica/ (1999)

**17. **S. Pereira and T. Pun, "Robust template matching for affine resistant image watermarks," IEEE Transactions on Image Processing **9**, 1123–1129 (2000) [CrossRef]

**18. **D. Yu, F. Sattar, and K.K. Ma, “Watermark detection and extraction using independent component analysis method,” EURASIP Journal on Applied Signal Processing **1**, 92–104 (2002)