
A unified iterative denoising algorithm based on natural image statistical models: derivation and examples

Open Access

Abstract

Many prior models for natural image wavelet coefficients have been proposed over the last two decades. Although most of them belong to the class of Scale Mixture of Gaussian (GSM) models, their analytical forms differ considerably. As a result, Bayesian image denoising algorithms based on these prior models also differ greatly from each other. In this paper, we develop a novel image denoising algorithm by combining the Expectation Maximization (EM) scheme with the properties of the GSM models. The developed algorithm has a simple iterative form and converges quickly. It uses only the derivative information of a probability density function and is suitable for all GSM-type prior models that have an analytical probability density function. The developed algorithm can therefore be viewed as a unified Bayesian image denoising framework. As examples, several classical and recently-proposed prior models for natural image wavelet coefficients are tested and some new results are obtained.

©2008 Optical Society of America

1. Introduction

Prior models play a key role in wavelet-based Bayesian image denoising [1]. Over the last two decades, a number of models have been developed to characterize the statistical behaviors of natural image wavelet coefficients. Widely-used univariate statistical models include the Generalized Laplacian model [2, 3], the Alpha-stable model [4, 5], the exponential model [6], the Bessel K Form density model [7, 8, 9] and mixtures of two distributions [10, 11]. These univariate models are suited to characterizing an empirical statistic of natural images: the filter responses of natural images are always sharply peaked at zero with heavy tails [12]. Recently, in order to describe the joint statistical behavior among natural image wavelet coefficients, a number of multivariate models have also been proposed, such as the multivariate Laplacian model [13, 14, 15, 16], the multivariate exponential model [17], the multivariate Alpha-stable model [18], the multivariate Bessel K Form density model [7], the multivariate scale mixture of Gaussian (GSM) model [19], the multivariate normal inverse Gaussian model [20], and the multivariate Elliptically Contoured Distribution (ECD) model [21].

Based on various prior models such as those mentioned above, many Bayesian image denoising algorithms have been developed [2, 4, 5, 9, 10, 11, 13, 14, 16, 17, 18, 19, 20, 21, 22]. Because these algorithms use different prior models, as well as different Bayesian inference techniques (such as the Minimum Mean-Squared Error (MMSE) estimate or the Maximum a Posteriori (MAP) estimate), they take very different forms. Interestingly, although most existing natural image wavelet coefficient models have markedly different analytical forms, they can be placed in the same class: they essentially all belong to the GSM models [23]. Thus, a natural question is: can we develop a unified denoising framework for these models? Such a unified framework, if available, would be very useful not only for the image denoising application itself but also for the study of natural image statistics. For example, using the unified framework, one can fairly compare the denoising performance of these models, and thus understand which model best characterizes the statistical behaviors of natural image wavelet coefficients, and why. In addition, when a new GSM-type prior model becomes available, one only needs to plug it into the unified denoising framework to use it for image denoising, rather than develop a new estimator based on its specific analytical form.

In this paper, we develop a novel image denoising algorithm by combining the EM scheme with the properties of the Scale Mixture of Gaussian (GSM) models. The developed algorithm provides a unified image denoising framework for the GSM models that have an analytical expression. The algorithm has a simple iterative form and uses only the derivative information of a prior model. When a new prior model becomes available, it can easily be adopted for image denoising with the developed algorithm. As examples, several classical and recently-proposed statistical models of natural image wavelet coefficients are tested in the denoising framework and some new results are obtained.

The outline of the rest of the paper is as follows. Section 2 gives the derivation of the unified image denoising algorithm. Section 3 gives examples that test several classical and recently-proposed statistical models. Section 4 reports experimental results and provides a discussion. Concluding remarks are summarized in Section 5.

2. Unified denoising algorithm: derivation

Assume that the noisy image wavelet coefficients have been partitioned into M small neighborhoods, each containing d coefficients. Let $Y=\{Y_1,\ldots,Y_M\}$ denote the M observed d-dimensional noisy data vectors, where each $Y_i$, $1\le i\le M$, satisfies the model

$$Y_i=X_i+\varepsilon,\tag{1}$$

where $X_i$, $1\le i\le M$, is the corresponding noise-free wavelet coefficient vector and $\varepsilon\sim N(\vec 0,\rho_\varepsilon)$ is additive Gaussian white noise. Suppose that the noise-free wavelet coefficients $X_i$, $1\le i\le M$, are i.i.d. random variables, each having a GSM distribution $X_i\overset{d}{=}\sqrt{z_i}\,U$, where $U\sim N(\vec 0,\rho)$ and $z_i$ is a hidden positive scalar multiplier. Let $Z=\{z_1,\ldots,z_M\}$ with the $z_i$, $1\le i\le M$, statistically independent, so that $P(Z)=\prod_{i=1}^{M}P(z_i)$.
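To make the GSM construction concrete, the following minimal sketch (ours, not from the paper) draws samples $X=\sqrt{z}\,U$ with an exponential mixing density on the hidden multiplier z; this particular choice yields a Laplacian marginal, illustrating the sharp peak and heavy tails that GSM priors are meant to capture.

```python
import numpy as np

rng = np.random.default_rng(0)
M = 100_000
z = rng.exponential(scale=1.0, size=M)   # hidden positive multipliers z_i
U = rng.normal(0.0, 1.0, size=M)         # U ~ N(0, 1)
X = np.sqrt(z) * U                       # GSM samples X_i = sqrt(z_i) U

# Positive excess kurtosis confirms tails heavier than a Gaussian's
# (approximately 3 for the Laplacian marginal produced here).
excess_kurtosis = np.mean(X**4) / np.mean(X**2) ** 2 - 3.0
print(f"excess kurtosis ~ {excess_kurtosis:.2f}")
```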

In [24], J. M. Bioucas-Dias proposed a highly effective image deconvolution algorithm based on the univariate GSM models. The algorithm employed a new generalized Expectation Maximization (EM) scheme, in which the missing variables were the scalar multipliers of the GSM models and the maximization step was replaced with a linear stationary second-order iterative method. In this paper, we use the standard EM scheme to estimate $X=\{X_1,\ldots,X_M\}$ from the observed data $Y=\{Y_1,\ldots,Y_M\}$. Similar to the technique adopted in [24], we set the missing variables to be the scalar multipliers of the GSM models. As we show below, combined with several special properties of the GSM models, the EM scheme leads to a very simple iterative image denoising algorithm that is suitable for both univariate and multivariate GSM models.

The standard EM scheme works iteratively by alternating two steps: the E-step (expectation step) and the M-step (maximization step). In our setting, Z is the missing (or hidden) variable and (Y,Z) is the so-called complete data. The E-step finds the expected value of the log-likelihood $\log P(X,Y,Z)$ given the observed data Y and the current estimate $X^t=\{X_1^t,\ldots,X_M^t\}$,

$$Q(X,X^t)=E_Z\left\{\log P(X,Y,Z)\,|\,Y,X^t\right\}.\tag{2}$$

Using Bayes' rule, we have $P(X,Y,Z)=P(Y|Z,X)P(Z,X)=P(Y|X)P(X|Z)P(Z)$. As a result, Eq. (2) can be rewritten as

$$Q(X,X^t)=\int_Z\left[\log P(Y|X)+\log P(X|Z)+\log P(Z)\right]P(Z|Y,X^t)\,dZ$$
$$=\log P(Y|X)+\int_Z\log P(X|Z)\,P(Z|Y,X^t)\,dZ+C(Z,Y,X^t),$$

where $C(Z,Y,X^t)$ collects the terms involving Z, Y and $X^t$ that do not depend on X. In addition, since

$$P(Z|Y,X^t)=\frac{P(Y|X^t,Z)\,P(X^t|Z)\,P(Z)}{P(Y|X^t)\,P(X^t)}=\frac{P(X^t|Z)\,P(Z)}{P(X^t)}=P(Z|X^t),$$

we get

$$Q(X,X^t)=\log P(Y|X)+\int_Z\log P(X|Z)\,P(Z|X^t)\,dZ+C(Z,Y,X^t).$$

An important property of a GSM model is that it is Gaussian when conditioned on the positive scalar multiplier [19]. Hence, we have $X_i|z_i\sim N(\vec 0,z_i\rho)$, $1\le i\le M$, and

$$P(X|Z)=\prod_{i=1}^{M}P(X_i|z_i)=(2\pi)^{-\frac{dM}{2}}\prod_{i=1}^{M}|z_i\rho|^{-\frac12}\exp\!\left[-\frac12\sum_{i=1}^{M}X_i^{T}(z_i\rho)^{-1}X_i\right].$$

It follows that

$$Q(X,X^t)=\log P(Y|X)-\frac12\sum_{i=1}^{M}X_i^{T}\rho^{-1}X_i\int_{Z}z_i^{-1}P(Z|X^t)\,dZ+C(Z,Y,X^t)$$
$$=\log P(Y|X)-\frac12\sum_{i=1}^{M}X_i^{T}\rho^{-1}X_i\int_{z_i}z_i^{-1}P(z_i|X_i^t)\,dz_i+C(Z,Y,X^t).\tag{3}$$

The M-step of the EM scheme maximizes the expectation Eq. (3) computed in the E-step above. Let $\Omega(X_i^t)=\int_{z_i}z_i^{-1}P(z_i|X_i^t)\,dz_i$. We have

$$X_i^{t+1}=\arg\max_{X_i}Q(X,X^t)=\arg\max_{X_i}\left[\log P(Y|X)-\frac12\sum_{i=1}^{M}X_i^{T}\rho^{-1}X_i\,\Omega(X_i^t)\right]$$
$$=\arg\max_{X_i}\left[-\frac12\sum_{i=1}^{M}(Y_i-X_i)^{T}\rho_\varepsilon^{-1}(Y_i-X_i)-\frac12\sum_{i=1}^{M}X_i^{T}\rho^{-1}X_i\,\Omega(X_i^t)\right]$$
$$=\arg\operatorname{zero}\ \frac{d}{dX_i}\left[\sum_{i=1}^{M}(Y_i-X_i)^{T}\rho_\varepsilon^{-1}(Y_i-X_i)+\sum_{i=1}^{M}X_i^{T}\rho^{-1}X_i\,\Omega(X_i^t)\right]$$
$$=\arg\operatorname{zero}\left[-2\rho_\varepsilon^{-1}(Y_i-X_i)+2\,\Omega(X_i^t)\,\rho^{-1}X_i\right].$$

Hence, we get

$$\Omega(X_i^t)\,\rho^{-1}X_i^{t+1}=\rho_\varepsilon^{-1}\left(Y_i-X_i^{t+1}\right),$$

or

$$X_i^{t+1}=\left(\Omega(X_i^t)\,\rho^{-1}+\rho_\varepsilon^{-1}\right)^{-1}\rho_\varepsilon^{-1}Y_i.\tag{4}$$

Another important property of a GSM model is that its probability density function can be expressed as a function of the quadratic form of the random vector [21]. This property means $P(X_i^t)=f(r)$ if we let $r=(X_i^t)^{T}\rho^{-1}X_i^t$. It follows that $\Omega(X_i^t)=-\frac{2}{f(r)}\frac{df}{dr}$ (see the Appendix). Now we can rewrite Eq. (4) as

$$X_i^{t+1}=\left(-\frac{2}{f(r)}\frac{df}{dr}\,\rho^{-1}+\rho_\varepsilon^{-1}\right)^{-1}\rho_\varepsilon^{-1}Y_i$$
$$=\left(-2\frac{d}{dr}\log f(r)\,\rho^{-1}+\rho_\varepsilon^{-1}\right)^{-1}\rho_\varepsilon^{-1}Y_i.\tag{5}$$

So far, we have obtained an iterative denoising formula, Eq. (5), which uses only the derivative information of a GSM model, namely $\frac{d}{dr}\log f(r)$. In the derivation above, we utilized two important properties of the GSM models. The first is that a GSM random variable is Gaussian when conditioned on the scalar multiplier; this property was used in the E-step. The second is that the probability density function of a GSM random variable can be written as a function of its quadratic form; this holds because the GSM family is in fact a special subfamily of the Elliptically Contoured Distribution Family (ECDF) [21]. This property was used in the M-step. Considering that most existing models of natural image wavelet coefficients belong to the GSM models, Eq. (5) hence provides a unified framework for the image denoising application. To verify Eq. (5), we use it to derive two classical denoising algorithms: one is the Wiener filter and the other is the Bishrinkage function proposed in [13]. Letting t→+∞, Eq. (5) reduces to

$$X_i=\left(-2\frac{d}{dr}\log f(r)\,\rho^{-1}+\rho_\varepsilon^{-1}\right)^{-1}\rho_\varepsilon^{-1}Y_i.\tag{6}$$

We first check the Wiener filtering case. Assume $X_i$ obeys the Gaussian distribution, namely $X_i\sim N(\vec 0,\rho)$. Since $P(X_i)=f(r)=(2\pi)^{-\frac d2}|\rho|^{-\frac12}\exp(-\frac r2)$, we have $\frac{d}{dr}\log f(r)=-\frac12$. Equation (6) hence becomes

$$X_i=\left(\rho^{-1}+\rho_\varepsilon^{-1}\right)^{-1}\rho_\varepsilon^{-1}Y_i,\tag{7}$$

which is just the classical Wiener filtering for Gaussian data contaminated by Gaussian white noise.
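As a quick numerical sanity check (our own, not part of the paper), the scalar case of Eq. (6) with the Gaussian derivative $-\frac12$ should coincide with the Wiener gain $\sigma^2/(\sigma^2+\sigma_n^2)$:

```python
import numpy as np

sigma2, sigma_n2, y = 4.0, 1.0, 2.5   # prior variance, noise variance, observation
dlogf_dr = -0.5                       # Gaussian case: d/dr log f(r) = -1/2

# Eq. (6) specialized to d = 1: (-2 g / sigma2 + 1/sigma_n2)^(-1) * y / sigma_n2
x_hat = (1.0 / (-2.0 * dlogf_dr / sigma2 + 1.0 / sigma_n2)) * (y / sigma_n2)
x_wiener = sigma2 / (sigma2 + sigma_n2) * y

assert np.isclose(x_hat, x_wiener)    # both equal 2.0 here
```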

In [13], Sendur and Selesnick developed a highly effective denoising algorithm referred to as the Bishrinkage. To develop the algorithm, they used a bivariate model

$$P(X_i)=\frac{3}{2\pi\sigma^{2}}\exp\!\left(-\frac{\sqrt3}{\sigma}\sqrt{x_1^{2}+x_2^{2}}\right)=f(r)=\frac{3}{2\pi\sigma^{2}}\exp\!\left(-\sqrt3\,r^{\frac12}\right),\tag{8}$$

where $X_i=(x_1,x_2)^{T}$ and $\rho=\begin{pmatrix}\sigma^{2}&0\\0&\sigma^{2}\end{pmatrix}$.

The Bishrinkage function in [13] was derived from a MAP estimator. It is easy to show that we can also obtain the Bishrinkage function using Eq. (6). Now we have $\frac{d}{dr}\log f(r)=-\frac{\sqrt3}{2}r^{-\frac12}$. Substituting it into Eq. (6), we obtain the Bishrinkage function

$$\hat{x}_1=\frac{\left(\sqrt{y_1^{2}+y_2^{2}}-\frac{\sqrt3\,\sigma_n^{2}}{\sigma}\right)_{+}}{\sqrt{y_1^{2}+y_2^{2}}}\,y_1,\tag{9}$$

where $(\cdot)_{+}$ is defined as $(x)_{+}=0$ if $x<0$ and $(x)_{+}=x$ otherwise.
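For reference, a direct transcription of Eq. (9) (a sketch; the function and variable names are ours, and y2 plays the role of the parent coefficient, as in [13]):

```python
import numpy as np

def bishrink(y1, y2, sigma, sigma_n):
    """Bivariate shrinkage of Eq. (9): shrink y1 using its parent y2."""
    mag = np.sqrt(y1**2 + y2**2)
    gain = np.maximum(mag - np.sqrt(3.0) * sigma_n**2 / sigma, 0.0)  # (.)_+
    return gain / np.maximum(mag, 1e-12) * y1  # guard against mag = 0

print(bishrink(np.array([3.0]), np.array([4.0]), sigma=1.0, sigma_n=1.0))
```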

The bivariate model Eq. (8) can be extended to multivariate cases, and the resulting multivariate models belong to the Elliptically Contoured Distribution Family (ECDF) [21]. In [21], we developed an effective image denoising algorithm using the Maximum a Posteriori (MAP) estimator based on these extended multivariate models. Interestingly, the algorithm developed in [21] has the same form as Eq. (6), which is the limit case of Eq. (5). Of course, it is not surprising that the EM scheme and the MAP estimator reach the same result, once we realize that the EM scheme is essentially a method for solving MAP (or ML) problems. We would like to emphasize that the iterative formula Eq. (5) converges by virtue of the EM scheme itself. Hence, the derivation above in fact provides an explanation of the convergence of the algorithm developed in [21].

Next, we briefly discuss the numerical implementation of the iterative formula Eq. (5); for more details please see [21]. Let S be the symmetric square root of the positive definite matrix $\rho_\varepsilon$ (i.e., $\rho_\varepsilon=SS^{T}$), and let $\{Q,\Lambda\}$ be the eigenvector/eigenvalue decomposition of the matrix $S^{-1}\rho S^{-T}$, namely $Q\Lambda Q^{T}=S^{-1}\rho S^{-T}$ (note that $Q^{T}=Q^{-1}$). It follows that

$$S^{T}\rho^{-1}S=\left(S^{-1}\rho S^{-T}\right)^{-1}=\left(Q\Lambda Q^{T}\right)^{-1}=Q\Lambda^{-1}Q^{T}.\tag{10}$$

Hence we can rewrite Eq. (5) as

$$X_i^{t+1}=SQ\left(I_d-2\frac{d}{dr}\log f(r)\,\Lambda^{-1}\right)^{-1}(SQ)^{-1}Y_i$$
$$=SQ\left[\frac{\lambda_i}{\lambda_i-2\frac{d}{dr}\log f(r)}\right]_{1}^{d}(SQ)^{-1}Y_i,\tag{11}$$

where $\left[\frac{\lambda_i}{\lambda_i-2\frac{d}{dr}\log f(r)}\right]_{1}^{d}$ denotes the diagonal matrix $\mathrm{diag}\!\left[\frac{\lambda_1}{\lambda_1-2\frac{d}{dr}\log f(r)},\ldots,\frac{\lambda_d}{\lambda_d-2\frac{d}{dr}\log f(r)}\right]$, and $\lambda_i$, $1\le i\le d$, are the diagonal elements of $\Lambda$, namely $\Lambda=\mathrm{diag}(\lambda_1,\ldots,\lambda_d)$. In addition, using S, Q and $\Lambda$, we can write r as

$$r=(X_i^t)^{T}\rho^{-1}X_i^t=(X_i^t)^{T}S^{-T}S^{T}\rho^{-1}SS^{-1}X_i^t=(X_i^t)^{T}S^{-T}Q\Lambda^{-1}Q^{T}S^{-1}X_i^t$$
$$=\left(Q^{T}S^{-1}X_i^t\right)^{T}\Lambda^{-1}\left(Q^{T}S^{-1}X_i^t\right)=\sum_{i=1}^{d}\frac{1}{\lambda_i}V_i^{2},\tag{12}$$

where $V_i$, $1\le i\le d$, denotes the i-th element of the d-dimensional vector $Q^{T}S^{-1}X_i^t$. Now we can obtain the estimate of $X_i$, $1\le i\le M$, through the iterative formula Eq. (11).
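To make the implementation recipe above concrete, here is a minimal numpy sketch (ours, written under the assumptions of this section; `em_iteration` and `dlogf_dr` are our own names) of one pass of Eq. (11), with r computed as in Eq. (12):

```python
import numpy as np
from scipy.linalg import sqrtm

def em_iteration(Y, X_t, rho, rho_eps, dlogf_dr):
    """One pass of Eq. (11) over M neighborhood vectors (rows of Y, X_t)."""
    S = np.real(sqrtm(rho_eps))                     # symmetric square root of rho_eps
    S_inv = np.linalg.inv(S)
    lam, Q = np.linalg.eigh(S_inv @ rho @ S_inv.T)  # Q Lam Q^T = S^-1 rho S^-T
    T = Q.T @ S_inv                                 # note (SQ)^-1 = Q^T S^-1
    V = X_t @ T.T                                   # rows V = Q^T S^-1 X_i^t
    r = np.sum(V**2 / lam, axis=1)                  # Eq. (12)
    g = dlogf_dr(r)[:, None]                        # d/dr log f(r), one value per row
    W = Y @ T.T                                     # rows (SQ)^-1 Y_i
    X_new = W * (lam / (lam - 2.0 * g))             # diagonal gains of Eq. (11)
    return X_new @ (S @ Q).T                        # back through SQ
```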

3. Unified denoising algorithm: examples

As mentioned above, most existing statistical models for natural image wavelet coefficients belong to the GSM models. In this section, we will test several classical and recently-proposed GSM prior models for the image denoising application using the unified framework developed in Section 2. According to the iterative formula Eq. (11), we only need to write a GSM prior model as a function of the quadratic form and compute the derivative of its logarithm with respect to r. We summarize the unified denoising algorithm as follows.

Unified Denoising Algorithm:

  1. Write the probability density function of a GSM model as a function of the quadratic form, $P(X)=C\cdot f(r)$, where $r=X^{T}\rho^{-1}X$ and C is a constant;
  2. Compute the derivative of $\log f(r)$ with respect to r, $\frac{d}{dr}\log f(r)$;
  3. Obtain the estimate using the iterative formula Eq. (11) (a sketch of this loop is given below).
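A hedged sketch of the complete loop, reusing `em_iteration` from the Section 2 sketch; the zero initialization follows Section 4, while the small floor on r is our own guard, since several of the derivatives in this section diverge at r = 0:

```python
import numpy as np

def unified_denoise(Y, rho, rho_eps, dlogf_dr, n_iter=5, r_floor=1e-8):
    """Run the unified iterative algorithm, Eq. (11), for n_iter passes."""
    g = lambda r: dlogf_dr(np.maximum(r, r_floor))  # keep r away from zero
    X = np.zeros_like(Y)                            # zero initialization (Section 4)
    for _ in range(n_iter):                         # 4-5 passes suffice (Section 4)
        X = em_iteration(Y, X, rho, rho_eps, g)
    return X
```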

3.1. Examples for univariate prior models

In this subsection, we focus on the usage of several univariate models (namely, d=1). Note that in this case we have $r=x^{2}/\sigma^{2}$.

1). Univariate Gaussian Model

Although the 1-D Gaussian model is not suitable for characterizing the heavy-tailed marginal statistical behavior of natural image wavelet coefficients, we still test this model for analysis and comparison purposes. The univariate Gaussian model is of the form

$$P(x)=(2\pi)^{-\frac12}\sigma^{-1}\exp\!\left(-\frac r2\right)=C\cdot f_G(r),\tag{13}$$

where $f_G(r)=\exp(-\frac r2)$. Obviously, we have

$$\frac{d}{dr}\log f_G(r)=-\frac12.\tag{14}$$

2). Univariate Generalized Laplacian Model

To model the heavy-tailed property of natural image wavelet coefficients, a widely-used univariate model is the two-parameter Generalized Laplacian (GL) model of the form [2, 3]

$$P(x)=\frac{1}{\Psi(s,p)}\exp\!\left(-\left|\frac xs\right|^{p}\right),\tag{15}$$

where $\Psi(s,p)=\frac{2s}{p}\Gamma\!\left(\frac1p\right)$ is a normalization term and $\Gamma(x)=\int_0^{\infty}t^{x-1}e^{-t}\,dt$ is the Gamma function.

The GL model is zero-mean and symmetric, and can capture the large-kurtosis property of the 1-D marginal statistics of natural image wavelet coefficients [2, 3]. Typical values of p for natural image wavelet coefficients lie in the range [0.5, 1.0], corresponding to kurtosis values in the range [6, 25.2] [25]. The parameters {s, p} are directly related to the second and fourth moments, $\sigma^2$ and $m_4$, of the random variable x. Specifically, the second and fourth moments of a generalized Laplacian signal corrupted by additive Gaussian white noise are [2]

$$\sigma^{2}=\sigma_n^{2}+\frac{s^{2}\,\Gamma(3/p)}{\Gamma(1/p)},\qquad m_4=3\sigma_n^{4}+\frac{6\sigma_n^{2}s^{2}\,\Gamma(3/p)}{\Gamma(1/p)}+\frac{s^{4}\,\Gamma(5/p)}{\Gamma(1/p)}.\tag{16}$$

Using Eq. (16), it is convenient to compute {s, p} from the noisy wavelet coefficients.
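One way to carry out this computation (our own sketch, assuming scipy is available) is to eliminate s from Eq. (16) and solve a one-dimensional root-finding problem for p; the bracket below assumes the implied kurtosis ratio falls in the usual natural-image range:

```python
import numpy as np
from scipy.special import gammaln
from scipy.optimize import brentq

def fit_gl(sigma2_noisy, m4_noisy, sigma_n2):
    """Recover {s, p} of the GL model from noisy moments via Eq. (16)."""
    c = sigma2_noisy - sigma_n2                             # = s^2 Gamma(3/p)/Gamma(1/p)
    d = m4_noisy - 3.0 * sigma_n2**2 - 6.0 * sigma_n2 * c   # = s^4 Gamma(5/p)/Gamma(1/p)

    def kurt(p):  # Gamma(5/p) Gamma(1/p) / Gamma(3/p)^2, decreasing in p
        return np.exp(gammaln(5.0 / p) + gammaln(1.0 / p) - 2.0 * gammaln(3.0 / p))

    p = brentq(lambda p: kurt(p) - d / c**2, 0.2, 2.0)      # assumes d/c^2 is in kurt's range
    s = np.sqrt(c * np.exp(gammaln(1.0 / p) - gammaln(3.0 / p)))
    return s, p
```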

In order to use the iterative formula Eq. (11), we rewrite Eq. (15) as

$$P(x)=C\cdot\exp\!\left(-\left|\frac xs\right|^{p}\right)=C\cdot\exp\!\left[-\left(\frac{x^{2}}{s^{2}}\right)^{\frac p2}\right]=C\cdot\exp\!\left[-\left(\frac{x^{2}}{\sigma^{2}}\right)^{\frac p2}\left(\frac{\sigma^{2}}{s^{2}}\right)^{\frac p2}\right]$$
$$=C\cdot\exp\!\left[-\left(\frac{\sigma}{s}\right)^{p}r^{\frac p2}\right]=C\cdot\exp\!\left(-\alpha r^{\beta}\right)=C\cdot f_{GL}(r),\tag{17}$$

where $\alpha=\left(\frac{\sigma}{s}\right)^{p}$, $\beta=\frac p2$ and $f_{GL}(r)=\exp(-\alpha r^{\beta})$. Obviously, we have

$$\frac{d}{dr}\log f_{GL}(r)=-\alpha\beta\,r^{\beta-1}.\tag{18}$$

3). Univariate Bessel K Form Model

For p>0 and s>0, the probability density function of the univariate Bessel K Form model can be expressed as [7, 8]

$$P(x)=\frac{1}{\sqrt{\pi}\,\Gamma(p)}\left(\frac s2\right)^{-\frac p2-\frac14}\left|\frac x2\right|^{p-\frac12}K_{p-\frac12}\!\left(\sqrt{\frac2s}\,|x|\right),\tag{19}$$

where K ν is the modified Bessel function of the second kind defined as

$$K_v(zx)=\frac{\Gamma\!\left(v+\frac12\right)(2x)^{v}}{\sqrt{\pi}\,z^{v}}\int_0^{\infty}\frac{\cos(zt)\,dt}{\left(t^{2}+x^{2}\right)^{v+\frac12}}\qquad\left(\mathrm{Re}(v)>-\frac12,\ z>0,\ |\arg x|<\frac{\pi}{2}\right);$$

p and s are the shape and scale parameters, respectively.

The probability density function of the univariate Bessel K Form model is unimodal and symmetric around the mode. When p<1, P(x) is sharply peaked and has heavy tails. The parameters {s, p} are directly related to the second and fourth cumulants, $\sigma^2$ and $\kappa$, of the random variable x. Specifically, we have $\sigma^{2}=ps$ and $\kappa=\frac3p+3$ [8, 9]. Since adding a Gaussian process to a zero-mean process only modifies its variance but not the other cumulants, we can obtain the estimate of the parameters {s, p} from the noisy wavelet coefficients using the relation

$$\tilde{\sigma}^{2}=ps+\sigma_n^{2},\qquad\tilde{\kappa}=\frac3p+3,\tag{20}$$

where $\tilde{\sigma}^{2}$ and $\tilde{\kappa}$ denote the sample variance and fourth cumulant of the noisy wavelet coefficients, respectively. Using the relation $s=\frac{\sigma^{2}}{p}$, we can rewrite Eq. (19) as

$$P(x)=C\cdot r^{\frac p2-\frac14}K_{p-\frac12}\!\left(\sqrt{2pr}\right)=C\cdot f_{BK}(r),\tag{21}$$

where $f_{BK}(r)=r^{\frac p2-\frac14}K_{p-\frac12}(\sqrt{2pr})$. Since $\frac{d}{dz}\left[z^{v}K_v(z)\right]=-z^{v}K_{v-1}(z)$ [26], we get

$$\frac{d}{dz}K_v(z)=\frac{d}{dz}\!\left[\frac{z^{v}K_v(z)}{z^{v}}\right]=-K_{v-1}(z)-v\,z^{-1}K_v(z).\tag{22}$$

Now, we have

$$\frac{d}{dr}\log f_{BK}(r)=\left(\frac p2-\frac14\right)\frac1r+\frac{\frac{d}{dr}K_{p-\frac12}\!\left(\sqrt{2pr}\right)}{K_{p-\frac12}\!\left(\sqrt{2pr}\right)}=-\frac{p}{\sqrt{2pr}}\,\frac{K_{p-\frac32}\!\left(\sqrt{2pr}\right)}{K_{p-\frac12}\!\left(\sqrt{2pr}\right)}.\tag{23}$$

4). Asymptotic Univariate Bessel K Form Model

Although the Bessel K Form model Eq. (19) describes the marginal statistical behavior of natural image wavelet coefficients very well [8, 9], its functional form is not easy to work with in some applications. The Bessel K Form model with parameters p and s can be approximated well by the following function, which we call the asymptotic Bessel K Form model [8]:

$$P(x)=\frac{1}{\Gamma(p)}\left(\frac2s\right)^{\frac p2}|x|^{p-1}\exp\!\left(-\sqrt{\frac2s}\,|x|\right).\tag{24}$$

For denoising applications, parameters of the asymptotic Bessel K Form model can be estimated using Eq. (20) directly from the noisy wavelet coefficients. Now we can rewrite Eq. (24) as

$$P(x)=\frac{1}{\Gamma(p)}\left(\frac2s\right)^{\frac p2}\left(x^{2}\right)^{\frac{p-1}{2}}\exp\!\left(-\sqrt{\frac2s}\left(x^{2}\right)^{\frac12}\right)$$
$$=\frac{1}{\Gamma(p)}\left(\frac2s\right)^{\frac p2}\sigma^{p-1}\left(\frac{x^{2}}{\sigma^{2}}\right)^{\frac{p-1}{2}}\exp\!\left(-\sqrt{\frac{2\sigma^{2}}{s}}\left(\frac{x^{2}}{\sigma^{2}}\right)^{\frac12}\right)$$
$$=C\cdot r^{\frac{p-1}{2}}\exp\!\left(-\sqrt{\frac{2\sigma^{2}}{s}}\,r^{\frac12}\right).\tag{25}$$

Since σ2=ps, we get

$$P(x)=C\cdot r^{\frac{p-1}{2}}\exp\!\left(-\sqrt{2pr}\right)=C\cdot f_{AABK}(r),\tag{26}$$

where $f_{AABK}(r)=r^{\frac{p-1}{2}}\exp(-\sqrt{2pr})$. Obviously, we have

$$\frac{d}{dr}\log f_{AABK}(r)=\frac{p-1}{2r}-\sqrt{\frac{p}{2r}}.\tag{27}$$

5). Laplacian Model (or Bi-exponential Model)

The probability density function of the Laplacian model is of the form

$$P(x)=\frac1p\exp\!\left(-\frac{|x|}{s}\right),\tag{28}$$

where p=2s is a normalization constant. The tails of the Laplacian model are intermediate in heaviness between those of the Student t-distribution and the Gaussian distribution. Since, for the Laplacian model, $\sigma^{2}=\mathrm{Var}(x)=2s^{2}$, we have

$$P(x)=\frac1p\exp\!\left(-\sqrt{2r}\right)=C\cdot f_{BE}(r),\tag{29}$$

where $f_{BE}(r)=\exp(-\sqrt{2r})$. Obviously, we have

$$\frac{d}{dr}\log f_{BE}(r)=-\frac{1}{\sqrt{2r}}.\tag{30}$$
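Collecting the univariate examples, the derivatives above can be written as plain functions (a sketch with our own names; parametrized entries must have their parameters bound, e.g. with functools.partial, before being passed to the unified loop sketched earlier):

```python
import numpy as np
from scipy.special import kv  # modified Bessel function of the second kind

dlogf_univariate = {
    # Gaussian, Eq. (14)
    "gaussian": lambda r: -0.5 * np.ones_like(r),
    # Generalized Laplacian, Eq. (18): alpha = (sigma/s)^p, beta = p/2
    "gen_laplacian": lambda r, alpha, beta: -alpha * beta * r ** (beta - 1.0),
    # Bessel K Form, Eq. (23)
    "bessel_k": lambda r, p: -p / np.sqrt(2.0 * p * r)
    * kv(p - 1.5, np.sqrt(2.0 * p * r)) / kv(p - 0.5, np.sqrt(2.0 * p * r)),
    # Asymptotic Bessel K Form, Eq. (27)
    "asy_bessel_k": lambda r, p: (p - 1.0) / (2.0 * r) - np.sqrt(p / (2.0 * r)),
    # Laplacian (bi-exponential), Eq. (30)
    "laplacian": lambda r: -1.0 / np.sqrt(2.0 * r),
}
```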

3.2. Examples for multivariate prior models

It is well known that a residual dependency structure always remains among wavelet coefficients, even though the orthonormal wavelet transform decorrelates them fairly well. Recently, some multivariate models have been proposed to characterize the statistical dependence among natural image wavelet coefficients [7, 13, 15, 16, 17, 18, 19, 20, 21]. Generally, image denoising algorithms based on multivariate models produce better denoising performance than those based on univariate models. In this subsection, we focus on applying those multivariate GSM models that have an analytical expression to the image denoising application under the unified denoising framework.

1). Multivariate Gaussian Model

The multivariate Gaussian model is of the form

$$P(X)=(2\pi)^{-\frac d2}|\rho|^{-\frac12}\exp\!\left[-\frac12X^{T}\rho^{-1}X\right]=C\cdot f_G(r),\tag{31}$$

where $f_G(r)=\exp(-\frac r2)$. As in the univariate Gaussian case, we have

$$\frac{d}{dr}\log f_G(r)=-\frac12.\tag{32}$$

2). Multivariate Laplacian Model

The multivariate Laplacian model is of the form

$$P(X)=\frac{2\,(2\pi)^{-\frac d2}}{\lambda}\,\frac{K_{\frac d2-1}\!\left(\sqrt{\frac2\lambda\,q(X)}\right)}{\left(\sqrt{\frac\lambda2\,q(X)}\right)^{\frac d2-1}},\tag{33}$$

where $q(X)=\lambda X^{T}\rho^{-1}X$. The model is useful for representing multivariate, sparsely distributed data with mutually dependent components [15]. Equation (33) can be rewritten as

$$P(X)=(2\pi)^{-\frac d2}\,2^{\frac12+\frac d4}\,\lambda^{-\frac d2}\,K_{\frac d2-1}\!\left(\sqrt{2r}\right)r^{-\left(\frac d4-\frac12\right)}=C\cdot K_{\frac d2-1}\!\left(\sqrt{2r}\right)r^{-\left(\frac d4-\frac12\right)}=C\cdot f_{ML}(r),\tag{34}$$

where $f_{ML}(r)=K_{\frac d2-1}(\sqrt{2r})\,r^{-\left(\frac d4-\frac12\right)}$. Using Eq. (22), we can obtain

$$\frac{d}{dr}\log f_{ML}(r)=\frac{d}{dr}\log K_{\frac d2-1}\!\left(\sqrt{2r}\right)-\left(\frac d4-\frac12\right)\frac1r=-\frac{K_{\frac d2-2}\!\left(\sqrt{2r}\right)}{\sqrt{2r}\,K_{\frac d2-1}\!\left(\sqrt{2r}\right)}-\left(\frac d2-1\right)\frac1r.\tag{35}$$

3). Multivariate Elliptically Contoured Distribution Model

In [21], we extended a generalized version of the bivariate model Eq. (8) to multivariate cases and obtained several new multivariate models that belong to the Elliptically Contoured Distribution Family (ECDF). The generalized bivariate model is of the form

$$P(X)=\frac{3}{2\pi}|\rho|^{-\frac12}\exp\!\left(-\sqrt3\,r^{\frac12}\right)=C\cdot f_{ECDF\_2}(r),\tag{36}$$

where $X=(x_1,x_2)^{T}$. Obviously, we have

$$\frac{d}{dr}\log f_{ECDF\_2}(r)=-\frac{\sqrt3}{2}\,r^{-\frac12}.\tag{37}$$

The extended 4-D model is of the form

$$P(X)=\frac{3\sqrt3}{4\pi^{2}}|\rho|^{-\frac12}\,r^{-\frac12}\exp\!\left(-\sqrt3\,r^{\frac12}\right)=C\cdot f_{ECDF\_4}(r),\tag{38}$$

where X=(x 1, x 2, x 3, x 4)T. Similarly, we have

$$\frac{d}{dr}\log f_{ECDF\_4}(r)=-\frac{1}{2r}-\frac{\sqrt3}{2}\,r^{-\frac12}.\tag{39}$$

The extended 10-D model is of the form

$$P(X)=\frac{9\sqrt3}{32\pi^{5}}|\rho|^{-\frac12}\left(5r^{-\frac72}+5\sqrt3\,r^{-3}+6r^{-\frac52}+\sqrt3\,r^{-2}\right)\exp\!\left(-\sqrt3\,r^{\frac12}\right)=C\cdot f_{ECDF\_10}(r),\tag{40}$$

where X=(x 1, …, x 10)T. Again, we have

$$\frac{d}{dr}\log f_{ECDF\_10}(r)=\frac{-\frac32r-5\sqrt3\,r^{\frac12}-\frac{45}{2}-\frac{35\sqrt3}{2}\,r^{-\frac12}-\frac{35}{2}\,r^{-1}}{\sqrt3\,r^{\frac32}+6r+5\sqrt3\,r^{\frac12}+5}.\tag{41}$$

4). Multivariate Exponential Model

Although the Elliptically Contoured Distribution models developed in [21] are highly effective for the image denoising application, how they fit the empirical statistics of the wavelet coefficients of natural images remains unknown. In [17], we built several new multivariate models that not only match well with the observed statistics of the wavelet coefficients of natural images, but also have analytical forms simpler than those in [21]. These models are of exponential forms and their parameters were determined by directly fitting the 1-D histogram of the quadratic form of wavelet coefficients.

The exponential models can be expressed as [17]

$$P(X)=\exp\!\left(a_1-a_2r^{a_3}\right)=C\cdot\exp\!\left(-a_2r^{a_3}\right)=C\cdot f_{MEM\_d}(r),\tag{42}$$

where $X=(x_1,\ldots,x_d)^{T}$, $C=e^{a_1}$ and $f_{MEM\_d}(r)=\exp(-a_2r^{a_3})$. For the 2-D case, $a_1=-2.1$, $a_2=6.8$ and $a_3=0.17$; for the 4-D case, $a_1=-3.2$, $a_2=6.3$ and $a_3=0.22$; for the 9-D case, $a_1=-4.0$, $a_2=5.6$ and $a_3=0.26$; for the 10-D case, $a_1=-4.5$, $a_2=5.5$ and $a_3=0.3$. From Eq. (42), we obtain

$$\frac{d}{dr}\log f_{MEM\_d}(r)=-a_2a_3\,r^{a_3-1}.\tag{43}$$
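As with the univariate table, the multivariate derivatives can be written as small functions (our sketch; d is the neighborhood size, kv is scipy's modified Bessel function of the second kind, and the 10-D ECD case of Eq. (41) is omitted for brevity):

```python
import numpy as np
from scipy.special import kv

def dlogf_mlaplacian(r, d):
    """Multivariate Laplacian, Eq. (35)."""
    z = np.sqrt(2.0 * r)
    return -kv(d / 2.0 - 2.0, z) / (z * kv(d / 2.0 - 1.0, z)) - (d / 2.0 - 1.0) / r

def dlogf_ecdf2(r):
    """2-D Elliptically Contoured Distribution model, Eq. (37)."""
    return -0.5 * np.sqrt(3.0 / r)

def dlogf_ecdf4(r):
    """4-D Elliptically Contoured Distribution model, Eq. (39)."""
    return -0.5 / r - 0.5 * np.sqrt(3.0 / r)

def dlogf_mem(r, a2, a3):
    """Multivariate exponential model, Eq. (43), e.g. a2, a3 = 6.8, 0.17 for d = 2."""
    return -a2 * a3 * r ** (a3 - 1.0)
```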

4. Experiment results and discussion

In this section, we will test and compare the performance of prior models demonstrated in Section 3 for the image denoising application under the unified denoising framework.

We use several standard test images, namely Barbara, Lena, Boat and Pepper, for our experiments. These images are available at Portilla's homepage http://www.io.csic.es/PagsPers/JPortilla/denoise/software/index.htm. The first three images are of size 512×512, and the last one is of size 256×256. To study the dependency of our denoising algorithm on different noise levels, the test images are contaminated by additive Gaussian white noise with standard deviations 10, 20 and 30. The denoising performance is evaluated both in terms of PSNR and visually. A 4-scale orthonormal wavelet transform with the Symlet-8 filter is used. The PSNRs of the restored images over the full range of input noise levels are listed in Table 1 and Table 2. In both tables, 1×1+1, 3×1+1, 3×3 and 3×3+1 denote the neighborhood structures used; for example, 3×3+1 denotes a neighborhood structure containing a 3×3 local window plus the parent coefficient of the central coefficient of the window. For comparison, we also list in both tables the PSNRs of several other image denoising algorithms, including the hard-threshold procedure (3σn) (Hard Threshold), the MMSE estimator based on the GL model proposed by Simoncelli et al. (GL_MMSE) [2], the GSM-based MMSE algorithm developed by Portilla et al. (GSM_MMSE) [19], and Wiener filtering in the wavelet domain (Wiener).

In our experiments, we use the Successive Substitution Method for the iterative formula Eq. (11), and the initialization is set to zero (for more details about the Successive Substitution Method please see the Appendix in [21]). Figures 1(a–d) display the PSNR of the solution versus the number of iterations for the Pepper image at different noise levels (20 and 30) when employing the asymptotic univariate Bessel K Form model and the multivariate exponential model (using a 3×3 neighborhood structure), respectively. Although the Successive Substitution Method has only linear convergence, these examples clearly show that the unified iterative algorithm converges quickly. From a practical viewpoint, four or five iterations are enough to reach a stable solution. For the Pepper image, when using the asymptotic univariate Bessel K Form model, the algorithm runs in only 0.35 seconds for 4 iterations and 0.51 seconds for 10 iterations; when using the multivariate exponential model (with a 3×3 neighborhood structure), it runs in 1.28 seconds for 4 iterations and 2.64 seconds for 10 iterations. (The platform used is as follows: hardware architecture: PC; operating system: Windows XP; processor: 3.2 GHz; memory: 2 GB; application software: Matlab 7.1.)

From Table 1 and Table 2, we can observe some interesting phenomena:

1). The classical Wiener filtering in the wavelet domain produces almost the same performance as the unified iterative algorithm using the Gaussian model (Gaussian_UA). This can be checked for both the univariate and the multivariate cases in both tables. Figures 1(e,f) display two convergence plots for the multivariate Gaussian model case (using a 3×3 neighborhood structure) on the Pepper image at noise levels 20 and 30, respectively. Interestingly, after only the first iteration, the algorithm has already converged to the optimal solution (in Section 2, we noted the equivalence between the unified algorithm and Wiener filtering (see Eq. (7)), which is the optimal estimate in the minimum L2-error sense for Gaussian data contaminated by additive Gaussian white noise). This verifies the validity of our unified algorithm for the image denoising application.


Fig. 1. PSNR of the solution versus the number of iterations for the Pepper image: For the asymptotic univariate Bessel K Form model (a) at noise level 20 and (b) at noise level 30; For the multivariate exponential model (3×3 neighborhood structure) (c) at noise level 20 and (d) at noise level 30; For the multivariate Gaussian model (3×3 neighborhood structure) (e) at noise level 20 and (f) at noise level 30


Table 1. Comparison of denoising performance using different prior models in terms of PSNR for the Barbara and Lena images

2). It is well known that the univariate Laplacian model is not very suitable for modeling the statistics of natural image wavelet coefficients, since its tails are not heavy enough. However, we surprisingly find in Table 1 and Table 2 that, for the univariate case, all models except the Gaussian model produce similar performance. In most cases, the univariate Laplacian model even produces results that are a little better. It has been widely accepted in the literature that band-adaptive algorithms commonly perform better than non-band-adaptive ones. The finding is all the more surprising once we realize that the other models are band-adaptive, adjusting their parameters for every wavelet coefficient band, whereas the univariate Laplacian model is not. This can be explained using a relation in our denoising setting. The unified denoising algorithm operates on the quadratic form $r=X^{T}\rho^{-1}X$ rather than on the wavelet coefficients themselves. In the univariate case, r can be written as $r=\frac{x^{2}}{\sigma^{2}}$. It follows that the expectation $E(r)=\frac{E(x^{2})}{\sigma^{2}}=1$, which, we would like to emphasize, is not band-dependent. Some non-band-adaptive algorithms can obey this relation better than band-adaptive ones. Thus, they may perform better than the band-adaptive ones (provided, of course, that the non-band-adaptive algorithms also obey some higher-order statistical relations such as $\mathrm{Var}(r)=E(r^{2})-E^{2}(r)=\frac{E(x^{4})}{\sigma^{4}}-1$). From Table 1 and Table 2, we can also see that the multivariate Laplacian model leads to very good denoising performance (note that no parameters need to be adjusted in this case).

Table 2. Comparison of denoising performance using different prior models in terms of PSNR for the Boat and Pepper images

3). In addition, we can see that for the univariate GL model, the unified algorithm and the MMSE algorithm in [2] have very similar performance. Generally, the PSNR difference between the two algorithms is only about 0.1 dB. This again implies the validity of the unified algorithm.

To compare the visual quality of different denoising algorithms, some restored Pepper images with noise standard deviation 20 are shown in Fig. 2 and Fig. 3. As usual, the hard-threshold procedure produces many artifacts that seriously blemish the restored image (see Fig. 2(a)). Interestingly, in the univariate case, the visual results of the GL_MMSE algorithm in [2] and of the unified denoising algorithm using the GL model (GL_UA), the Bessel K Form model (Bessel K Form_UA), the asymptotic Bessel K Form model (Asy. Bessel K Form_UA) and the Laplacian model (Laplacian_UA) are very similar to each other. In Fig. 3, we can see that the visual results of the 3×3 Wiener filtering in the wavelet domain and of the unified denoising algorithm using the multivariate Gaussian model (Gaussian_UA) are almost the same. The unified denoising algorithm based on the multivariate Laplacian model (Laplacian_UA), the multivariate Elliptically Contoured Distribution model (MECDF_UA) and the multivariate exponential model (MEM_UA) all produce satisfactory visual quality. To make this clearer, we display crops of denoised Boat images at noise level 20 in Fig. 4. Among them, the multivariate Laplacian model produces the best visual quality.


Fig. 2. Restored images for visual comparison using different univariate prior models on the Pepper image (with additive Gaussian white noise of standard deviation 20, PSNR=22.08dB): (a) Hard threshold, PSNR=26.84dB; (b) MMSE estimator based on the GL model proposed by Simoncelli et al. (GL_MMSE) in [2], PSNR=28.24dB; (c) Unified denoising algorithm based on the Generalized Laplacian model (GL_UA), PSNR=28.31dB; (d) Unified denoising algorithm based on the Bessel K Form model (Bessel K Form_UA), PSNR=28.12dB; (e) Unified denoising algorithm based on the Asymptotic Bessel K Form model (Asy. Bessel K Form_UA), PSNR=28.07dB; (f) Unified denoising algorithm based on the Laplacian model (Laplacian_UA), PSNR=28.28dB



Fig. 3. Restored images for visual comparison using different multivariate prior models on the Pepper image (with additive Gaussian white noise of standard deviation 20, PSNR=22.08dB): (a) 3×3 Wiener filtering in the wavelet domain, PSNR=27.84dB; (b) Unified denoising algorithm based on the Gaussian model (Gaussian_UA: 3×3), PSNR=27.85dB; (c) GSM-based MMSE algorithm developed by Portilla et al. in [19] (GSM_MMSE: 3×3+1), PSNR=29.45dB; (d) Unified denoising algorithm based on the Laplacian model (Laplacian_UA: 3×3+1), PSNR=29.43dB; (e) Unified denoising algorithm based on the Elliptically Contoured Distribution model (MECDF_UA: 3×3+1), PSNR=29.19dB; (f) Unified denoising algorithm based on the exponential model (MEM_UA: 3×3+1), PSNR=29.31dB



Fig. 4. Crops of restored images for visual comparison using different multivariate prior models on the Boat image (with additive Gaussian white noise of standard deviation 20, PSNR=22.10dB): (a) GSM-based MMSE algorithm developed by Portilla et al. in [19] (GSM_MMSE: 3×3+1), PSNR=29.62dB; (b) Unified denoising algorithm based on the Laplacian model (Laplacian_UA: 3×3+1), PSNR=29.76dB; (c) Unified denoising algorithm based on the Elliptically Contoured Distribution model (MECDF_UA: 3×3+1), PSNR=29.58dB; (d) Unified denoising algorithm based on the exponential model (MEM_UA: 3×3+1), PSNR=29.67dB


5. Conclusion

An image denoising algorithm based on natural image prior models has been developed by combining the EM scheme with the properties of the GSM models. It not only has a simple iterative form but is also computationally tractable. Because most prior models of natural image wavelet coefficients belong to the GSM models, the developed algorithm can be viewed as a unified image denoising framework. We have tested several classical and recently-proposed prior models under the unified algorithm. We found that the developed algorithm based on the multivariate Laplacian model can produce restoration results of high quality, not only in terms of PSNR but also visually. Considering that the main purpose of the paper is to show how the algorithm is obtained and how to use it, we did not extensively test existing prior models for natural images; we will report more test results in future work. It is our hope that the developed algorithm will be helpful to those who apply new natural image models to the image denoising application.

Appendix

To compute $\Omega(X_i^t)$, we adopt a technique used in [24]:

$$\Omega(X_i^t)=\int_{z_i}z_i^{-1}P(z_i|X_i^t)\,dz_i=\frac{1}{P(X_i^t)}\int_{z_i}z_i^{-1}P(X_i^t|z_i)\,P(z_i)\,dz_i.$$

Considering that

$$\frac{dP(X_i^t|z_i)}{dX_{i\_cent}^t}=(2\pi)^{-\frac d2}|z_i\rho|^{-\frac12}\exp\!\left[-\frac12(X_i^t)^{T}(z_i\rho)^{-1}X_i^t\right]\left(-\frac12\frac{d}{dX_{i\_cent}^t}(X_i^t)^{T}(z_i\rho)^{-1}X_i^t\right)$$
$$=P(X_i^t|z_i)\left(-\frac12z_i^{-1}\right)\frac{d\,(X_i^t)^{T}\rho^{-1}X_i^t}{dX_{i\_cent}^t}=-\frac12z_i^{-1}P(X_i^t|z_i)\,\frac{d\,(X_i^t)^{T}\rho^{-1}X_i^t}{dX_{i\_cent}^t},$$

where $X_{i\_cent}^t$ denotes the central element of the d-dimensional vector $X_i^t$, we obtain

$$z_i^{-1}P(X_i^t|z_i)=-2\left(\frac{d\,(X_i^t)^{T}\rho^{-1}X_i^t}{dX_{i\_cent}^t}\right)^{-1}\frac{dP(X_i^t|z_i)}{dX_{i\_cent}^t}.$$

It follows that

$$\Omega(X_i^t)=-\frac{2}{P(X_i^t)}\int_{z_i}\left(\frac{d\,(X_i^t)^{T}\rho^{-1}X_i^t}{dX_{i\_cent}^t}\right)^{-1}\frac{dP(X_i^t|z_i)}{dX_{i\_cent}^t}\,P(z_i)\,dz_i=-\frac{2}{P(X_i^t)}\left(\frac{d\,(X_i^t)^{T}\rho^{-1}X_i^t}{dX_{i\_cent}^t}\right)^{-1}\frac{d}{dX_{i\_cent}^t}P(X_i^t).$$

Since $P(X_i^t)=f(r)$ and $r=(X_i^t)^{T}\rho^{-1}X_i^t$, we obtain

$$\Omega(X_i^t)=-\frac{2}{f(r)}\left(\frac{dr}{dX_{i\_cent}^t}\right)^{-1}\frac{df}{dr}\,\frac{dr}{dX_{i\_cent}^t}=-\frac{2}{f(r)}\frac{df}{dr}.$$
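As a numerical sanity check of this identity (ours, not part of the paper), take d = 1, ρ = 1 and an exponential mixing density $P(z)=e^{-z}$: the posterior expectation $E[z^{-1}|x]$ computed by direct integration should match $-2f'(r)/f(r)$ evaluated by numerical differentiation.

```python
import numpy as np
from scipy.integrate import quad

def p_x_given_z(x, z):
    return np.exp(-x**2 / (2.0 * z)) / np.sqrt(2.0 * np.pi * z)

def f(r):
    # marginal density as a function of r = x^2 (sigma = 1, exponential prior on z)
    x = np.sqrt(r)
    return quad(lambda z: p_x_given_z(x, z) * np.exp(-z), 0.0, np.inf)[0]

def omega(x):
    # Omega(x) = E[1/z | x], computed by direct integration
    num = quad(lambda z: p_x_given_z(x, z) * np.exp(-z) / z, 0.0, np.inf)[0]
    den = quad(lambda z: p_x_given_z(x, z) * np.exp(-z), 0.0, np.inf)[0]
    return num / den

x = 1.3
r, h = x**2, 1e-5
dlogf = (np.log(f(r + h)) - np.log(f(r - h))) / (2.0 * h)  # numerical d/dr log f(r)
print(omega(x), -2.0 * dlogf)  # the two values should agree to several digits
```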

Acknowledgments

This work was supported by the National Natural Science Foundation of China (Grant No. 60672126), the National Basic Research Program (973 Program) of China (Grant No. 2006CB705700), and the Science and Technology Innovation Engineering Important Project of Chinese Universities (Grant No. 706053).

References and links

1. A. Srivastava, B. Lee, E. P. Simoncelli, and S.-C. Zhu, “On advances in statistical modeling of natural images,” Journal of Mathematical Imaging and Vision 18, 17–33 (2003). [CrossRef]  

2. E. P. Simoncelli and E. H. Adelson, “Noise removal via Bayesian wavelet coring,” in Proc. 3rd Int. Conf. on Image Processing (Lausanne, Switzerland, 1996), pp. 379–382. [CrossRef]  

3. S. G. Mallat, “A theory for multiresolution signal decomposition: the wavelet representation,” IEEE Trans. Pattern Anal. Machine Intell. 11, 674–693 (1989). [CrossRef]  

4. A. Achim, A. Bezerianos, and P. Tsakalides, “Novel Bayesian multiscale method for speckle removal in medical ultrasound images,” IEEE Trans. Med. Imag. 20, 772–783 (2001). [CrossRef]  

5. A. Achim, P. Tsakalides, and A. Bezerianos, “SAR image denoising via Bayesian wavelet shrinkage based on heavy-tailed modeling,” IEEE Trans. Geosci. Remote Sensing 41, 1773–1784 (2003). [CrossRef]  

6. J. Huang, “Statistics of natural images and models,” Ph.D. thesis, Division of Applied Mathematics, Brown University (2000).

7. U. Grenander and A. Srivastava, “Probability models for clutter in natural images,” IEEE Trans. Pattern Anal. Machine Intell. 23, 424–429 (2001). [CrossRef]  

8. A. Srivastava, X. Liu, and U. Grenander, “Universal analytical forms for modeling image probability,” IEEE Trans. Pattern Anal. Machine Intell. 24, 1200–1214 (2002). [CrossRef]  

9. J. M. Fadili and L. Boubchir, “Analytical form for a Bayesian wavelet estimator of images using the Bessel K Form densities,” IEEE Trans. Image Processing 14, 231–240 (2005). [CrossRef]  

10. B. Vidakovic, “Nonlinear wavelet shrinkage with Bayes rules and Bayes factors,” J. Amer. Stat. Assoc. 93, 173–179 (1998). [CrossRef]  

11. H. A. Chipman, E. D. Kolaczyk, and R. E. McCulloch, “Adaptive Bayesian wavelet shrinkage,” J. Amer. Stat. Assoc. 92, 1413–1421 (1997). [CrossRef]  

12. D. J. Field, “Relations between the statistics of natural images and the response properties of cortical cells,” J. Opt. Soc. Am. A 4, 2379–2394 (1987). [CrossRef]   [PubMed]  

13. L. Sendur and I. W. Selesnick, “Bivariate shrinkage functions for wavelet-based denoising exploiting interscale dependency,” IEEE Trans. Signal Processing 50, 2744–2756 (2002). [CrossRef]  

14. L. Sendur and I. W. Selesnick, “Bivariate shrinkage with local variance estimation,” IEEE Signal Processing Lett. 9, 438–441 (2002). [CrossRef]  

15. T. Eltoft, T. Kim, and T.-W. Lee, “On the multivariate Laplace distribution,” IEEE Signal Processing Lett. 13, 300–303 (2006). [CrossRef]  

16. S. Tan and L. Jiao, “Multishrinkage: analytical form for a Bayesian wavelet estimator based on the multivariate Laplacian model,” Opt. Lett. 32, 2583–2585 (2007). [CrossRef]   [PubMed]  

17. S. Tan and L. Jiao, “Wavelet-based Bayesian image estimation: from marginal and bivariate prior models to multivariate prior models,” IEEE Trans. Image Processing, submitted, (2006).

18. A. Achim and E. E. Kuruoglu, “Image denoising using bivariate-stable distributions in the complex wavelet domain,” IEEE Signal Processing Lett. 12, 17–20 (2005). [CrossRef]  

19. J. Portilla, V. Strela, M. J. Wainwright, and E. P. Simoncelli, “Image denoising using scale mixtures of Gaussians in the wavelet domain,” IEEE Trans. Image Processing 12, 1338–1351 (2003). [CrossRef]  

20. T. A. Øigård, A. Hanssen, R. E. Hansen, and F. Godtliebsen, “EM-estimation and modeling of heavy-tailed processes with the multivariate normal inverse Gaussian distribution,” Signal Process. 85, 1655–1673 (2005). [CrossRef]  

21. S. Tan and L. Jiao, “Multivariate statistical models for image denoising in the wavelet domain,” International Journal of Computer Vision 75, 209–230 (2007). [CrossRef]  

22. P. V. Gehler and M. Welling, “Product of Edgeperts,” in Advances in Neural Information Processing Systems, Y. Weiss, B. Scholkopf, and J. Platt, eds. (Cambridge, MA, 2005), pp. 419–426.

23. M. J. Wainwright and E. P. Simoncelli, “Random cascades on wavelet trees and their use in analyzing and modeling natural images,” Applied and Computational Harmonic Analysis 11, 89–123 (2001). [CrossRef]  

24. J. M. Bioucas-Dias, “Bayesian wavelet-based image deconvolution: a GEM algorithm exploiting a class of heavy-tailed priors,” IEEE Trans. Image Processing 15, 937–951 (2006). [CrossRef]  

25. R. W. Buccigrossi and E. P. Simoncelli, “Image compression via joint statistical characterization in the wavelet domain,” IEEE Trans. Image Processing 8, 1688–1701 (1999). [CrossRef]  

26. M. Abramowitz and I. A. Stegun, Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables (Dover, New York, 1972).
