
Unsupervised solution for in-line holography phase retrieval using Bayesian inference

Open Access

Abstract

In propagation-based phase contrast imaging, intensity patterns called in-line holograms are recorded on an x-ray detector at one or multiple propagation distances. They form the input of an inversion algorithm that aims at retrieving the phase shift induced by the object. The problem of phase retrieval in in-line holography is an ill-posed inverse problem. Consequently, an adequate solution requires some form of regularization, the most commonly applied being the classical Tikhonov regularization. While generally satisfactory, this method suffers from a few issues such as the choice of the regularization parameter. Here, we offer an alternative to the established method by applying the principles of Bayesian inference. We construct an iterative optimization algorithm capable of both retrieving the unknown phase and determining a multi-dimensional regularization parameter. In the end, we highlight the advantages of the introduced algorithm, chief among them being the unsupervised determination of the regularization parameter(s). The proposed approach is tested on both simulated and experimental data and is found to provide robust solutions, with improved response to typical issues like low frequency noise and the twin-image problem.

© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

In the hard x-ray regime, the investigation of weakly absorbing materials such as biological samples is difficult using x-ray absorption imaging. This limitation can be overcome through phase imaging techniques with highly intense and coherent synchrotron beams [1]. In in-line holography [2–4], phase contrast is the result of the wavefield propagation through free space after interaction with the object. Combined with a nanofocus [5], the experimental setup allows for the acquisition of several diffraction patterns on a fixed detector as the sample is moved to multiple distances from the focus. A quantitative relation exists between the recorded images and the complex index of refraction characterizing the object. This dependency is described in our paper by detailing the image formation process and the “Contrast Transfer Function” model that approximates it. Based on this model, inversion algorithms aiming at retrieving the phase shift can be derived. However, due to poor transfer of information from the object plane to the detector plane for some spatial frequencies, phase retrieval is an ill-posed inverse problem that requires adequate regularization. We describe in the following the standard Tikhonov regularization techniques most commonly in use for phase retrieval as well as their limitations and drawbacks. We then propose an alternative inversion method based on Bayesian inference that addresses some of the shortcomings of the classical approach. The main advantage of the proposed algorithm is the unsupervised determination of the regularization parameter. Finally, we test and compare the standard and Bayesian approaches on both simulated and experimental datasets.

1.1. Image formation in in-line holography

When x-ray electromagnetic waves propagate through an object, refraction occurs and can be described by the three-dimensional complex refractive index of that material:

n(r) = 1 - \delta(r) + i\beta(r),
where r denotes the spatial coordinate r = (x, y, z). The real part of the complex refractive index, δ(r), determines the relative phase shift of the wave propagating through the object as compared to the wave passing through free space (the ’background’ in imaging terms). The imaginary part, β(r), is a measure of the wave attenuation and is proportional to the linear attenuation coefficient, μ = 4πβ/λ. The difference from unity of n is very small for x-rays, with the real part larger or much larger than the imaginary part (δ ≈ 10−6, β ≈ 10−9 for water or intracellular biological material in the hard x-ray regime at 17 keV, as studied in this work). This represents the prime argument for the use of phase contrast imaging instead of classical absorption imaging.

Given an object plane orthogonal to the optical axis and a unit incident wave uinc = 1, the expression of the wave in the object plane (as it exits the object), u0(rT), is equal to the so-called transmission function T(rT) of the object:

u0(rT)=T(rT)uinc(rT)=T(rT),rT=(x,y)
The transmission function [6] is determined by the projection of the complex refractive index on the optical axis z:
T(r_T) = \exp\{-B(r_T)\}\exp\{i\phi(r_T)\}, \qquad B(r_T) = \frac{2\pi}{\lambda}\int \beta(x,y,z)\,dz, \qquad \phi(r_T) = -\frac{2\pi}{\lambda}\int \delta(x,y,z)\,dz
The wave intensity in this plane corresponds to the absorption image. It is given by the square modulus of the wave and thus retains only the attenuation information:
I_0(r_T) = |u_0(r_T)|^2 = |T(r_T)|^2 = \exp\{-2B(r_T)\}

The Kirchhoff-Fresnel [7] integral in its paraxial approximation gives the relation between the wave in the object plane and the wave propagated to a finite distance D:

u_D(r_T) = \frac{\exp\{i2\pi D/\lambda\}}{i\lambda D}\int u_0(r_{T0})\,\exp\Big\{\frac{i\pi}{\lambda D}\,|r_T - r_{T0}|^2\Big\}\,dr_{T0},
where λ is the wavelength. It can be used to derive a powerful mathematical operator in in-line holography, called the Fresnel propagator. Wave propagation is found equivalent to convolution with this operator in real space, thus multiplication in frequency space:
\text{real space: } u_D(r_T) = P_D(r_T) * u_0(r_T), \qquad P_D(r_T) = \frac{1}{i\lambda D}\exp\Big\{\frac{i\pi}{\lambda D}\,|r_T|^2\Big\}
\text{reciprocal space: } \tilde{u}_D(f) = \mathcal{F}\{u_D(r_T)\} = \tilde{P}_D(f)\,\tilde{u}_0(f), \qquad \tilde{P}_D(f) = \mathcal{F}\{P_D(r_T)\} = \exp\{-i\pi\lambda D|f|^2\},
where \mathcal{F} is the Fourier transform operator and f stands for the frequency coordinate:
\tilde{u}_D(f) = \mathcal{F}\{u_D(r_T)\} = \int u_D(r_T)\exp\{-2i\pi f\cdot r_T\}\,dr_T, \qquad u_D(r_T) = \mathcal{F}^{-1}\{\tilde{u}_D(f)\} = \int \tilde{u}_D(f)\exp\{2i\pi r_T\cdot f\}\,df
Finally any recorded image is basically the wave intensity ID at a certain propagation distance D:
I_D(r_T) = |u_D(r_T)|^2 \;\Rightarrow\; \tilde{I}_D(f) = \int \tilde{u}_D(\eta)\,\tilde{u}_D^*(\eta - f)\,d\eta
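As an illustration of the reciprocal-space relations above, the following sketch propagates a wavefield by multiplying its Fourier transform with the Fresnel propagator and records the resulting intensity. It is a minimal NumPy example, not code from the original work; the function name, grid size, pixel size and phase values are illustrative assumptions.

```python
import numpy as np

def fresnel_propagate(u0, wavelength, distance, pixel_size):
    """Propagate a complex wavefield u0 by multiplying its Fourier
    transform with the Fresnel propagator exp{-i*pi*lambda*D*|f|^2}."""
    ny, nx = u0.shape
    fx = np.fft.fftfreq(nx, d=pixel_size)          # frequency grid (1/m)
    fy = np.fft.fftfreq(ny, d=pixel_size)
    f2 = fx[None, :]**2 + fy[:, None]**2           # |f|^2 on the grid
    propagator = np.exp(-1j * np.pi * wavelength * distance * f2)
    return np.fft.ifft2(np.fft.fft2(u0) * propagator)

# Example (assumed values): unit incident wave modulated by a weak phase object
phi = 0.1 * np.ones((256, 256))
u0 = np.exp(1j * phi)
uD = fresnel_propagate(u0, wavelength=0.073e-9, distance=16.8e-3, pixel_size=10e-9)
ID = np.abs(uD)**2                                 # recorded hologram intensity
```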

The last result makes it obvious that the recorded diffraction patterns represent an entangled mixture of the phase and attenuation of the wave in the object plane. To avoid the computational difficulties related to the convolution operator, the inversion algorithms make use of the Fourier transforms ĨD of the recorded images to retrieve the phase and/or the absorption components. Reconstructing the entire refractive index requires reducing the problem to a simpler (though less accurate) mathematical expression of the diffraction patterns that permits the numerical retrieval of both the phase and attenuation images. This begins with the important result of Guigay [8]:

\tilde{I}_D(f) = \int T\Big(r_T - \frac{\lambda D f}{2}\Big)\,T^*\Big(r_T + \frac{\lambda D f}{2}\Big)\exp\{-i2\pi r_T\cdot f\}\,dr_T
Additionally, it employs several hypotheses on the physical properties of the object in order to arrive at a linear relation between the Fourier transform of the diffraction patterns and the Fourier transforms of the unknown phase and attenuation. Linear approximations based on the “Transport of intensity equation” (TIE) exist and are in general use [9, 10]. However, they are valid approximations only for small propagation distances. Contrast due to phase shift becomes more significant as the propagation distance increases, which means that phase imaging is more sensitive in a regime that TIE fails to correctly approximate [11]. Instead we make use of the “Contrast transfer function” (CTF) method [4,8,12,13] that builds on the reduction of relation (3) to a linear expression:
T(r_T) \approx 1 - B(r_T) + i\phi(r_T)
This is valid under the assumptions of low absorption, B(rT) ≪ 1 (often true for soft biological tissue), and slowly varying phase, |ϕ(rT) − ϕ(rT + λDf)| ≪ 1 (again true in cell imaging, where recorded images usually have a sparse gradient). Combining the last two equations one can derive the formulation of a basic direct model in phase retrieval called the CTF model:
\tilde{I}_D(f) = \delta(f) - 2\cos(\pi\lambda D|f|^2)\,\tilde{B}(f) + 2\sin(\pi\lambda D|f|^2)\,\tilde{\phi}(f),
Further simplification of this model can be achieved by assuming either a non-absorbing, pure-phase object [12], B(rT) = 0:
I˜D(f)=δ(f)+2sin(πλD|f|2)ϕ˜(f),
or a homogeneous object [14,15] with a known ratio between its phase and attenuation components, δ/β:
\tilde{I}_D(f) = \delta(f) + \Big[2\sin(\pi\lambda D|f|^2) + 2\frac{\beta}{\delta}\cos(\pi\lambda D|f|^2)\Big]\tilde{\phi}(f)
An extension of the CTF model, called the Mixed Approach [13], was developed for the treatment of more strongly absorbing samples, starting from the hypothesis of a slowly varying attenuation A(rT + λDf) ≈ A(rT) + λDf·∇A(rT), where A(rT) = exp{−B(rT)}:
\tilde{I}_D(f) = \tilde{I}_0(f) + 2\sin(\pi\lambda D|f|^2)\,\tilde{\Psi}(f) + \frac{\lambda D}{2\pi}\cos(\pi\lambda D|f|^2)\,\mathcal{F}\{\nabla\cdot(\Psi\,\nabla\ln I_0)\},
where Ψ(rT) = I0(rT) ϕ(rT). A drawback to this approach is the requirement to know (record) I0(rT), the attenuation image.

For a correct numerical evaluation, the direct model must also account for some physical realities like the effect of the spherical wave illumination, the partial coherence of the beam or the point-spread-function of the detector. The spherical wave illumination induces a magnification and a change in effective propagation distance. The effect can be accounted for prior to the phase retrieval process [5]. The partial coherence and the detector response can be described as simple multiplications of the Fourier transform of the intensity ĨD(f) with respectively the degree of coherence and the detector transfer function. From now on, for a simplified formalism, every time we write ĨD(f) we assume these multiplications are included in notation.

All enumerated models feature a linear relation between the Fourier representation of the recorded image ĨD(f) and the Fourier transforms of the phase, ϕ̃(f), and the attenuation, B̃(f). The inverse problem in in-line holography consists in retrieving the latter two quantities from the input ĨD(f). In the case of single distance acquisition, the problem is obviously underdetermined. This can be overcome by recording several images at multiple propagation distances. Still, the problem remains ill-posed for the following reason. The coefficients of equations (11) to (14) are sine and/or cosine functions of a multiple of the squared modulus of the spatial frequency. For those frequencies that render these coefficients null (the so-called zero-crossings of the transfer function, see Fig. 1), the phase and/or the attenuation retrieval becomes impossible. Although this is true for only a few frequencies in reciprocal space, it means there is no unique solution to the phase retrieval problem, known as the twin-image problem. Another use of multi-distance acquisition in phase imaging is to provide redundancy in the input information and thus mitigate the zero-crossings issue. Previous research and practice have set the optimal number of distances at four [12]. However, multi-distance acquisition does not change the fact that the phase transfer function is close to zero at low spatial frequencies. In order to fully address the ill-posedness of phase retrieval, one still has to take additional measures on the computational side such as regularization. We describe how this is applied to phase retrieval in the next section.

Fig. 1 The value of the Contrast Transfer Function for the amplitude, the phase, and the phase of a homogeneous object with δ/β = 20. They are expressed as combinations of sine/cosine functions of the square modulus of the spatial frequency coordinate.
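To make the zero-crossing behavior of Fig. 1 concrete, the short sketch below evaluates the three CTF coefficient curves on a radial frequency axis and lists the first zero-crossings of the phase transfer function. The distance, wavelength, frequency range and δ/β value are illustrative assumptions.

```python
import numpy as np

wavelength = 0.073e-9       # m, illustrative (17 keV)
D = 16.8e-3                 # m, one propagation distance (assumed)
delta_over_beta = 20.0

f = np.linspace(0, 3e6, 2048)            # spatial frequency axis (1/m), assumed range
chi = np.pi * wavelength * D * f**2

ctf_phase = 2 * np.sin(chi)                                         # phase CTF
ctf_amp = 2 * np.cos(chi)                                           # amplitude CTF
ctf_homog = 2 * np.sin(chi) + (2 / delta_over_beta) * np.cos(chi)   # homogeneous object

# Frequencies where the phase CTF vanishes (zero-crossings): chi = k*pi
zeros = np.sqrt(np.arange(1, 6) / (wavelength * D))
print("first zero-crossings of the phase CTF (1/m):", zeros)
```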

1.2. Phase retrieval through classical regularization

We have set in the previous section the formulation of the direct (CTF) model in phase imaging as a linear, algebraic relation between the observations (recorded diffraction patterns) and the unknowns (complex index of refraction) in Fourier space. We also underlined the difficulties raised by the ill-posed inverse problem and the necessity of applying regularization to obtain a satisfying solution to phase retrieval. Here we proceed to introduce the results of classical regularization and their application to the inverse problem of phase retrieval. The rising shortcomings of this method are highlighted and addressed later in this paper.

We consider inverse problems that, like phase retrieval, can be formulated as a linear, algebraic relation y = Hx + ε, between the known output data y and the unknown input x. H represents the system matrix describing the model, the physical relation between the input and output datasets; ε stands for the noise affecting the measurements. In phase retrieval, y is equivalent to the collection of recorded images in Fourier space ĨDk(f), while x stands for the unknown phase ϕ̃(f) and/or attenuation B̃(f), again in reciprocal space. The elements of matrix H are defined by the coefficients of the CTF model chosen to best represent our a priori information on the imaged object.

There are several basic methods to solve the inverse problem, i.e. to infer the unknown quantity x. Most of them generally offer poor estimates because of the unrealistic hypotheses they rely on. Direct inversion is based on the assumption that the matrix H is invertible, leading to the simple solution x̂ = H−1y. Even if the matrix H is invertible, it might still suffer from ill-conditioning. A large condition number of the matrix of a linear system means that small variations in the data translate into large variations of the solution. Consequently, small errors in the measurements (recorded images) translate into large errors in the retrieval of the unknown quantities (phase, attenuation).

The basic solution under a more realistic hypothesis that accounts for noise errors and a non-square matrix H is the least squares (LS) estimate. It is based on the minimization of the Euclidean distance between the observations y and the model simulations Hx:

\hat{x} = \arg\min_x \|y - Hx\|^2 = \arg\min_x J_{LS}(x),
where JLS designates the least squares criterion. Given the problem is convex, the solution can be obtained by solving
\frac{\partial J_{LS}(x)}{\partial x} = 0 \;\Rightarrow\; -2H^*(y - Hx) = 0 \;\Rightarrow\; \hat{x} = [H^*H]^{-1}H^*y
where H* denotes the conjugate transpose of H. This solution is still very sensitive to noise, depending on the condition number of the matrix H*H that needs to be inverted. In such situations least squares estimation requires regularization. Regularization theory aims to overcome the ill-posedness of inverse problems. As introduced by Tikhonov [16], the common approach is to construct a compound criterion that adds a penalization term to the standard least squares one:
J(x) = \|y - Hx\|^2 + \lambda_R\,\Delta(x, x_0),
where Δ is a distance in the unknown space, λR is called the regularization parameter and x0 represents an a priori estimate of the solution. While the least squares term ‖y − Hx‖2 attempts to match the observations y to the model simulations Hx, the regularization term λRΔ(x, x0) acts as a constraint that makes the solution less sensitive to the noise inherent in the measurements. It is also meant to bring the solution closer to the available a priori information as described by x0 and the choice of distance Δ. The regularization parameter λR acts as a weight on the penalization term and balances the relevance attributed to the a priori information against the observations. If we choose Δ to be the Euclidean norm, Δ(x, x0) = ‖x − x0‖2, the compound criterion for Tikhonov regularization becomes:
J(x) = \|y - Hx\|^2 + \lambda_R\|x - x_0\|^2,
and by minimizing it, \nabla_x J(x) = -2H^*(y - Hx) + 2\lambda_R(x - x_0) = 0, we obtain the estimator:
\hat{x} = [H^*H + \lambda_R I]^{-1}(H^*y + \lambda_R x_0).
Given the lack of a priori information, in many cases we assume that x0 = 0, which leads to the minimum norm least squares solution:
\hat{x} = [H^*H + \lambda_R I]^{-1}H^*y.
Here, by appropriately choosing the value of the regularization parameter λR, we can control the condition number of the matrix H*H + λRI that needs to be inverted. Applying this result to the CTF models expressed by equations (11) to (13) leads to the following solutions for phase retrieval:
\begin{cases}
(11):\;\hat{\phi} = \frac{1}{2\Delta + \lambda_R}\Big[\mathfrak{C}\sum_k \tilde{I}_k\sin(\pi\lambda D_k|f|^2) - \mathfrak{A}\sum_k \tilde{I}_k\cos(\pi\lambda D_k|f|^2)\Big]\\[1ex]
(12):\;\hat{\phi} = \frac{\sum_k 2\sin(\pi\lambda D_k|f|^2)\,\tilde{I}_{D_k}}{\sum_k 4\sin^2(\pi\lambda D_k|f|^2) + \lambda_R}\\[1ex]
(13):\;\hat{\phi} = \frac{\sum_k 2\big(\sin(\pi\lambda D_k|f|^2) + \frac{\beta}{\delta}\cos(\pi\lambda D_k|f|^2)\big)\,\tilde{I}_{D_k}}{\sum_k 4\big(\sin(\pi\lambda D_k|f|^2) + \frac{\beta}{\delta}\cos(\pi\lambda D_k|f|^2)\big)^2 + \lambda_R}
\end{cases}
where k stands for the index of the distance at which a certain image was acquired, 𝔄 = Σk sin(πλDk|f|²)cos(πλDk|f|²), 𝔅 = Σk sin²(πλDk|f|²), ℭ = Σk cos²(πλDk|f|²) and Δ = 𝔅ℭ − 𝔄². These solutions are the result of applying classical Tikhonov regularization to the inverse problem of phase retrieval. They are tested in this work against a regularization method based on Bayesian inference.
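A minimal sketch of the pure-phase Tikhonov solution above (the expression labeled (12)) is given below, assuming NumPy, square pre-processed holograms and a hand-picked λR; subtracting the image mean is used here as one simple way to account for the Dirac term δ(f), and all names are illustrative.

```python
import numpy as np

def ctf_phase_tikhonov(holograms, distances, wavelength, pixel_size, lambda_r=1e-3):
    """Pure-phase CTF retrieval with Tikhonov (L2) regularization:
    phi_hat = sum_k 2*sin_k*I_k / (sum_k 4*sin_k^2 + lambda_r), in Fourier space."""
    ny, nx = holograms[0].shape
    fx = np.fft.fftfreq(nx, d=pixel_size)
    fy = np.fft.fftfreq(ny, d=pixel_size)
    f2 = fx[None, :]**2 + fy[:, None]**2

    num = np.zeros((ny, nx), dtype=complex)
    den = np.zeros((ny, nx))
    for I, D in zip(holograms, distances):
        s = 2.0 * np.sin(np.pi * wavelength * D * f2)
        # subtract the image mean as a simple stand-in for the Dirac term delta(f)
        num += s * np.fft.fft2(I - I.mean())
        den += s**2
    return np.fft.ifft2(num / (den + lambda_r)).real
```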

There are various other options available when choosing the appropriate distance for the model term Δ(y, Hx) and the regularization term λRΔ(x, x0):

  • Quadratic or Least squares (LS):
    \Delta(y, Hx) = \|y - Hx\|^2 = \sum_{i=1}^{M}|y_i - [Hx]_i|^2, \qquad \Delta(x, x_0) = \|x - x_0\|^2 = \sum_{j=1}^{N}|x_j - x_{0j}|^2
    where M is the length of the output y and N is the length of the unknown x. It leads to the Tikhonov solution in Eq. (19).
  • Weighted least squares:
    \Delta(y, Hx) = \|y - Hx\|_{Q_1}^2 = (y - Hx)^*Q_1^{-1}(y - Hx), \qquad \Delta(x, x_0) = \|x - x_0\|_{Q_2}^2 = (x - x_0)^*Q_2^{-1}(x - x_0)
    where Q1 and Q2 represent the weights applied to the two norms in matrix form. Combined they play an identical role to λR, the scalar regularization parameter in Eq. (18), but they express it in a more complex matrix form. Given that the choice of λR in classical regularization is done empirically, the determination of Q1 and Q2 is even more complicated. We show later how a non-scalar and unsupervised regularization can be achieved through the Bayesian method we introduce.
  • Lp norm, with 1 ≤ p1, p2 ≤ 2:
    \Delta(y, Hx) = \|y - Hx\|_{p_1}^{p_1} = \sum_{i=1}^{M}|y_i - [Hx]_i|^{p_1}, \qquad \Delta(x, x_0) = \|x - x_0\|_{p_2}^{p_2} = \sum_{j=1}^{N}|x_j - x_{0j}|^{p_2}
    A generalization of the first case, which corresponds to p1 = p2 = 2. For p1 = 2 and p2 = 1 we get L1 regularization or total variation (TV) minimization [17,18], useful to impose the a priori information of a sparse object [19].
  • Kullback-Leibler (KL) divergence:
    \Delta(y, Hx) = \sum_{i=1}^{M} y_i\ln\frac{y_i}{[Hx]_i}, \qquad \Delta(x, x_0) = \sum_{j=1}^{N} x_j\ln\frac{x_j}{x_{0j}}
    is widely applied in statistics as a measure of similarity between two or more probability density functions [20]. This ’distance’ is used in the next section where we develop the Bayesian algorithm.

Computing the estimator in Eq. (19) requires the inversion of the matrix H*H + λRI. In those instances where this matrix proves too large or too difficult to invert, several iterative optimization methods can provide or improve a solution [21,22]. At iteration k + 1, the update generally follows the expression:

x(k+1)=x(k)+α(k)δ(x(k))
Depending on the expression of the functional δ(x(k)), various gradient-based iterative algorithms can be applied. The fixed-step gradient descent iteration is commonly employed. It uses δ(x(k)) = −∇J(x(k)) and a fixed α coefficient, so that
x^{(k+1)} = x^{(k)} - \alpha\,\nabla J(x^{(k)})
For the basic minimum norm least squares criterion J(x) = ‖y − Hx‖2 + λR‖x‖2 the iterative solution becomes:
x^{(k+1)} = x^{(k)} + \alpha\big[H^*(y - Hx^{(k)}) - \lambda_R x^{(k)}\big],
We can observe that, initialized with a null value x(0) = 0, we obtain the solution x(1) = αH*y after the first iteration.
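For completeness, the fixed-step iteration can be written in a few lines. The sketch below is a generic NumPy implementation for a small dense H with an assumed step size and regularization parameter; it is not the phase retrieval code itself.

```python
import numpy as np

def tikhonov_gradient_descent(H, y, lambda_r, alpha, n_iter=100):
    """Fixed-step gradient descent for J(x) = ||y - Hx||^2 + lambda_r*||x||^2."""
    x = np.zeros(H.shape[1], dtype=complex)      # x^(0) = 0
    for _ in range(n_iter):
        residual = y - H @ x
        x = x + alpha * (H.conj().T @ residual - lambda_r * x)
    return x

# Toy example with an assumed random system
rng = np.random.default_rng(0)
H = rng.standard_normal((40, 20))
y = H @ rng.standard_normal(20) + 0.01 * rng.standard_normal(40)
x_hat = tikhonov_gradient_descent(H, y, lambda_r=0.1, alpha=1e-2, n_iter=500)
```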

Although the standard regularization methods presented above offer satisfying and robust results, a number of problems remain open, like the determination of the regularization parameter or the reasoning behind choosing the appropriate type of penalization term. Various theoretical methods for automatically computing the ideal value of the regularization parameter have been developed, among which the L-curve method is the most commonly applied [23]. In practice we found this method to be unsatisfactory and consistently resorted to determining the value of the regularization parameter manually, by trial and error. Under a Bayesian framework we show that these issues can be addressed directly, in a less empirical fashion.

2. Bayesian inversion

2.1. Bayesian regularization with Gaussian priors

We assume that our problem has been linearized and discretized so that the forward model can be written using this algebraic form:

y = Hx + \epsilon,
where the output (experimental data) y and the additive noise ε are vectors of length M, the input (the unknown quantity we try to estimate) x is a vector of length N, and H is an M by N matrix describing the linear model.

Probability density functions are assigned to these quantities: p(x) as the prior distribution, p(y|x) as the likelihood and p(x|y) as the posterior distribution. Applying the Bayes rule for probabilities [24], the posterior distribution is found to be proportional to the product of the previous two:

p(x|y) = \frac{p(y|x)\,p(x)}{p(y)} \propto p(y|x)\,p(x),
where the denominator p(y) = ∫ p(y|x) p(x) dx is a normalizing constant called evidence, that will be ignored in the following formalism.

Let us consider a white noise distribution for ε, p(ε) = 𝒩(ε|0, vεI), i.e. a Gaussian distribution with zero mean, where vε denotes the variance of the noise. In turn, the likelihood follows a Gaussian distribution with mean equal to the model simulation Hx and variance equal to that of the noise:

p(y|x) = \mathcal{N}(y|Hx, v_\epsilon I) \propto v_\epsilon^{-\frac{M}{2}}\exp\Big\{-\frac{1}{2v_\epsilon}\|y - Hx\|^2\Big\}.
A result that we make use of in the next section states that the variance of a Gaussian distribution can be modeled by a conjugate Inverse Gamma prior, so that:
p(v_\epsilon) = \mathcal{IG}(v_\epsilon|\alpha_\epsilon, \beta_\epsilon) \propto v_\epsilon^{-(\alpha_\epsilon+1)}\exp\{-\beta_\epsilon/v_\epsilon\}
The choice of the prior law is one of the most sensitive, since it is mathematically equivalent to the choice of the regularization term in classical regularization theory. In our case we employ a multivariate Gaussian distribution to describe the random vector x of unknowns, with mean x0 and covariance matrix Vx. The components xj of this vector are defined as independent random variables following one-dimensional Gaussian distributions of mean x0j and variance vj:
p(x) = \mathcal{N}(x|x_0, V_x) \propto \exp\Big\{-\frac{1}{2}(x - x_0)^*V_x^{-1}(x - x_0)\Big\}, \qquad p(x) = \prod_j \mathcal{N}(x_j|x_{0j}, v_j) \propto \prod_j v_j^{-\frac{1}{2}}\exp\Big\{-\sum_j \frac{1}{2v_j}|x_j - x_{0j}|^2\Big\}
This ultimately leads to a solution comparable to applying an L2 norm penalization term in Tikhonov regularization. The variance of the noise and of the prior distribution make up the so-called hyper-parameters of the model, θ.

In the case of maximum a posteriori (MAP) estimation of the posterior distribution p(x, θ|y) the Bayesian approach [25] is able to infer on both the unknown quantity x and the hyper-parameters of the model, θ:

\hat{x} = \arg\max_x p(x, \theta|y) = \arg\min_x J(x, \theta), \qquad \hat{\theta} = \arg\max_\theta p(x, \theta|y) = \arg\min_\theta J(x, \theta), \qquad \text{where } J(x, \theta) = -\ln p(x, \theta|y)
The criterion J is defined as the negative logarithm of the posterior in order to simplify the minimization of the exponential functions that define the posterior distribution p(x, θ|y).

Here we propose a more sophisticated method, the Bayesian Variational Approximation (BVA) [26], which begins with approximating the posterior distribution p(x, θ|y) by a separable one q(x, θ|y) = q1(x) q2(θ). The quality of this approximation is assessed with the Kullback-Leibler divergence measure.

KL\big(q_1(x)q_2(\theta) : p(x, \theta|y)\big) = \iint q_1(x)\,q_2(\theta)\ln\frac{q_1(x)\,q_2(\theta)}{p(x, \theta|y)}\,dx\,d\theta
Minimizing it, ∂KL(q : p)/∂qi = 0, we obtain solutions for the q1(x) and q2(θ) distributions:
\begin{cases}
q_1(x) \propto \exp\big\langle\ln p(x, \theta|y)\big\rangle_{q_2} = q_1(x|\tilde{\theta}), & \text{with } \big\langle\ln p(x, \theta|y)\big\rangle_{q_2} = \int \ln p(x, \theta|y)\,q_2(\theta)\,d\theta\\[1ex]
q_2(\theta) \propto \exp\big\langle\ln p(x, \theta|y)\big\rangle_{q_1} = q_2(\theta|\tilde{x}), & \text{with } \big\langle\ln p(x, \theta|y)\big\rangle_{q_1} = \int \ln p(x, \theta|y)\,q_1(x)\,dx
\end{cases}
The estimators x̂ and θ̂ are computed as the expected values of these distributions.

Gaussian prior case

Here we offer a more detailed explanation of how the expressions of the aforementioned estimators are determined. The Gaussian prior model assumes that the prior follows a normal probability distribution. It is called a hierarchical prior model since it introduces a hidden variable v of Inverse Gamma distribution. This hidden variable represents the variance of the normal prior distribution.

p(x, v) = p(x|v)\,p(v) = \prod_{j=1}^{N}p(x_j|v_j)\prod_{j=1}^{N}p(v_j) = \prod_{j=1}^{N}\mathcal{N}(x_j|0, v_j)\prod_{j=1}^{N}\mathcal{IG}(v_j|\alpha_v, \beta_v).
Above we assumed the normal prior has a zero mean as, more often than not, no estimation of the inferred quantity is available a priori. In these conditions the expression of the posterior distribution becomes:
p(x, v, v_\epsilon|y) \propto p(y|x, v_\epsilon)\,p(x|v)\,p(v)\,p(v_\epsilon)
p(x, v, v_\epsilon|y) \propto \mathcal{N}(y|Hx, v_\epsilon I)\prod_{j=1}^{N}\mathcal{N}(x_j|0, v_j)\prod_{j=1}^{N}\mathcal{IG}(v_j|\alpha_v, \beta_v)\,\mathcal{IG}(v_\epsilon|\alpha_\epsilon, \beta_\epsilon)
The fully developed expression of the logarithm of the posterior distribution reads:
\ln p(x, v, v_\epsilon|y) = -\frac{M}{2}\ln v_\epsilon - \frac{\|y - Hx\|^2}{2v_\epsilon} - \frac{1}{2}\sum_{j=1}^{N}\ln v_j - \frac{1}{2}\sum_{j=1}^{N}\frac{x_j^2}{v_j} - (\alpha_v + 1)\sum_{j=1}^{N}\ln v_j - \beta_v\sum_{j=1}^{N}\frac{1}{v_j} - (\alpha_\epsilon + 1)\ln v_\epsilon - \frac{\beta_\epsilon}{v_\epsilon}
Applying BVA as presented above, we search for a separable distribution q(x, v, vε) that approximates the posterior distribution in Eq. (39). We may treat x as a random vector with independent components xj:
q(x, v, v_\epsilon) = q_1(x)\,q_2(v)\,q_3(v_\epsilon) = \prod_{j=1}^{N}q_{1,j}(x_j)\prod_{j=1}^{N}q_{2,j}(v_j)\,q_3(v_\epsilon),
or with correlated components:
q(x, v, v_\epsilon) = q_1(x)\,q_2(v)\,q_3(v_\epsilon) = q_1(x)\prod_{j=1}^{N}q_{2,j}(v_j)\,q_3(v_\epsilon),

Henceforward we develop both the correlated case and the independent case. We continue by solving the following expectations:

\begin{cases}
q_1(x) \propto \exp\big\langle\mathcal{L}\big\rangle_{q_2 q_3} \quad\text{or}\quad q_{1,j}(x_j) \propto \exp\big\langle\mathcal{L}\big\rangle_{q_{1,-j} q_2 q_3}\\[1ex]
q_{2,j}(v_j) \propto \exp\big\langle\mathcal{L}\big\rangle_{q_1 q_{2,-j} q_3}, \quad j \in \{1, \ldots, N\}\\[1ex]
q_3(v_\epsilon) \propto \exp\big\langle\mathcal{L}\big\rangle_{q_1 q_2},
\end{cases}
where ℒ denotes the logarithm of the posterior distribution in Eq. (39), and we arrive at:
\begin{cases}
q_1(x) \propto \exp\big\{-\frac{1}{2}(x - \tilde{\mu})^*\tilde{\Sigma}^{-1}(x - \tilde{\mu})\big\} \equiv \mathcal{N}(x|\tilde{\mu}, \tilde{\Sigma}) \quad\text{or}\quad q_{1,j}(x_j) \propto \exp\Big\{-\frac{(x_j - \tilde{\mu}_j)^2}{2[\tilde{\Sigma}]_{jj}}\Big\}\\[1ex]
q_{2,j}(v_j) \propto v_j^{-(\tilde{\alpha}_v+1)}\exp\{-\tilde{\beta}_{v,j}/v_j\} \equiv \mathcal{IG}(v_j|\tilde{\alpha}_v, \tilde{\beta}_{v,j})\\[1ex]
q_3(v_\epsilon) \propto v_\epsilon^{-(\tilde{\alpha}_\epsilon+1)}\exp\{-\tilde{\beta}_\epsilon/v_\epsilon\} \equiv \mathcal{IG}(v_\epsilon|\tilde{\alpha}_\epsilon, \tilde{\beta}_\epsilon)
\end{cases}
The expressions of the estimators are found by a simple process of identification.
\begin{cases}
\hat{x} = \tilde{\mu} = \big(H^*H + \hat{v}_\epsilon\tilde{V}^*\tilde{V}\big)^{-1}H^*y, \quad \tilde{V} = \mathrm{diag}\{1/\sqrt{\hat{v}_j}\},\; j \in \{1, \ldots, N\} \quad\text{or}\quad \hat{x}_j = \Big[\sum_{i=1}^{M}h_{ij}^2 + \hat{v}_\epsilon/\hat{v}_j\Big]^{-1}\sum_{i=1}^{M}h_{ij}y_i\\[1ex]
\tilde{\Sigma} = \hat{v}_\epsilon\big(H^*H + \hat{v}_\epsilon\tilde{V}^*\tilde{V}\big)^{-1}, \quad [\tilde{\Sigma}]_{jj} = \hat{v}_\epsilon\Big[\sum_{i=1}^{M}h_{ij}^2 + \hat{v}_\epsilon/\hat{v}_j\Big]^{-1}\\[1ex]
\hat{v}_j = \tilde{\beta}_{v,j}/\tilde{\alpha}_v, \quad \tilde{\alpha}_v = \alpha_v + \tfrac{1}{2}, \quad \tilde{\beta}_{v,j} = \beta_v + \tfrac{1}{2}\big(\tilde{\mu}_j^2 + [\tilde{\Sigma}]_{jj}\big)\\[1ex]
\hat{v}_\epsilon = \tilde{\beta}_\epsilon/\tilde{\alpha}_\epsilon, \quad \tilde{\alpha}_\epsilon = \alpha_\epsilon + \tfrac{M}{2}, \quad \tilde{\beta}_\epsilon = \beta_\epsilon + \tfrac{1}{2}\|y - H\tilde{\mu}\|^2
\end{cases}

The numerical computation of these estimators requires the initialization of a few parameters linked to the Inverse Gamma distributions. Standard values for the hyper-parameters, αv = αε = 1 and βv = βε = 1, can be chosen as input so that, at the first iteration, the regularization term v̂ε/v̂j equals one, leading to a solution identical to the minimum norm least squares estimator (as in Eq. (20) with λR = 1). An iterative optimization process can now begin where the estimators {x̂, v̂ε, v̂j} at iteration i + 1 are computed using the values obtained at the previous iteration i, until convergence is reached. Practice has shown that very few iterations (often less than 10) are necessary to reach convergence. The difference in computational speed compared to the non-iterative Tikhonov solution is negligible, since all calculations included in the optimization process are vector-based arithmetic and do not involve any Fourier transforms.
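The following sketch implements the element-wise (independent-component) updates listed above in NumPy. It is a simplification under the assumption of a real-valued, explicitly stored H, and it follows the written expressions literally; hyper-parameter values, names and the toy data are illustrative.

```python
import numpy as np

def bva_gaussian_prior(H, y, alpha_v=1.0, beta_v=1.0, alpha_e=1.0, beta_e=1.0, n_iter=10):
    """Sketch of the VBA updates for the independent-component Gaussian prior case,
    alternating between the unknowns, their prior variances and the noise variance."""
    M, N = H.shape
    h2 = np.sum(H**2, axis=0)                 # sum_i h_ij^2 for each column j
    v_j = np.full(N, beta_v / alpha_v)        # prior variances, start at 1
    v_e = beta_e / alpha_e                    # noise variance, start at 1
    x = np.zeros(N)

    for _ in range(n_iter):
        # update the unknowns and their posterior variances
        denom = h2 + v_e / v_j
        x = (H.T @ y) / denom
        sigma_jj = v_e / denom
        # update the hyper-parameters (means of the Inverse Gamma posteriors)
        v_j = (beta_v + 0.5 * (x**2 + sigma_jj)) / (alpha_v + 0.5)
        v_e = (beta_e + 0.5 * np.sum((y - H @ x)**2)) / (alpha_e + 0.5 * M)
    return x, v_e / v_j                        # solution and regularization vector

# Toy usage with assumed data
rng = np.random.default_rng(1)
H = rng.standard_normal((60, 30))
y = H @ rng.standard_normal(30) + 0.05 * rng.standard_normal(60)
x_hat, reg = bva_gaussian_prior(H, y)
```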

The solution obtained here for x̂ is similar to the one given by minimum norm least squares regularization, Eq. (20). The significant change in the expression of the estimator is that the regularization parameter λR, a scalar, is here replaced by a vector defined as the ratio of the estimated variances of the noise and of the prior distributions, v̂ε/v̂j. Interestingly, this particular result is consistent with the formulation of the Wiener deconvolution filter [27]. Widely used in various image processing applications, this filter makes use of the same ratio between the variance of the noise and the variance of the signal (a signal-to-noise measure) in order to improve on the least squares filter. Coming back to our proposed solution we can underline several advantages. The values of the regularization vector v̂ε/v̂j are obtained alongside the final solution in an alternating optimization scheme. This leads to an unsupervised determination of the regularization parameter that replaces the empirical, trial-and-error tuning necessary in classical regularization. Moreover, the availability of a specific regularization parameter for each element of x̂ obviously allows for more flexibility than the scalar λR, since different components of x̂ may require regularization with different ’weights’. In the case of phase retrieval this non-uniform regularization pattern proves very useful in dealing with low frequency noise.

2.2. Phase retrieval through Bayesian regularization

In order to implement the algorithm constructed in the previous section we need to follow a few preparatory steps. First the outputs of the in-line holography model, the acquired images Ik of size n × n pixels need to be translated into Fourier space and padded accordingly (to a size of 2n × 2n pixels) and then assembled and reshaped into a single vector y of length 4Kn2 pixels, where K denotes the number of recorded diffraction patterns. We consider here the acquisition of four images (K = 4) as it was the case with the experimental data that we treat later. The phase and attenuation images we intend to retrieve are of course of the same size 2n × 2n pixels, after padding. Consequently, the x vector comprising the inferred quantities must be of length 4n2 in the case of a pure-phase object (when we only retrieve the phase) or 8n2 in the more general case (when we reconstruct the entire refractive index). At the end of the algorithm the solution has to be deconstructed into one or two matrices which, after applying the Inverse Fourier transform, will represent the retrieved phase ϕ and attenuation B. The scheme below illustrates the algebraic result of the vectorization process we described.

\begin{bmatrix} \tilde{I}_1(f_{11}) \\ \vdots \\ \tilde{I}_1(f_{nn}) \\ \tilde{I}_2(f_{11}) \\ \vdots \\ \tilde{I}_2(f_{nn}) \\ \tilde{I}_3(f_{11}) \\ \vdots \\ \tilde{I}_3(f_{nn}) \\ \tilde{I}_4(f_{11}) \\ \vdots \\ \tilde{I}_4(f_{nn}) \end{bmatrix}
=
\begin{bmatrix}
s_1(f_{11}^2) & & 0 & -c_1(f_{11}^2) & & 0 \\
& \ddots & & & \ddots & \\
0 & & s_1(f_{nn}^2) & 0 & & -c_1(f_{nn}^2) \\
\vdots & & & \vdots & & \\
s_4(f_{11}^2) & & 0 & -c_4(f_{11}^2) & & 0 \\
& \ddots & & & \ddots & \\
0 & & s_4(f_{nn}^2) & 0 & & -c_4(f_{nn}^2)
\end{bmatrix}
\begin{bmatrix} \tilde{\phi}(f_{11}) \\ \vdots \\ \tilde{\phi}(f_{nn}) \\ \tilde{B}(f_{11}) \\ \vdots \\ \tilde{B}(f_{nn}) \end{bmatrix}
with y of size [4Kn^2, 1], H of size [4Kn^2, 8n^2] and x of size [8n^2, 1].

Lastly we need to define the model matrix H. Its elements are the coefficients of the CTF model used and have the following general expressions:

s_k(f_{ij}^2) = 2\sin(\pi\lambda D_k f_{ij}^2); \qquad c_k(f_{ij}^2) = 2\cos(\pi\lambda D_k f_{ij}^2)
where k stands for the index of the distance at which a certain image was acquired. In propagation-based phase retrieval the acquired images are often large, typically n = 2048 pixels. The matrix H is thus found to be very large and impractical to store or manipulate in its entirety. Fortunately, it is also highly sparse, composed of diagonal blocks, which means it is sufficient to store the diagonals representing the coefficients of the contrast transfer function. The matrix operations needed in Eq. (45) can be comfortably handled by operating with the vectors sk and ck.
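As a sketch of this matrix-free handling, the functions below apply H and its adjoint using only the stored CTF diagonals sk and ck (with the sign convention of the full CTF model written above); shapes, names and the example grid are assumptions.

```python
import numpy as np

def build_ctf_diagonals(distances, wavelength, freq_sq):
    """Return the stacked diagonals s_k and c_k of the block matrix H."""
    chi = np.pi * wavelength * np.asarray(distances)[:, None] * freq_sq.ravel()[None, :]
    return 2 * np.sin(chi), 2 * np.cos(chi)        # shapes (K, P), P = number of frequencies

def H_dot(s, c, x):
    """y = Hx for x = [phi_tilde; B_tilde] flattened, without forming H."""
    P = s.shape[1]
    phi_t, B_t = x[:P], x[P:]
    return (s * phi_t[None, :] - c * B_t[None, :]).ravel()   # full CTF: +2sin*phi - 2cos*B

def H_adjoint_dot(s, c, y):
    """x = H^T y, again using only the stored diagonals."""
    K, P = s.shape
    y = y.reshape(K, P)
    return np.concatenate([np.sum(s * y, axis=0), -np.sum(c * y, axis=0)])

# Example usage with an assumed 64x64 frequency grid and the four distances of the paper
fx = np.fft.fftfreq(64, d=10e-9)
f2 = fx[None, :]**2 + fx[:, None]**2
s, c = build_ctf_diagonals([16.8e-3, 17.8e-3, 21.4e-3, 30.4e-3], 0.073e-9, f2)
```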

3. Results

3.1. Simulations

We first validated our method on simulated data. We used modified Shepp-Logan phantoms to model the ground truth, i.e. the real and imaginary components of the complex refractive index. Figure 2 shows the phase component ϕ of the constructed object. The attenuation component was taken proportional to the phase, lower by a factor of 1 000, corresponding to a δ/β ratio typical for water and soft tissue samples. We simulated the propagation of the constructed wavefield in the object plane, u0 = exp{−B + iϕ}, to four distances D by multiplication with the Fresnel propagator P̃D (see relation (6)) in Fourier space, ũD = P̃D ũ0. The four propagation distances D1 = 16.8 mm, D2 = 17.8 mm, D3 = 21.4 mm, D4 = 30.4 mm and the wavelength λ ≈ 0.073 nm required for the computation of the propagator were selected to be similar to real experimental conditions. To the resulting diffraction patterns ID = |uD|2 we added zero-mean Gaussian noise in real space. The standard deviation of the noise was chosen as a fraction of the standard deviation of the respective image (a third in the example shown here).
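A compact sketch of this simulation pipeline is given below: the object wavefield is Fresnel-propagated to each distance and zero-mean Gaussian noise with a standard deviation equal to a fraction of each image's standard deviation is added. The toy phantom, grid and pixel size are assumptions standing in for the modified Shepp-Logan object.

```python
import numpy as np

def simulate_holograms(phi, B, distances, wavelength, pixel_size, noise_frac=1/3, seed=0):
    """Generate noisy in-line holograms from a phase map phi and attenuation B:
    Fresnel-propagate u0 = exp(-B + i*phi) and add zero-mean Gaussian noise whose
    standard deviation is a fraction of the standard deviation of each image."""
    rng = np.random.default_rng(seed)
    ny, nx = phi.shape
    fx = np.fft.fftfreq(nx, d=pixel_size)
    fy = np.fft.fftfreq(ny, d=pixel_size)
    f2 = fx[None, :]**2 + fy[:, None]**2
    u0_tilde = np.fft.fft2(np.exp(-B + 1j * phi))

    holograms = []
    for D in distances:
        uD = np.fft.ifft2(np.exp(-1j * np.pi * wavelength * D * f2) * u0_tilde)
        ID = np.abs(uD)**2
        holograms.append(ID + rng.normal(0.0, noise_frac * ID.std(), ID.shape))
    return holograms

# Assumed toy phantom in place of the modified Shepp-Logan object
phi = np.zeros((256, 256)); phi[96:160, 96:160] = 0.5
holos = simulate_holograms(phi, phi / 1000.0, [16.8e-3, 17.8e-3, 21.4e-3, 30.4e-3],
                           wavelength=0.073e-9, pixel_size=10e-9)
```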

Fig. 2 Phase component of the constructed object in radians.

The four simulated holograms finally obtained are shown in Fig. 3. They were used as input for a phase retrieval algorithm that assumed the pure-phase CTF model of Eq. (12), reflecting the large δ/β value chosen.

Fig. 3 Simulated holograms of the constructed object for different propagation distances: a) D1 = 16.8 mm, b) D2 = 17.8 mm, c) D3 = 21.4 mm, d) D4 = 30.4 mm.

The featured results were obtained by L2 regularization (Fig. 4(a)) and by Bayesian inversion with a Gaussian prior assuming independent components (Fig. 4(b)). A qualitative comparison between the two retrieved ’phase maps’ shows the first one to suffer from a low frequency noise that erroneously lowers the values of the pixels in the center of the image and raises the values of those at the four corners. This undesired effect is less pronounced in the second result obtained with the proposed algorithm. In order to provide a quantitative assessment of the two retrieved ’phase maps’ we calculated the normalized mean square error for each result relative to the ground truth, NMSE = Σij(ϕ̂ij − ϕij)²/Σij(ϕij)², where ϕ̂ is the retrieved phase and ϕ is the ground truth. The values of the normalized mean square error found in the two cases were 0.44 for the standard regularization result and 0.27 for the Bayesian inversion. Multiple simulation scenarios were explored where we varied model parameters like the δ/β value (in order to simulate weaker or stronger absorbing objects), the CTF formalism applied for inversion or the standard deviation of the added noise. All the results we obtained suggest that the proposed Bayesian-based algorithm is capable of providing robust solutions that are qualitatively and quantitatively superior to the standard method.
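The error metric itself is a one-liner; the following helper computes the NMSE exactly as defined above (names are illustrative).

```python
import numpy as np

def nmse(phi_hat, phi_true):
    """Normalized mean square error between a retrieved phase map and the ground truth."""
    return np.sum((phi_hat - phi_true)**2) / np.sum(phi_true**2)
```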

Fig. 4 Phase map obtained through a) standard regularization and b) Bayesian inference.

3.2. Experimental data

The proposed method was applied to different experimental datasets acquired at the ID16A NanoImaging beamline of the European Synchrotron Radiation Facility (ESRF). The presented data are chemically fixed red blood cells, imaged at an energy of 17.05 keV (equivalent to an x-ray wavelength of 0.073 nm). The beam was focused by Kirkpatrick-Baez optics [28, 29] down to approximately 30 nm in both directions [30]. The sample was placed behind the focus, which acts as a quasi-point source creating a divergent beam geometry. Images were recorded using a FReLoN CCD based detector with a pixel size of 0.845 μm. Images were registered at four effective propagation distances (D1 = 4 mm, D2 = 4.16 mm, D3 = 4.82 mm, D4 = 6.17 mm) with a fixed focus-to-detector distance of 337 mm, yielding an effective pixel size of 10 nm.

We show the acquired holograms in Fig. 5 after the necessary pre-processing corrections (flat-field correction, compensation for magnification, mutual alignment, etc.) were applied. The phase maps of the red blood cell were retrieved using standard L2 regularization (see Fig. 6(a)) and the Bayesian algorithm assuming independent components (see Fig. 6(b)). As with the simulated data, we can see some improvement in low frequency noise reduction. One obvious fault of the phase map retrieved by standard regularization is the unjustified ’glow’, or high-value pixels, found in the close vicinity of the cell edge. This is interpreted as an effect of the twin-image problem [31]. Especially encouraging is that the Bayesian inversion algorithm seems to limit the effects of the twin-image problem, as revealed by the line profiles through the cell shown in Fig. 7.

Fig. 5 Pre-processed holograms of a red blood cell at four propagation distances a) D1 = 4 mm, b) D2 = 4.16 mm, c) D3 = 4.82 mm, d) D4 = 6.17 mm.

Fig. 6 Phase map of a red blood cell obtained through a) standard regularization and b) Bayesian inference.

Fig. 7 Line profiles (along the yellow lines of Fig. 6) through the retrieved phase maps obtained by classical regularization (blue) vs. Bayesian inference (green).

Applying the Bayesian method of phase retrieval to other real data, we could confirm its validity as an alternative to the standard inversion algorithm in place. The advantages theoretically anticipated were confirmed, notably the unsupervised determination of the regularization parameter(s). A basic quantitative comparison of the two results is shown in Table 1. Here by ’std’ we denote the standard deviation of a particular region of the image. SNR stands for the signal-to-noise ratio; the values in Table 1 were determined as the ratio between the squared standard deviation of the region of interest (the cell) and the squared standard deviation of the background (the region outside of the cell). The algorithm we introduced produced a phase map with a higher SNR, mainly due to a more pronounced signal gradient characterizing the cell region.
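The SNR figure of Table 1 reduces to a ratio of variances over two user-defined regions; a minimal helper, assuming boolean masks for the cell and the background, is sketched below.

```python
import numpy as np

def snr_from_masks(phase_map, cell_mask, background_mask):
    """SNR as used in Table 1: ratio of the squared standard deviation (variance)
    of the region of interest (the cell) to that of the background."""
    return np.var(phase_map[cell_mask]) / np.var(phase_map[background_mask])
```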

Table 1. Numerical assessment of the quality of phase retrieval through the standard and the Bayesian algorithms by means of standard deviation and signal-to-noise evaluation.

As shown in Fig. 8, the proposed algorithm quickly converges towards the final solution, the retrieved phase map shown in Fig. 6(b). Fast convergence (within 8 to 12 iterations) has been observed on each occasion the algorithm was applied, independent of the nature of the reconstructed sample (here, a weakly absorbing biological sample, i.e. a red blood cell; in other instances, more strongly absorbing samples or variously defined phantom objects). In the particular case of the red blood cell, in order to enable the fastest convergence we chose to initialize the variables v̂ε and v̂j in such a manner that their ratio - the regularization parameter at the first iteration - equals 0.1. This value was found after running the algorithm multiple times. The initialization value chosen for the variance of the noise, v̂ε = 25, was found as the average variance of the background in the four recorded images multiplied by the number of pixels in one image (4n2). At the first iteration, the obtained phase map resembles a too strongly regularized minimum norm least squares solution, as predicted in the discussion of Eq. (45). The second iteration provides a solution similar to the classical regularization result shown in Fig. 6(a), which suffers from pronounced twin-image artifacts in the form of the unrealistic “glow” surrounding the cell. The following iterations enable a fast optimization of this result, as indicated by the quick stabilization of the norm of the solution around the value of 3.52 × 10−2 (see table in Fig. 8). Correcting some of the abundant low-frequency noise present at the second iteration and showcasing a ’flatter’ background, the final solution represents an improved phase map reconstruction.

Fig. 8 Solutions (retrieved phase maps) provided by the Bayesian algorithm at each iteration. The norm of these solutions, calculated in Fourier space, indicates quick convergence (for convenience, a factor of 10−2 is omitted from the table).

In an attempt to analyze the amount of regularization performed by the proposed algorithm, we looked at the modulus of the ratio v̂ε/v̂j (equivalent to the regularization parameter for our method) and its variation with the spatial frequency, as shown in Fig. 9. First we observe how sharply the function varies in the low to intermediate frequency range. This showcases the advantage of the proposed method in adapting the amount of regularization to each specific spatial frequency. It is easy to see how a classical approach using a scalar regularization parameter, constant for all frequencies, might fail to apply the appropriate amount of regularization over at least a certain frequency range. As expected, the value of the regularization parameter is larger for very low frequencies, close to zero, corresponding to the low sensitivity of propagation based imaging to slow variations of the phase; the phase transfer function is close to zero at low spatial frequencies, as discussed at the end of section 1.1. For high frequencies, where the signal-to-noise ratio is again low, the regularization parameter is also large in order to compensate for the noisy data.
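The radial averaging used for Fig. 9 can be sketched as follows, assuming the regularization vector |v̂ε/v̂j| has been reshaped to the 2D frequency grid; the number of bins is an arbitrary choice.

```python
import numpy as np

def radial_average(reg_map, pixel_size, n_bins=100):
    """Radially average a 2D map of |v_e/v_j| over rings of constant |f|."""
    ny, nx = reg_map.shape
    fx = np.fft.fftfreq(nx, d=pixel_size)
    fy = np.fft.fftfreq(ny, d=pixel_size)
    fr = np.sqrt(fx[None, :]**2 + fy[:, None]**2).ravel()
    idx = np.minimum((fr / fr.max() * n_bins).astype(int), n_bins - 1)
    sums = np.bincount(idx, weights=reg_map.ravel(), minlength=n_bins)
    counts = np.bincount(idx, minlength=n_bins)
    freq_centers = (np.arange(n_bins) + 0.5) * fr.max() / n_bins
    return freq_centers, sums / np.maximum(counts, 1)   # ring centers and averaged values
```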

Fig. 9 Plot of the radially averaged regularization parameter |v̂ε/v̂j| versus the amplitude of the spatial frequency, normalized by the sampling frequency fs.

4. Conclusions

The inverse problem of phase retrieval in in-line holography is ill-posed. We considered the case of multi-distance propagation-based phase contrast imaging and introduced linearized forward models expressed through Contrast Transfer Functions. The ill-posedness of the problem was readily recognized at low spatial frequencies and at the zero-crossings of the transfer functions, making the phase retrieval procedure particularly sensitive to noise and measurement errors. The present work first discussed standard Tikhonov regularization and highlighted its drawbacks, such as the difficult determination of the regularization parameter and the reasoning behind the choice of the appropriate type of regularization term. As an alternative to the standard approach, we proposed a statistical estimation method based on Bayesian inference. An iterative optimization algorithm was constructed along the lines of the Bayesian Variational Approximation method and applied to the linearized inverse problem of phase retrieval. An immediate advantage over the standard framework is the unsupervised determination of the regularization parameter. Moreover, this parameter is no longer a scalar, but a vector of the same length as the solution. This allows for a better tuned result by applying a specific scalar regularization value to each frequency in the Fourier representation of the solution. Another improvement over classical regularization is the fact that this algorithm accounts for and models the noise through one of the prior distributions. Bayesian inversion as a statistical method allows for more informed decisions, intentionally shaping the solution through a priori knowledge instead of the repetitive, empirical choices required by deterministic methods based on Tikhonov regularization. The introduced algorithm was tested on simulated and experimental data. The results obtained on the constructed phantom proved numerically superior to those provided by the standard method in terms of normalized mean square error. Applied to experimental data, the Bayesian algorithm provided qualitatively improved results by reducing the negative effects of low frequency noise and the twin-image problem. Its fast convergence and robustness make it usable on different kinds of data without significant computational overhead. In order to make better use of the a priori information available, one can consider priors other than Gaussian [32]. The flexibility of the algorithm allows for the quick implementation of alternative solutions based on these different priors. The ability of the proposed algorithm to infer the unknowns as well as other multi-dimensional quantities like the regularization parameter or the noise itself shows that it can make better use of the redundancy in the input information available in multi-distance propagation-based phase imaging. This quality makes the proposed approach well suited to other redundancy-rich methods in coherent diffraction imaging like near-field [33] or far-field [34,35] ptychography.

References

1. J. Als-Nielsen and D. McMorrow, Elements of modern X-ray physics (John Wiley & Sons, 2011). [CrossRef]  

2. T. Davis, D. Gao, T. Gureyev, A. Stevenson, and S. Wilkins, “Phase-contrast imaging of weakly absorbing materials using hard x-rays,” Nature 373, 595 (1995). [CrossRef]  

3. A. Snigirev, I. Snigireva, V. Kohn, S. Kuznetsov, and I. Schelokov, “On the possibilities of x-ray phase contrast microimaging by coherent high-energy synchrotron radiation,” Rev. Sci. Instrum. 66, 5486–5492 (1995). [CrossRef]  

4. P. Cloetens, R. Barrett, J. Baruchel, J.-P. Guigay, and M. Schlenker, “Phase objects in synchrotron radiation hard x-ray imaging,” J. Phys. D: Appl. Phys. 29, 133 (1996). [CrossRef]  

5. R. Mokso, P. Cloetens, E. Maire, W. Ludwig, and J.-Y. Buffière, “Nanoscale zoom tomography with hard x rays using kirkpatrick-baez optics,” Appl. Phys. Lett. 90, 144104 (2007). [CrossRef]  

6. K. A. Nugent, “Coherent methods in the x-ray sciences,” Adv. Phys. 59, 1–99 (2010). [CrossRef]  

7. J. W. Goodman, Introduction to Fourier optics (Roberts and Company Publishers, 2005).

8. J.-P. Guigay, “Fourier-transform analysis of fresnel diffraction patterns and in-line holograms,” Optik 49, 121–125 (1977).

9. T. Gureyev, A. Roberts, and K. Nugent, “Partially coherent fields, the transport-of-intensity equation, and phase uniqueness,” J. Opt. Soc. Am. A 12, 1942–1946 (1995). [CrossRef]  

10. K. Nugent, T. Gureyev, D. Cookson, D. Paganin, and Z. Barnea, “Quantitative phase imaging using hard x rays,” Phys. Rev. Lett. 77, 2961 (1996). [CrossRef]   [PubMed]  

11. R. Hofmann, J. Moosmann, and T. Baumbach, “Criticality in single-distance phase retrieval,” Opt. Express 19, 25881–25890 (2011). [CrossRef]  

12. S. Zabler, P. Cloetens, J.-P. Guigay, J. Baruchel, and M. Schlenker, “Optimization of phase contrast imaging using hard x rays,” Rev. Sci. Instrum. 76, 073705 (2005). [CrossRef]  

13. J. P. Guigay, M. Langer, R. Boistel, and P. Cloetens, “Mixed transfer function and transport of intensity approach for phase retrieval in the fresnel region,” Opt. Lett. 32, 1617–1619 (2007). [CrossRef]   [PubMed]  

14. D. Paganin, S. Mayo, T. E. Gureyev, P. R. Miller, and S. W. Wilkins, “Simultaneous phase and amplitude extraction from a single defocused image of a homogeneous object,” J. Microsc. 206, 33–40 (2002). [CrossRef]   [PubMed]  

15. L. Turner, B. Dhal, J. Hayes, A. Mancuso, K. Nugent, D. Paterson, R. Scholten, C. Tran, and A. Peele, “X-ray phase imaging: Demonstration of extended conditions with homogeneous objects,” Opt. Express 12, 2960–2965 (2004). [CrossRef]   [PubMed]  

16. A. N. Tikhonov, A. Goncharsky, V. Stepanov, and A. G. Yagola, Numerical methods for the solution of ill-posed problems, vol. 328 (Springer Science & Business Media, 2013).

17. A. Chambolle, “An algorithm for total variation minimization and applications,” J. Math. Imag. Vis. 20, 89–97 (2004). [CrossRef]  

18. A. Kostenko, K. J. Batenburg, H. Suhonen, S. E. Offerman, and L. J. Van Vliet, “Phase retrieval in in-line x-ray phase contrast imaging based on total variation minimization,” Opt. Express 21, 710–723 (2013). [CrossRef]   [PubMed]  

19. A. Pein, S. Loock, G. Plonka, and T. Salditt, “Using sparsity information for iterative phase retrieval in x-ray propagation imaging,” Opt. Express 24, 8332–8343 (2016). [CrossRef]   [PubMed]  

20. J. R. Hershey and P. A. Olsen, “Approximating the kullback leibler divergence between gaussian mixture models,” in IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP (IEEE, 2007), vol. 4, pp. IV–317.

21. S. Maretzke, M. Bartels, M. Krenkel, T. Salditt, and T. Hohage, “Regularized newton methods for x-ray phase contrast and general imaging problems,” Opt. Express 24, 6490–6506 (2016). [CrossRef]   [PubMed]  

22. A. Mirone, E. Brun, E. Gouillart, P. Tafforeau, and J. Kieffer, “The pyhst2 hybrid distributed code for high speed tomographic reconstruction with iterative reconstruction and a priori knowledge capabilities,” Nucl. Instrum. Meth. Phys. Res. Sect. B 324, 41–48 (2014). [CrossRef]  

23. P. C. Hansen, “Analysis of discrete ill-posed problems by means of the l-curve,” SIAM review 34, 561–580 (1992). [CrossRef]  

24. G. E. Box and G. C. Tiao, Bayesian inference in statistical analysis, vol. 40 (John Wiley & Sons, 2011).

25. A. Mohammad-Djafari, “A full bayesian approach for inverse problems,” in Maximum Entropy and Bayesian Methods (Springer, 1996), pp. 135–144. [CrossRef]  

26. A. Mohammad-Djafari, “Bayesian approach with prior models which enforce sparsity in signal and image processing,” EURASIP J. Adv. Signal Process. 2012, 52 (2012). [CrossRef]  

27. J. Benesty, J. Chen, Y. A. Huang, and S. Doclo, “Study of the wiener filter for noise reduction,” in Speech Enhancement (Springer, 2005), pp. 9–41. [CrossRef]  

28. P. Kirkpatrick and A. V. Baez, “Formation of optical images by x-rays,” J. Opt. Soc. Am. 38, 766–774 (1948). [CrossRef]   [PubMed]  

29. C. Morawe, R. Barrett, P. Cloetens, B. Lantelme, J.-C. Peffen, and A. Vivo, “Graded multilayers for figured kirkpatrick-baez mirrors on the new esrf end station id16a,” Proc. SPIE 9588, 958803 (2015). [CrossRef]  

30. J. C. da Silva, A. Pacureanu, Y. Yang, S. Bohic, C. Morawe, R. Barrett, and P. Cloetens, “Efficient concentration of high-energy x-rays for diffraction-limited imaging resolution,” Optica 4, 492–495 (2017). [CrossRef]  

31. M. Guizar-Sicairos and J. R. Fienup, “Understanding the twin-image problem in phase retrieval,” J. Opt. Soc. Am. A 29, 2367–2375 (2012). [CrossRef]  

32. F. Soulez, Éric Thiébaut, A. Schutz, A. Ferrari, F. Courbin, and M. Unser, “Proximity operators for phase retrieval,” Appl. Opt. 55, 7412–7421 (2016). [CrossRef]   [PubMed]  

33. M. Stockmar, P. Cloetens, I. Zanette, B. Enders, M. Dierolf, F. Pfeiffer, and P. Thibault, “Near-field ptychography: phase retrieval for inline holography using a structured illumination,” Sci. Rep. 3, 1927 (2013). [CrossRef]   [PubMed]  

34. M. Dierolf, P. Thibault, A. Menzel, C. M. Kewish, K. Jefimovs, I. Schlichting, K. Von Koenig, O. Bunk, and F. Pfeiffer, “Ptychographic coherent diffractive imaging of weakly scattering specimens,” New J. Phys. 12, 035017 (2010). [CrossRef]  

35. P. Thibault, M. Dierolf, O. Bunk, A. Menzel, and F. Pfeiffer, “Probe retrieval in ptychographic coherent diffractive imaging,” Ultramicroscopy 109, 338–343 (2009). [CrossRef]   [PubMed]  
