Abstract

We propose a novel low-complexity recursive filter to efficiently recover quantitative phase from a series of noisy intensity images taken through focus. We first transform the wave propagation equation and the nonlinear observation model (intensity measurement) into a complex augmented state space model. From the state space model, we derive a sparse augmented complex extended Kalman filter (ACEKF) to infer the complex optical field (amplitude and phase), and find that it converges under mild conditions. Our proposed method has a computational complexity of 𝒪(NzN log N) and a storage requirement of 𝒪(N), compared with the original ACEKF method, which has a computational complexity of 𝒪(NzN³) and a storage requirement of 𝒪(N²), where Nz is the number of images and N is the number of pixels in each image. Thus, our method is efficient, robust and recursive, and may be feasible for real-time phase recovery applications with high-resolution images.

© 2013 Optical Society of America

1. Introduction

When light propagates, its amplitude and phase evolve according to the wave equation. These amplitude and phase perturbations contain important information about the complex field in the original (focal) plane. Intensity can be measured easily; however, phase cannot be measured directly and must be reconstructed from intensity images by computational methods. Most solutions for phase recovery do not explicitly account for noise in the measurement, and thus typical methods are not applicable in high-noise situations, such as X-ray imaging or photon-starved conditions. Recently, Kalman filters have been used to estimate phase from very noisy intensity images (signal-to-noise ratio less than one) [1]; however, the method is very computationally intensive and impractical for images with large pixel counts (e.g. 1 megapixel or greater). In this paper, we solve these problems by developing efficient inference methods for recovering the phase from a set of defocused intensity images, which can be captured either sequentially, by moving the camera along the optical axis, or all at once, by using various focal-stack collection methods [2, 3]. Such methods can be useful for applications in medical imaging, neuroscience, and materials science, where low-noise images are difficult to obtain. Furthermore, since our proposed method is recursive (not iterative), it can estimate phase in real time during the measurement sequence, progressively improving the estimate as each new image is captured.

Traditional methods for recovering phase involve complicated interferometric setups, so there is a significant experimental advantage to methods such as ours, which only require a set of intensity images taken with different amounts of defocus. One of the earliest algorithms, the Gerchberg-Saxton (GS) method [4, 5], uses only two images and treats the problem as convex (which it generally is not), iterating back and forth between two domains to reduce the error. In [6], a generalization of the GS method is given for the case of multiple defocused images. These methods tend to be strongly sensitive to noise in the last image, although techniques exist for averaging at each iteration. Direct methods generally linearize the problem, either by assuming that the object is weak or that the propagation distance is small. This allows one to exploit the Transport of Intensity Equation (TIE) [7], which relates the phase to the first derivative of intensity with respect to the optical axis. The first derivative of intensity is usually approximated by finite differences [8, 9], which is not robust to severe noise. A few statistical approaches have been proposed as well. An approximation to the maximum likelihood estimator is derived in [10, 11]. However, it easily gets stuck in local maxima and sometimes yields poor results.

In [1], an augmented complex extended Kalman filter (ACEKF) was used to solve for phase in the presence of significant noise corruption. The Kalman filter is a well-known recursive algorithm for estimating dynamically changing quantities from a set of noisy measurements. In the linear case with Gaussian noise statistics, the Kalman filter is the optimal estimator. However, intensity is a nonlinear measurement of the complex field, and noise in images generally follows Poisson statistics, so the solution is not provably optimal. Extended Kalman filters add a linearization step in order to account for nonlinear measurements. Further, we use a complex EKF in order to deal with complex numbers. The resulting ACEKF outperforms many previous algorithms, at the cost of a very high computational complexity of 𝒪(NzN³) and a storage requirement of 𝒪(N²). The memory requirement is particularly limiting: for an image with a pixel count on the order of a megapixel, terabytes of data would have to be stored and processed. Thus, the algorithm is infeasible for large images and impractical for real-time applications. In [12], a diagonalized complex extended Kalman filter (diagonalized CEKF) was proposed to alleviate these issues without jeopardizing the reconstruction accuracy. The diagonalized CEKF is iterative: it needs to cycle through the set of intensity images repeatedly, yielding a more accurate phase reconstruction after each cycle, and the computational cost grows with the number of cycles.

The sparse ACEKF proposed here uses the same augmented state space model as ACEKF. Under the assumption that the phase is small, we derive theorems that simplify the update equations of ACEKF, eventually resulting in a low-complexity, noise-robust phase reconstruction algorithm. The proposed method has a computational complexity of 𝒪(NzN log N), while ACEKF requires 𝒪(NzN³). Despite the lower complexity, our method still achieves better phase reconstruction than both ACEKF and the diagonalized CEKF.

This paper is organized as follows. In the next section, we turn the propagation and observation models from scalar wave optics into an augmented state space model in signal processing. In Section 3, we derive the proposed sparse ACEKF algorithm. In Section 4, we analyze the convergence and stability of the new ACEKF. In Section 5, we compare results of different methods using synthetic and experimental data. We offer concluding remarks in Section 6.

2. Problem description and state space model of the optical field

We aim to estimate the 2D complex field A(x,y,z_0) at the focal plane z_0, from a sequence of noisy intensity images I(x,y,z) captured at various distances z_0, z_1, z_2, ..., z_{Nz}. We assume a linear medium with homogeneous refractive index and coherent (laser) illumination, such that the complex field at z_0 fully determines the complex field at all other planes. The complex optical field at z is A(x,y,z) = |A(x,y,z)| e^{iϕ(x,y,z)}, where |A(x,y,z)| is the amplitude and ϕ(x,y,z) is the phase. Propagation is modeled by the homogeneous paraxial wave equation:

\frac{\partial A(x,y,z)}{\partial z} = \frac{i\lambda}{4\pi} \nabla^2 A(x,y,z),   (1)
where λ is the wavelength of the illumination, and ∇ is the gradient operator in the lateral (x,y) dimensions. The noisy measurements I(x,y,z) usually adhere to a (continuous) Poisson distribution:
p[I(x,y,z) \mid A(x,y,z)] = \frac{e^{-\gamma |A(x,y,z)|^2}\,\left(\gamma |A(x,y,z)|^2\right)^{I(x,y,z)}}{I(x,y,z)!},   (2)
where γ is the photon count detected by the camera. The measurement at each pixel I(x,y,z) is assumed statistically independent of any other pixel (conditioned on the optical field A(x,y,z)).

We discretize the optical field A(x,y,z) as a raster-scanned complex column vector a_n, and similarly discretize the intensity measurement I(x,y,z) as a column vector I_n. We denote by b(u,v,z) the 2-D Fourier transform of A(x,y,z). The column vector b_n is likewise raster-scanned from b(u,v,z), and hence can be expressed as b_n = K a_n, where K is the discrete Fourier transform matrix (normalized to be unitary). Since K is unitary, K K^H = K^H K = U, where U is the identity (unit) matrix and K^H denotes the Hermitian transpose of K.

Let us assume the distance between two consecutive planes is a constant Δz. Then, we can define the propagation matrix from zn−1 to zn as [13]:

H = \mathrm{diag}\left( \exp\left[ i\lambda\pi\left( \frac{u_1^2}{L_x^2} + \frac{v_1^2}{L_y^2} \right)\Delta z \right], \ldots, \exp\left[ i\lambda\pi\left( \frac{u_M^2}{L_x^2} + \frac{v_N^2}{L_y^2} \right)\Delta z \right] \right),   (3)
where Lx and Ly are the width and height of the image, respectively.

In the Fourier domain, the relation between the fields at two planes separated by Δz can therefore be written as:

b_n = H b_{n-1}.   (4)
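
To make the discrete propagation model concrete, the following sketch (not from the paper; the function name and sampling choices are ours) applies the kernel of Eq. (3) to a field via FFTs, i.e., it computes a_n = K^H H K a_{n-1} as in Eq. (4). It assumes the sign convention of Eq. (3) and that u_m/L_x and v_n/L_y are spatial frequencies in cycles per unit length.

    import numpy as np

    def propagate(a, wavelength, dz, Lx, Ly):
        """Propagate the complex field a (2-D array) over a distance dz.

        Implements a_n = K^H (H b_{n-1}) with b_{n-1} = K a_{n-1}, where H holds the
        diagonal entries of Eq. (3); Lx, Ly are the physical width and height.
        """
        My, Nx = a.shape
        fx = np.fft.fftfreq(Nx, d=Lx / Nx)       # u_m / Lx, in cycles per unit length
        fy = np.fft.fftfreq(My, d=Ly / My)       # v_n / Ly
        FX, FY = np.meshgrid(fx, fy)
        H = np.exp(1j * wavelength * np.pi * (FX**2 + FY**2) * dz)   # diagonal of Eq. (3)
        b = np.fft.fft2(a, norm="ortho")         # b_{n-1} = K a_{n-1} (unitary K)
        return np.fft.ifft2(H * b, norm="ortho") # a_n = K^H H b_{n-1}

Applying such a function Nz times with step Δz reproduces the forward propagation model used throughout the paper.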

In order to fit within the Kalman filter framework, we approximate the Poisson observation model of Eq. (2) by a Gaussian distribution with the same mean and covariance. In particular, we consider the approximate observation model:

I_n = \gamma |a_n|^2 + v,   (5)
where v is a Gaussian noise vector with zero mean and covariance R = γ diag(a_n^*) diag(a_n). Here a_n^* denotes the complex conjugate of a_n, and diag(a_n^*) is a diagonal matrix whose diagonal entries are the elements of the vector a_n^*.

The nonlinear observation model in Eq. (5) is linearized as [14]:

I_n = \gamma |\hat{a}_n|^2 + \gamma\, \mathrm{diag}(\hat{a}_n^*)\,(a_n - \hat{a}_n) + \gamma\, \mathrm{diag}(\hat{a}_n)\,(a_n^* - \hat{a}_n^*) + v,   (6)
where â_n is the state predicted from the previous n−1 observations; Eq. (6) is the first-order Taylor series expansion of Eq. (5) around â_n.
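
As an illustration of the observation model, the sketch below (illustrative helper names, not from the paper) draws a Poisson measurement according to Eq. (2) and the corresponding Gaussian approximation of Eq. (5), whose per-pixel variance γ|a_n|² matches the Poisson variance.

    import numpy as np

    rng = np.random.default_rng(0)

    def measure_poisson(a, gamma):
        """Poisson intensity image with per-pixel mean gamma * |a|^2, as in Eq. (2)."""
        return rng.poisson(gamma * np.abs(a) ** 2).astype(float)

    def measure_gaussian(a, gamma):
        """Gaussian approximation of Eq. (5): same mean and (diagonal) covariance R."""
        mean = gamma * np.abs(a) ** 2
        return mean + np.sqrt(mean) * rng.standard_normal(a.shape)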

Summarizing, the augmented state space model is given as:

state: \begin{bmatrix} b_n \\ b_n^* \end{bmatrix} = \begin{bmatrix} H & 0 \\ 0 & H^* \end{bmatrix} \begin{bmatrix} b_{n-1} \\ b_{n-1}^* \end{bmatrix},   (7)
observation: I_n = \begin{bmatrix} J_n & J_n^* \end{bmatrix} \begin{bmatrix} b_n \\ b_n^* \end{bmatrix} - \gamma |\hat{a}_n|^2 + v, \quad \text{with } v \sim \mathcal{N}(0, R),   (8)
where
R = \gamma\, \mathrm{diag}(\hat{a}_n^*)\, \mathrm{diag}(\hat{a}_n) \quad \text{and} \quad J_n = \gamma\, \mathrm{diag}(\hat{a}_n^*)\, K^H.   (9)

We adopt the augmented state space model because the ACEKF with the augmented state outperforms the CEKF in both estimation error and convergence [14]. If the state is not augmented, as in the CEKF algorithm, the covariance E[b_n b_n^H] is not sufficient for a complete description of the complex state. Since the state in the ACEKF is augmented from b_n to [b_n  b_n^*]^T, it introduces both the covariance E[b_n b_n^H] and the pseudo-covariance E[b_n b_n^T]. Intuitively, E[b_n b_n^H] and E[b_n b_n^T] together provide a more comprehensive second-order statistical description of the complex state.
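
To illustrate why the pseudo-covariance matters, the toy example below (ours, not from [14]) compares a proper (circular) complex signal with an improper one; for the latter, E[b b^T] is nonzero and carries second-order information that E[b b^H] alone does not.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100000
    x = rng.standard_normal(n)
    b_proper = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
    b_improper = x + 0j                      # purely real signal: maximally improper

    for name, b in [("proper", b_proper), ("improper", b_improper)]:
        cov = np.mean(b * b.conj()).real     # sample E[b b^H] (scalar case)
        pseudo = abs(np.mean(b * b))         # sample |E[b b^T]|
        print(f"{name}: cov = {cov:.2f}, pseudo-cov = {pseudo:.2f}")
    # proper:   cov = 1.00, pseudo-cov = 0.00
    # improper: cov = 1.00, pseudo-cov = 1.00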

3. State estimation by sparse augmented complex extended Kalman filter

The state covariance matrix of the augmented state has the form:

S_n = \begin{bmatrix} S_n^Q & S_n^P \\ (S_n^P)^* & (S_n^Q)^* \end{bmatrix}.   (10)
From the update equations of ACEKF [1, 15], we have the following steps:
  1. Initialize: b_0, S_0^Q, and S_0^P.
  2. Predict:
    \hat{S}_n^Q = H S_{n-1}^Q H^H,   (11)
    \hat{S}_n^P = H S_{n-1}^P H.   (12)
  3. Update:
    S_n^Q = \hat{S}_n^Q - (\hat{S}_n^Q J_n^H + \hat{S}_n^P J_n^T)\left(J_n \hat{S}_n^Q J_n^H + J_n \hat{S}_n^P J_n^T + J_n^* (\hat{S}_n^Q)^* J_n^T + J_n^* (\hat{S}_n^P)^* J_n^H + R\right)^{-1}(J_n \hat{S}_n^Q + J_n^* (\hat{S}_n^P)^*),   (13)
    S_n^P = \hat{S}_n^P - (\hat{S}_n^Q J_n^H + \hat{S}_n^P J_n^T)\left(J_n \hat{S}_n^Q J_n^H + J_n \hat{S}_n^P J_n^T + J_n^* (\hat{S}_n^Q)^* J_n^T + J_n^* (\hat{S}_n^P)^* J_n^H + R\right)^{-1}(J_n \hat{S}_n^P + J_n^* (\hat{S}_n^Q)^*),   (14)
    G_n = (S_n^Q J_n^H + S_n^P J_n^T) R^{-1},   (15)
    b_n = \hat{b}_n + G_n (I_n - \gamma |\hat{a}_n|^2).   (16)

The matrices S_n^Q and S_n^P each have N² entries, where N is the total number of pixels in the image; thus, the covariance matrices dominate the memory requirements. The inversion of the covariance matrix has a computational complexity of 𝒪(N³) at each step. Both the storage requirement and the computational burden make the above update algorithm impractical for real applications. Here we make some mild assumptions and derivations, resulting in a low-complexity algorithm with a reduced storage requirement. After some derivations (see Appendices A and B), we prove Lemma 1 as well as Theorems 1 and 2. Theorem 1 is derived from the ACEKF update Eqs. (11)-(14). It shows that if the covariance matrix at step n−1 has the form S_{n−1}^Q = Q_{n−1} and S_{n−1}^P = P_{n−1}E, where Q_{n−1} and P_{n−1} are diagonal, then the covariance matrix at step n has the same form as that at step n−1 (see Eqs. (23)-(24)). Therefore, once the first covariance matrix is initialized as S_0^Q = Q_0 and S_0^P = P_0 E, the subsequent matrices keep the same form. Since the matrices in Theorem 1 are diagonal, the computational complexity of the updates in Theorem 1 is 𝒪(N). Theorem 2, derived from Eqs. (15)-(16), provides a new Kalman gain and update formula using the diagonal matrices Q_n and P_n.

Lemma 1. If a matrix M is diagonal and its diagonal entries are rotationally symmetric in 2-D, then EME = M, where E = KKT, and K is the Discrete Fourier Transform Matrix.
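
A quick numerical check of Lemma 1 in 1-D (the 2-D case factorizes) is sketched below, assuming the unitary DFT convention used in the paper: E = K K^T is the index-reversal permutation, so E M E = M whenever the diagonal of M is symmetric under n → −n (mod N), the 1-D analogue of rotational symmetry.

    import numpy as np

    N = 8
    K = np.fft.fft(np.eye(N), norm="ortho")        # unitary DFT matrix
    E = K @ K.T                                    # index-reversal permutation (E @ E = I)
    d = np.cos(2 * np.pi * np.arange(N) / N)       # d[n] == d[(-n) % N]: "rotationally symmetric"
    M = np.diag(d).astype(complex)
    print(np.allclose(E @ M @ E, M))               # True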

Theorem 1. If

S_{n-1}^Q = Q_{n-1},   (17)
S_{n-1}^P = P_{n-1} E,   (18)
where Q_{n−1} and P_{n−1} are diagonal, then we derive from the ACEKF update Eqs. (11)-(16) that the covariance matrix can be updated as follows:
  • Predict:
    \hat{Q}_n = Q_{n-1},   (19)
    \hat{P}_n = H P_{n-1} H.   (20)
  • Update:
    Q_n = \hat{Q}_n - (\hat{Q}_n + \hat{P}_n)\left(\hat{Q}_n + \hat{P}_n + (\hat{Q}_n)^* + (\hat{P}_n)^* + qI\right)^{-1}(\hat{Q}_n + (\hat{P}_n)^*),   (21)
    P_n = \hat{P}_n - (\hat{Q}_n + \hat{P}_n)\left(\hat{Q}_n + \hat{P}_n + (\hat{Q}_n)^* + (\hat{P}_n)^* + qI\right)^{-1}(\hat{P}_n + (\hat{Q}_n)^*),   (22)
    S_n^Q = Q_n,   (23)
    S_n^P = P_n E,   (24)
    where Q_n and P_n are diagonal, and q = 1/\gamma.

Theorem 2. The Kalman gain and update formula for the state are

G_n = (S_n^Q J_n^H + S_n^P J_n^T) R^{-1} = (Q_n + P_n)(J_n)^{-1} q^{-1},   (25)
b_n = \hat{b}_n + G_n\left(I_n - \gamma|\hat{a}_n|^2\right).   (26)

The resulting algorithm, referred to as the sparse ACEKF, is summarized in Table 1. The matrices Q_n and P_n are diagonal, and hence can be stored as two vectors. The storage burden of Eqs. (13)-(14) in the update step is thus reduced from 𝒪(N²) to 𝒪(N). The inverse of J_n in Eq. (33) can be computed by the Fast Fourier Transform (FFT), which has a computational complexity of 𝒪(N log N). Since Q_n and P_n are diagonal, the matrix multiplications and inversions in Eqs. (31)-(32) have a computational complexity of 𝒪(N). The overall computational complexity of the sparse ACEKF is 𝒪(NzN log N), limited by the complexity of the FFT.

Table 1. Sparse augmented complex extended Kalman filter for estimating a wave field.

  Initialize: b_0, Q_0, P_0 (Q_0 and P_0 diagonal), and set q = 1/\gamma. For n = 1, ..., N_z:
  Predict:
    \hat{b}_n = H b_{n-1},   (27)
    \hat{Q}_n = Q_{n-1},   (28)
    \hat{P}_n = H P_{n-1} H,   (29)
    \hat{a}_n = K^H \hat{b}_n.   (30)
  Update:
    Q_n = \hat{Q}_n - (\hat{Q}_n + \hat{P}_n)\left(\hat{Q}_n + \hat{P}_n + (\hat{Q}_n)^* + (\hat{P}_n)^* + qI\right)^{-1}(\hat{Q}_n + (\hat{P}_n)^*),   (31)
    P_n = \hat{P}_n - (\hat{Q}_n + \hat{P}_n)\left(\hat{Q}_n + \hat{P}_n + (\hat{Q}_n)^* + (\hat{P}_n)^* + qI\right)^{-1}(\hat{P}_n + (\hat{Q}_n)^*),   (32)
    b_n = \hat{b}_n + (Q_n + P_n)(J_n)^{-1} q^{-1}\left(I_n - \gamma|\hat{a}_n|^2\right).   (33)
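
For readers who prefer code, a minimal sketch of the recursion in Table 1 is given below. It is our own illustrative implementation, not the authors' released code: all diagonal matrices are stored as 2-D arrays of their diagonal entries, K and K^H are applied as unitary FFTs, and the initial field and covariance values are placeholders. Note that J_n^{-1} q^{-1} = K diag(â_n^*)^{-1}, so the state update of Eq. (33) reduces to an element-wise division followed by one FFT; in practice, pixels with very small predicted amplitude would need regularization.

    import numpy as np

    def sparse_acekf(images, H, gamma, q0=7e2, p0=5e2):
        """Sparse ACEKF of Table 1 (illustrative sketch).

        images: (Nz, M, N) stack of defocused intensity images I_n.
        H:      (M, N) diagonal entries of the propagation kernel of Eq. (3).
        Returns the estimated complex field at the last processed plane.
        """
        M, N = images.shape[1:]
        q = 1.0 / gamma
        b = np.fft.fft2(np.ones((M, N), dtype=complex), norm="ortho")   # b_0 (flat-field guess)
        Q = np.full((M, N), q0, dtype=complex)                          # diagonal of Q_0
        P = np.full((M, N), p0, dtype=complex)                          # diagonal of P_0
        for I_n in images:
            # Predict, Eqs. (27)-(30)
            b_hat = H * b
            Q_hat = Q
            P_hat = H * P * H
            a_hat = np.fft.ifft2(b_hat, norm="ortho")                   # a_hat_n = K^H b_hat_n
            # Covariance update, Eqs. (31)-(32) (element-wise, since Q and P are diagonal)
            denom = Q_hat + P_hat + Q_hat.conj() + P_hat.conj() + q
            Q = Q_hat - (Q_hat + P_hat) * (Q_hat + P_hat.conj()) / denom
            P = P_hat - (Q_hat + P_hat) * (P_hat + Q_hat.conj()) / denom
            # State update, Eq. (33): gain applied as (Q+P) K [(I_n - gamma|a_hat|^2) / a_hat^*]
            innovation = I_n - gamma * np.abs(a_hat) ** 2
            b = b_hat + (Q + P) * np.fft.fft2(innovation / a_hat.conj(), norm="ortho")
        return np.fft.ifft2(b, norm="ortho")

Each pass over an image costs only a few FFTs and element-wise operations, which is the source of the 𝒪(NzN log N) complexity and 𝒪(N) storage quoted above.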

4. Analysis of the convergence of the augmented complex extended Kalman filter

The convergence of the ACEKF for the complex state space model has not been well analyzed in the literature. However, the convergence and stability of the extended Kalman filter (EKF) for real-valued states are well studied [16-18]. Thus we first convert the ACEKF state space model into its equivalent dual form, a real state space model (EKF), and then use existing methods to analyze the convergence of the dual form.

We follow [14] to derive the dual real form of the ACEKF state space model (7)-(8). Define

M = \frac{1}{2}\begin{bmatrix} U & U \\ -jU & jU \end{bmatrix},   (34)

where U is the identity matrix. When multiplied by M, an augmented complex variable is converted to its corresponding real form:

\begin{bmatrix} \mathrm{Re}(b_n) \\ \mathrm{Im}(b_n) \end{bmatrix} = M \begin{bmatrix} b_n \\ b_n^* \end{bmatrix},   (35)
where Re(b_n) and Im(b_n) are the real and imaginary parts of b_n, respectively. The complex model (7)-(8) becomes a real-valued model:

state: \begin{bmatrix} \mathrm{Re}(b_n) \\ \mathrm{Im}(b_n) \end{bmatrix} = A_n \begin{bmatrix} \mathrm{Re}(b_{n-1}) \\ \mathrm{Im}(b_{n-1}) \end{bmatrix},   (36)
observation: I_n = C_n \begin{bmatrix} \mathrm{Re}(b_n) \\ \mathrm{Im}(b_n) \end{bmatrix} - \gamma|\hat{a}_n|^2 + v,   (37)
where
A_n = M \begin{bmatrix} H & 0 \\ 0 & H^* \end{bmatrix} M^{-1},   (38)
C_n = \begin{bmatrix} J_n & J_n^* \end{bmatrix} M^{-1}.   (39)
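
As a toy check of this transformation (an explicit matrix is used only for readability; in practice M is never formed), the sketch below builds M as in Eq. (34) and verifies that applying it to the augmented state [b; b^*] stacks the real and imaginary parts, as in Eq. (35).

    import numpy as np

    N = 4
    U = np.eye(N)
    M = 0.5 * np.block([[U, U], [-1j * U, 1j * U]])          # Eq. (34)

    rng = np.random.default_rng(3)
    b = rng.standard_normal(N) + 1j * rng.standard_normal(N)
    real_form = M @ np.concatenate([b, b.conj()])            # Eq. (35)
    print(np.allclose(real_form[:N], b.real), np.allclose(real_form[N:], b.imag))   # True True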

By applying the theory of EKF convergence in [16] to our real-valued model, we obtain Theorem 3, which establishes the convergence and stability of the model.

Theorem 3. The estimation error of the EKF algorithm in Eqs. (36)-(37) is bounded, and hence the estimation error of the ACEKF algorithm for Eqs. (7)-(8) is also bounded, provided that the initial estimation error and the noise are small enough and the EKF model satisfies the following conditions [16]:

  1. There exist positive numbers \bar{a}, \bar{c}, \bar{s}, \underline{s}, \underline{v} > 0, such that for every n ≥ 0:
    \|A_n\| \leq \bar{a},   (40)
    \|C_n\| \leq \bar{c},   (41)
    \underline{s}\,U \leq S_n \leq \bar{s}\,U,   (42)
    \underline{v}\,U \leq R.   (43)
  2. A_n is non-singular for every n.
  3. The linearization in Eq. (6) has a bounded error.

In Theorem 3, ||·|| denotes the spectral norm of a matrix. Assuming the initial estimation error and the noise are small enough, let us check each condition of Theorem 3. From the definition in Eq. (3), the propagation matrix H is non-singular and determined by the propagation distance Δz. Since M is full rank, the matrix A_n given in Eq. (38) is bounded and non-singular. Therefore, the inequality (40) and the second condition of Theorem 3 are satisfied. The linearization in Eq. (6) is a first-order truncation of the Taylor series expansion of a quadratic function. Therefore, the linearization has a bounded error, and the real observation matrix C_n of Eq. (37) is bounded as well, as required by inequality (41) and the third condition of Theorem 3. The matrix R is the covariance matrix of the Poisson noise in Eq. (5), and it is bounded because the amplitude of the state is bounded.

The inequality (42) requires the covariance matrix to be bounded for every n. In [16], it is shown that the boundedness of the covariance matrix is closely related to the observability and detectability properties of the EKF model. However, it is difficult to prove the observability of the model directly, because A_n and C_n depend on the observed data at each step. We therefore calculate the rank of the observability matrix and the bound on the covariance matrix numerically, by testing our algorithm on the synthetic data (see Data Set 2 in Section 5). We initialize the covariance matrices Q_0 and P_0 in Table 1 with 30 different values: Q_0 is initialized as qU (U is the identity matrix), with q ranging from 70 to 7000, and P_0 as pU, with p ranging from 50 to 5000. Figure 1 shows that the minimum singular value of the observability matrix is larger than zero, so the observability matrix is full rank at each step. Figures 2 and 3 show the maximum and minimum singular values of the covariance matrix at each step for the different initializations. The covariance matrix is bounded, as required by inequality (42).

Fig. 1 Minimum singular value of observability matrix with 30 different initializations of the covariance matrices Q0 and P0. The matrix Q0 is initialized as qU, with q ranging from 70 to 7000, and P0 is initialized as pU, with p ranging from 50 to 5000.

Fig. 2 Maximum singular value of covariance matrix with 30 different initializations of p and q in the covariance matrix.

Fig. 3 Minimum singular value of covariance matrix with 30 different initializations of p and q in the covariance matrix.

5. Experimental results

We consider three sets of data to assess the performance of the augmented Kalman filter. Data Set 1 consists of 100 images of size 100 × 100 pixels, artificially generated to simulate a complex field propagating from focus in 0.5 μm steps over a distance of 50 μm with an illumination wavelength of 532 nm. Pixels are corrupted by Poisson noise so that, on average, each pixel detects γ = 0.998 photons. Data Set 2 comprises 50 images of size 150 × 150 pixels acquired by a microscope. The wavelength was again 532 nm, and the defocused intensity images were captured by moving the camera axially with a step size of 2 μm over a distance of 100 μm. Data Set 3 has 101 images of size 492 × 656 pixels acquired by a microscope. The light source is partially coherent and filtered by a narrow-band color filter with a center wavelength of 633 nm. The images were captured by moving the camera axially with a step size of 2 μm. Figure 4 shows sample images from the synthetic data (Data Set 1) and the experimental data (Data Sets 2 and 3).

Fig. 4 (a) Data Set 1: synthetic images with strong noise (100 × 100 pixels with size of 1 μm × 1 μm). (b) Data Set 2: experimental data acquired by a microscope (150 × 150 pixels with size of 2 μm × 2 μm). (c) Data Set 3: experimental data of large size (492 × 656 pixels with size of 0.74 μm × 0.74 μm) obtained by a microscope.

5.1. Synthetic data

Table 2 summarizes the results for Data Set 1 using three methods: ACEKF (augmented complex extended Kalman filter) [1], the diagonalized CEKF [12], and the proposed sparse ACEKF. The ACEKF method has a high computational complexity of 𝒪(NzN³) and a storage requirement of 𝒪(N²). In order to alleviate the computational burden of ACEKF, the images are divided into independent blocks of size 50 × 50, but it still takes 1.4 × 10⁴ seconds on a standard CPU. In contrast, the computational complexity of the sparse ACEKF is 𝒪(NzN log N), and it takes 0.40 seconds to process the 100 (full) images, giving a speedup factor of 35000×.

Table 2. Comparison of different methods. Each image is divided into 4 blocks of size 50 × 50 for ACEKF, while the proposed sparse ACEKF and the diagonalized CEKF process the images without separating them into blocks.

As illustrated in Table 2, the computational complexity of the diagonalized CEKF is lower than that of ACEKF; however, ACEKF yields better results in terms of phase error. In order to reduce the error of the diagonalized CEKF, forward and backward sweeps (iterations) are applied in [12]. However, the iterations increase the computational complexity linearly and make the method no longer recursive. The sparse ACEKF method has an intensity error of 0.0071 and a phase error of 0.0143 rad. Compared with the diagonalized CEKF, the sparse ACEKF has the same computational complexity and storage requirement, but returns more accurate reconstructions.

We assess the quality of reconstruction by the root mean square error (RMSE). The proposed sparse ACEKF has an error close to that of ACEKF, while the recovered phase and intensity images of the sparse ACEKF in Fig. 5 arguably look better. The images recovered by ACEKF exhibit a blocking artifact, visible as straight lines crossing the images, whereas the result of the sparse ACEKF is free of such artifacts; its much lower complexity avoids the need to divide the images into independent blocks. The images recovered by ACEKF and the diagonalized CEKF also contain traces of the phase in the intensity images due to estimation errors, whereas this cross-talk is mostly removed in the intensity image estimated by the sparse ACEKF.
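
The RMSE used here follows the standard definition; a short sketch (with illustrative array names) is:

    import numpy as np

    def rmse(estimate, truth):
        """Root mean square error between a reconstructed image and the ground truth."""
        return np.sqrt(np.mean(np.abs(estimate - truth) ** 2))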

Fig. 5 Recovered intensity and phase [radian] image from synthetic Data Set 1 by ACEKF, diagonalized CEKF, and the proposed sparse ACEKF.

5.2. Experimental data

In Fig. 6, we compare the estimated intensity and phase images of Data Set 2 using ACEKF, the diagonalized CEKF, and the sparse ACEKF. Stripes in the phase image recovered by the diagonalized CEKF look darker, while the stripes in the phase image recovered by the sparse ACEKF method have stronger contrast. In Fig. 7(a), we show the phase of Data Set 3 recovered by ACEKF, the diagonalized CEKF, and the sparse ACEKF. The real depth of the sample in Data Set 3 is around 75 ± 5 nm. The proposed method takes 20.24 seconds to process the 101 images of size 492 × 656, whereas the ACEKF method takes 54.15 hours, with each image divided into 117 blocks of 50 × 50 pixels. In Fig. 7(b), we compare the depth along the black line of the recovered phase in Fig. 7(a). The sparse ACEKF result is much closer to the true value than those of ACEKF and the diagonalized CEKF. However, the phase image reconstructed by the sparse ACEKF contains low-frequency noise, especially at the edges. This shadow effect may be attributed to the partial coherence of the light, which is not incorporated into our model (we assume full coherence). The low-frequency artifact may be alleviated with a nonlinear diffusion filter, which we will explore in future work.

Fig. 6 Estimated intensity and phase [radians] of Data Set 2 by ACEKF, diagonalized CEKF, and the proposed sparse ACEKF.

Fig. 7 (a) Estimated height [nm] for Data Set 3 by ACEKF, diagonalized CEKF, and the proposed sparse ACEKF. (b) Depth along the black line in (a).

6. Conclusions

The proposed method efficiently recovers phase and amplitude from a series of noisy defocused images. Although a constant defocus step size is used for the through-focal series in this paper, the method can be generalized to non-constant step sizes. It can directly handle large data sets and high-resolution images because of its low computational complexity and storage requirement. It can process the images recursively in real time after the data have been captured, or even during the capture sequence. For example, with further work, this method could form the basis for an adaptive phase imaging scheme, in which the current estimate of the phase informs the choice of the next measurement plane (defocus distance). Since the optimal measurement planes depend on the object itself [19], real-time estimation during capture could lead to optimization not only of the processing, but also of the measurement scheme itself. Furthermore, due to the scalability of the wave equations and the simplicity of the measurement technique, this method could find use in phase imaging beyond optical wavelengths (for example, X-ray or neutron imaging), where high-quality images are difficult to obtain and noise is significant and unavoidable.

Appendix A: proof of Theorem 1

Predict

If Q_{n−1} and P_{n−1} are diagonal, then from the definition of H in Eq. (3) it follows that:

\hat{P}_n = H P_{n-1} H,   (44)
\hat{Q}_n = H Q_{n-1} H^H = Q_{n-1}.   (45)
Since the propagation matrix H is diagonal and its diagonal entries are rotationally symmetric, it follows from Eqs. (11)-(12) and Lemma 1 that:
\hat{S}_n^Q = H S_{n-1}^Q H^H = H Q_{n-1} H^H = \hat{Q}_n,   (46)
\hat{S}_n^P = H S_{n-1}^P H = H P_{n-1} E H = H P_{n-1} H E = \hat{P}_n E.   (47)

Update

From the update formula in Eqs. (13)-(14), we have

S_n^Q = \hat{S}_n^Q - (\hat{S}_n^Q J_n^H + \hat{S}_n^P J_n^T)\left(J_n \hat{S}_n^Q J_n^H + J_n \hat{S}_n^P J_n^T + J_n^* (\hat{S}_n^Q)^* J_n^T + J_n^* (\hat{S}_n^P)^* J_n^H + R\right)^{-1}(J_n \hat{S}_n^Q + J_n^* (\hat{S}_n^P)^*).   (48)

From Eq. (9), J_n = \gamma\, \mathrm{diag}(\hat{a}_n^*) K^H. Let us define D = \gamma\, \mathrm{diag}(\hat{a}_n^*), so that J_n = D K^H. It follows that:

S_n^Q = \hat{S}_n^Q - (\hat{S}_n^Q K D^H + \hat{S}_n^P K^* D^T)\left(D K^H \hat{S}_n^Q K D^H + D K^H \hat{S}_n^P K^* D^T + D^* K^T (\hat{S}_n^Q)^* K^* D^T + D^* K^T (\hat{S}_n^P)^* K D^H + q D K^H K D^H\right)^{-1}(D K^H \hat{S}_n^Q + D^* K^T (\hat{S}_n^P)^*),   (49)
where q = 1/\gamma.

Because D is a diagonal matrix, we have D^T = D and D^H = D^*, so that:

S_n^Q = \hat{S}_n^Q - (\hat{S}_n^Q K + \hat{S}_n^P K^* D (D^*)^{-1})\left(K^H \hat{S}_n^Q K + K^H \hat{S}_n^P K^* D (D^*)^{-1} + D^{-1} D^* K^T (\hat{S}_n^Q)^* K^* D (D^*)^{-1} + D^{-1} D^* K^T (\hat{S}_n^P)^* K + q K^H K\right)^{-1}(K^H \hat{S}_n^Q + D^{-1} D^* K^T (\hat{S}_n^P)^*).   (50)
Since in our phase recovery problem the phase values are small, we consider the approximation D^* D^{-1} \approx I. Future research will explore the influence of large phase values.
S_n^Q = \hat{S}_n^Q - (\hat{S}_n^Q K + \hat{S}_n^P K^*)\left(K^H \hat{S}_n^Q K + K^H \hat{S}_n^P K^* + K^T (\hat{S}_n^Q)^* K^* + K^T (\hat{S}_n^P)^* K + q K^H K\right)^{-1}(K^H \hat{S}_n^Q + K^T (\hat{S}_n^P)^*).   (51)
Taking K^H and K out of the inverted term yields
S_n^Q = \hat{S}_n^Q - (\hat{S}_n^Q K + \hat{S}_n^P K^*)\left(K^H \left(\hat{S}_n^Q + \hat{S}_n^P K^* K^H + K K^T (\hat{S}_n^Q)^* K^* K^H + K K^T (\hat{S}_n^P)^* + qI\right) K\right)^{-1}(K^H \hat{S}_n^Q + K^T (\hat{S}_n^P)^*).   (52)
It follows that:
S_n^Q = \hat{S}_n^Q - (\hat{S}_n^Q + \hat{S}_n^P K^* K^H)\left(\hat{S}_n^Q + \hat{S}_n^P K^* K^H + K K^T (\hat{S}_n^Q)^* K^* K^H + K K^T (\hat{S}_n^P)^* + qI\right)^{-1}(\hat{S}_n^Q + K K^T (\hat{S}_n^P)^*).   (53)

Likewise, we can derive a similar expression for the update of S_n^P:

S_n^P = \hat{S}_n^P - (\hat{S}_n^Q + \hat{S}_n^P K^* K^H)\left(\hat{S}_n^Q + \hat{S}_n^P K^* K^H + K K^T (\hat{S}_n^Q)^* K^* K^H + K K^T (\hat{S}_n^P)^* + qI\right)^{-1}(\hat{S}_n^P + K K^T (\hat{S}_n^Q)^*).   (54)
From Lemma 1 and Eqs. (46)-(47), we obtain:
S_n^Q = \hat{Q}_n - (\hat{Q}_n + \hat{P}_n E E)\left(\hat{Q}_n + \hat{P}_n E E + E (\hat{Q}_n)^* E + E (\hat{P}_n)^* E + qI\right)^{-1}(\hat{Q}_n + E (\hat{P}_n)^* E),   (55)
S_n^P = \hat{P}_n E - (\hat{Q}_n + \hat{P}_n E E)\left(\hat{Q}_n + \hat{P}_n E E + E (\hat{Q}_n)^* E + E (\hat{P}_n)^* E + qI\right)^{-1}(\hat{P}_n E + E (\hat{Q}_n)^* E E).   (56)
We initialize Q_0 and P_0 as a scalar times the identity matrix. From Eqs. (44)-(45) and (55)-(56), the covariance matrices remain rotationally symmetric during the updates. From Lemma 1, it follows that:
S_n^Q = \hat{Q}_n - (\hat{Q}_n + \hat{P}_n)\left(\hat{Q}_n + \hat{P}_n + (\hat{Q}_n)^* + (\hat{P}_n)^* + qI\right)^{-1}(\hat{Q}_n + (\hat{P}_n)^*),   (57)
S_n^P = \hat{P}_n E - (\hat{Q}_n + \hat{P}_n)\left(\hat{Q}_n + \hat{P}_n + (\hat{Q}_n)^* + (\hat{P}_n)^* + qI\right)^{-1}(\hat{P}_n E + (\hat{Q}_n)^* E) = \left(\hat{P}_n - (\hat{Q}_n + \hat{P}_n)\left(\hat{Q}_n + \hat{P}_n + (\hat{Q}_n)^* + (\hat{P}_n)^* + qI\right)^{-1}(\hat{P}_n + (\hat{Q}_n)^*)\right) E.   (58)

Appendix B: proof of Theorem 2

In Eq. (15), the Kalman gain for the augmented Kalman filter is given as:

G_n = (S_n^Q J_n^H + S_n^P J_n^T) R^{-1} = (S_n^Q J_n^H + S_n^P J_n^T)(J_n^H)^{-1}(J_n)^{-1} q^{-1} = (S_n^Q + S_n^P E)(J_n)^{-1} q^{-1} = (Q_n + P_n)(J_n)^{-1} q^{-1}.   (59)

Acknowledgments

The authors appreciate the help from George Barbastathis’s group at MIT. Specifically, the authors thank George Barbastathis, Justin Lee, Nikhil Vadhavkar, Adam Pan and other colleagues for helpful discussions and for providing the experimental data. Manuel A. Vázquez acknowledges the support of the Ministry of Education of Spain (Programa Nacional de Movilidad de Recursos Humanos del Plan Nacional de I+D+i 2008–2011).

References and links

1. L. Waller, M. Tsang, S. Ponda, S. Yang, and G. Barbastathis, “Phase and amplitude imaging from noisy images by Kalman filtering,” Opt. Express 19, 2805–2814 (2011).

2. L. Waller, Y. Luo, S. Y. Yang, and G. Barbastathis, “Transport of intensity phase imaging in a volume holographic microscope,” Opt. Lett. 35, 2961–2963 (2010).

3. S. S. Gorthi and E. Schonbrun, “Phase imaging flow cytometry using a focus-stack collecting microscope,” Opt. Lett. 37, 707–709 (2012).

4. R. Gerchberg and W. Saxton, “A practical algorithm for the determination of phase from image and diffraction plane pictures,” Optik 35, 237–246 (1972).

5. J. Fienup, “Phase retrieval algorithms: a comparison,” Appl. Opt. 21, 2758–2769 (1982).

6. G. Pedrini, W. Osten, and Y. Zhang, “Wave-front reconstruction from a sequence of interferograms recorded at different planes,” Opt. Lett. 30, 833–835 (2005).

7. N. Streibl, “Phase imaging by the transport equation of intensity,” Opt. Commun. 49, 6–10 (1984).

8. M. Soto and E. Acosta, “Improved phase imaging from intensity measurements in multiple planes,” Appl. Opt. 46, 7978–7981 (2007).

9. L. Waller, L. Tian, and G. Barbastathis, “Transport of intensity phase-amplitude imaging with higher order intensity derivatives,” Opt. Express 18, 12552–12561 (2010).

10. R. Paxman, T. Schulz, and J. Fienup, “Joint estimation of object and aberrations by using phase diversity,” J. Opt. Soc. Am. A 9, 1072–1085 (1992).

11. R. Paxman and J. Fienup, “Optical misalignment sensing and image reconstruction using phase diversity,” J. Opt. Soc. Am. A 5, 914–923 (1988).

12. Z. Jingshan, J. Dauwels, M. A. Vázquez, and L. Waller, “Efficient Gaussian inference algorithms for phase imaging,” Proc. IEEE ICASSP 2012, pp. 25–30.

13. J. Goodman, Introduction to Fourier Optics (McGraw-Hill, 1996).

14. D. H. Dini and D. P. Mandic, “Class of widely linear complex Kalman filters,” IEEE Trans. Neural Netw. Learn. Syst. 23, 775–786 (2012).

15. R. E. Kalman, “A new approach to linear filtering and prediction problems,” J. Basic Eng. 82, 35–45 (1960).

16. K. Reif, S. Gunther, E. Yaz, and R. Unbehauen, “Stochastic stability of the discrete-time extended Kalman filter,” IEEE Trans. Autom. Control 44, 714–728 (1999).

17. T. Lefebvre, H. Bruyninckx, and J. D. Schutter, “Kalman filters for nonlinear systems: a comparison of performance,” Internat. J. Control 77, 639–653 (2004).

18. A. J. Krener, “The convergence of the extended Kalman filter,” in Directions in Mathematical Systems Theory and Optimization (Springer, 2003), pp. 173–182.

19. D. J. Lee, M. C. Roggemann, and B. M. Welsh, “Cramér-Rao analysis of phase-diverse wave-front sensing,” J. Opt. Soc. Am. A 16, 1005–1015 (1999).
