## Abstract

We propose and demonstrate a computational method for complex-field imaging from many noisy intensity images with varying defocus, using an extended complex Kalman filter. The technique offers dynamic smoothing of noisy measurements and is recursive rather than iterative, so is suitable for adaptive measurements. The Kalman filter provides near-optimal results in very low-light situations and may be adapted to propagation through turbulent, scattering, or nonlinear media.

© 2011 Optical Society of America

## 1. Introduction

Phase imaging plays an important role in adaptive optics and biological imaging, where samples that are otherwise transparent can be visualized via the phase delay that they impose on the incident beam. Quantitative phase information can recover the optical density of an object, which usually relates to physical density or shape, making it useful in metrology and imaging of temperature or pressure distributions. In holography, phase information solves the ‘twin image’ problem [1] and is essential for unique backpropagation of the wave-field. Since optical phase cannot be measured directly, quantitative phase retrieval is necessarily a computational imaging technique, and many simple experimental schemes exist which use post-processing in place of complicated imaging hardware. Specifically, we are interested in algorithms for recovering phase and amplitude from a set of intensity images at varying defocus distances.

Microscopists recognized early on that propagation introduces contrast due to phase variations - a perfectly focused image has a purely real transfer function and thus contains no information about the phase of the object [2], whereas *any* complex transfer function will introduce phase contrast. Defocus is a popular way to introduce a complex-valued transfer function, because it is pure-phase (no absorption) and simple to implement - one need only move the camera along the optical axis between image captures (see setup in Fig. 1). Since the intensity of a complex field is proportional to the squared amplitude, the measurement process is nonlinear.
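To make this concrete, the sketch below (with a hypothetical grid, wavelength, and object not taken from the paper) applies the paraxial defocus transfer function, a pure phase factor in the Fourier domain, and shows that a pure-phase object with perfectly flat in-focus intensity acquires intensity contrast after propagation:

```python
import numpy as np

# Paraxial (Fresnel) defocus transfer function: a pure phase factor
# H(f) = exp(-i*pi*lambda*z*(fx^2 + fy^2)) applied in the Fourier domain.
# Grid size, pixel pitch, and the Gaussian phase bump are illustrative.
def defocus(field, wavelength, z, dx):
    n = field.shape[0]
    f = np.fft.fftfreq(n, d=dx)
    fx, fy = np.meshgrid(f, f)
    H = np.exp(-1j * np.pi * wavelength * z * (fx ** 2 + fy ** 2))
    return np.fft.ifft2(np.fft.fft2(field) * H)

n, dx = 64, 1e-6                            # 64x64 grid, 1 um pixels
x = (np.arange(n) - n / 2) * dx
X, Y = np.meshgrid(x, x)
obj = np.exp(1j * 0.5 * np.exp(-(X ** 2 + Y ** 2) / (5e-6) ** 2))
I0 = np.abs(obj) ** 2                       # in focus: perfectly flat
Iz = np.abs(defocus(obj, 532e-9, 50e-6, dx)) ** 2
print(I0.std(), Iz.std())                   # contrast appears only after defocus
```

Since the transfer function has unit modulus, total intensity is conserved; only its spatial distribution changes with defocus.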

Perhaps the first methods for recovering phase from defocused intensity measurements were iterative methods in electron imaging [3] and the Gerchberg-Saxton (GS) algorithm [4]. The GS method uses two images, an in-focus image and a Fourier Domain (FD) image (*i.e.* the far field), and alternates between the two domains, updating an estimate of the complex-field at each step with measured or a priori information [5–8]. Similar algorithms use Fresnel instead of Fourier transforms to define the complex transfer function between intensity images in phase retrieval or synthesis [9–13]. All of these techniques can be classified as a subset of the more general projection-based algorithms [14], which place no restriction on the transforms used for the optimization, thus allowing constraints in non-conventional domains [15–17]. Solutions are not provably unique, but are likely to be correct [18], and many heuristics exist for reducing the solution space [19] or guiding phase-mask design towards a practical solution [13,16]. In the case of phase imaging, where there is only one correct solution, ambiguities can be reduced by using more than two intensity images (*i.e.* a stack of defocused images) [19–22], or custom phase diversity [23]. Here, we refer to this entire class of techniques as ‘iterative techniques’ and find that, under aggravated noise conditions, the final result will be disproportionately affected by the noise of the last image included [8, 24].
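The GS alternation can be sketched in a few lines; the random pure-phase test object and iteration count below are hypothetical, chosen only to illustrate the two-domain update:

```python
import numpy as np

# Minimal Gerchberg-Saxton sketch: alternate between object and Fourier
# planes, keeping the current phase estimate and replacing the amplitude
# with the "measured" one in each domain.
rng = np.random.default_rng(2)
n = 32
true = np.exp(1j * rng.uniform(-1, 1, (n, n)))    # pure-phase test object
amp_obj = np.abs(true)                            # object-plane amplitude
amp_four = np.abs(np.fft.fft2(true))              # far-field amplitude

est = amp_obj.astype(complex)                     # flat-phase starting guess
res0 = np.mean((np.abs(np.fft.fft2(est)) - amp_four) ** 2)
for _ in range(200):
    F = np.fft.fft2(est)
    F = amp_four * np.exp(1j * np.angle(F))       # impose Fourier amplitude
    est = np.fft.ifft2(F)
    est = amp_obj * np.exp(1j * np.angle(est))    # impose object amplitude
res = np.mean((np.abs(np.fft.fft2(est)) - amp_four) ** 2)
print(res0, res)
```

The Fourier-domain residual is non-increasing over iterations (the error-reduction property discussed in [7]), but the recovered phase is not guaranteed to be the true one.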

Direct solution of the ‘phase problem’ requires linearization in some domain, or an assumption about the object. The nonlinear problem can then be solved in 1D [25], 2D with oversampling [31], or under the assumption of pure-phase [26], small-phase [27, 28], slowly varying phase [29] or homogeneous objects [30]. When defocus is small, phase contrast is approximately linear in intensity [32] and the Transport of Intensity equation (TIE) [33] describes a direct solution for phase and amplitude from just two defocused images. The result is very noise-sensitive [34], but can be improved by using more images [35], provided that they are all within the small defocus regime (suggesting low phase contrast). Recently, the TIE was extended beyond this limit by using higher order derivatives to correct for nonlinearities [36]. TIE-based techniques are fast, computationally efficient and can be implemented in many existing systems [37–39], but fail in the case of significant diffraction between images and/or large amounts of noise. Here, we seek a more general technique that can adaptively take into account noise and varying distance between images, *Δz*, with a single model.
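As a reference point for the small-defocus regime, here is a minimal TIE inversion sketch under the simplifying assumption of uniform intensity *I*₀, where the TIE reduces to a Poisson equation solvable with FFTs; the grid, wavelength, and test phase are illustrative, not the paper's data:

```python
import numpy as np

# Uniform-intensity TIE: lap(phi) = -(k/I0) * dI/dz, inverted with FFTs.
def tie_phase(dIdz, I0, wavelength, dx):
    k = 2 * np.pi / wavelength
    n = dIdz.shape[0]
    f = np.fft.fftfreq(n, d=dx)
    fx, fy = np.meshgrid(f, f)
    lap = -4 * np.pi ** 2 * (fx ** 2 + fy ** 2)   # FFT-domain Laplacian
    lap[0, 0] = 1.0                               # avoid divide-by-zero at DC
    rhs = -(k / I0) * dIdz
    phi = np.real(np.fft.ifft2(np.fft.fft2(rhs) / lap))
    return phi - phi.mean()                       # phase known up to a constant

# Self-consistency check: build dI/dz from a known periodic phase, invert it
n, dx, I0, lam = 64, 1e-6, 1.0, 532e-9
x = np.arange(n) * dx
X, Y = np.meshgrid(x, x)
phi_true = 0.5 * np.sin(2 * np.pi * X / (n * dx)) * np.sin(2 * np.pi * Y / (n * dx))
k = 2 * np.pi / lam
f = np.fft.fftfreq(n, d=dx)
FX, FY = np.meshgrid(f, f)
lap_phi = np.real(np.fft.ifft2(-4 * np.pi ** 2 * (FX ** 2 + FY ** 2) * np.fft.fft2(phi_true)))
dIdz = -(I0 / k) * lap_phi                        # TIE forward model
phi = tie_phase(dIdz, I0, lam, dx)
print(np.max(np.abs(phi - (phi_true - phi_true.mean()))))
```

The division by the Laplacian's spectrum is what amplifies low-frequency noise in practice: near DC the denominator vanishes, so any noise in the axial derivative there is magnified.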

Estimation theory provides a framework for separating phase, amplitude and noise. A maximum likelihood estimator has been proposed and extended for use with multiple intensity images [40,41]. The practical application, however, is iterative and, with significant noise, can get stuck at local maxima [42, 43]. Regularization of the objective function enables a (user-chosen) tradeoff between noise and accuracy [43, 44]. The Kalman filter (KF) which we propose here aims to provide the optimal regularization of noise on a pixel-by-pixel basis.

Independent of the algorithm used, the choice of complex transfer function determines how much phase information is captured and its sensitivity to noise [45]. Thus, *phase estimation accuracy is fundamentally limited by the defocus distances chosen and the noise levels*. The Cramér-Rao bound (CRB) is an information-theoretic metric describing the minimum achievable error of *any* estimator, with no guarantee that this ideal estimator exists [46]. Estimators that achieve the lower bound provided by the CRB are called *efficient*. Unfortunately, the bound itself is object-dependent, implying that the optimal measurement set is also *object-dependent*, so only heuristic conclusions may be made. Generally, larger distances provide better diffraction contrast [24,32] but are limited by numerical aperture, and using more images at varying defocus is likely to capture optimal information over a wider range of object spatial frequencies and across phase singularities [47]. However, there exist objects whose optimal set of measurements will be a single image plane (a phase grating, for example).

The fact that the accuracy of phase retrieval is object-dependent means that it is very difficult to compare methods or optimize the measurement scheme. This suggests application of adaptive techniques, which estimate and adjust the measurement in real-time. The KF estimator which we describe here represents a first step in this direction. The KF is a well-known algorithm in control theory [48], which is able to recursively incorporate measurements of a dynamic system. The KF is proven to be an *efficient* algorithm for linear Gaussian systems, guaranteeing optimal estimation. This means that the noise of the result approaches the fundamental limit of the noise in a single image divided by the total number of images used. Here we use a complex Extended Kalman filter (EKF), augmenting the nominal KF to handle nonlinear and complex-valued systems [49]. The filter simultaneously estimates complex-field and filters out noise from the intensity measurements, so is robust to very high noise levels and is able to provide the near-optimal result.

## 2. Theory

We wish to determine the 2D optical complex-field, *A*(*x*, *y*, *z*_{0}) = |*A*(*x*, *y*, *z*_{0})|*e*^{iϕ(x,y,z0)}, at a distance *z*_{0} from the camera, where *ϕ*(*x*, *y*, *z*_{0}) is the phase. We capture a sequence of *N* noisy intensity measurements, *I*(*x*, *y*, *z*_{n}) = |*A*(*x*, *y*, *z*_{n})|^{2} + *υ*_{n}, at various distances *z*_{0}, *z*_{1}, ... *z*_{N} separated by *Δz*, where *υ*_{n} describes the noise at each pixel. We assume here that the field propagates through a linear medium and obeys the homogeneous paraxial wave equation

$$\frac{\partial A}{\partial z}=\frac{i\lambda }{4\pi }{\nabla }_{\perp }^{2}A,\qquad(1)$$

where *λ* is the wavelength of illumination and ∇_{⊥} is the gradient operator in the lateral (*x*, *y*) dimensions only. We assume that the probability distribution of measured intensity at each pixel is independent and Poisson distributed:

$$P\left[\gamma I\left(x,y,{z}_{n}\right)=m\right]={e}^{-\gamma {\left|A\left(x,y,{z}_{n}\right)\right|}^{2}}\frac{{\left(\gamma {\left|A\left(x,y,{z}_{n}\right)\right|}^{2}\right)}^{m}}{m!},\qquad m=0,1,2,\ldots\qquad(2)$$

where *γ* is the photon count detected by the camera. We would like to find the *conditional* probability distribution of *A*(*x*, *y*, *z*_{n}) given measurements of *I*(*x*, *y*, *z*_{n}). Because the statistics of *I*(*x*, *y*, *z*_{n}) are non-Gaussian and depend nonlinearly on *A*(*x*, *y*, *z*_{n}), the estimation problem is nonlinear, and it is computationally expensive to solve for the exact conditional probability distribution *P*[*A*(*x*, *y*, *z*_{0})|*I*(*x*, *y*, *z*_{n})]. One way of obtaining a near-optimal estimate is to use the EKF followed by back-propagation, also called the Rauch-Tung-Striebel (RTS) smoother [50, 51]. We describe below our complex-valued version of the well-known EKF [51, 52].

We discretize the complex-field as a raster-scanned complex-valued column vector, **a**(*z*_{n}), and do the same for the pixels of the intensity image that constitutes the measurement, *η*_{n}; both are vectors of length *M*^{2}, where *M* is the number of pixels in each dimension (*x*, *y*). The evolution of the discretized complex-field is *d***a**/*dz* = **La**, where **L** is a matrix describing the evolution of the complex-field along the optical axis, determined by Eq. (1). Where the measurement scheme involves other complex transfer functions, **L** should be modified accordingly.

We then approximate the intensity measurement as a function of **a**(*z*_{n}) plus noise:

$${\mathit{\eta }}_{n}={\left|\mathbf{a}\left({z}_{n}\right)\right|}^{2}+{\mathit{\upsilon }}_{n},\qquad(3)$$

where the modulus-squared is taken element-wise. The measurement noise covariance matrix is ${R}_{n}\equiv \langle{\mathit{\upsilon}}_{n}{\mathit{\upsilon}}_{n}^{T}\rangle=\gamma \,\text{diag}\left({\left|\mathbf{a}\left({z}_{n}\right)\right|}^{2}\right)$, where ^{T} denotes the transpose and diag(**u**) is a diagonal matrix with the diagonal vector given by **u**.

Assume an initial estimate **â**(*z*_{0}) and initial covariance matrices **Q**(*z*_{0}) and **P**(*z*_{0}), given by discretized coherence functions. At each subsequent step in *z*, we forward-propagate both the estimate and the covariance matrices to obtain estimates of their values at the next image along the optical axis:

$$\hat{\mathbf{a}}\left({z}_{n+1}^{-}\right)={e}^{\mathbf{L}\Delta z}\hat{\mathbf{a}}\left({z}_{n}^{+}\right),\qquad \mathbf{P}\left({z}_{n+1}^{-}\right)={e}^{\mathbf{L}\Delta z}\mathbf{P}\left({z}_{n}^{+}\right){e}^{{\mathbf{L}}^{\dagger }\Delta z}+\mathbf{Q}.\qquad(4)$$

To incorporate the measurement record *η*_{n} at each step, we update the estimate and covariance matrices by defining a state vector as $\mathit{\alpha}\left({z}_{n}\right)\equiv \left(\begin{array}{c}\mathbf{a}\left({z}_{n}\right)\\ {\mathbf{a}}^{*}\left({z}_{n}\right)\end{array}\right)$, and applying the complex EKF [51, 52]. The result is

$$\hat{\mathit{\alpha }}\left({z}_{n}^{+}\right)=\hat{\mathit{\alpha }}\left({z}_{n}^{-}\right)+{K}_{n}\left[{\mathit{\eta }}_{n}-h\left(\hat{\mathit{\alpha }}\left({z}_{n}^{-}\right)\right)\right],\qquad \mathbf{P}\left({z}_{n}^{+}\right)=\left(I-{K}_{n}{H}_{n}\right)\mathbf{P}\left({z}_{n}^{-}\right),\qquad(5)$$

where *I* is the identity matrix, *h*(·) is the element-wise intensity measurement function of Eq. (3), and *H*_{n} is its Jacobian evaluated at the current estimate. *K*_{n} is called the Kalman gain matrix and is given by

$${K}_{n}=\mathbf{P}\left({z}_{n}^{-}\right){H}_{n}^{\dagger }{\left[{H}_{n}\mathbf{P}\left({z}_{n}^{-}\right){H}_{n}^{\dagger }+{R}_{n}\right]}^{-1}.\qquad(6)$$

Since *R*_{n} depends on **a**(*z*_{n}), we use the estimated **â**(*z*_{n}) to calculate *R*_{n} at each step. The propagation and Kalman-filter update are repeated for each *z*-step until $z={z}_{N}^{+}$ and all measurements are incorporated. Once the estimate at *z*_{N} is obtained, it can be numerically back-propagated.

_{N}A block diagram of the filter is given in Fig. 2. The top loop represents a model of the physical process as a dynamic system. The bottom loop represents the estimation process. The structure of the process is remniscant of some iterative techniques [7, 20]; however, the key difference is that the EKF stores and updates covariance matrices in order to take into account measurement noise statistics and compute a pixel-by-pixel feedback mechanism that should approach the optimal gain. By including information using an adaptive filter, we need only consider each image once, allowing a recursive rather than iterative algorithm.

## 3. Implementation

Direct application of EKF theory leads to a very large complex-valued state vector of size 2*M*^{2}, where *M* is the number of pixels in one dimension. This state vector will have a covariance matrix of size 2*M*^{2} × 2*M*^{2}, which is currently an unrealizable memory requirement for typical image sizes on standard computers. Thus, we suggest below two methods for reducing computational complexity, but point out that the use of 64 bit personal computers and better memory management will significantly relax the computational requirements of this technique in the future. Processing speed-up was achieved by using parallel processing on a Graphics Processing Unit (GPU), though the main computational limitation was memory.
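To see the scale of the problem, a quick back-of-the-envelope computation (for an arbitrary example image size, not one from the paper):

```python
# Memory for the full dense EKF covariance at an example image size,
# assuming complex128 entries (16 bytes each).
M = 512                                  # pixels per dimension (hypothetical)
state_len = 2 * M ** 2                   # augmented complex state, length 2*M^2
cov_bytes = state_len ** 2 * 16          # dense covariance matrix
print(cov_bytes / 2 ** 40)               # tebibytes: far beyond typical RAM
```

Even a modest 512x512 image yields a covariance of several terabytes, which is why the reduced representations below are needed.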

#### 3.1. Computational method 1. Compressive storage

Images which concentrate all of their information in a sparse set of coefficients of some basis set can be represented by a smaller state vector, when the ** L** matrix in that domain can be derived. For example, smooth phase distributions are well represented by the low coefficients in the Fourier domain. If Fourier coefficients are used as state variables, the full information may be conserved by a smaller state vector. As with compressive sensing, the information need not be confined to the low frequencies, as long as it is sparse. Another example could be wavefront aberrations in a microscope, which can often be described by just a few Zernike polynomials, whose coefficients could make up the state vector.
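A sketch of the idea, using a hypothetical smooth phase map and keeping a 9x9 block of low-frequency Fourier coefficients as the reduced representation:

```python
import numpy as np

# Compressive storage sketch: a smooth phase map is represented by the
# K x K lowest Fourier coefficients. Sizes and the Gaussian map are
# hypothetical, chosen to illustrate the compression.
M, K = 64, 9
y, x = np.mgrid[0:M, 0:M] / M
phase = np.exp(-((x - 0.5) ** 2 + (y - 0.5) ** 2) / 0.05)   # smooth phase map

Fs = np.fft.fftshift(np.fft.fft2(phase))
keep = np.zeros_like(Fs)
c = M // 2
sl = slice(c - K // 2, c + K // 2 + 1)
keep[sl, sl] = Fs[sl, sl]                # retain only the K x K center block
phase_lo = np.real(np.fft.ifft2(np.fft.ifftshift(keep)))

err = np.sqrt(np.mean((phase - phase_lo) ** 2))   # RMS truncation error
print(err)
```

Here 81 coefficients stand in for a 4096-pixel map with small error, so the reduced state vector preserves essentially all of the information.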

#### 3.2. Computational method 2. Block processing

The second method for managing computation involves separating the image into blocks and processing each separately. This will be valid only when phase in one block has negligible effect on the intensity in the neighboring block. Technically, with Fresnel propagation, every pixel transfers information to every other pixel; however, the amount of information transferred drops off as the inverse square of the distance between pixels. Put another way, intensity changes are greatest near the phase disturbance. Thus, one can define (based on Fresnel propagation) a rough estimate of the (90%) width of the local influence function for a point change of phase, $\Delta x\approx 10\sqrt{\lambda \Delta z}$. Using a block size larger than this value will incur minimal crosstalk error while significantly reducing the computational costs.
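This rule of thumb is straightforward to evaluate; the helper below simply encodes the stated formula, with the wavelength and plane spacing from the experiments of Sec. 5 plugged in as an example:

```python
import math

# Rule-of-thumb (90%) width of the local influence of a point phase change:
# dx ~ 10 * sqrt(lambda * dz). Blocks larger than this keep crosstalk small.
def influence_width(wavelength_m: float, dz_m: float) -> float:
    return 10.0 * math.sqrt(wavelength_m * dz_m)

w = influence_width(532e-9, 2e-6)   # 532 nm light, 2 um plane spacing
print(w)                            # width in meters, on the order of 10 um
```

Converting this width to a pixel count requires the object-plane pixel pitch of the particular system.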

## 4. Simulations

We simulated the 3D intensity field through a complex-valued object, propagating from focus in 0.5*μ*m steps over a total distance of 50*μ*m with wavelength 532nm. The intensity data was then corrupted by Poisson noise such that each pixel detected an average of *γ* = 0.998 photons, giving a signal-to-noise ratio (SNR) of
$SNR=\gamma /\sqrt{\gamma}=0.999$, to yield the noisy test measurements shown in Fig. 3(a). After recursively incorporating all the noisy images into a backward-propagating EKF using block processing (block size 60x60 pixels), the recovered phase and amplitude are shown in Fig. 3(d,e), respectively as compared to the original object field (Fig. 3(b,c)). Note that the highly scattering sharp edges of the phase information manifest as absorption edges, because information is being scattered outside the aperture of the system.
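The quoted SNR follows from Poisson statistics, where the standard deviation equals the square root of the mean; a quick Monte Carlo check of that relation (sample count arbitrary):

```python
import numpy as np

# Poisson shot noise at mean gamma photons/pixel gives SNR = gamma/sqrt(gamma)
# = sqrt(gamma); verify empirically at the paper's gamma = 0.998.
rng = np.random.default_rng(1)
gamma = 0.998
counts = rng.poisson(gamma, size=1_000_000)
snr = counts.mean() / counts.std()
print(snr)
```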

We show in Fig. 4(a) the progress of the filter as the light propagates and more images are captured. The first and third rows, respectively, show the propagation of the noise-free amplitude and phase. The second and fourth rows show the Kalman estimation of these quantities as the filter moves from *z* = 0 to *z* = 50*μ*m, adding a noisy intensity measurement at each step to refine the dynamic estimate. We start here with a zero initial guess and find that the error in both the phase and amplitude of the estimate decreases as more images are added (see Fig. 4(c)). Error is defined as the average root-mean-squared (RMS) error across all pixels.

To get a sense of the noise level in the images, the axial intensity of a single pixel is shown in Fig. 4(b) both without noise (actual) and with noise (measurement). This amount of noise will compromise most phase imaging methods, which do not explicitly account for noise. We show in Fig. 5 the phase recovered by several methods. TIE imaging uses only two images (*Δz* = 50*μ*m) and is very susceptible to noise, particularly in low frequencies (Fig. 5(b)). Higher order TIE [36], which fits the axial intensity of each pixel to higher order polynomials, can trade off noise performance for nonlinearity error correction. With severe noise, first-order TIE is used (similar to [35]) and the result (Fig. 5(c)) has good noise performance, but severe blurring due to nonlinearity error. The standard iterative technique [20] does not account for noise in image data, and so the result is disproportionately affected by noise in the last image included, since each image is treated as a perfect measurement (Fig. 5(d)). To solve this problem, we tried a ‘modified iterative’ technique, in which each iteration propagates the estimate to *all* measurement planes simultaneously, replaces the amplitude with the measured one, and backpropagates all planes to the object plane; the new estimate becomes the average of all these estimates. This method explicitly accounts for noise and therefore has significantly lower error than the standard iterative algorithm (Fig. 5(e)). However, our Kalman estimation method provides the lowest error for this particular data set (Fig. 5(f)).

As an example of phase retrieval with Fourier domain compression (Sec. 3.1), we simulate a smooth (but not bandlimited) pure-phase distribution propagated to 20 image planes with *Δz* = 0.2 waves, *γ* = 200 and keep the 18x18 lowest Fourier coefficients as state variables. Results are shown in Fig. 6, along with the error maps for both amplitude and phase.

## 5. Experimental results

Experiments used laser illumination (*λ* = 532nm) and a motion stage for moving the camera axially to obtain a stack of defocused images (SNR 700). A test phase object was electron-beam etched into a PMMA substrate, with 190nm trenches. A microscopic 4*f* system (see Fig. 1) magnified and relayed the field at the object plane to the camera plane. Intensity images were collected at 50 axial planes separated by 2*μ*m, with the final image in focus. Some collected images are shown in Fig. 7(a). Here, the sharp edges of the phase object make the Fourier compression scheme invalid, and block processing is used instead (50x50 pixel blocks). Recovered amplitude and phase are shown in Fig. 7(b,c) and Media 1 shows the evolution of the estimation process. For verification, we used an atomic force microscope (AFM) to independently measure the surface profile (Fig. 7(d)). The dark outline of the phase object lettering in our reconstruction may be due to lost light at sharp edges, but seems also to appear to some degree in the AFM results, as evidenced by the overshoot at the edges of the M in Fig. 7(e).

## 6. Conclusion

We have demonstrated the use of Kalman filtering for recovering complex-fields from multiple defocused images in severe noise. This technique allows near-optimal smoothing of noise, and has the potential for extension to other phase-contrast schemes (aberrations, phase plates, nonlinear propagation, scattering media, etc.). The recursive nature of the processing could be exploited for real-time imaging, where the object is estimated in real-time in order to predict the next best measurement parameters. An interesting extension could potentially use models of propagation through turbulent or scattering media as ‘process noise’ in order to account for speckle, or different ** L** matrices for propagation through inhomogeneous or nonlinear media.

## Acknowledgments

We thank J. Dominguez-Caballero and N. Loomis for assistance and GPU code. Financial support was provided by the Singapore-MIT Alliance for Research and Technology and the Keck Foundation Center for Extreme Quantum Information Theory (M. Tsang), NSF Grants No. PHY-0903953 and No. PHY-0653596 and ONR Grant No. N00014-07-1-0304.

## References and links

**1. **W. Bragg, “Elimination of the unwanted image in diffraction microscopy,” Nature **167**, 190–191 (1951). [CrossRef] [PubMed]

**2. **C. J. R. Sheppard, “Defocused transfer function for a partially coherent microscope and application to phase retrieval,” J. Opt. Soc. Am. A **21**(5), 828–831 (2004). [CrossRef]

**3. **D. Misell, “A method for the solution of the phase problem in electron microscopy,” J. Phys. D **6**, L6–L9 (1973). [CrossRef]

**4. **R. Gerchberg and W. Saxton, “A practical algorithm for the determination of phase from image and diffraction plane pictures,” Optik **35**, 237–246 (1972).

**5. **R. Gonsalves, “Phase retrieval from modulus data,” J. Opt. Soc. Am. A **66**, 961–964 (1976). [CrossRef]

**6. **J. Fienup, “Reconstruction of an object from the modulus of its Fourier transform,” Opt. Lett. **3**, 27–29 (1978). [CrossRef] [PubMed]

**7. **J. Fienup, “Phase retrieval algorithms: a comparison,” Appl. Opt. **21**, 2758–2769 (1982). [CrossRef] [PubMed]

**8. **G. Yang, B. Dong, B. Gu, J. Zhuang, and O. Ersoy, “Gerchberg–Saxton and Yang–Gu algorithms for phase retrieval in a nonunitary transform system: a comparison,” Appl. Opt. **33**(2), 209–218 (1994). [CrossRef] [PubMed]

**9. **J. Fienup, “Iterative method applied to image reconstruction and to computer-generated holograms,” Opt. Eng. **19**(3), 291–305 (1980).

**10. **R. Rolleston and N. George, “Image reconstruction from partial Fresnel zone information,” Appl. Opt. **25**(2), 178–183 (1986). [CrossRef] [PubMed]

**11. **R. Piestun and J. Shamir, “Control of wave-front propagation with diffractive elements,” Opt. Lett. **19**(11), 771–773 (1994). [CrossRef] [PubMed]

**12. **Z. Zalevsky, D. Mendlovic, and R. Dorsch, “Gerchberg–Saxton algorithm applied in the fractional Fourier or the Fresnel domain,” Opt. Lett. **21**, 842–844 (1996). [CrossRef] [PubMed]

**13. **J. Dominguez-Caballero, S. Takahashi, S. J. Lee, and G. Barbastathis, “Design and fabrication of computer generated holograms for Fresnel domain lithography,” in OSA DH and 3D Imaging, paper DWB3 (2009).

**14. **H. Stark, Y. Yang, and Y. Yang, *Vector Space Projections: A Numerical Approach to Signal and Image Processing, Neural Nets, and Optics* (John Wiley & Sons, 1998).

**15. **T. D. Gerke and R. Piestun, “Aperiodic volume optics,” Nat. Photonics **4**(3), 188–193 (2010). [CrossRef]

**16. **S. Pavani and R. Piestun, “High-efficiency rotating point spread functions,” Opt. Express **16**(5), 3484–3489 (2008). [CrossRef] [PubMed]

**17. **R. Horstmeyer, S. Oh, and R. Raskar, “Iterative aperture mask design in phase space using a rank constraint,” Opt. Express **18**(21), 22545–22555 (2010). [CrossRef] [PubMed]

**18. **A. Devaney and R. Childlaw, “On the uniqueness question in the problem of phase retrieval from intensity measurements,” J. Opt. Soc. Am. A **68**, 1352–1354 (1978). [CrossRef]

**19. **J. Fienup, “Phase-retrieval algorithms for a complicated optical system,” Appl. Opt. **32**, 1737–1746 (1993). [CrossRef] [PubMed]

**20. **L. Allen and M. P. Oxley, “Phase retrieval from series of images obtained by defocus variation,” Opt. Commun. **199**(1–4), 65–75 (2001). [CrossRef]

**21. **Y. Zhang, G. Pedrini, W. Osten, and H. Tiziani, “Whole optical wave field reconstruction from double or multi in-line holograms by phase retrieval algorithm,” Opt. Express **11**(24), 3234–3241 (2003). [CrossRef] [PubMed]

**22. **G. Pedrini, W. Osten, and Y. Zhang, “Wave-front reconstruction from a sequence of interferograms recorded at different planes,” Opt. Lett. **30**(8), 833–835 (2005). [CrossRef] [PubMed]

**23. **H. Campbell, S. Zhang, A. Greenaway, and S. Restaino, “Generalized phase diversity for wave-front sensing,” Opt. Lett. **29**(23), 2707–2709 (2004). [CrossRef] [PubMed]

**24. **B. Dean and C. Bowers, “Diversity selection for phase-diverse phase retrieval,” J. Opt. Soc. Am. A **20**(8), 1490–1504 (2003). [CrossRef]

**25. **R. Gonsalves, “Phase retrieval by differential intensity measurements,” J. Opt. Soc. Am. A **4**, 166–170 (1987). [CrossRef]

**26. **W. Southwell, “Wave-front analyzer using a maximum likelihood algorithm,” J. Opt. Soc. Am. **67**(3), 396–399 (1977). [CrossRef]

**27. **R. Gonsalves, “Small-phase solution to the phase-retrieval problem,” Opt. Lett. **26**(10), 684–685 (2001). [CrossRef]

**28. **T. Gureyev, A. Pogany, D. Paganin, and S. Wilkins, “Linear algorithms for phase retrieval in the Fresnel region,” Opt. Commun. **231**(1–6), 53–70 (2004). [CrossRef]

**29. **J. Guigay, M. Langer, R. Boistel, and P. Cloetens, “Mixed transfer function and transport of intensity approach for phase retrieval in the Fresnel region,” Opt. Lett. **32**(12), 1617–1619 (2007). [CrossRef] [PubMed]

**30. **D. Paganin, S. Mayo, T. Gureyev, P. Miller, and S. Wilkins, “Simultaneous phase and amplitude extraction from a single defocused image of a homogeneous object,” J. Microsc. **206**(1), 33–40 (2002). [CrossRef] [PubMed]

**31. **J. Miao, D. Sayre, and H. Chapman, “Phase retrieval from the magnitude of the Fourier transforms of nonperiodic objects,” J. Opt. Soc. Am. A **15**(6), 1662–1669 (1998). [CrossRef]

**32. **S. Mayo, P. Miller, S. Wilkins, T. Davis, D. Gao, T. Gureyev, D. Paganin, D. Parry, A. Pogany, and A. Stevenson, “Quantitative x-ray projection microscopy: phase-contrast and multi-spectral imaging,” J. Microsc. **207**, 79–96 (2002). [CrossRef] [PubMed]

**33. **M. Teague, “Deterministic phase retrieval: a Green’s function solution,” J. Opt. Soc. Am. A **73**(11), 1434–1441 (1983). [CrossRef]

**34. **D. Paganin, A. Barty, P. J. McMahon, and K. Nugent, “Quantitative phase-amplitude microscopy. III. The effects of noise,” J. Microsc. **214**(1), 51–61 (2004). [CrossRef] [PubMed]

**35. **M. Soto and E. Acosta, “Improved phase imaging from intensity measurements in multiple planes,” Appl. Opt. **46**(33), 7978–7981 (2007). [CrossRef] [PubMed]

**36. **L. Waller, L. Tian, and G. Barbastathis, “Transport of intensity phase-amplitude imaging with higher order intensity derivatives,” Opt. Express **18**(12), 12552–12561 (2010). [CrossRef] [PubMed]

**37. **L. Waller, Y. Luo, S. Y. Yang, and G. Barbastathis, “Transport of intensity phase imaging in a volume holographic microscope,” Opt. Lett. **35**(17), 2961–2963 (2010). [CrossRef] [PubMed]

**38. **S. S. Kou, L. Waller, G. Barbastathis, and C. J. R. Sheppard, “Transport-of-intensity approach to differential interference contrast (TI-DIC) microscopy for quantitative phase imaging,” Opt. Lett. **35**(3), 447–449 (2010). [CrossRef] [PubMed]

**39. **L. Waller, S. S. Kou, C. J. R. Sheppard, and G. Barbastathis, “Phase from chromatic aberrations,” Opt. Express **18**(22), 22817–22825 (2010). [CrossRef] [PubMed]

**40. **R. Gonsalves, “Phase retrieval and diversity in adaptive optics,” Opt. Eng. **21**, 829–832 (1982).

**41. **R. Paxman, T. Schulz, and J. Fienup, “Joint estimation of object and aberrations by using phase diversity,” J. Opt. Soc. Am. A **9**(7), 1072–1085 (1992). [CrossRef]

**42. **R. Paxman and J. Fienup, “Optical misalignment sensing and image reconstruction using phase diversity,” J. Opt. Soc. Am. A **5**(6), 914–923 (1988). [CrossRef]

**43. **D. Lee, M. Roggemann, B. Welsh, and E. Crosby, “Evaluation of least-squares phase-diversity technique for space telescope wave-front sensing,” Appl. Opt. **36**(35), 9186–9197 (1997). [CrossRef]

**44. **M. Roggemann, D. Tyler, and M. Bilmont, “Linear reconstruction of compensated images: theory and experimental results,” Appl. Opt. **31**(35), 7429–7441 (1992). [CrossRef] [PubMed]

**45. **M. Langer, P. Cloetens, J. P. Guigay, and F. Peyrin, “Quantitative comparison of direct phase retrieval algorithms in in-line phase tomography,” Med. Phys. **35**(10), 4556–4567 (2008). [CrossRef] [PubMed]

**46. **D. Lee, M. Roggemann, and B. Welsh, “Cramér-Rao analysis of phase-diverse wave-front sensing,” J. Opt. Soc. Am. A **16**(5), 1005–1015 (1999). [CrossRef]

**47. **L. Allen, H. Faulkner, K. Nugent, M. Oxley, and D. Paganin, “Phase retrieval from images in the presence of first-order vortices,” Phys. Rev. E **63**(3), 037602 (2001). [CrossRef]

**48. **R. Kalman, “A new approach to linear filtering and prediction problems,” J. Basic Eng. **82**(1), 35–45 (1960). [CrossRef]

**49. **G. Welch and G. Bishop, *An Introduction to the Kalman Filter* (University of North Carolina at Chapel Hill, 1995).

**50. **J. Crassidis and J. Junkins, *Optimal Estimation of Dynamic Systems* (Chapman & Hall, 2004). [CrossRef]

**51. **H. Van Trees, *Detection, Estimation, and Modulation Theory* (Krieger, 1992).

**52. **D. Simon, *Optimal State Estimation: Kalman, H Infinity, and Nonlinear Approaches* (Wiley-Interscience, 2006). [CrossRef]