## Abstract

Wavefront sensing with a numerical phase-error correction system is carried out using a random phase plate and phase retrieval based on multiple intensity measurements of axially-displaced speckle patterns and the wave propagation equation. Various wavefronts with smooth curvatures incident on the developed phase plate (DPP) are examined: planar, spherical, cylindrical, and a wavefront passing through the side of a bare optical fiber. A spurious fringe pattern in the wavefront reconstructions due to a small tilt (Δ*θ*=0.212°) in the plane illumination wave is detected and numerically corrected for. The fringe pattern of the illumination wave, obtained for the setup without the phase object under investigation, is used as the reference fringe pattern. Fringe compensation yields wavefronts with the correct shape and numerical value based on the specifications of the setup. The numerical phase-error correction system described in this study can be extended to other types of phase errors, such as those due to aberrations when optical elements are present in the setup or due to perturbations in the environment.

© 2008 Optical Society of America

## 1. Introduction

Wavefront sensing (WFS) is the technique of measuring the phase distribution across an electromagnetic wave. The technique has been utilized in diverse application areas, which include astronomy, ophthalmology, microscopy and atomic physics. In astronomy, WFS based on the Shack-Hartmann (SH) design images a guide star in each of the sensor subapertures, and the aberration due to atmospheric turbulence is deduced from the individual images and corrected for [1]. WFS based on the SH aberrometer has become widely accepted as a tool in ophthalmology for quantifying the aberrations in human eyes [2]. In microscopy, interferometric investigations of phase aberrations in lenses and objectives are carried out using digital holography [3]. Phase retrieval based on the transport-of-intensity equation is used in atomic physics for phase imaging of thermal atomic matter waves [4]. Over the years, various methods to obtain phase measurements have been reported in the literature, and only a few representative references are given here. The methods can be categorized as follows: 1) methods based on the SH sensor, 2) interferometric methods like digital holography, and 3) noninterferometric or phase retrieval methods. In methods based on the SH sensor, the phase is calculated from the displacements of focused beam spots after the wavefront passes through a lens array [5-9]. In digital holography, the phase of an object wavefront is directly measured by recording its interference with a reference wave. The phase is then accessed from the numerical reconstruction of the complex-valued wavefront [10-14]. In phase retrieval, the change in the intensity distribution coupled with an iterative or noniterative numerical algorithm is utilized to derive the wavefront phase [15-19]. In the framework of phase retrieval and digital holography, which utilize a coherent light source and diffraction or defocused images to obtain the phase, WFS is commonly referred to as wavefront reconstruction.

In the present study, wavefront reconstruction is carried out using a developed photoresist phase plate and a phase retrieval algorithm based on multiple measurements of speckle patterns and the wave propagation equation. Compared to methods based on the SH sensor where the resolution of the estimated phase depends on the pitch of the lens array (~100µm), the resolution of the calculated phase using this technique depends on the pitch of the detector pixel (~10µm). Compared to conventional phase retrieval methods, which use a few intensity measurements (usually 2), this technique utilizes more intensity information from the whole speckle volume. This guarantees accurate, fast and non-stagnating phase reconstruction. The accuracy of phase reconstruction using this technique is comparable with the results obtained using digital holography and other interferometric methods. This is attributed to the coherent nature of the information-bearing speckle field whose intensity is sequentially measured and inputted in an iterative algorithm. The main difference is the simpler setup and procedure using this technique since there is no need to configure a separate reference beam (as in holography). In a related work, an amplitude mask is used to generate the speckle field [18]. However, an amplitude mask tends to diminish the available light energy. Alternatively, in this study, a random phase plate is employed which also perturbs the incident wavefront but does not absorb much light energy in the process.

In interferometric methods, a common property is that the output phase or phase difference is a 2D fringe pattern, where each fringe contour represents a region of constant phase. From these fringe patterns, the useful physical properties such as shape, index of refraction, density and other parameters or change in these parameters can be derived. The period of the fringe pattern depends on the wavelength of the beam or interfering beams and, thus, is extremely sensitive to various unavoidable experimental and environmental factors other than the parameters being investigated. The interpretation of the calculated phase, then, becomes difficult and unreliable and the fringes have to be compensated experimentally and/or numerically depending on the origin of the unwanted fringes and some practical considerations.

Some methods involve adjustments in the experimental setup to compensate for unwanted fringes. In analogue holographic interferometry, fringes due to unwanted rigid body motion are compensated by mechanical adjustments of the sandwich holographic setup [20]. In speckle interferometry applied to non-destructive testing, overcrowding of fringes even for small loads is addressed by employing two similar objects (test and master) in exact register, except for the defect in one of the objects. When loaded in the same way, the phase difference prominently reveals the defect [21]. In real-time comparative holography, a dense fringe system is compensated using two nominally identical samples. This moiré technique depicts the difference in displacement between master and test samples [22].

Fringe compensation may involve digital correction of phase errors. In holographic deformation measurement and defect detection, overcrowding of fringes can be compensated using numerical displacement fields built from phase-shifted reconstructions in order to decrease fringe density at highly deformed points and hence simplify defect detection [23]. In single-wavelength digital holographic microscopy the unwanted phase curvature introduced by a microscope objective can be corrected numerically in the hologram and/or image planes [24]. Chromatic aberration of the microscope objective, imperfections in the recombining beam-splitter cube, and slight misalignments of the reference beams may introduce phase errors in multi-wavelength digital holographic microscopy. This chromatic aberration may be compensated by subtracting the relative phase from the difference phase at two wavelengths [25].

Fringe compensation can also be accomplished with an algorithm that involves both experimental and numerical procedure. In short-wavelength digital holography, aberration in UV optics and deviation of numerical reference beam from physical reference beam due to non-uniform UV beam profile introduce phase errors in the reconstruction. Fringe compensation was accomplished using a phase-amplitude correction factor determined by a calibration algorithm that utilizes an ideal sample object in the experimental setup to finally obtain the optimum reference and illumination beams [26].

In non-interferometric and SH-based methods, compensation for phase errors may also involve experimental and/or numerical adjustments. In a medical research study it was found that realignment of the SH sensor substantially increased the variance of the measurements and that angular misalignment can result in significant errors, particularly in the determination of coma. The findings from this medical study highlight the importance of having reliable phase data especially when assessing highly aberrated eyes during follow-up or before surgery [27]. Rotational (and translational) misalignments of CCD and lens array in the SH sensor can be corrected by mechanical adjustments in the setup [28]. In deterministic phase retrieval using the transport-of-intensity equation, effects of misalignment for radially symmetric irradiances can be corrected using a centroid algorithm, quantization error in the analogue to digital conversion can be reduced by avoiding nonlinear region of detector, and photo detection error is reduced by wavelet filtering [29]. In iterative phase retrieval using volume speckle field with synthetic apertures applied to rough objects, a nominal tilt of the sensor array will result in misalignments between the micrometer-sized speckle grains at the sides of the two speckle patterns being stitched [30]. The extent of misalignment was determined using intensity correlations of the overlapped portions of the original measurements and corrected to yield the desired reconstruction.

In this study, the aspect of fringe compensation is investigated numerically and experimentally for the reconstructions of wavefronts with small curvatures. Different wavefronts incident on the DPP are considered: planar, spherical, cylindrical, and a wavefront passing through the side of a bare optical fiber. Since the complete wavefront is directly accessible from the phase retrieval algorithm, fringe compensation can be accomplished numerically. Possible sources of unwanted fringes are determined, and the accuracy of the phase reconstructions is assessed using simulations and calculations based on values from the setup. Section 2 of this paper describes some underlying principles in the utilization of multiple intensity recordings of a volume speckle field. The experimental setup of the wavefront sensor is also discussed in section 2. Section 3.1 presents the numerical simulations of the different wavefronts used in the experiment and the influence of a linear shift in the phase resulting in a spurious fringe pattern. The scenarios of the problems encountered in the interpretation of the spurious fringes are also discussed. The phase-error correction system is described and implemented on the simulated wavefronts. Section 3.2 presents the results from the experiments demonstrating successful reconstructions of the test wavefronts free of any spurious fringes. Finally, section 4 gives the summary and conclusions of the study.

## 2. Phase retrieval based on a volume speckle field

Various phase retrieval methods utilize the change in two intensity measurements of a wave field to reconstruct the phase of an object wave using a computer algorithm [15]. In conventional iterative methods based on the Gerchberg-Saxton (GS) algorithm, two images of the test object are recorded, one taken at the image plane and another at a diffraction plane. The two images are then utilized in a Fourier transformation process between the two planes, and an approximation of the true phase is obtained after a certain number of iterations. In noniterative phase-diversity (PD) methods based on the transport-of-intensity equation, two slightly defocused images are recorded, each taken symmetrically about the image plane or the pupil plane. The infinitesimal variation between the two intensity measurements is then treated as an axial derivative, and the phase is obtained analytically using the paraxial approximation of the wave equation.

Performance of a phase retrieval method is usually measured in terms of accuracy and/or rate of convergence of the phase calculations. Evaluation of the various methods using such parameters presupposes an efficient sampling of the optical intensities by the detection system. Conventional methods which utilize only a few intensity measurements may offer fast data acquisition and processing. To reconstruct low-curvature wavefronts, PD methods, for example, utilize only 2 defocused images near the focal plane of a lens [16] or 1 composite intensity measurement of 2 or more object planes generated using a distorted diffraction grating [17].

One possible pitfall of relying on very few intensity measurements is the predisposition of conventional methods to quantization or discretization error due to the limited number of available energy levels of the detection system. This measurement error may lead to a precarious phase calculation and a slow phase convergence in the case of iterative methods.

Phase retrieval methods which utilize multiple intensity measurements, on the other hand, are not seriously affected by the influences of quantization error. Individually, the diffraction measurements from each of the multiple planes may be affected by discretization error and perturbations in the system but the useful intensity information cumulatively derived from all of these planes is improved as the number of diffraction planes is increased. As more information is accessible for the wave amplitude from the multiple intensity measurements taken at known separation distances, the accuracy of the wavefront phase approximation, thus, becomes enhanced.

The performance of such methods is demonstrated, for instance, in the utilization of multiple intensity measurements of a volume speckle field in the reconstruction of wavefronts from both rough reflecting and transmitting objects [31]. The volume speckle field is generated when an incident coherent beam is scattered by the random structures on the surface of a rough object. From the simulations and experiments on the millimeter-sized rough objects used, 20 measurements and at least 2 iterations yield wavefront reconstructions [31]. The speckle intensity patterns, which vary between the sequential measurement planes, afford the intensity variation vital to the phase reconstruction. The technique also offers a simple setup requiring no lens or special grating. Furthermore, PD methods assume the paraxial approximation, that is, that the test wavefront is confined to the forward direction. When applied to the general case of speckle fields from rough objects, which do not propagate strictly in one direction and exhibit many phase singularities, the performance of these conventional methods is hampered.

For the case of smooth or low-curvature wavefronts, the use of a diffuser facilitates the conversion of the smooth wavefront into a randomly varying speckle field [18]. The purpose of the diffuser in this wavefront sensing application is to transform the slowly-varying intensity of the test wavefront into a detectable partially-developed speckle field [32]. In the present setup, the time required to accomplish the automated sequential recording of the 20 speckle patterns confines the application of the technique to stationary wavefronts. As an indication of the total time needed to record the speckle patterns and move the camera through the recording planes, 20 intensity measurements at 1-mm intervals take about 1 minute. A faster recording time could still be achieved with better and more optimized control of the stage and detector system. In principle, simultaneous recording of speckle intensity patterns using arrays of beam splitters and sensors at various positions would allow accurate calculations of dynamic wavefronts in near real time.

Figure 1 shows the schematic diagram for the wavefront reconstruction using the DPP and sequential intensity measurements of a volume speckle field. When a plane wave illuminates the phase object, the transmitted wave is phase-shifted by some amount which depends on the optical path length within the phase object. The transmitted test wavefront with certain curvature is then directed towards the sensor which, in turn, outputs the wavefront amplitude and phase. The wavefront sensor is composed of three components: 1) DPP, 2) photodetector (CMOS, *Opticstar PL130M*, 1280 (H)×1024 (V), pixel size 5.2 µm×5.2 µm) mounted on a motorized stage (*Thorlabs PT-Z6*) to obtain multiple speckle intensity measurements, and 3) iterative phase retrieval algorithm based on the wave propagation equation. The DPP introduces phase randomization to the test wavefront resulting in a partially-developed speckle field. The iris aperture positioned close to the DPP, which controls the speckle size in the measurement planes, has a diameter of 1.2 mm. The speckle pattern at each of the 20 measurement planes is recorded using the CMOS camera. The distance between adjacent measurement planes is Δz=1 mm and the initial position z_{0}=40 mm. The speckle intensity measurements are then inputted in the phase retrieval algorithm.
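The setup values above fix the scale of the speckle grains relative to the detector. A minimal sketch, assuming the standard far-field estimate λz/D for the mean speckle diameter (λ=0.633 µm, D=1.2 mm aperture, pixel pitch 5.2 µm, planes from z₀=40 mm to z₀+19Δz=59 mm):

```python
# Estimate the mean speckle grain size at the measurement planes and compare
# it with the detector pixel pitch; values are taken from the setup above.
wavelength = 0.633e-6   # He-Ne wavelength, m
aperture_d = 1.2e-3     # iris aperture diameter, m
pixel = 5.2e-6          # CMOS pixel pitch, m

for z in (40e-3, 59e-3):                 # first plane z0 and last plane z0 + 19*dz
    grain = wavelength * z / aperture_d  # far-field speckle-size estimate
    print(f"z = {z*1e3:.0f} mm: grain ~ {grain*1e6:.1f} um, "
          f"{grain/pixel:.1f} pixels")
```

The grains span several pixels at every plane, so the CMOS sampling resolves the speckle structure needed by the retrieval algorithm.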

Figure 2 shows the schematic diagram for the phase retrieval algorithm to reconstruct the test wavefront. Starting at the 1^{st} measurement plane, the wavefront is approximated by an amplitude that is equal to the square root of the speckle intensity at the 1^{st} plane $\left(\sqrt{{I}_{1}}\right)$ and an initial random guess phase. Then, wave propagation from the 1^{st} to the 2^{nd} plane is simulated using the Rayleigh-Sommerfeld diffraction equation given by:

$$U\left(x,y;{d}^{\prime}\right)=C\iint \hat{u}\left({f}_{x},{f}_{y}\right)\mathrm{exp}\left[i\frac{2\pi {d}^{\prime}}{\lambda}\sqrt{1-{\left(\lambda {f}_{x}\right)}^{2}-{\left(\lambda {f}_{y}\right)}^{2}}\right]\mathrm{exp}\left[-i2\pi \left({f}_{x}x+{f}_{y}y\right)\right]d{f}_{x}d{f}_{y}\qquad (1)$$

where *U* is the diffracted field, *d*′ is the propagation distance, *C* is a constant, *û* is the Fourier transform of the input field, and *f _{x}* and *f _{y}* are the spatial frequencies in the *x*- and *y*-directions, respectively. The first exponential factor inside the double integral is the transfer function of free-space propagation. Having derived the wavefront at the 2^{nd} plane using Eq. (1), the phase is extracted using the *angle* of the complex-valued wavefront and, subsequently, multiplied by the amplitude at the 2^{nd} plane obtained from the square root of the speckle intensity at that plane $\left(\sqrt{{I}_{2}}\right)$. The resulting wave of the form ${A}_{2}{e}^{i{\varphi}_{2}}$ is inputted again in the wave propagation equation to calculate the wavefront at the 3^{rd} plane. This sequential process of extracting the calculated phase, multiplying it by the square root of the intensity measurement, and propagating the wave is repeated until the 20^{th} plane. Iterating the algorithm by back-propagating the wavefront from the 20^{th} to the 1^{st} plane and repeating the sequential process further enhances the wavefront reconstruction. A more thorough discussion of the recording and reconstruction steps of the technique is found in Refs. [18] and [30-32]. The statistical properties of the wavefront transmitted through the DPP are described in Ref. [32] in the framework of the complex *ABCD* optical system [33]. The effective longitudinal correlation length of the transmitted wavefront decreases as the depth of the indentations of the DPP increases. (The average depth of the indentations is proportional to the exposure of the photoresist during the photolithographic process.) In the context of the phase retrieval technique, this means that the distance between the sequentially-recorded speckle patterns, Δ*z*, should be within the range of the effective correlation length to achieve phase reconstruction. From the experiments, Δ*z*=1 mm.
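The sequential propagate-and-replace loop described above can be sketched in numpy. This is a minimal illustration, not the authors' code; the grid size, random seed, and band-limiting of evanescent components are assumptions added for a self-contained demo:

```python
import numpy as np

def propagate(u, d, wavelength, dx):
    """Angular-spectrum form of the Rayleigh-Sommerfeld propagation, Eq. (1):
    FFT the field, multiply by the free-space transfer function, inverse FFT."""
    n = u.shape[0]
    f = np.fft.fftfreq(n, dx)                       # spatial frequencies f_x, f_y
    fx, fy = np.meshgrid(f, f)
    arg = 1.0 - (wavelength * fx) ** 2 - (wavelength * fy) ** 2
    h = np.exp(1j * 2 * np.pi * d / wavelength * np.sqrt(np.maximum(arg, 0.0)))
    h[arg < 0] = 0.0                                # drop evanescent components
    return np.fft.ifft2(np.fft.fft2(u) * h)

def retrieve_phase(intensities, dz, wavelength, dx, iterations=2):
    """Multi-plane phase retrieval: start with sqrt(I_1) and a random phase;
    at each plane keep the propagated phase and replace the amplitude with
    the measured sqrt(I_k); back-propagation closes each iteration."""
    rng = np.random.default_rng(0)
    shape = intensities[0].shape
    u = np.sqrt(intensities[0]) * np.exp(1j * rng.uniform(0, 2 * np.pi, shape))
    for _ in range(iterations):
        for i in range(1, len(intensities)):        # forward: plane 1 -> plane N
            u = propagate(u, dz, wavelength, dx)
            u = np.sqrt(intensities[i]) * np.exp(1j * np.angle(u))
        u = propagate(u, -(len(intensities) - 1) * dz, wavelength, dx)
        u = np.sqrt(intensities[0]) * np.exp(1j * np.angle(u))
    return u                                        # complex field at plane 1

# Synthetic demo: a random-phase field "recorded" at 20 planes 1 mm apart.
rng = np.random.default_rng(1)
true_field = np.exp(1j * rng.uniform(0, 2 * np.pi, (64, 64)))
stack = [np.abs(propagate(true_field, k * 1e-3, 0.633e-6, 5.2e-6)) ** 2
         for k in range(20)]
estimate = retrieve_phase(stack, 1e-3, 0.633e-6, 5.2e-6, iterations=2)
```

Because the transfer function has unit modulus on the propagating band, back-propagation with a negative distance is the exact inverse of forward propagation, which is what makes the iteration of the algorithm well-defined.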

## 3. Results and discussion

#### 3.1 Numerical simulations

Figure 3 depicts a geometric model of a plane illumination wave (PW) at the iris aperture plane and the influence of a linear phase shift on the output phase of the wavefront sensor. Figure 3(a) shows a constant (single-valued) phase at the aperture plane when the PW is at normal incidence to the sensor. Figure 3(b) shows a PW that is tilted downward at an angle Δ*θ* from the optical axis. The sensor will then measure a PW with a phase that varies linearly in the vertical direction. The straight horizontal fringe pattern indicates lines of constant phase at the aperture plane. Each fringe contour corresponds to a 2π phase shift equivalent to an optical path difference of λ=0.633 µm.

A simple quantitative analysis is as follows. Observing 6 horizontal line fringes in Fig. 3(b) means that, along the vertical diameter, the PW arriving at the lower portion has advanced by a physical distance of 6 (0.633 µm)=3.8 µm compared to the wavefront arriving at the upper portion. It can be shown that, for small angles, Δ*θ*=3.8 µm/1.2 mm=0.0032 rad=0.18°. In addition, as the angle Δ*θ* increases, the fringe density will increase, indicating a greater phase difference along the vertical diameter. Figure 3(c) depicts a tilted PW illuminating a phase object, which is a lens modeled using a centered quadratic phase distribution. The transmitted wavefront is then made incident on the sensor. The output of the sensor is a curved wavefront that is symmetric about the vertical diameter. Three possible scenarios for an erroneous fringe analysis are as follows: 1) lack of knowledge of the linear shift due to a nominally tilted illumination wave leads to precarious fringe interpretation; 2) there is awareness of the linear shift due to the tilted illumination but the compensation for the phase error is inadequate; and 3) failure to account for other contributory sources to the phase error. It is emphasized that the context of accurate fringe interpretation is still bound by the intended application. For example, in the detection of defects, qualitative evaluation of deviations from the normal fringe pattern associated with a defect-free sample may be sufficient. The error in the fringe analysis caused by failing to compensate for the small tilt becomes critical if quantitative measurements are required. The second scenario involves proper calibration of the setup, which may include several optical elements. Without the phase object being investigated, a properly compensated tilted PW should register a constant phase value in the plane of the aperture (Fig. 3(a)).
The reference phase that produces such a constant phase is then used in the subsequent implementations with the phase objects inserted in the setup. Figure 4 depicts the fringe-error correction system, which removes the phase error from the reconstructed spurious fringe pattern. Upon subtraction of the reference fringe pattern, the compensated fringe pattern reveals the desired spherical phase.
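The fringe-count arithmetic above can be checked directly, using the values quoted in the text (λ=0.633 µm, aperture diameter 1.2 mm, 6 fringes across the vertical diameter):

```python
import math

# Tilt implied by the fringe count: each fringe is one wavelength (0.633 um)
# of extra optical path across the 1.2-mm aperture diameter.
lam = 0.633e-6    # m
D = 1.2e-3        # aperture diameter, m
N = 6             # fringes counted along the vertical diameter

path = N * lam                      # 3.8 um path difference, as in the text
dtheta = math.asin(path / D)        # ~ path/D for small angles
print(f"path = {path*1e6:.1f} um, dtheta = {math.degrees(dtheta):.2f} deg")
```

The result reproduces the 0.18° tilt quoted for 6 fringes; the experimentally measured tilt of 0.212° corresponds to a slightly denser fringe pattern.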

The third scenario, which is related to the second, includes various and sometimes unavoidable optical elements like glass mounts and suspension fluids of microscopy specimens, non-uniform laser beam profiles with varying curvatures, optical turbulence in the environment, and other perturbations in the setup. The effects of these factors on the phase profile have to be accounted for and minimized, if not corrected. For the rest of the numerical results and in the next section (experimental results), the wavefronts are arranged in an array, as shown in Fig. 5, for compactness and multimedia presentation purposes. The representative images for the converging spherical wave are also included in the array (Figs. 5(a) and 5(b)) along with those from a cylindrical lens (Fig. 5(c)) and a bare optical fiber tip (Fig. 5(d)). The top image in Fig. 5(c) shows the spurious wavefront when a tilted PW illuminates a vertical cylindrical lens and the transmitted wavefront is analyzed by the sensor. The bottom image in Fig. 5(c) shows the corrected phase when the reference fringe pattern (middle image in Fig. 5(a)) is subtracted from the spurious fringe pattern, revealing the desired characteristic cylindrical phase. The top image in Fig. 5(d) shows the spurious wavefront for a narrow strip of cylindrical phase distribution that simulates a cylindrical glass rod or, roughly, an optical fiber tip. The horizontal background fringes are similar to those obtained for a tilted plane wave in the left image in Fig. 5(a). The shifted fringe pattern in a narrow strip in the lower middle section signifies a monotonic phase shift relative to the background fringes. When the reference fringes are subtracted, the narrow-strip structure of the simulated glass cylinder is exposed. A video of the numerical results shows the evolution of the wavefront phase at small-interval adjacent planes separated by λ/8 or 0.0791 µm as the wave propagates near the plane of the DPP.
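The reference-subtraction step of the correction system (Fig. 4) can be sketched in numpy. The grid size, vertical tilt direction, and quadratic lens model are illustrative assumptions, not the paper's actual data:

```python
import numpy as np

# Numerical fringe-error correction as in Fig. 4: a lens (quadratic) phase is
# corrupted by a linear tilt ramp; subtracting the reference ramp recorded
# without the phase object recovers the lens phase.
n, lam, pixel = 256, 0.633e-6, 5.2e-6
y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2] * pixel   # aperture coordinates, m
tilt = np.deg2rad(0.212)                                  # illumination tilt
ramp = 2 * np.pi * np.sin(tilt) * y / lam                 # reference fringe phase
lens = -np.pi * (x ** 2 + y ** 2) / (lam * 0.2)           # f = 200 mm quadratic phase
wrap = lambda p: np.angle(np.exp(1j * p))                 # wrap to (-pi, pi]

spurious = wrap(lens + ramp)          # what the sensor reconstructs
corrected = wrap(spurious - ramp)     # subtract the reference fringe pattern
# The corrected phase equals the lens phase modulo 2*pi:
assert np.allclose(np.exp(1j * corrected), np.exp(1j * lens), atol=1e-6)
```

Comparing the complex exponentials rather than the wrapped phases avoids spurious 2π jumps at the branch cut, which is a practical point when validating any fringe-subtraction routine.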

#### 3.2 Experimental results

The left image in Fig. 6(a) shows the reconstructed wavefront when the sensor is illuminated by a plane wave. This experimental fringe pattern, similar to the results shown in the left image in Fig. 5(a), has nearly parallel horizontal fringes, which are attributed to a tilt in the illumination beam. The magnitude of the tilt is calculated to be Δ*θ*=0.212°. A linear reference fringe pattern, as shown in the middle image in Fig. 6(a), is numerically generated and subtracted from the experimental fringe pattern, resulting in a plane wave with a constant value at the plane of the aperture as shown in the right image in Fig. 6(a). The reference fringe pattern is then utilized in the subsequent fringe compensation when a phase object is placed in the path of the tilted illumination beam. Furthermore, reference fringes with a certain fringe spacing and orientation can be designed for specific illumination beam tilts. The top image in Fig. 6(b) shows the experimental wavefront when a spherical lens (*f*=200 mm) is positioned at a distance *d*=40 mm from the DPP. The curved fringe pattern is highly similar to the numerical fringe pattern for a converging spherical wave added with a linear phase shift (top image in Fig. 5(b)). The bottom image in Fig. 6(b) is the phase-corrected fringe pattern showing a centered, circularly symmetric fringe pattern upon subtraction of the reference phase in Fig. 6(a). Phase subtraction can also be carried out using the experimental phase without the phase object (left image in Fig. 6(a)) as the reference fringes. It is noted in the experiment that if the iris aperture of the DPP is translated or misaligned in the transverse direction (along the x- and/or y-axis), the reconstructed spherical wave exhibits characteristics similar to those in Fig. 6(b). For such misalignments, the linear phase shift can be compensated in the same manner using the appropriate linear reference fringe pattern. The top image in Fig. 6(c) shows the wavefront when a vertical cylindrical lens (*f*=40 mm) positioned at *d*=100 mm is illuminated by the tilted plane wave. The fringe pattern is similar to the simulated cylindrical wave with a linear phase shift (top image in Fig. 5(c)). Correspondingly, after phase-error correction, the bottom image in Fig. 6(c) reveals the desired wavefront. The top image in Fig. 6(d) shows the reconstructed wavefront when a bare optical fiber tip (OF) (diameter of 150 µm) mounted close to the aperture of the DPP is illuminated by a plane wave. The general area where the OF is positioned is evident from the shifted fringe patterns in a narrow section at the lower middle portion of the aperture. The phase shift is introduced by the different index of refraction in the core of the OF. The phase-corrected fringe pattern is shown in the lower image of Fig. 6(d), where the wavefront phase at the aperture plane is shifted in the narrow strip where the OF tip is located. A video of the experimental results shows the evolution of the wavefront phase at small-interval adjacent planes separated by λ/8 or 0.0791 µm, as the wave propagates near the plane of the DPP. Also available is a video file containing both the numerical and experimental results in each of the video frames.
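The remark that reference fringes can be designed for a specific tilt can be sketched as a small generator. The function name, azimuth convention, and grid values below are illustrative assumptions; only the tilt (0.212°), wavelength, sensor format, and pixel pitch come from the text:

```python
import numpy as np

def reference_fringes(shape, pixel, wavelength, tilt_deg, azimuth_deg=90.0):
    """Linear reference phase ramp for a given illumination tilt. The azimuth
    sets the tilt direction (90 deg = vertical tilt, giving horizontal
    fringes as in Fig. 6(a))."""
    ny, nx = shape
    y, x = np.mgrid[-(ny // 2):ny - ny // 2, -(nx // 2):nx - nx // 2] * pixel
    a = np.deg2rad(azimuth_deg)
    u = x * np.cos(a) + y * np.sin(a)               # coordinate along the tilt
    return 2 * np.pi * np.sin(np.deg2rad(tilt_deg)) * u / wavelength

# Ramp matched to the measured tilt of 0.212 deg on the 1280x1024 CMOS grid:
ramp = reference_fringes((1024, 1280), 5.2e-6, 0.633e-6, 0.212)
spacing = 0.633e-6 / np.sin(np.deg2rad(0.212))      # one fringe per 2*pi of ramp
print(f"fringe spacing = {spacing*1e6:.0f} um, or {spacing/5.2e-6:.0f} pixels")
```

For the measured tilt, one fringe spans roughly 33 pixels, so the reference pattern is comfortably resolved by the detector before subtraction.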

## 4. Summary and conclusions

Experimental and numerical approaches to compensate for spurious fringes in various wavefront sensing methods are initially discussed. The basic principles as well as the setup for phase retrieval using multiple intensity measurements of a volume speckle field are reviewed. In this study, wavefront sensing is carried out using the DPP and phase retrieval with a numerical phase-error correction system. The components of the wavefront sensor include a DPP, a CMOS camera on a motorized stage, and an iterative phase retrieval algorithm based on the Rayleigh-Sommerfeld wave propagation equation. The main advantages of the technique are the simple setup, requiring no separate reference beam, and the accurate, non-stagnating and fast-convergent phase calculation attributed to the full utilization of volume information in the speckle field. Wavefronts generated when a plane wave illuminates different phase objects, such as a spherical lens, a cylindrical lens, and the side of a bare optical glass fiber, are investigated numerically and experimentally. The phases of the different test wavefronts are measured and visualized using the constructed sensor. Phase error due to an unavoidable minute tilt of the illumination plane wave is detected and numerically compensated for. The numerical phase-error correction system described in this study can be extended to other types of phase errors, such as those due to aberrations when optical elements are present in the setup or due to perturbations in the environment. The applications envisaged for this wavefront sensor with phase-error correction system include phase microscopy of biological specimens in different media, investigations of photoelastic strain, angular displacement measurement, and deformation analysis in an industrial environment.

## Acknowledgments

The authors greatly acknowledge the financial support from the Danish Council for Technology and Innovation under the Innovation Consortium CINO (Centre for Industrial Nano Optics). The authors also thank Henrik Chresten Pedersen and Jørgen Stubager for their kind assistance in the preparation of the photoresist plates. P. Almoro acknowledges the University of the Philippines – Office of the Vice-Chancellor for Research and Development for the financial support.

## References and links

**1. **J. Primot, G. Rousset, and J. C. Fontanella, “Deconvolution from wave-front sensing: a new technique for compensating turbulence-degraded images,” J. Opt. Soc. Am. A **7**, 1598–1608 (1990). [CrossRef]

**2. **J. Liang, B. Grimm, S. Goelz, and J. F. Bille, “Objective measurement of wave aberrations of the human eye with use of a Hartmann-Shack wave-front sensor,” J. Opt. Soc. Am. A **11**, 1949- (1994) [CrossRef]

**3. **T. Colomb, F. Montfort, J. Kühn, N. Aspert, E. Cuche, A. Marian, F. Charrière, S. Bourquin, P. Marquet, and C. Depeursinge, “Numerical parametric lens for shifting, magnification, and complete aberration compensation in digital holographic microscopy,” J. Opt. Soc. Am. A **23**, 3177–3190 (2006). [CrossRef]

**4. **P. Fox, T. Mackin, L. Turner, I. Colton, K. Nugent, and R. Scholten, “Noninterferometric phase imaging of a neutral atomic beam,” J. Opt. Soc. Am. B **19**, 1773–1776 (2002). [CrossRef]

**5. **R. Lane and M. Tallon, “Wave-front reconstruction using a Shack-Hartmann sensor,” Appl. Opt. **31**, 6902–6908 (1992) [CrossRef] [PubMed]

**6. **G. Y. Yoon, T. Jitsuno, M. Nakatsuka, and S. Nakai, “Shack Hartmann wave-front measurement with a large F-number plastic microlens array,” Appl. Opt. **35**, 188- (1996) [CrossRef] [PubMed]

**7. **L. Seifert, H.J. Tiziani, and W. Osten, “Wavefront reconstruction with the adaptive Shack-Hartmann sensor,” Opt. Commun. **245**, 255–269 (2005) [CrossRef]

**8. **A. V. Goncharov, J. C. Dainty, and S. Esposito, “Compact multireference wavefront sensor design,” Opt. Lett. **30**, 2721–2723 (2005). [CrossRef] [PubMed]

**9. **M. Ares, S. Royo, and J. Caum, “Shack-Hartmann sensor based on a cylindrical microlens array,” Opt. Lett. **32**, 769–771 (2007). [CrossRef] [PubMed]

**10. **U. Schnars and W. Juptner, “Direct recording of holograms by a CCD target and numerical reconstruction,” Appl. Opt. **33**, 179- (1994) [CrossRef] [PubMed]

**11. **I. Yamaguchi and T. Zhang, “Phase-shifting digital holography,” Opt. Lett. **22**, 1268–1270 (1997). [CrossRef] [PubMed]

**12. **C. Wagner, S. Seebacher, W. Osten, and W. Jüptner, “Digital Recording and Numerical Reconstruction of Lensless Fourier Holograms in Optical Metrology,” Appl. Opt. **38**, 4812–4820 (1999). [CrossRef]

**13. **M. Takeda, W. Wang, Z. Duan, and Y. Miyamoto, “Coherence holography,” Opt. Express **13**, 9629–9635 (2005). [CrossRef] [PubMed]

**14. **F. Ghebremichael, G. P. Andersen, and K. S. Gurley, “Holography-based wavefront sensing,” Appl. Opt. **47**, A62–A69 (2008). [CrossRef] [PubMed]

**15. **R. W. Gerchberg and W. O. Saxton, “A practical algorithm for the determination of phase from image and diffraction plane pictures,” Optik (Stuttgart) **35**, 237–246 (1972).

**16. **F. Roddier, “Curvature sensing and compensation: a new concept in adaptive optics,” Appl. Opt. **27**, 1223–1225 (1988). [CrossRef] [PubMed]

**17. **P. M. Blanchard, D. J. Fisher, S. C. Woods, and A. H. Greenaway, “Phase-diversity wave-front sensing with a distorted diffraction grating,” Appl. Opt. **39**, 6649–6655 (2000). [CrossRef]

**18. **A. Anand, G. Pedrini, W. Osten, and P. Almoro, “Wavefront sensing with random amplitude mask and phase retrieval,” Opt. Lett. **32**, 1584–1586 (2007). [CrossRef] [PubMed]

**19. **G. Pedrini, F. Zhang, and W. Osten, “Deterministic phase retrieval from diffracted intensities speckle fields,” Opt. Commun. **277**, 50–56 (2007). [CrossRef]

**20. **N. Abramson and H. Bjelkhagen, “Sandwich hologram interferometry 5: Measurement of in-plane displacement and compensation for rigid body motion,” Appl. Opt. **18**, 2872–2882 (1979). [CrossRef]

**21. **C. Joenathan, A. R. Ganesan, and R. S. Sirohi, “Fringe compensation in speckle interferometry: application to nondestructive testing,” Appl. Opt. **25**, 3781–3784 (1986). [CrossRef] [PubMed]

**22. **P. Rastogi, “Interferometric comparison of diffuse objects using comparative holography,” Opt. Eng. **34**, 1923–1929 (1995). [CrossRef]

**23. **A. Nemeth, J. Kornis, and Z. Fuzessy, “Fringe compensation in holographic interferometry using phase-shifted interferograms,” Opt. Eng. **39**, 3196–3200 (2000). [CrossRef]

**24. **F. Montfort, F. Charrière, T. Colomb, E. Cuche, P. Marquet, and C. Depeursinge, “Purely numerical compensation for microscope objective phase curvature in digital holographic microscopy: influence of digital phase mask position,” J. Opt. Soc. Am. A **23**, 2944–2953 (2006). [CrossRef]

**25. **S. De Nicola, A. Finizio, G. Pierattini, D. Alfieri, S. Grilli, L. Sansone, and P. Ferraro, “Recovering correct phase information in multiwavelength digital holographic microscopy by compensation for chromatic aberrations,” Opt. Lett. **30**, 2706–2708 (2005). [CrossRef] [PubMed]

**26. **G. Pedrini, F. Zhang, and W. Osten, “Digital holographic microscopy in the deep (193 nm) ultraviolet,” Appl. Opt. **46**, 7829–7835 (2007). [CrossRef] [PubMed]

**27. **A. Cervino, S. Hosking, and M. Dunne, “Operator-induced errors in Hartmann-Shack wavefront sensing: Model eye study,” J. Cataract Refract. Surg. **33**, 115–121 (2007). [CrossRef]

**28. **J. Pfund, N. Lindlein, and J. Schwider, “Misalignment effects of the Shack-Hartmann sensor,” Appl. Opt. **37**, 22–27 (1998). [CrossRef]

**29. **S. Barbero and L. N. Thibos, “Error analysis and correction in wavefront reconstruction from the transport-of-intensity equation,” Opt. Eng. **45**, 094001 (2006). [CrossRef]

**30. **P. Almoro, G. Pedrini, and W. Osten, “Aperture synthesis in phase retrieval using a volume-speckle field,” Opt. Lett. **32**, 733–735 (2007). [CrossRef] [PubMed]

**31. **P. Almoro, G. Pedrini, and W. Osten, “Complete wavefront reconstruction using sequential intensity measurements of a volume speckle field,” Appl. Opt. **45**, 8596–8605 (2006). [CrossRef] [PubMed]

**32. **P. F. Almoro and S. G. Hanson, DTU Fotonik, Department of Photonics Engineering, are preparing a manuscript to be called “Random phase plate for wavefront sensing via phase retrieval and a volume speckle field.”

**33. **H. T. Yura and S. G. Hanson, “Optical beam wave propagation through complex optical systems,” J. Opt. Soc. Am. A **4**, 1931- (1987). [CrossRef]