This paper proposes a sub-aperture correlation based numerical phase correction method for interferometric full field imaging systems, provided the complex object field information can be extracted. The method corrects for the wavefront aberration at the pupil/Fourier transform plane without the need for adaptive optics, spatial light modulators (SLMs) or additional cameras. We show that this method does not require knowledge of any system parameters. In a simulation study, we consider a full field swept source OCT (FF SSOCT) system to show the working principle of the algorithm. Experimental results are presented for a technical and a biological sample as a proof of principle.
© 2013 OSA
1. Introduction
The use of Shack-Hartmann sensor based adaptive optics for wavefront aberration correction is well established in astronomy and microscopy for point-like objects to achieve diffraction limited imaging [1–3]. It is currently an active field of research in optical coherence tomography/microscopy (OCT/OCM) [4, 5]. Recently, adaptive optics via pupil segmentation using an SLM was demonstrated in two photon microscopy. These results showed that the optical aberrations introduced by the sample, due to the change in refractive index with depth, can be reduced to recover diffraction-limited resolution. This can improve the imaging depth in tissues. Such a segmented pupil approach has also been demonstrated with scene based adaptive optics [8]. Lately, Tippie and Fienup demonstrated subaperture correlation based phase correction as a post-processing technique for synthetic aperture digital off-axis holographic imaging of extended objects [9].
Especially for coherent imaging techniques, the advantage lies in the availability of phase information. This information has been successfully exploited to implement digital refocusing techniques in OCT by measuring the full complex field backscattered from the sample. Current methods, however, rely on two assumptions: first, that the sample exhibits an isotropic and homogeneous structure with respect to its optical properties, and second, that the aberrations, if present, are well defined or accessible. Whereas the first limitation has not been addressed so far, the second issue can be solved either by assuming simple defocus and applying spherical wavefront corrections, or by iteratively optimizing the complex wavefront with a merit function that uses image sharpness as its metric [10, 11].
In this paper, we investigate subaperture correlation based wavefront error detection and correction as a post-processing technique to achieve near diffraction limited resolution in the presence of aberrations. The method has the advantage of directly providing the local wavefront gradient for each subaperture in a single step. As such, it operates as a digital equivalent of a Shack-Hartmann sensor. In section 2 we present the theory behind the algorithm, which is applicable to any coherent interferometric imaging system. We show that the algorithm does not require knowledge of any system parameters and, furthermore, does not rely on the assumption of sample homogeneity with respect to refractive index. In section 3 we present simulation results demonstrating the practical implementation and performance of the algorithm. We consider a full field swept source OCT system, a type of interference microscopy, for our simulation. We also show how the method allows for a fast and simple way to correct for defocus aberration. Finally, in section 4 we present an experimental proof of principle.
2. Theory
For the theoretical analysis we consider a system based on a Michelson interferometer as shown in Fig. 1, but the theory presented here is valid for any interferometric setup. The interference of the light reflected from the object and from the reference mirror is adequately sampled using a 2-D camera placed at the image plane of the object. For simplicity, we assume a 4-f telecentric imaging system, with the object field bandlimited by a square aperture at the pupil plane. In this 2-D interferometric imaging setup, the recorded signal at a point of the detection plane, at a given wavenumber of the laser illumination, is given by Eq. (1). Eqs. (2) and (3) represent the accumulated phase due to propagation of the field from the sample and from the reference mirror position to the detector plane, respectively. In case of monochromatic illumination, with effectively a single wavenumber, the complex signal containing the object field information of interest can be retrieved via phase shifting methods, in which the reference phase is varied [12, 13]. Phase shifting methods are also used to detect the complex wavefront with broadband light sources, which provide coherence gating. The complex signal can also be detected via frequency or wavelength diversity, as in the case of swept source OCT. In this case, the object is placed at a distance longer than that of the reference mirror position with respect to a common reference point, and the interference signal is recorded while the wavenumber is varied. Fourier transformation along the wavenumber dimension yields the tomographic signal separated from the autocorrelation, constant intensity offset (DC) and complex conjugate terms. We assume that the aberrations present in the system primarily originate from the sample and the sample optics L2, and can be represented by an effective phase error at the pupil plane.
In case of FF SSOCT, the complex signal obtained after Fourier transformation, containing the field information of a given object layer, can be written in discrete form as Eq. (5). In Eq. (5) we have neglected the constant complex factor due to propagation, as it does not affect the reconstruction of the intensity images we are finally interested in. The extent of the spatial frequencies is limited by the square pupil aperture; the number of pixels spanned by the pupil in the Fourier plane then follows from the paraxial approximation. The segmentation of the pupil into subapertures is illustrated in Fig. 2. In the absence of aberration, the wavefront has the same slope everywhere across the pupil plane, and hence the images due to all subapertures are formed at the same position in the image plane. In the presence of aberration, however, the slope of the wavefront varies across the pupil, which results in relative shifts between the images of the subapertures. Since we have a circular centro-symmetric optical system, we choose the location of the image formed by the central subaperture as the reference position where all subaperture images should be formed after aberration correction. We can also use aperture overlap, as shown in Fig. 2, to increase the number of sampling points for the local slope data.
We assume that within each subaperture the local phase error can be approximated by a first order Taylor expansion. We measure the translation between the intensity images formed by the different subapertures with respect to the intensity image formed by the central subaperture, after normalizing them to the same level. We use this information to calculate the relative local wavefront slope in each subaperture, given by Eq. (7).
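As an illustrative sketch (function and variable names are our own, not from the paper), the translation between two subaperture intensity images can be measured from the peak of their FFT-based cross-correlation:

```python
import numpy as np

def subimage_shift(ref_img, sub_img):
    """Integer-pixel translation of sub_img relative to ref_img,
    estimated from the peak of their FFT-based cross-correlation."""
    # Normalize both intensity images to the same total energy.
    ref = ref_img / ref_img.sum()
    sub = sub_img / sub_img.sum()
    # Cross-correlation via the Fourier correlation theorem.
    xc = np.fft.ifft2(np.fft.fft2(ref).conj() * np.fft.fft2(sub))
    peak = np.unravel_index(np.argmax(np.abs(xc)), xc.shape)
    # Indices beyond N/2 correspond to negative (wrapped) shifts.
    return tuple(int(p) - s if p > s // 2 else int(p)
                 for p, s in zip(peak, xc.shape))
```

The returned (row, column) shift, divided by the image size, gives the local wavefront tilt of that subaperture relative to the reference; subpixel peak interpolation could refine this but is omitted here.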
After we have estimated the phase error, we apply a phase correction to the Fourier data and then perform a 2-D IDFT to obtain the phase corrected image. Figure 3 shows the schematic diagram of the phase error estimation based on the split aperture method.
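This correction step can be sketched as follows (our own naming; `phase_error` is assumed to be sampled on the same grid as the pupil data):

```python
import numpy as np

def correct_image(pupil_data, phase_error):
    """Apply the conjugate of the estimated phase error at the pupil
    plane and inverse-transform to obtain the corrected complex image."""
    return np.fft.ifft2(pupil_data * np.exp(-1j * phase_error))
```

Since the correction is a pure phase factor at the pupil, a perfect estimate restores the unaberrated field exactly; any residual estimation error remains as residual aberration.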
3. Computer simulation
For the computer simulation we consider the same optical setup shown in Fig. 1, with lenses L1 and L2 of equal focal length. We consider a sweeping laser source with a given center wavelength and bandwidth; Fig. 4(a) shows the normalized power spectrum with respect to wavenumber. We assume that the sweeping is linear in wavenumber. For simplicity, just to demonstrate the working principle of the method, we consider only a 2-D object which gives diffuse reflection on illumination. Our object is a USAF bar target, multiplied with circular complex Gaussian random numbers to simulate speckle noise. We zero-pad the object and apply the fast Fourier transform (FFT) to get to the pupil plane. We multiply the result with a square aperture, an array of unity values zero padded to the full array size. To apply a phase error we multiply the result with a phase factor according to Eq. (16), and then compute the IFFT to get to the image plane. In this last step we compute the IFFT instead of the FFT simply to avoid inverted images, without affecting the phase correction method. We multiply the resulting field at the image plane with a phase factor to take the propagation distance into account, add the delayed reference on-axis plane wave with its own propagation phase factor, and compute the squared modulus to obtain the simulated interference signal. This is done for each wavenumber in the sweep to create a stack of 2-D interference signals, with the wavenumber varied over the sweep range in 256 equal steps. The reference wave amplitude was set to a fixed multiple of the object field amplitude. Figure 4(b) shows the spectral interferogram at one lateral pixel.
For the reconstruction, we first compute the FFT along the spectral dimension, with the 256 spectral pixels zero padded to 512, for each lateral pixel of the detection plane. The resulting sample depth profile (A-scan) is plotted in Fig. 4(c). It allows separating the DC, autocorrelation and complex conjugate terms from the desired object term (red dashed rectangle). Figure 4(d) shows the image slice from the reconstructed image volume at the position of the peak in Fig. 4(c) without any phase error. The simulated phase error consisted of Taylor monomials up to 6th order, with coefficients selected from a random Gaussian distribution such that the peak to valley phase error across the aperture is 58.03 radians. Figure 4(e) shows the simulated phase error in radians and Fig. 4(f) the corresponding aberrated image obtained after applying it. We have assumed that the phase error is independent of the light frequency and that the system is compensated for any dispersion effects.
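The spectral reconstruction step can be sketched as follows, assuming a fringe stack with the wavenumber along axis 0 (array and function names are our own):

```python
import numpy as np

def reconstruct_tomogram(fringes, pad_factor=2):
    """Standard FDOCT reconstruction sketch: subtract the mean spectrum
    (suppressing the DC term), zero-pad the spectral axis, and FFT along
    wavenumber (axis 0) for every lateral pixel. The object term appears
    at a depth bin set by the sample/reference path-length difference."""
    n_k = fringes.shape[0]
    spec = fringes - fringes.mean(axis=0)
    return np.fft.fft(spec, n=pad_factor * n_k, axis=0)
```

For a single-frequency fringe the depth peak lands at the bin corresponding to the fringe frequency, scaled by the zero-padding factor.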
Figure 5 shows the results for various cases of aperture segmentation. In Fig. 5(a), with 9 sampling points, we could fit only the first 9 Taylor monomials according to Eq. (16). Comparing the degraded image in Fig. 4(f) to the corrected image in Fig. 5(a), we immediately see the improvement in visual image quality. Except for the image obtained with the largest subapertures at 50 percent overlap, shown in Fig. 5(h), all the images obtained for the different cases are of similar visual quality. We found in our simulation that, in order to see any significant smearing or blurring in the images, the peak to valley (P-V) aberration should be more than 15 radians; otherwise the image is close to the diffraction limited image in visual quality. After phase corrections are applied under the different pupil splitting conditions, we find that the P-V residual phase errors are below 10 radians in all cases under consideration except for the largest subapertures with 50 percent overlap, shown in Fig. 5(k). Hence, we judge the performance of the different pupil splitting conditions based on the residual phase error maps. In Fig. 5(b), with 25 sampling points, we could fit Taylor monomials with 25 coefficients. We see that the residual phase error is further reduced as compared to the case in Fig. 5(d). In Figs. 5(c) and 5(g) we increase the number of subapertures further, but fit only Taylor monomials up to 6th order to isolate the effect of increasing the number of sampling points. We see that the residual error decreases at first but increases for the finest segmentation. This is because with an increasing number of subapertures their size becomes smaller, and hence the resolution of the images corresponding to these subapertures is reduced, leading to registration errors in the shift calculation. This results in false slope calculation and phase error estimation. Increasing the number of apertures thus allows in principle for higher order phase correction, but the smaller number of pixels per subaperture leads to increasing phase errors.
A possible solution is to use overlapping apertures as shown in Fig. 2.
We used overlapping subapertures with 50 percent overlap in order to increase the number of sampling points with uniform spacing while maintaining the number of pixels per subaperture. A 50 percent overlap ensures the maximum number of sampling points without over-redundancy of the subaperture data due to overlap. In case of overlapping apertures, K no longer stands for the number of subapertures but defines the size of each subaperture in pixels. Nevertheless, for the largest subapertures we have a higher residual phase error as compared to the non-overlapping case, also visible in the image in Fig. 5(h) as compared to Fig. 5(a). This is because we have assumed in our algorithm that the phase error in each subaperture can be described by a 1st order Taylor series. In the presence of higher order aberrations this assumption no longer holds if the subaperture is large enough to capture a large variation of the phase error. This can result in an error in the phase estimation, and overlapping with large subapertures of the same size can further accumulate these errors. Betzig et al. refer to errors induced by overlapping subapertures as “residual coupling between the subregions”. This is what we see for the largest subapertures with 50 percent overlap: their size is not optimal for the 1st order Taylor approximation of the local phase error. However, the residual error decreases rapidly for the overlapping case as the subapertures become smaller. Once the subaperture size is small enough for the approximation to be valid, we see a significant reduction in the residual phase error in Fig. 5(l).
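The pupil segmentation with optional overlap can be sketched as follows (our own helper, assuming a square N × N pupil array with K tiles per axis in the non-overlapping case):

```python
import numpy as np

def split_pupil(pupil, K, overlap=0.0):
    """Split a square pupil array into subapertures of side N // K.
    overlap=0 gives K x K non-overlapping tiles; overlap=0.5 roughly
    doubles the number of sampling points per axis."""
    N = pupil.shape[0]
    w = N // K                              # subaperture side in pixels
    step = max(1, int(w * (1 - overlap)))   # stride between tiles
    starts = range(0, N - w + 1, step)
    return [pupil[i:i + w, j:j + w] for i in starts for j in starts]
```

Note that with overlap the tile size stays fixed while the stride shrinks, which is exactly why K then characterizes the subaperture size rather than the subaperture count.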
Figure 6 shows the plot of the rms residual phase error for different values of K, for both non-overlapping and overlapping subapertures with different percentages of overlap. We see that with overlapping subapertures of appropriate size with respect to the overall aberrations we obtain in general a lower residual phase error. Only if the subaperture size is chosen too large, so that it covers a significant amount of aberrations which cannot be approximated by a 1st order Taylor series, does the overlapping aperture approach perform worse. A smaller subaperture size ultimately leads to loss of resolution and increased registration error, which in consequence increases the residual phase error. For bigger subapertures the wavefront is sampled with fewer points, and hence one can find coefficients only for the lower order Taylor monomials. The optimal number, size and percentage of overlap of the subapertures depend, of course, on the amount of aberration present. However, from the plots in Fig. 6 we can see that the overlapping subapertures do not yield significantly different results from the non-overlapping case for K = 5, 7, 9 and 11. Also, increasing the overlap between subapertures beyond 50 percent does not significantly reduce the residual phase error, due to the redundancy of the data caused by the overlap. We can see that for K = 3 the residual rms errors are lower for the cases of more than 50 percent overlap as compared to the 50 percent overlap case, but still higher than in the non-overlapping case. On the other hand, for K = 13 the residual rms errors for the cases of more than 50 percent overlap are higher than for both the 50 percent overlap and the non-overlapping case. Since both K = 3 and K = 13 fall in regimes where the approximations and assumptions of the presented algorithm are no longer valid, it is difficult to explain the exact reasons behind the observed differences. Nevertheless, in Fig. 6 we can see that the best-performing condition with 50 percent overlap reaches a residual rms of 0.3412 radians, which is well within the Maréchal criterion of 0.449 radians rms for diffraction limited performance.
In the case of a symmetric quadratic phase error across the aperture, which results in defocus aberration, we can estimate the phase error with just two subapertures. The aperture is split into two equal halves, vertically or horizontally. The phase error in each half then has a linear slope of equal magnitude but opposite direction, causing the images formed by the two halves to shift in opposite directions relative to each other. Using the relative shift obtained by cross correlating the images formed by the two half apertures, one can easily derive an estimate of the coefficient of the quadratic phase, given by Eq. (9). This method is simple and fast compared to defocus correction based on sharpness optimization, which requires many iterations.
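The two-subaperture measurement can be sketched as follows (our own implementation; only the relative half-aperture image shift is computed, and the proportionality constant of Eq. (9) relating this shift to the defocus coefficient is omitted):

```python
import numpy as np

def half_aperture_shift(pupil_field):
    """Relative image shift between the two vertical half-apertures.
    Under a quadratic (defocus) phase the halves carry equal and
    opposite linear tilts, so this shift is proportional to the
    defocus coefficient."""
    N = pupil_field.shape[1]
    # Images formed by each half, zero-padded back to the full grid.
    left = np.fft.ifft2(pupil_field[:, :N // 2], s=pupil_field.shape)
    right = np.fft.ifft2(pupil_field[:, N // 2:], s=pupil_field.shape)
    il, ir = np.abs(left)**2, np.abs(right)**2
    # Cross-correlate the two intensity images and locate the peak.
    xc = np.fft.ifft2(np.fft.fft2(il).conj() * np.fft.fft2(ir))
    peak = np.unravel_index(np.argmax(np.abs(xc)), xc.shape)
    return tuple(int(p) - s if p > s // 2 else int(p)
                 for p, s in zip(peak, xc.shape))
```

A single cross-correlation replaces the iterative sharpness search, which is what makes this defocus estimate a one-step operation.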
4. Experimental results
The schematic of the FF SSOCT setup for the proof of principle study is shown in Fig. 7(a). The system is a simple Michelson interferometer in Linnik configuration, with the same type of microscope objective (5× Mitutoyo Plan Apo NIR Infinity-Corrected Objective, NA = 0.14, focal length f = 40 mm) in both the sample and the reference arm. The sample consisted of a layer of plastic, a film of dried milk and a USAF resolution test target (RTT). The thin film of dried milk was used to produce scattering and diffuse reflection. The plastic layer, of non-uniform thickness and structure to create random aberrations, was produced by heating plastic from a compact disc (CD) case. The output beam from the sweeping laser source (Superlum BroadSweeper 840-M) incident on lens L1 has a diameter of 10 mm, and the field of view on the sample is ~2 mm. The power incident on the sample is 5 mW. The RTT surface was in focus while imaging. The image of the sample formed by L2 is relayed to the camera plane by a telescope formed by lenses L3 and L4 with an effective magnification of 2.5. A circular pupil P of diameter 4.8 mm is placed at the common focal plane of lenses L3 and L4 to spatially bandlimit the signal. Figures 7(c) and 7(e) show that the image obtained is speckled due to scattering at the milk film and aberrated by the plastic layer.
For imaging, the laser is swept over its wavelength range in equal steps, and a frame at each wavelength is acquired using a CMOS camera (Photon Focus MV1-D1312I-160-CL-12) at a frame rate of 108 fps, synchronized with the laser. A total of 512 frames is acquired. After zero padding and remapping of the spectral pixels, a 1-D FFT is performed along the spectral dimension for each lateral pixel, the standard FDOCT procedure, which provides the depth locations of the different layers of the sample. The layer corresponding to the RTT is picked for aberration correction, and the corresponding en face intensity image is displayed in Fig. 7(c). Figure 7(e) shows a zoomed-in region of Fig. 7(c). The numbers 4 and 5 indicate the locations of the 4th and 5th group elements. We can barely resolve RTT element (4, 3). The horizontal bars are visually more affected by the aberration due to the anisotropic nature of the distorted plastic layer. The FFT of the image after zero padding shows the spatial frequency information confined to a circular area. For further processing we filtered out a square area from the spatial frequency distribution, with side length chosen approximately equal to the diameter of the circular pupil (dotted square in Fig. 7(d)).
Figure 8 shows the phase correction results after subaperture processing. Taylor monomials up to 5th order are fitted to determine the phase error. Figure 8(a) shows the result obtained using 9 non-overlapping subapertures. In this case we could fit only the first 9 monomial terms given by Eq. (16), and hence the monomial fitting is limited to the 4th order. We also tried non-overlapping subapertures with a finer segmentation, as well as overlapping subapertures with 50% overlap. Note that Figs. 8(d)-8(f) show only the estimate of the phase error that may be present. Unlike in the simulation, where we calculated the residual phase error knowing the reference phase error, here we judge the best pupil splitting condition by the improvement in image quality. We can clearly appreciate the improvement in resolution, as the bars are more evident after phase correction. We can resolve horizontal bars up to element (5, 1) in Fig. 8(a) and up to (5, 4) in Figs. 8(b) and 8(c). This corresponds to an improvement in resolution by a factor of 1.6 for Fig. 8(a) and a factor of 2 for Figs. 8(b) and 8(c) over the aberrated image, whose highest resolvable element is (4, 3). The improvement in resolution is calculated using the relation 2^(n/6), where n is the improvement in the number of resolvable elements. The vertical bars in the corrected images also appear much sharper after phase correction. The theoretically calculated diffraction limited resolution of our experimental setup is 6.5 µm; note that in our experiment the resolution is limited by the size of the pupil P. The best resolution obtained in Figs. 8(b) and 8(c), corresponding to element (5, 4), is about 11 µm, which is still far from the theoretical limit. This is primarily because of the strong local variations of the wavefront across the pupil plane caused by the highly distorted plastic layer, leading to anisotropic imaging conditions. This may also be the reason why the phase maps shown in Figs. 8(d)-8(f) appear different, although they have the same general profile.
We also investigated the effect of a refractive index change within the sample. We replaced the non-uniform plastic layer of the sample with a uniform plastic layer of thickness ~1 mm. Without the plastic layer, the RTT was placed at the focal plane of the microscope objective while imaging. The presence of the plastic layer causes the image to be defocused, as seen in Fig. 9(a). Figure 9(b) shows the result using non-overlapping subapertures. The corresponding estimated phase error in Fig. 9(d) shows the presence of quadratic and 4th order terms, which is expected due to the defocus and spherical aberration caused by the change in refractive index within the sample. Since spherical aberration can be balanced with defocus, we also applied the simple half aperture method to find the effective defocus error according to Eq. (9), as shown in Fig. 9(e). Successful focusing is clearly evident in these images, as elements up to (6, 1) in Fig. 9(b) and (5, 6) in Fig. 9(c) can be resolved, corresponding to resolutions of 7.8 µm and 8.7 µm respectively. These values are close to the calculated theoretical diffraction limited resolution of 6.5 µm. In the subaperture processing we filtered out a square aperture, whereas the actual pupil was circular, so the subapertures at the edges were not completely filled with spatial frequency data. This may have contributed to the residual phase error.
To demonstrate the method on a biological sample, we applied the defocus correction using two non-overlapping subapertures to the 3-D volume image of a grape. The digital refocusing method effectively achieves an extended depth of focus. The first layer of the grape was at the focal plane of the microscope objective. Since the theoretical depth of field is only 130 µm in the sample (assuming a refractive index of 1.5), the deeper, out-of-focus layers get blurred. Figure 10(a) shows a tomogram of the grape sample with an arrow indicating a layer at a depth of 424.8 µm from the focal plane. Figure 10(c) shows an en face view of that layer and Fig. 10(d) shows the defocus corrected image. We can appreciate the improvement in lateral resolution, as the cell boundaries are now clearly visible.
In Fig. 11, the blue dotted plot shows the coefficients of the defocus error calculated using the two-subaperture method, given by Eq. (9), for each layer of the grape sample, whereas the solid red line shows the coefficients calculated theoretically from the optical distance of each layer from the focal plane, given by Eq. (10). At first glance this may give the impression that the subaperture method is inaccurate in estimating the defocus error at greater depth. We therefore investigated the effect of the theoretically calculated defocus correction on the visual quality of the images and compared it with the defocus corrected images obtained using the two-subaperture method. We found that the images obtained with both methods appear similar in visual quality up to a depth of about 380 µm. At greater depths, however, the images obtained using the theoretically calculated defocus corrections appear blurry, whereas the images obtained with the two-subaperture method remain in sharp focus. This shows that for deeper layers the theoretically calculated defocus error requires further optimization or processing to achieve the level of performance demonstrated by the single step subaperture method. Media 1 shows a fly-through along the depth of the original and digitally refocused 3-D volume images of the grape sample. Here we can clearly see that the same lateral resolution is maintained throughout the depth after defocus correction.
Fig. 12: (a) Original grape sample layer at z = 285 µm; (b) image obtained by applying the theoretically calculated phase error correction to (a); (c) theoretically estimated phase error in radians for (b); (d) defocus corrected image of (a) obtained using the two-subaperture method; (e) estimated phase error in radians for case (d). (f) Original grape sample layer at z = 511 µm; (g) image obtained by applying the theoretically calculated phase error correction to (f); (h) theoretically estimated phase error in radians for (g); (i) defocus corrected image of (f) obtained using the two-subaperture method; (j) estimated phase error in radians for case (i) (see Media 1).
The result shown in Fig. 12(b) shows that at the smaller depth of z = 285 µm the image obtained by applying the theoretically calculated defocus correction is visually sharp. However, at z = 511 µm the resulting image appears blurry, as shown in Fig. 12(g). On the other hand, the images obtained by the subaperture method appear in sharp focus at both z = 285 µm and z = 511 µm, as shown in Figs. 12(d) and 12(i) respectively. This shows that the higher P-V defocus errors estimated by the two-subaperture method at greater depths do in fact yield better focusing performance. The difference between the theoretically calculated defocus error and the one estimated using the two-subaperture method may arise because the theoretical calculation assumes a sample of uniform refractive index. The sample, however, may have random local refractive index fluctuations which cause spherical aberration, and with increasing depth this effect can accumulate. In our experimental result in Fig. 9, we demonstrated that the two-subaperture method is capable of balancing spherical aberration to achieve a focused image; the better results obtained for the deeper layers of the grape sample using the subaperture method further support this. Quantification of the induced spherical aberration requires accurate knowledge of the refractive index fluctuations within the sample, as well as theoretical modelling of the sample and the system, which is outside the scope of the present paper and will be investigated in the future. It is also difficult to pinpoint the reason behind the abrupt strong defocus induced in the images beyond a certain depth, which is noticeable in Media 1; this is currently under investigation. Nevertheless, we have clearly demonstrated that the two-subaperture method can correct for the defocus and spherical aberrations commonly encountered when imaging biological samples.
5. Discussion and conclusion
In this paper we investigated subaperture correlation based phase correction as a post-processing technique for interferometric imaging in which the phase information can be accessed. In the theoretical analysis we showed that the method is independent of any system parameters such as propagation distance or physical detector size. We can correct for the aberrations present in the images without knowing the system details, provided the image is bandlimited and not severely aliased. The method works very well when the spatial frequency content is spread uniformly across the pupil, which is the case when imaging a diffusely scattering object with laser light. We further divided the pupil plane into overlapping subapertures to obtain more sampling points and thereby reduce the error in the phase error estimation. We used Taylor monomials, which are simple and easy to use, for fitting the phase estimate, and showed in our simulation that the least squares method for finding the monomial coefficients is reliable. We also conducted simple experiments as a proof of principle using a FF SSOCT system, and were able to reduce the aberrations introduced by the sample using subaperture processing. The images obtained after processing clearly show an improvement in resolution. The method does not require long iterations, as is the case for the optimization methods used for phase correction [10, 11], but provides the phase error estimate in a single step. It can thus be seen as a digital equivalent of a Shack-Hartmann wavefront sensor. We have demonstrated the method using Taylor monomials as basis functions, which are not orthogonal over the square aperture. Hence, in the presence of higher order aberrations there may be coupling of errors between the coefficients of the Taylor monomials, which makes accurate quantification of the contributions of the various aberration components, such as defocus or spherical aberration, difficult.
Nevertheless, we have demonstrated that by using Taylor monomials we can reduce the overall aberration to achieve performance close to the diffraction limit. In the future it would be interesting to use Zernike-like polynomials that are orthogonal over the sampled square aperture, which would allow an accurate quantification of the various higher order aberration contributions. A potential application would be the characterization and quantification of the aberrations present in the human eye while imaging the retina. For the results presented in this paper, the cross correlation of the subapertures was performed sequentially in MATLAB; the processing time for each frame is around 2 to 4 seconds. However, the speed can be greatly improved with parallel processing on a graphics processing unit (GPU). In case of 3-D imaging, the phase error correction can be applied throughout the isoplanatic volume, as done in earlier work where the aberration is uniform and the sample has a uniform refractive index, using the phase error estimate from a single layer. Furthermore, subaperture processing opens the possibility of region of interest based phase correction if the aberrations and the sample are not uniform across the volume.
6. Appendix A
Let two square subapertures of the discretized, filtered and segmented pupil data be given by Eqs. (14) and (15); the corresponding image fields follow by using the Fourier shift theorem. These are low resolution versions of the full aperture image, and we assume that both have the same intensity. This is a valid assumption if the backscattered light from the sample is diffuse, which is generally the case when imaging tissue with coherent illumination. However, the Gaussian profile of the illumination and the presence of specular reflections from the sample can cause the intensities to vary between subapertures; we therefore normalize the intensities of the images to the same level. We subsequently cross correlate the two intensity images to determine the relative shift between them, from which we derive the relative slope information as given in Eq. (7).
We model the phase error as a linear combination of Taylor monomials, given by Eq. (16). Evaluating the local slopes of Eq. (18) at the locations of the central pixels of each subaperture, we obtain a solution in matrix form, Eq. (19), whose least squares solution yields the monomial coefficients of Eq. (16). We work with Taylor monomials because they are simple to use. Zernike polynomials could be used as well, but one has to ensure that they are orthogonal over the sampled aperture, and the solutions differ for apertures of different shapes. We show in our simulations that Taylor monomials work well enough for the determination of the phase error.
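The least squares fit of the monomial coefficients to the measured slopes can be sketched as follows (our own notation; piston and global tilt are excluded from the basis, since relative image shifts do not constrain them):

```python
import numpy as np

def fit_monomials(centers, slopes_x, slopes_y, order=3):
    """Least-squares fit of Taylor monomial coefficients a_mn (for
    x^m * y^n, 2 <= m+n <= order) to measured local wavefront slopes
    d(phi)/dx and d(phi)/dy at the given subaperture centers."""
    terms = [(m, s - m) for s in range(2, order + 1) for m in range(s + 1)]
    x, y = centers[:, 0], centers[:, 1]
    # Each column holds one monomial's analytic derivative at all centers.
    Ax = np.column_stack([m * x**max(m - 1, 0) * y**n for m, n in terms])
    Ay = np.column_stack([n * x**m * y**max(n - 1, 0) for m, n in terms])
    A = np.vstack([Ax, Ay])
    b = np.concatenate([slopes_x, slopes_y])
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    return terms, coeffs
```

Stacking the x- and y-slope equations into one overdetermined system mirrors the classic Shack-Hartmann modal reconstruction, here applied to the digitally measured subaperture shifts.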
This research was supported by Carl Zeiss Meditec Inc., Dublin, USA, the Medical University of Vienna, the European projects FAMOS (FP7 ICT, contract no. 317744) and FUN OCT (FP7 HEALTH, contract no. 201880), and the Christian Doppler Society (Christian Doppler Laboratory “Laser development and their application in medicine”).
References and links
1. B. C. Platt and R. Shack, “History and principles of Shack-Hartmann wavefront sensing,” J. Refract. Surg. 17(5), S573–S577 (2001). [PubMed]
2. J. L. Beverage, R. V. Shack, and M. R. Descour, “Measurement of the three-dimensional microscope point spread function using a Shack-Hartmann wavefront sensor,” J. Microsc. 205(1), 61–75 (2002). [CrossRef] [PubMed]
3. M. Rueckel, J. A. Mack-Bucher, and W. Denk, “Adaptive wavefront correction in two-photon microscopy using coherence-gated wavefront sensing,” Proc. Natl. Acad. Sci. U.S.A. 103(46), 17137–17142 (2006). [CrossRef] [PubMed]
4. M. Pircher and R. J. Zawadzki, “Combining adaptive optics with optical coherence tomography: Unveiling the cellular structure of the human retina in vivo,” Expert Rev. Ophthalmol. 2(6), 1019–1035 (2007). [CrossRef]
5. K. Sasaki, K. Kurokawa, S. Makita, and Y. Yasuno, “Extended depth of focus adaptive optics spectral domain optical coherence tomography,” Biomed. Opt. Express 3(10), 2353–2370 (2012). [CrossRef] [PubMed]
8. T. Haist, J. Hafner, M. Warber, and W. Osten, “Scene-based wavefront correction with spatial light modulators,” Proc. SPIE 7064, 70640M, 70640M-11 (2008). [CrossRef]
9. A. E. Tippie and J. R. Fienup, “Sub-Aperture Techniques Applied to Phase-Error Correction in Digital Holography,” in Digital Holography and Three-Dimensional Imaging, OSA Technical Digest (CD) (Optical Society of America, 2011), paper DMA4. http://www.opticsinfobase.org/abstract.cfm?URI=DH-2011-DMA4 [CrossRef]
11. S. G. Adie, B. W. Graf, A. Ahmad, P. S. Carney, and S. A. Boppart, “Computational adaptive optics for broadband optical interferometric tomography of biological tissue,” Proc. Natl. Acad. Sci. U.S.A. 109(19), 7175–7180 (2012). [CrossRef] [PubMed]
12. P. Hariharan, Optical Interferometry (Academic, 2003).
13. D. Malacara, Optical Shop Testing (Wiley, 1992).
15. W. Drexler and J. G. Fujimoto, Optical Coherence Tomography: Technology and Applications (Springer, 2008).
17. A. E. Tippie, A. Kumar, and J. R. Fienup, “High-resolution synthetic-aperture digital holography with digital phase and pupil correction,” Opt. Express 19(13), 12027–12038 (2011). [CrossRef] [PubMed]