## Abstract

We describe a simple and robust approach for characterizing the spatially varying pupil aberrations of microscopy systems. In our demonstration with a standard microscope, we derive the location-dependent pupil transfer functions by first capturing multiple intensity images at different defocus settings. Next, a generalized pattern search algorithm is applied to recover the complex pupil functions at ~350 different spatial locations over the entire field-of-view. Parameter fitting transforms these pupil functions into accurate 2D aberration maps. We further demonstrate how these aberration maps can be applied in a phase-retrieval based microscopy setup to compensate for spatially varying aberrations and to achieve diffraction-limited performance over the entire field-of-view. We believe that this easy-to-use spatially-varying pupil characterization method may facilitate new optical imaging strategies for a variety of wide field-of-view imaging platforms.

© 2013 OSA

## 1. Introduction

The characterization of optical system aberrations is critical in such applications as ophthalmology, microscopy, photolithography, and optical testing [1]. Knowledge of these different imaging platforms’ aberrations allows users to predict the achievable resolution, and permits system designers to correct aberrations either actively through adaptive optics or passively with post-detection image deconvolution. Digital aberration removal techniques play an especially prominent role in computational imaging platforms aimed at achieving simple and compact optical arrangements [2]. A recent important class of such platforms is geared towards efficiently creating gigapixel images with high resolution over a wide field-of-view (FOV) [2, 3]. Given the well-known linear scaling relationship between the influence of aberrations and imaging FOV [4], it is critical to characterize their effect before camera throughput can be successfully extended to the gigapixel scale.

Over the past half-century, many unique aberration characterization methods have been reported [5–17]. Each of these methods attempts to estimate the phase deviations or the frequency response of the optical system under testing. Several relatively simple non-interferometric procedures utilize a Shack-Hartmann wavefront sensor [11–13], consisting of an array of microlenses that each focus light onto a detector. The local tilt of an incident wavefront across one microlens can be calculated from the position of its detected focal spot. Using the computed local tilts from the microlenses across the entire array, the amplitude and phase of the incident wavefront can be directly approximated. Despite offering high accuracy, measuring aberrations with a Shack-Hartmann sensor often requires considerable modification to an existing optical setup. For example, insertion and removal of the wavefront sensor from the imaging platform’s pupil plane requires additional relay lenses, each subject to their own aberrations and possible misalignments.

Alternatively, wavefront aberrations can be inferred directly from intensity measurements by relying upon phase retrieval procedures [18–24]. A common phase retrieval-based strategy is to introduce phase diversity [18, 24] between multiple measurements of the intensity of an optical field. Phase diversity may be introduced either with additional optical elements or by simply inducing system defocus. Various methods for phase retrieval using defocus diversity have been reported in literature, including transport-of-intensity equation (TIE) based methods [25–28], iterative algorithms [29] and other non-iterative methods [30, 31].

By applying defocus diversity in microscopy systems, it has been shown that the complex pupil function of a high numerical aperture (NA) microscope objective lens [19, 20, 23] can be characterized with intensity-only measurements. These previous approaches, however, operated under the simplified assumption that an objective lens's aberrations do not exhibit any variation across its image plane [18–20, 23]. This approximation of a shift-invariant point-spread function (PSF) only remains valid for objective lenses exhibiting a very small FOV. The variability of off-axis aberrations must be considered in microscopy systems or advanced computational imaging platforms that are designed to provide a very wide FOV, as their aberrated PSFs will vary significantly in shape across the image plane. Such systems, geared for example towards gigapixel photography [2, 3] and whole-slide imaging [32], typically exhibit aberrations that increase as a function of distance from the image center. While prior work reports measurement of such spatially varying aberrations in lithography systems [33, 34], these approaches unfortunately require custom-designed reticle masks and interferometry setups. Precise optical alignment and specialized sample preparation are unavoidably involved, which prevents their generalized implementation within other optical pipelines.

In this paper, we describe a characterization method that is able to map spatially varying aberrations in a robust, cost-effective and easy-to-implement manner. In brief, this method operates by collecting a set of intensity images of a calibration sample at various defocus planes. The sample must contain identical discretized objects spread over its entire viewing area. In combination with a phase-retrieval algorithm, our method first recovers the phase-and-amplitude profile of a target object located at the center of the FOV. This complex profile then serves as the ground truth image of the object (i.e., image with minimal aberration). Next, our method automatically identifies another target object at an off-axis location and initializes a set of aberration parameters at that location. We then use this set of aberration parameters, in combination with the recovered ground truth image, to generate a set of aberrated intensity images for the same number of defocus planes. For each off-axis location, we recover its associated aberration parameters by minimizing the difference between the generated aberrated intensity images and the collected experimental data. Finally, we apply the recovered off-axis aberration parameters (from ~350 locations in our experiment) to generate continuous 2D aberration function maps by parameter fitting.

To demonstrate the utility of the recovered 2D aberration maps, we experimentally show how they can be used in combination with a phase-retrieval method to render images with improved resolution performance – spatially varying aberrations can be compensated by using an information-preserving image deconvolution scheme.

This paper is structured as follows: In Section 2, we briefly review some of the concepts essential to the context of our work, including phase retrieval and spatially varying pupil aberrations. In Section 3, we describe our experimental setup and the calibration sample. In Section 4, we detail our procedure for pupil function recovery at one location off the optical axis. In Section 5, we explain how to automate the aberration characterization process, experimentally demonstrate the automated measurement of spatially varying aberration weights, and show how these weights can yield accurate 2D aberration function estimates. In Section 6, we demonstrate a specific application of these aberration function maps – improving the resolution performance of phase retrieval-based image rendering across the entire imaging FOV. Finally, we end with a discussion of some of the advantages and limitations of the reported method.

## 2. Overview of phase retrieval and spatially varying pupil aberrations

#### 2.1 Phase retrieval and defocus diversity

The first concept essential to our work is the application of the phase retrieval algorithm using defocus diversity. As in many inverse problems, a common formulation of the phase retrieval problem is to seek a complex field solution that is consistent with measurements of its intensity. The Gerchberg-Saxton algorithm [35], as well as its related error reduction algorithm [36–38], were the first widely used numerical schemes for this type of problem. They consist of alternating enforcement of known information in the spatial and/or Fourier domains. Although phase retrieval algorithms work well for many cases of interest, stagnation and ambiguity problems are known to prevent strict convergence. A technique termed phase diversity has been developed to overcome these limitations [18, 24, 29, 39, 40]. This technique relies on measuring multiple intensity patterns with a known modification to the optical setup applied between each measurement. The set of captured images, along with the knowledge of the diversity function, is then used to iteratively converge to a complex field that agrees with each measurement. Stagnation and ambiguity problems are overcome by providing a set of measurements that more robustly constrain the phase retrieval process. The resulting increase in accuracy has been demonstrated through an analysis of the Cramér-Rao lower bound [41].

In this work, we apply defocus diversity [29, 37] to perform phase retrieval within a conventional microscope. Two or more images must be captured with known defocus distances, as shown in Fig. 1(a). Based on these intensity measurements *I*(s) (s = −2, −1, 0, 1, 2 in Fig. 1(a)) at different defocus planes, we follow the multi-plane iterative algorithm outlined in Fig. 1(b) [29]. In this algorithm, we first initialize a complex estimate of the object function. This complex estimate is then propagated to one defocus plane (multiplication by a quadratic phase factor in the Fourier domain [42]). After propagation, the amplitude of the estimate is replaced by the square root of the corresponding measurement *I*(s), while the phase is kept unchanged. Such a propagate-and-replace process is repeated until the complex solution converges (see Section 4 for implementation details).
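To make the defocus-diversity propagation step concrete, the following Python sketch digitally defocuses a complex field by multiplying its Fourier transform with a quadratic phase factor. The function name, the paraxial (quadratic) kernel, and the parameter defaults are our own illustrative choices, not taken from the authors' implementation.

```python
import numpy as np

def defocus(field, z, wavelength, pixel_size):
    """Numerically defocus a complex field by axial distance z (paraxial sketch).

    Multiplies the field's Fourier transform by a quadratic phase factor,
    i.e., the defocus-diversity step described in the text.
    """
    n = field.shape[0]
    k = 2 * np.pi / wavelength                       # wave number
    fx = np.fft.fftfreq(n, d=pixel_size)             # spatial frequencies (1/m)
    kx, ky = np.meshgrid(2 * np.pi * fx, 2 * np.pi * fx)
    # Paraxial (quadratic) approximation of the free-space propagation kernel
    kernel = np.exp(1j * z * (k - (kx**2 + ky**2) / (2 * k)))
    return np.fft.ifft2(np.fft.fft2(field) * kernel)
```

Because the kernel has unit modulus, propagation conserves energy and defocusing by −z exactly undoes defocusing by +z, which is what makes the propagate-and-replace iteration below well-posed.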

#### 2.2 Spatially varying pupil aberrations

Second, an understanding of spatially varying pupil aberrations is important to fully appreciating the impact of our work. In an aberration-free coherent imaging system, the light field distribution at the pupil plane (i.e., the back focal plane of the objective lens) is directly proportional to the Fourier transform of the light field at the object plane. Therefore, the spatial coordinates at the object plane and the pupil plane can be expressed as (*x*, *y*) and (*k*_{x}, *k*_{y}), respectively, with *k*_{x} and *k*_{y} as the wave numbers in the *x* and *y* directions. Due to such a Fourier relationship, aberrations of an imaging platform are often characterized at the pupil plane for simplicity [42]. Different types of aberrations can be quantified as different Zernike modes at the pupil plane. For example, defocus aberration can be modeled as a phase factor *p*_{5}${Z}_{2}^{0}({k}_{x},{k}_{y})$, where ${Z}_{2}^{0}({k}_{x},{k}_{y})$ denotes the corresponding Zernike polynomial for this aberration (here a quadratic function), while coefficient *p*_{5} denotes the amount of defocus aberration (subscript '5' indicates the fifth Zernike mode).

A more complete aberration model uses the generalized pupil function *W*(*k*_{x}, *k*_{y}), whose phase factor is a summation of different Zernike modes with different aberration coefficients *p*_{m} (*p*_{m} denotes the amount of the *m*^{th} Zernike mode; refer to Eq. (1) in Section 4). If the imaging platform is shift-invariant, each aberration coefficient *p*_{m} is constant over the entire imaging FOV and the generalized pupil function *W*(*k*_{x}, *k*_{y}) is independent of the spatial coordinates *x* and *y*. However, as noted above, recent extreme-FOV computational imaging platforms push beyond the limits of conventional lens design and thus invalidate this shift-invariant assumption. In this case the aberration coefficients *p*_{m} become 2D functions of *x* and *y*, and thus the generalized pupil function can be expressed as a function of both (*k*_{x}, *k*_{y}) and (*x*, *y*), i.e., *W*(*k*_{x}, *k*_{y}, *x*, *y*). Our goal here is to characterize the aberration parameters *p*_{m} (*m* = 1, 2, …) as a function of the spatial coordinates *x* and *y*. Based on *p*_{m}(*x*, *y*), we can derive the generalized pupil function *W*(*k*_{x}, *k*_{y}, *x*, *y*) at any given spatial location (Section 5) and accurately perform post-detection image deconvolution (Section 6).
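As a rough illustration of this pupil model, the sketch below assembles a pupil function from a few Zernike weights over a circular aperture. The NA and wavelength follow the text, but the function name, mode normalizations, pixel size, and grid size are our own assumptions.

```python
import numpy as np

def pupil_function(coeffs, n=64, na=0.08, wavelength=632e-9, pixel_size=2.75e-6):
    """Sketch of a generalized pupil function W(kx, ky) at one field location.

    `coeffs` maps a few illustrative Zernike mode names to weights p_m;
    the parameter defaults are assumptions, not the paper's calibration.
    """
    fx = np.fft.fftshift(np.fft.fftfreq(n, d=pixel_size))
    kx, ky = np.meshgrid(2 * np.pi * fx, 2 * np.pi * fx)
    k_cut = 2 * np.pi * na / wavelength          # coherent cutoff frequency
    rho = np.sqrt(kx**2 + ky**2) / k_cut         # normalized pupil radius
    theta = np.arctan2(ky, kx)
    zernike = {                                  # a few modes on the unit disk
        "defocus":   np.sqrt(3) * (2 * rho**2 - 1),
        "astig_x":   np.sqrt(6) * rho**2 * np.cos(2 * theta),
        "coma_x":    np.sqrt(8) * (3 * rho**3 - 2 * rho) * np.cos(theta),
        "spherical": np.sqrt(5) * (6 * rho**4 - 6 * rho**2 + 1),
    }
    phase = sum(p * zernike[name] for name, p in coeffs.items())
    aperture = (rho <= 1.0)                      # 100% transmissive circular pupil
    return aperture * np.exp(1j * phase)
```

In a spatially varying model each weight in `coeffs` would itself be a function of the field position (x, y), evaluated at the location of interest before building the pupil.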

## 3. Experimental setup and sample preparation

In our experiment, we used a conventional upright microscope (BX 41, Olympus) with a 2X apochromatic lens (0.08 NA, Olympus) and a full-frame CCD camera (KAI-29050, Kodak). The tested objective lens has a relatively large FOV (~1.3 cm in diameter) with the potential to facilitate whole-slide imaging for a variety of applications [32]. However, scale-dependent geometric aberrations confound any attempt to directly capture images at a resolution commensurate with the specified NA uniformly across the entire image plane [4]. While aberrations are well-corrected near the optical axis, significant blur deteriorates image quality towards the FOV’s edge.

To characterize these spatially varying aberrations of the objective lens, we first create a calibration “target” sample containing identical discretized objects over its full viewing area. While several convenient targets exist, we found that simply spin-coating a layer of 10-micron diameter microspheres (Polysciences, Inc.) on top of a microscope slide offered an ideal calibration sample. Selecting a sparse concentration of microspheres ensures that an automated search algorithm can successfully identify each microsphere, as detailed in Section 5. For example, a slide that contains 350 microspheres distributed randomly over the 1.3 cm FOV associated with the 2X objective works well.

A microsphere target sample is easy to prepare, cost-effective and accessible to the average microscopist. Sample preparation time totals less than 2 minutes. The standard deviation of the microspheres’ size is about 0.3 µm, and thus these calibration objects are effectively identical over the entire FOV. We note that while alternative fabrication methods such as e-beam or photolithography may also generate calibration samples, the aberrations of the lithography lens, the evenness of the photoresist, and the alignment of the mechanical stage would all need to be considered and jointly optimized to minimize unexpected target variations.

## 4. Off-axis pupil function recovery

With a proper calibration target prepared, we are now ready to detail our procedure for pupil function recovery at one location off the optical axis. Assuming the aberrations of the objective lens are minimal (i.e., they are well-corrected) at the center of its FOV, we use images of the object located near the FOV center to serve as the ground truth for other off-axis positions. The proposed characterization approach consists of two primary steps: 1) phase retrieval, and 2) pupil function estimation, as detailed below.

1) *Phase retrieval*. Following the general procedure outlined in Section 2, we displace the microscope stage from the focal plane at *δ* = 50 µm increments in either defocus direction, capturing a total of 17 images of the microsphere calibration target *I*(*s*), where *s* = (−8,…0,…8). The maximum defocus distance with such a scheme is 400 µm in either direction. For each image, the microsphere target is illuminated with a quasi-monochromatic collimated plane wave (632 nm).

Next, we create a 64^{2}-pixel cropped image set *I*_{c}(*s*) that contains one microsphere at the center of the FOV (see Fig. 2, left). We recover the complex profile of this centered microsphere using the multi-plane phase retrieval algorithm [29] from Section 2, detailed briefly as follows. First, an estimate of the complex field is initialized at the object plane. The initial estimate’s phase is set to a constant and its amplitude is set to the square root of the in-focus intensity measurement of the centered microsphere, *I*_{c}(0). Second, this complex field estimate is Fourier transformed and multiplied by a quadratic phase factor exp(*ik*_{z}*z*), describing defocus of the field by an axial distance *z* = *s*∙*δ*. To begin, we set *s* = 1, corresponding to *z* = +50 µm of defocus. Third, after digitally defocusing, we again replace the amplitude values of the complex field estimate with the square root of the intensity data from the recorded image, *I*_{c}(*s*). Beginning with *s* = 1, we first use the intensity values *I*_{c}(1) captured at *z* = +50 µm for amplitude value replacement, while the estimate’s phase values remain unchanged. This digital propagate-and-replace process is repeated for all values of *s* (all 17 cropped intensity measurements from the captured focal stack). Finally, we iterate the entire phase retrieval loop approximately 10 times. The final recovered complex image, denoted as $\sqrt{{I}_{truth}}{e}^{i{\phi}_{truth}}$, serves as a “ground truth” estimate of the complex field from a minimally aberrated microsphere, which may be digitally refocused to any position of interest.
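The propagate-and-replace loop described above can be sketched in a few lines of Python. This is a simplified stand-in (constant-phase initialization, paraxial defocus kernel, illustrative names), not the authors' code.

```python
import numpy as np

def multiplane_phase_retrieval(stack, z_list, wavelength, pixel_size, n_iter=10):
    """Propagate-and-replace phase retrieval from a defocus stack (sketch).

    `stack[s]` is the measured intensity at defocus z_list[s]; the in-focus
    frame is assumed to sit at the middle of the stack.
    """
    n = stack[0].shape[0]
    k = 2 * np.pi / wavelength
    fx = np.fft.fftfreq(n, d=pixel_size)
    kx, ky = np.meshgrid(2 * np.pi * fx, 2 * np.pi * fx)
    quad = k - (kx**2 + ky**2) / (2 * k)         # quadratic phase rate

    def prop(field, z):                          # digital defocus by z
        return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * z * quad))

    # Initialize: in-focus amplitude, constant (zero) phase
    est = np.sqrt(stack[len(stack) // 2]).astype(complex)
    z_now = 0.0
    for _ in range(n_iter):
        for meas, z in zip(stack, z_list):
            est = prop(est, z - z_now)           # propagate to plane s
            z_now = z
            # Replace amplitude with measurement, keep the phase
            est = np.sqrt(meas) * np.exp(1j * np.angle(est))
    return prop(est, -z_now)                     # return to the object plane
```

With 17 planes at δ = 50 µm spacing and roughly 10 outer iterations, this loop mirrors the procedure in the text.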

2) *Off-axis pupil function estimation*. Next, we select a microsphere at a position (*x*_{0}, *y*_{0}) off the optical axis and generate a new 64^{2}-pixel cropped image set *I*_{d}(*s*) from our initial measurements, centered at (*x*_{0}, *y*_{0}) (see Fig. 2). We also initialize an estimate of the unknown location-dependent pupil function for this position, *W*(*k*_{x}, *k*_{y}, *x*_{0}, *y*_{0}). For simplicity, we approximate this unknown pupil function with 8 Zernike modes, ${Z}_{1}^{-1}$, ${Z}_{1}^{1}$, ${Z}_{2}^{-2}$, ${Z}_{2}^{2}$, ${Z}_{2}^{0}$, ${Z}_{3}^{-1}$, ${Z}_{3}^{1}$ and ${Z}_{4}^{0}$, corresponding to *x*-tilt, *y*-tilt, *x*-astigmatism, *y*-astigmatism, defocus, *x*-coma, *y*-coma and spherical aberration, respectively [1]. The point-spread function at the selected off-axis microsphere location (*x*_{0}, *y*_{0}) may be uniquely influenced by each mode above. We denote the coefficient for each Zernike mode with *p*_{m}(*x*_{0}, *y*_{0}), where the subscript '*m*' stands for the mode’s polynomial expansion order (in our case, *m* = 1, 2, …, 8). With this notation, our unknown pupil function estimate *W*(*k*_{x}, *k*_{y}, *x*_{0}, *y*_{0}) can be expressed as

$$W({k}_{x},{k}_{y},{x}_{0},{y}_{0})=\mathrm{exp}\left[i\sum _{m=1}^{8}{p}_{m}({x}_{0},{y}_{0}){Z}_{m}({k}_{x},{k}_{y})\right],\qquad (1)$$

where *p*_{m}(*x*_{0}, *y*_{0}) is a space-dependent function evaluated at (*x* = *x*_{0}, *y* = *y*_{0}), allowing the pupil function *W* to model spatially varying aberrations. This pupil function estimate is then used along with the “ground truth” complex field of the centered microsphere found in step 1 to generate a set of aberrated intensity images, *I*_{a}(*s*), as follows:

$${I}_{a}(s)={\left|{\mathcal{F}}^{-1}\left\{\mathcal{F}\left\{\sqrt{{I}_{truth}}{e}^{i{\phi}_{truth}}\right\}\cdot W({k}_{x},{k}_{y},{x}_{0},{y}_{0})\cdot {e}^{i{k}_{z}\delta s}\right\}\right|}^{2},\qquad (2)$$

where $\mathcal{F}$ denotes the two-dimensional Fourier transform and ${e}^{i{k}_{z}\delta s}$ represents defocus of the ground truth microsphere field to plane *s*. We then adjust the values of the 8 unknown Zernike coefficients *p*_{m} comprising the pupil function *W* to minimize the difference between this modeled set of aberrated intensity images *I*_{a}(*s*) and the actual set of intensity measurements of the selected off-axis microsphere, *I*_{d}(*s*). The corresponding pupil function described by 8 Zernike coefficients is recovered when the mean-squared error difference is minimized. We apply a Generalized Pattern Search (GPS) algorithm [43] to solve the following nonlinear optimization problem for pupil function recovery:

$$\underset{{p}_{1},\dots ,{p}_{8}}{\mathrm{min}}\sum _{s}\sum _{x,y}{\left[{I}_{a}(s)-{I}_{d}(s)\right]}^{2}.\qquad (3)$$
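The optimizer itself admits a compact sketch. The minimal coordinate pattern search below (a poll step with mesh halving) stands in for the cited GPS algorithm [43]; in practice `objective` would evaluate the mean-squared error between the modeled and measured intensity stacks, while here it is left generic and all names are illustrative.

```python
import numpy as np

def pattern_search(objective, x0, step=1.0, tol=1e-6, max_iter=500):
    """Minimal generalized pattern search (poll step only), a sketch of the
    derivative-free optimizer used to fit the 8 Zernike coefficients."""
    x = np.asarray(x0, dtype=float)
    f = objective(x)
    it = 0
    while step > tol and it < max_iter:
        improved = False
        for i in range(x.size):                  # poll along coordinate axes
            for d in (+step, -step):
                trial = x.copy()
                trial[i] += d
                ft = objective(trial)
                if ft < f:                       # accept any improving point
                    x, f, improved = trial, ft, True
                    break
        if not improved:
            step *= 0.5                          # shrink the mesh on a failed poll
        it += 1
    return x, f
```

For example, minimizing a toy quadratic misfit `(p1 - 1)^2 + (p2 - 2)^2` from a zero initialization converges to (1, 2); the real objective simply swaps in the forward model for the aberrated image stack.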

## 5. Spatially varying aberration characterization over the entire FOV

Repeating the previous section’s off-axis aberration recovery scheme for many different microspheres spread over the image plane, we are able to characterize a microscope objective’s spatially varying aberrations over its entire FOV. The center of each microsphere is automatically identified using a marker-controlled watershed segmentation algorithm [44]. We also measure the distance between each marked microsphere and its nearest neighbor. Any microsphere within a 150 µm radius of a neighbor is automatically skipped to avoid multiple computations at sphere clusters.
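The cluster-skipping rule above is simple to express in code. The sketch below keeps only microsphere centers whose nearest neighbor is farther than 150 µm; the function name is our own, and a brute-force O(N²) distance matrix is fine for ~350 points.

```python
import numpy as np

def filter_isolated(centers_um, min_dist_um=150.0):
    """Keep only points whose nearest neighbor is farther than min_dist_um,
    mirroring the rule used to skip microsphere clusters (sketch)."""
    pts = np.asarray(centers_um, dtype=float)
    # Pairwise Euclidean distances between all detected centers
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)                  # ignore self-distance
    return pts[d.min(axis=1) > min_dist_um]
```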

Figure 3(a) shows a full FOV image of the calibration target, with each of the ~350 microspheres denoted by a red dot. For each microsphere, we recover the same 8 location-specific Zernike coefficients. For example, Fig. 3(b) shows the pupil function *W* recovered following Eq. (3) at position (*x*_{1}, *y*_{1}), the center of the black square in Fig. 3(a). Figures 3(c1)-(c5) are 5 of the 17 intensity measurements of the microsphere at position (*x*_{1}, *y*_{1}) under different amounts of defocus: *I*_{d}(*s* = 0), *I*_{d}(*s* = ±3), and *I*_{d}(*s* = ±6). Figures 3(d1)-(d5) display the corresponding aberrated image estimates *I*_{a}(*s*) generated by the recovered pupil function in Fig. 3(b). Following the convex form of Eq. (3), the applied GPS algorithm successfully minimizes the mean-squared error difference between the measurements *I*_{d}(*s*) and the estimates *I*_{a}(*s*).

Following this aberration recovery pipeline, 8 Zernike coefficients are calculated for approximately 350 unique spatial locations across the microscope’s FOV. Figures 4(a)-4(f) plot the recovered second, third and fourth order spatially varying aberrations of our tested 2X objective lens, corresponding to x-astigmatism, y-astigmatism, defocus, x-coma, y-coma and spherical aberration respectively (first order Zernike modes are normally not considered as aberrations, and are thus not shown). The full FOV image of our calibration target is displayed at the bottom plane of each plot, where the FOV diameter is 1.3 cm. Each blue dot in Fig. 4 represents the recovered coefficient for the corresponding Zernike mode, and the spatial location of each blue dot corresponds to one microsphere labeled in Fig. 3(a).

Finally, we fit these 350 discrete values to a continuous polynomial function *p*_{m}(*x*, *y*), allowing us to accurately recover the pupil function at any location across the image plane (curved surfaces in Fig. 4). The order of each polynomial function can be predicted via aberration theory for a conventional imaging platform [1]. The aberrations of increasingly unconventional optical designs in computational imaging systems may not follow such predictable trends, which we may account for with alternative fitting models and/or by recovering coefficients at more than 350 unique spatial locations.
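This fitting step can be sketched as an ordinary least-squares fit of each coefficient set to a low-order 2D polynomial surface. The function name and the default degree are assumptions for illustration; the appropriate degree would follow from aberration theory as noted above.

```python
import numpy as np

def fit_aberration_surface(x, y, p, degree=3):
    """Least-squares fit of scattered coefficients p_m(x, y) to a 2D polynomial
    surface (the curved surfaces of Fig. 4). Returns a callable surface."""
    # All monomials x^i * y^j with total degree <= `degree`
    terms = [(i, j) for i in range(degree + 1) for j in range(degree + 1 - i)]
    A = np.column_stack([x**i * y**j for i, j in terms])
    coef, *_ = np.linalg.lstsq(A, p, rcond=None)

    def surface(xq, yq):
        xq, yq = np.asarray(xq, float), np.asarray(yq, float)
        Aq = np.column_stack([xq**i * yq**j for i, j in terms])
        return Aq @ coef

    return surface
```

Evaluating the fitted surfaces for all 8 modes at an arbitrary (x, y) then yields the local Zernike weights needed to build the pupil function there.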

We verified the accuracy of our aberration parameter recovery process with an additional simple experiment. We defocused the calibration target by +50 µm along the optical axis and again implemented our aberration parameter recovery process (using the same ground truth images as before). For the tested wide-field microscope objective, Fig. 5 displays two of these fitted polynomial functions for spatially varying defocus: one computed for an in-focus target and one for the target under +50 µm of defocus. The major difference between the two polynomial fits is a constant offset corresponding to Δz = 48.9 µm, which is in good agreement with the experimentally induced +50 µm displacement distance. As a reference, the depth-of-focus of the objective lens is about 80 µm.

## 6. Image deconvolution using the recovered aberration parameters

We will now demonstrate that our recovered 2D aberration maps can be used in an image deconvolution process to render images with improved resolution performance. The image deconvolution process is comprised of two main steps: 1) phase retrieval, 2) segment decomposition and shift-invariant image deconvolution, as outlined below.

1) *Full-FOV phase retrieval.* We use the multi-plane phase retrieval algorithm described in Section 2 to recover the amplitude and phase of a sample over the microscope’s entire FOV. This complex image contains the objective lens’s spatially varying aberrations.

2) *Segment decomposition and shift-invariant image deconvolution.* We then divide the full-FOV complex image into smaller 128 x 128 pixel image segments, denoted by *I*_{seg}(*n*) (*n* = 1, 2, … 1600 for our employed detector). Aberrations within each small segment are treated as shift-invariant, a common strategy for wide-FOV image processing [45]. The pupil function *W*(*k*_{x}, *k*_{y}, *x*_{c}(*n*), *y*_{c}(*n*)) is then calculated for each small segment following Eq. (1), where (*x*_{c}(*n*), *y*_{c}(*n*)) represents the central spatial location of the *n*^{th} segment. We then perform image deconvolution to recover the corrected image segment *I*_{cor}(*n*) as follows:

$${I}_{cor}(n)={\left|{\mathcal{F}}^{-1}\left\{\frac{\mathcal{F}\left\{{I}_{seg}(n)\right\}}{W({k}_{x},{k}_{y},{x}_{c}(n),{y}_{c}(n))}\right\}\right|}^{2}.\qquad (4)$$
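A minimal numerical sketch of this per-segment inverse filtering follows. The Tikhonov-style `eps` regularizer of the division is our own assumption (a bare division is unstable near pupil zeros), and all names are illustrative.

```python
import numpy as np

def deconvolve_segment(field_seg, W, eps=1e-3):
    """Invert the coherent transfer function for one image segment (sketch).

    `field_seg` is the complex field of the segment from phase retrieval and
    `W` its local pupil function (zero-frequency at the array center).
    """
    F = np.fft.fft2(field_seg)
    Wk = np.fft.ifftshift(W)                     # align pupil with FFT ordering
    # Regularized division by the coherent transfer function
    corrected = np.fft.ifft2(F * np.conj(Wk) / (np.abs(Wk)**2 + eps))
    return np.abs(corrected)**2                  # corrected intensity image
```

Applying this to every segment with its own locally evaluated pupil function, then stitching the results, yields the corrected full-FOV image described next.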

To characterize the resolution performance of the above deconvolution process at different image plane locations, we perform a first experiment using a shifted USAF resolution target as our sample. Figures 6(b1)-(d1) are the raw image segments *I*_{seg} directly captured using the aberrated objective lens, while Figs. 6(b2)-(d2) are the corresponding processed images *I*_{cor} using Eq. (4). From Figs. 6(b2)-(d2), group 7, element 1 (line width of 3.9 µm) of the USAF target can be resolved, in good agreement with the Abbe diffraction limit of 3.94 µm for our 0.08 NA objective lens. This simple experiment indicates our aberration correction scheme can correct this particular objective’s aberration blur to yield diffraction-limited performance across its entire image FOV.

Based on Eq. (4), we can also recombine all the corrected image segments *I*_{cor}(*n*) to form a corrected full-FOV image. Figure 7 and Fig. 8 show the results of a second experiment, where the full-FOV images of two samples are corrected. An alpha blending algorithm [46] is used to remove edge artifacts at the segment boundaries. Specifically, we cut away 2 pixels at the edge of each segment and use another 5 pixels to overlap with the adjacent portions. This blending comes at a small computational cost of processing the regions of overlap twice.

The sample in Fig. 7 is the calibration target discussed in Section 3, and the sample in Fig. 8 is a new test target with a mixture of microspheres of different diameters (5-20 µm) on a microscope slide. The 4 regions outlined by red squares in Fig. 7(a) and Fig. 8(a) are highlighted for detailed observation. The corresponding pupil functions of these four regions are shown in Figs. 7(b1)-7(e1) and Figs. 8(b1)-8(e1). Figures 7(b2)-7(e2) and Figs. 8(b2)-8(e2) display their associated corrected (i.e., deconvolved) images, while Figs. 7(b3)-7(e3) and Figs. 8(b3)-8(e3) display their original images without aberration correction. From these two examples, it is clear that our aberration characterization procedure can digitally compensate for the spatially varying aberrations across a microscope objective’s full FOV.

Finally, we note that the deconvolution scheme in Eq. (4) is based on inverting the coherent transfer function (i.e., the complex pupil function) of the objective lens. For the case of incoherent illumination, the incoherent optical transfer function can be directly calculated from the complex pupil function through a closed-form equation [42], and image deconvolution can be performed in the Fourier domain accordingly.
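The closed-form relation referenced here is the normalized autocorrelation of the pupil function, which can be computed cheaply via FFTs; a sketch, with the function name our own:

```python
import numpy as np

def incoherent_otf(W):
    """Incoherent OTF as the normalized autocorrelation of the complex pupil
    function, computed via FFTs (sketch for illustration)."""
    # Incoherent PSF is the squared modulus of the coherent impulse response
    psf = np.abs(np.fft.ifft2(np.fft.ifftshift(W)))**2
    otf = np.fft.fft2(psf)
    return otf / otf[0, 0]                       # normalize so OTF(0) = 1
```

Because the incoherent PSF is nonnegative, the resulting OTF magnitude is maximal at zero frequency, as expected; dividing the Fourier spectrum of an incoherent image by this OTF (with suitable regularization) gives the incoherent counterpart of Eq. (4).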

## 7. Conclusion

In summary, we report a phase retrieval-based procedure to efficiently recover the spatially varying wavefront aberrations common in wide-FOV imaging systems. In our demonstration, we applied a generalized pattern search algorithm to measure the spatially varying aberration coefficients of a wide-FOV microscope objective at ~350 off-axis positions. These pupil functions were then used to generate 2D aberration maps by parameter fitting. We demonstrated the application of our characterization process with an example of shift-variant image deconvolution, which successfully accounts for induced aberrations over a 2X objective’s entire FOV (1.3 cm diameter). The proposed computational approach does not require any optical modifications or additional hardware. The entire aberration recovery process is fully automated and easy to implement. We believe the characterization of spatially varying pupil aberrations is an attractive way to quantify the performance of many wide-FOV imaging platforms.

In our characterization scheme, we assume that the aberrations are well-corrected at the center of the FOV. The object located at the center of the FOV serves as the ground truth for off-axis positions. If the objective lens under testing is not well-corrected at the center of the FOV, we can use other well-corrected optics (such as a high NA, small FOV objective) to capture the ground truth image. Finally, we note that future work will be aimed at extending the proposed aberration characterization pipeline beyond recovery of 8 Zernike modes. For more unconventional imaging designs, 10-15 Zernike modes may be required for accurate aberration characterization. A GPU implementation of the proposed pipeline can significantly shorten the associated processing time. Furthermore, this work tested an objective lens with an assumed 100% transmissive circular back aperture. Using our framework to model objective lens apertures with non-perfect transmission, or containing apodizing filters or coded modulation masks, will be an additional future research direction.

## Acknowledgments

We acknowledge funding support from National Institute of Health under Grant No. 1R01AI096226-01.

## References and links

**1. **H. Gross, W. Singer, M. Totzeck, F. Blechinger, and B. Achtner, *Handbook of Optical Systems* (Wiley Online Library, 2005), Vol. 2.

**2. **O. S. Cossairt, D. Miau, and S. K. Nayar, “Scaling law for computational imaging using spherical optics,” J. Opt. Soc. Am. A **28**(12), 2540–2553 (2011). [CrossRef] [PubMed]

**3. **D. J. Brady, M. E. Gehm, R. A. Stack, D. L. Marks, D. S. Kittle, D. R. Golish, E. M. Vera, and S. D. Feller, “Multiscale gigapixel photography,” Nature **486**(7403), 386–389 (2012). [CrossRef] [PubMed]

**4. **A. W. Lohmann, “Scaling laws for lens systems,” Appl. Opt. **28**(23), 4996–4998 (1989). [CrossRef] [PubMed]

**5. **F. Berny and S. Slansky, “Wavefront determination resulting from Foucault test as applied to the human eye and visual instruments,” in *Optical Instruments and Techniques* (Oriel, 1969), pp. 375–386.

**6. **S. Yokozeki and K. Ohnishi, “Spherical aberration measurement with shearing interferometer using Fourier imaging and moiré method,” Appl. Opt. **14**(3), 623–627 (1975). [CrossRef] [PubMed]

**7. **M. Ma, X. Wang, and F. Wang, “Aberration measurement of projection optics in lithographic tools based on two-beam interference theory,” Appl. Opt. **45**(32), 8200–8208 (2006). [CrossRef] [PubMed]

**8. **M. Takeda and S. Kobayashi, “Lateral aberration measurements with a digital Talbot interferometer,” Appl. Opt. **23**(11), 1760–1764 (1984). [CrossRef] [PubMed]

**9. **J. Sung, M. Pitchumani, and E. G. Johnson, “Aberration measurement of photolithographic lenses by use of hybrid diffractive photomasks,” Appl. Opt. **42**(11), 1987–1995 (2003). [CrossRef] [PubMed]

**10. **Q. Gong and S. S. Hsu, “Aberration measurement using axial intensity,” Opt. Eng. **33**(4), 1176–1186 (1994). [CrossRef]

**11. **L. N. Thibos, “Principles of hartmann-shack aberrometry,” in *Vision Science and its Applications*, (Optical Society of America, 2000)

**12. **J. L. Beverage, R. V. Shack, and M. R. Descour, “Measurement of the three - dimensional microscope point spread function using a Shack - Hartmann wavefront sensor,” J. Microsc. **205**(1), 61–75 (2002). [CrossRef] [PubMed]

**13. **L. Seifert, J. Liesener, and H. J. Tiziani, “The adaptive Shack–Hartmann sensor,” Opt. Commun. **216**(4-6), 313–319 (2003). [CrossRef]

**14. **R. G. Lane and M. Tallon, “Wave-front reconstruction using a Shack-Hartmann sensor,” Appl. Opt. **31**(32), 6902–6908 (1992). [CrossRef] [PubMed]

**15. **D. Debarre, M. J. Booth, and T. Wilson, “Image based adaptive optics through optimisation of low spatial frequencies,” Opt. Express **15**(13), 8176–8190 (2007). [CrossRef] [PubMed]

**16. **T. Čižmár, M. Mazilu, and K. Dholakia, “In situ wavefront correction and its application to micromanipulation,” Nat. Photonics **4**(6), 388–394 (2010). [CrossRef]

**17. **M. J. Booth, “Adaptive optics in microscopy,” Philos. Trans. A Math. Phys. Eng. Sci. **365**(1861), 2829–2843 (2007). [CrossRef] [PubMed]

**18. **R. G. Paxman, T. J. Schulz, and J. R. Fienup, “Joint estimation of object and aberrations by using phase diversity,” J. Opt. Soc. Am. A **9**(7), 1072–1085 (1992). [CrossRef]

**19. **B. M. Hanser, M. G. Gustafsson, D. A. Agard, and J. W. Sedat, “Phase retrieval for high-numerical-aperture optical systems,” Opt. Lett. **28**(10), 801–803 (2003). [CrossRef] [PubMed]

**20. **B. M. Hanser, M. G. Gustafsson, D. A. Agard, and J. W. Sedat, “Phase-retrieved pupil functions in wide-field fluorescence microscopy,” J. Microsc. **216**(1), 32–48 (2004). [CrossRef] [PubMed]

**21. **J. R. Fienup, “Phase-retrieval algorithms for a complicated optical system,” Appl. Opt. **32**(10), 1737–1746 (1993). [CrossRef] [PubMed]

**22. **J. R. Fienup, J. C. Marron, T. J. Schulz, and J. H. Seldin, “Hubble Space Telescope characterized by using phase-retrieval algorithms,” Appl. Opt. **32**(10), 1747–1767 (1993). [CrossRef] [PubMed]

**23. **G. R. Brady and J. R. Fienup, “Nonlinear optimization algorithm for retrieving the full complex pupil function,” Opt. Express **14**(2), 474–486 (2006). [CrossRef] [PubMed]

**24. **R. A. Gonsalves, “Phase retrieval and diversity in adaptive optics,” Opt. Eng. **21**(5), 215829 (1982). [CrossRef]

**25. **L. Waller, L. Tian, and G. Barbastathis, “Transport of Intensity phase-amplitude imaging with higher order intensity derivatives,” Opt. Express **18**(12), 12552–12561 (2010). [CrossRef] [PubMed]

**26. **N. Streibl, “Phase imaging by the transport equation of intensity,” Opt. Commun. **49**(1), 6–10 (1984). [CrossRef]

**27. **T. E. Gureyev and K. A. Nugent, “Rapid quantitative phase imaging using the transport of intensity equation,” Opt. Commun. **133**(1-6), 339–346 (1997). [CrossRef]

**28. **S. S. Kou, L. Waller, G. Barbastathis, and C. J. Sheppard, “Transport-of-intensity approach to differential interference contrast (TI-DIC) microscopy for quantitative phase imaging,” Opt. Lett. **35**(3), 447–449 (2010). [CrossRef] [PubMed]

**29. **L. Allen and M. Oxley, “Phase retrieval from series of images obtained by defocus variation,” Opt. Commun. **199**(1-4), 65–75 (2001). [CrossRef]

**30. **Y. Zhang, G. Pedrini, W. Osten, and H. J. Tiziani, “Reconstruction of in-line digital holograms from two intensity measurements,” Opt. Lett. **29**(15), 1787–1789 (2004). [CrossRef] [PubMed]

**31. **B. Das and C. S. Yelleswarapu, “Dual plane in-line digital holographic microscopy,” Opt. Lett. **35**(20), 3426–3428 (2010). [CrossRef] [PubMed]

**32. **Y. Kawano, C. Higgins, Y. Yamamoto, J. Nyhus, A. Bernard, H.-W. Dong, H. J. Karten, and T. Schilling, “Darkfield adapter for whole slide imaging: Adapting a darkfield internal reflection illumination system to extend WSI applications,” PLoS ONE **8**(3), e58344 (2013). [CrossRef] [PubMed]

**33. **H. Nomura, K. Tawarayama, and T. Kohno, “Aberration measurement from specific photolithographic images: a different approach,” Appl. Opt. **39**(7), 1136–1147 (2000). [CrossRef] [PubMed]

**34. **H. Nomura and T. Sato, “Techniques for measuring aberrations in lenses used in photolithography with printed patterns,” Appl. Opt. **38**(13), 2800–2807 (1999). [CrossRef] [PubMed]

**35. **R. Gerchberg, “A practical algorithm for the determination of phase from image and diffraction plane pictures,” Optik (Stuttg.) **35**, 237 (1972).

**36. **J. R. Fienup, “Phase retrieval algorithms: a comparison,” Appl. Opt. **21**(15), 2758–2769 (1982). [CrossRef] [PubMed]

**37. **J. R. Fienup and C. C. Wackerman, “Phase-retrieval stagnation problems and solutions,” J. Opt. Soc. Am. A **3**(11), 1897–1907 (1986). [CrossRef]

**38. **J. R. Fienup, “Reconstruction of a complex-valued object from the modulus of its Fourier transform using a support constraint,” J. Opt. Soc. Am. A **4**(1), 118–123 (1987). [CrossRef]

**39. **M. R. Bolcar and J. R. Fienup, “Sub-aperture piston phase diversity for segmented and multi-aperture systems,” Appl. Opt. **48**(1), A5–A12 (2009). [CrossRef] [PubMed]

**40. **M. Guizar-Sicairos and J. R. Fienup, “Phase retrieval with transverse translation diversity: a nonlinear optimization approach,” Opt. Express **16**(10), 7264–7278 (2008). [CrossRef] [PubMed]

**41. **B. H. Dean and C. W. Bowers, “Diversity selection for phase-diverse phase retrieval,” J. Opt. Soc. Am. A **20**(8), 1490–1504 (2003). [CrossRef] [PubMed]

**42. **J. W. Goodman, *Introduction to Fourier Optics* (Roberts & Company Publishers, 2005).

**43. **C. Audet and J. E. Dennis Jr., “Analysis of generalized pattern searches,” SIAM J. Optim. **13**(3), 889–903 (2002). [CrossRef]

**44. **X. Yang, H. Li, and X. Zhou, “Nuclei segmentation using marker-controlled watershed, tracking using mean-shift, and Kalman filter in time-lapse microscopy,” IEEE Trans. Circuits Syst. I, Regul. Pap. **53**(11), 2405–2414 (2006). [CrossRef]

**45. **B. K. Gunturk and X. Li, *Image Restoration: Fundamentals and Advances* (CRC Press, 2012), Vol. 7.

**46. **T. McReynolds and D. Blythe, *Advanced Graphics Programming Using OpenGL* (Morgan Kaufmann, 2005).