We present IsoSense, a wavefront sensing method that mitigates sample dependency in image-based sensorless adaptive optics applications in microscopy. Our method employs structured illumination to create additional high spatial frequencies in the image through custom illumination patterns. This improves the reliability of image quality metric calculations and enables sensorless wavefront measurement even in samples with sparse spatial frequency content. We demonstrate the feasibility of IsoSense for aberration correction in a deformable-mirror-based structured illumination super-resolution fluorescence microscope.
© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement
Sensorless adaptive optics (AO) [1] is an aberration measurement technique in which the optimum aberration correction is inferred from a collection of intentionally aberrated images. It has found widespread use in various forms of high-resolution microscopy, in addition to optical coherence tomography [2,3], spectroscopy [4], and laser communications [5]. In microscopy, sensorless AO is particularly useful for correction of specimen-induced aberrations when other, sensor-based, methods are not feasible. It can be used as a standalone wavefront sensing method or in conjunction with sensor-based techniques [6,7].
Sensorless AO works by applying different amounts of bias for chosen aberration modes and assessing the image quality by calculating a pre-selected quality metric for each image. Depending upon the application, the metric could be based upon, for example, intensity, contrast, or the spatial frequency content of the image. The necessary aberration correction is then estimated from the metric measurements using a mathematical model of the imaging process. The main challenge here is that the quality metric must be chosen carefully to work well with particular classes of imaged objects. However, even when the metric is chosen using prior knowledge about the sample, the measurements still rely on the local structure of the object that was imaged. In microscopy, sensorless AO techniques commonly rely on two groups of image quality indicators. The first is signal intensity [8,9]; the second relies on structural information, for example, spatial frequency content optimization (here, the term “spatial frequency content” refers to the amplitude of sinusoidal components, as determined by Fourier analysis of an image) [10–12] or wavelet decomposition [13]. In some limited cases, phase retrieval has been successfully demonstrated [14–16], but it is applicable only to a narrow range of specimens.
Previous work has observed that the performance of image-based algorithms is, to some degree, dependent on the specimen structures present. When the specimen contains a fairly isotropic distribution of structures—or, more specifically, spatial frequencies—the algorithms behave reliably. Conversely, when there is a strong anisotropy to the structures, such as when the specimen consists of aligned linear components, there can be a bias in the measurements. The reason for this bias is that certain aberration modes have a greater effect than others on particular spatial frequencies. It follows that the modes to which the system is insensitive are not effectively corrected [17].
Here we propose a sensorless AO method that relies on structured illumination to create multiple frequency-shifted copies of image information to ensure adequate sampling of the microscope optical transfer function (OTF). Our method improves the sensitivity and reliability of sensorless AO when used for imaging typical objects: it permits accurate assessment of image quality and enables precise correction of sample-induced aberrations to recover the optimal OTF of the instrument for widefield fluorescence imaging. We have demonstrated its performance in structured illumination microscopy (SIM) [18–20], because this technique is particularly sensitive to the fidelity of imaging high spatial frequencies [21], and the equipment for its implementation is readily capable of forming custom illumination patterns.
2. PRINCIPLES OF IsoSense
A. Image-Quality-Based Wavefront Sensing
Image-quality-based wavefront sensing relies on using images obtained with the microscope as an input for an algorithm that estimates the optical aberrations present in the system. A classical implementation of sensorless AO is depicted in the flowchart in Fig. 1(a). Here, a series of object images is acquired with different amounts of selected aberration modes applied with an adaptive optical element. Ideally, the aberration modes form a set of orthogonal eigenmodes so that each has an effect on the image quality that is independent of the other modes. In practice, a standard basis set, such as Zernike polynomials, is used for convenience. An image quality metric is selected and evaluated for each image [Fig. 1(b)]. A parabola (or other suitable function) is fitted to the measured points, and the mode coefficient corresponding to the estimated peak is applied as the correction. Subsequent modes are then corrected in a similar manner [Fig. 1(c)]. For the example presented in this paper, we sequentially corrected pre-selected Zernike modes to reach the optimal correction. In particular circumstances, it has been shown that to correct N aberration modes, a minimum of N + 1 measurements is needed [1]. However, more measurements may be beneficial where the signal-to-noise ratio (SNR) of the data is low.
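The modal correction loop described above can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' implementation: `correct_mode` and `measure_metric` are hypothetical names, the bias values are assumptions, and a noise-free quadratic metric stands in for acquiring an image and evaluating its quality.

```python
import numpy as np

def correct_mode(measure_metric, biases=(-1.0, -0.5, 0.0, 0.5, 1.0)):
    """Estimate the optimal coefficient of a single aberration mode.

    measure_metric(b) returns the image quality metric measured with
    bias b of the chosen mode applied by the adaptive element.
    """
    m = [measure_metric(b) for b in biases]
    a, b_lin, _ = np.polyfit(biases, m, 2)  # fit a parabola to the metric curve
    return -b_lin / (2 * a)                 # mode coefficient at the fitted peak

# Simulated metric: a noise-free parabola peaking at +0.3 of the mode coefficient.
true_optimum = 0.3
estimate = correct_mode(lambda b: 1.0 - (b - true_optimum) ** 2)
```

In practice, each mode would be corrected in turn by applying the estimate to the adaptive element before measuring the next mode; with noisy data, more bias points improve the fit.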
Ideally, the image quality metric chosen for an imaging system should be sensitive to small changes in aberrations and compatible with a wide variety of samples. Typical characteristics associated with high-quality fluorescence microscopy images are brightness and sharpness. Optical aberrations reduce high spatial frequency content in widefield fluorescence images, reducing image sharpness, and also affect the light intensity distribution through broadening of the point spread function (PSF). Quality metrics that exploit intensity distribution or brightness tend to be either insensitive in widefield mode (as in the case of total brightness measurements) or dependent upon object structures (as observed with histogram-based or contrast measurements); they can also be affected by bleaching. Metrics that rely on sharpness are more robust against bleaching, but they also tend to be sample specific, working well only where the sample exhibits well-defined structures. In this paper, we used high spatial frequency content as a proxy for image quality. The merit function was calculated by integrating the absolute value of the image FFT over an annular region of the image spectrum from 0.15R to 0.85R, where R is the radius of the calculated region of support of the image spectrum. We omitted the central 15% of the information around the DC component to minimize effects related to bleaching, and we omitted the outer 15% due to the very low SNR in this region.
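As a sketch of this merit function, the following example integrates the FFT magnitude over the annulus between 15% and 85% of the spectral support radius. The function name `hf_metric` and the use of the half-bandwidth of the array as the support radius R are illustrative assumptions, not the paper's code.

```python
import numpy as np

def hf_metric(img, inner=0.15, outer=0.85):
    """High-spatial-frequency image quality metric.

    Sums |FFT(img)| over an annulus of the centered spectrum, omitting
    the central 15% (bleaching/DC effects) and the outer 15% (low SNR)
    of the spectral support radius R.
    """
    F = np.fft.fftshift(np.fft.fft2(img))
    R = min(img.shape) / 2  # radius of the region of support (assumed)
    y, x = np.indices(img.shape)
    rho = np.hypot(x - img.shape[1] / 2, y - img.shape[0] / 2)
    mask = (rho >= inner * R) & (rho <= outer * R)
    return np.abs(F[mask]).sum()

# A sharp image should score higher than a low-pass-filtered copy of itself.
rng = np.random.default_rng(0)
sharp = rng.random((128, 128))
F = np.fft.fftshift(np.fft.fft2(sharp))
y, x = np.indices(sharp.shape)
lowpass = np.hypot(x - 64, y - 64) <= 0.1 * 64  # keep only low frequencies
blurred = np.fft.ifft2(np.fft.ifftshift(F * lowpass)).real
```

Because the low-pass cutoff (0.1R) lies inside the excluded central region (0.15R), the blurred image contributes almost nothing to the annulus, mimicking an aberrated image.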
The OTF provides a way of understanding the operation of an imaging system in terms of its transmission of spatial frequencies. One does not normally have access to a measurement of the OTF, although the image spatial frequency spectrum may provide a proxy, if a wide spread of spatial frequencies is present in the object. A typical example of a sample that contains a wide range of spatial frequencies is a lawn of beads with diameters smaller than the diffraction limit [Figs. 2(a)–2(d)]. Correcting for aberrations shows an increase in the high spatial frequency content of the image.
Figure 2(e) is an image of a bovine pulmonary artery endothelial cell imaged in the green (Alexa Fluor 488 phalloidin, labeling actin), while Fig. 2(f) is the amplitude component of its Fourier transform. The main structures in the image are preferentially oriented long actin filaments of about 7 nm diameter. This anisotropy in the sample means that the image spectrum does not sample the OTF uniformly: there is detailed information for some regions of the OTF, but little usable information for other regions. It is possible to measure only the effects of aberrations that affect the regions in the OTF for which information with a good SNR is present in the image spectrum. With some commonly occurring biological samples, the signal can fall below the noise floor at mid or high spatial frequencies, resulting in sub-optimal adaptive correction. Therefore, an alternative approach is needed that can be applied to a wider variety of objects.
B. Custom Illumination Patterns for Enhanced Sampling of the OTF
In the simple case of a widefield incoherent imaging system, the optical transfer function [Fig. 3(a)] can be calculated as the autocorrelation of the pupil function P [Eq. (1)] [Fig. 3(b)]. Therefore, every point on the horizontal axis in Fig. 3(b) corresponds to a different magnitude of the pupil shift s in Fig. 3(a). The absolute value of the OTF is numerically identical to the contrast of the image of a sine grating at the corresponding spatial frequency. An important consequence of this is obvious: the value of the OTF at a particular value of s (or spatial frequency) is dependent only on the regions of the pupil function that have contributed to the calculation. In Eq. (1), the star represents correlation and P* is the complex conjugate of the pupil function:

OTF(s) = (P ⋆ P)(s) = ∫ P(u) P*(u − s) du.   (1)

Figure 3 further illustrates the simplified case of an object that is defined by a single sinusoidal frequency; the object spectrum consists of a zero-frequency component in addition to the positive and negative frequencies of the sinusoid. If we consider the contribution of the sinusoidal component, when the spatial frequency is high [Figs. 3(c)–3(f)], only the two dark-shaded segments of the pupil contribute to the imaging of this particular frequency. For lower frequencies, the information is distributed across larger areas of the pupil [Figs. 3(g)–3(j)]. One consequence of this is that, for an object containing only a limited set of spatial frequencies, the image quality is affected predominantly by aberrations in only a sub-set of the pupil function. The sensing method would be mostly insensitive to aberrations in the other pupil areas. IsoSense aims to eliminate these “blind spots” by applying structured illumination [Figs. 3(k)–3(n)] to fill the whole pupil with information.
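The autocorrelation relationship referenced as Eq. (1) is easy to verify numerically. The sketch below is an illustration under assumed grid parameters, not code from the paper: it builds an unaberrated circular pupil and computes its autocorrelation via the Wiener–Khinchin theorem, giving an OTF that peaks at zero frequency and vanishes beyond twice the pupil radius, the incoherent cutoff.

```python
import numpy as np

N, r = 256, 40  # grid size and pupil radius in pixels (assumed values)
y, x = np.mgrid[-N // 2:N // 2, -N // 2:N // 2]
P = (x ** 2 + y ** 2 <= r ** 2).astype(complex)  # unaberrated circular pupil

# Autocorrelation of P via Wiener-Khinchin: inverse FT of |FT(P)|^2.
otf = np.fft.fftshift(np.fft.ifft2(np.abs(np.fft.fft2(P)) ** 2)).real
otf /= otf[N // 2, N // 2]  # normalize so that OTF(0) = 1
```

The contrast transfer decreases monotonically with the shift (overlap area of the two pupil copies shrinks) and is exactly zero for shifts beyond 2r, matching the behavior sketched in Fig. 3(b).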
It should also be considered that the efficiency of frequency transfer drops severely for high spatial frequencies [Fig. 3(b)], and the OTF can be measured only if the information is above the detection noise floor. Therefore, even if the information is present at low levels, the additional contrast resulting from structured illumination may help with detection. We can also note that for a low spatial frequency input, almost the whole pupil contributes to the OTF. However, the complex conjugation of one of the pupil function terms in Eq. (1) means that the phase terms almost cancel out. These low-frequency components are thus only weakly sensitive to aberrations.
The IsoSense illumination method relies on the fact that if the sample is illuminated with a pattern containing several spatial frequencies [Figs. 4(a) and 4(b)], the spatial frequency content of the resulting fluorescence image would be the convolution of the frequency content of the illumination pattern with the frequency content of the object. Therefore, at the pupil, multiple copies of the sample spatial frequency content would be created around the carrier frequencies that originated from the illumination pattern [Fig. 4(d)]. Appropriate selection of the illumination pattern leads to significantly improved sampling of the system OTF, especially at the high-frequency periphery. As a practical compromise, to ensure sufficient pupil sampling, we chose a pattern that makes two carrier frequencies visible diagonally in the image FFT, as depicted in Fig. 4(d). This is a simple but effective solution requiring no prior information about the sample.
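The frequency-shifting effect can be illustrated with a short simulation (a hedged sketch with assumed object and pattern parameters, not the paper's code): multiplying a smooth object by a sinusoidal illumination pattern places copies of the object spectrum around the pattern's carrier frequencies, creating measurable signal at spatial frequencies where the widefield spectrum is essentially empty.

```python
import numpy as np

N = 128
y, x = np.mgrid[0:N, 0:N] / N
# Smooth object with essentially no energy at high spatial frequencies.
obj = np.exp(-((x - 0.5) ** 2 + (y - 0.5) ** 2) / 0.02)
f0 = 16  # carrier frequency of the illumination (cycles per field, assumed)
patt = 1.0 + 0.5 * np.cos(2 * np.pi * f0 * (x + y))  # one diagonal component

F_wf = np.fft.fft2(obj)         # widefield spectrum
F_si = np.fft.fft2(obj * patt)  # structured-illumination spectrum
# F_si carries a copy of the object spectrum centered on the (f0, f0) bin.
```

The widefield spectrum at the diagonal bin (f0, f0) is negligible, while the structured-illumination spectrum holds a strong, aberration-sensitive copy of the object information there.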
To generate custom illumination patterns, a grayscale phase mask was computed by multiplying two 2D sine waves [Eq. (2)] and applied to the spatial light modulator (SLM; see Section 4 for a description of the optical setup):

φ(x, y) = sin[a(x cos θ − y sin θ)] · sin[a(x sin θ + y cos θ)].   (2)

In Eq. (2), x and y denote the vertical and horizontal coordinates in pixels, a controls the carrier frequency, and θ controls the 2D pattern orientation. The resulting beams diffracted from the SLM were focused on the microscope pupil plane. A uniform regular interference pattern, resembling an array of light beads, was formed once all the beams were recombined at the sample plane after the objective lens. Figure 4(a) depicts a simulated illumination pattern resulting from interference of the recombined beams. Figure 4(b) depicts a 2D Fourier transform of Fig. 4(a); it clearly shows the main frequency components of the illumination pattern, which are in effect carrier frequencies for shifted copies of the sample structure information, as shown for Alexa Fluor 488 phalloidin labeled actin in a fixed bovine pulmonary artery endothelial cell [Figs. 4(c) and 4(d)].
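A sketch of this mask generation follows; the exact product form and parameter names are our reconstruction of Eq. (2), so treat them as assumptions. For orientation θ = 0, the product of two orthogonal sine waves decomposes into diagonal cosine components, so the pattern's Fourier transform shows carrier peaks on the diagonals with no DC term, consistent with the carriers visible in Fig. 4(b).

```python
import numpy as np

N = 128
y, x = np.mgrid[0:N, 0:N]
k = 2 * np.pi * 8 / N  # carrier frequency: 8 cycles per field (assumed)
theta = 0.0            # 2D pattern orientation

# Product of two orthogonal 2D sine waves, rotated by theta (cf. Eq. (2)).
xr = x * np.cos(theta) - y * np.sin(theta)
yr = x * np.sin(theta) + y * np.cos(theta)
mask = np.sin(k * xr) * np.sin(k * yr)

F = np.fft.fft2(mask)  # carrier peaks appear at the diagonal bins (±8, ±8)
```

Since sin(kx)·sin(ky) = [cos(k(x − y)) − cos(k(x + y))]/2, all the pattern energy sits on the diagonal carriers; there is no on-axis component.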
3. INTERFEROMETRIC WAVEFRONT SENSING FOR DEFORMABLE MIRROR CONTROL
A prerequisite for reliable operation of sensorless AO is the ability to precisely control the adaptive element, in this case a deformable mirror. The shape of the thin membrane forming the mirror is controlled by applying voltages to individual actuators situated behind the membrane, which push or pull at the corresponding membrane points. As there is no feedback on the position of the actuators or the shape of the membrane, we measure the mirror shape with external wavefront sensing techniques. For deformable mirror control training and initial mirror flattening, we chose an interferometric wavefront-sensor-based approach [22].
An adapted version of a Mach–Zehnder interferometer was built into the system to include a deformable mirror and part of the imaging system. The collimated and expanded beam from an He–Ne laser (R-33361, Research Electro-Optics) was split into object and reference paths, and the reference path was diverted toward the interference camera by using two steering mirrors, M1 and M2. The steering mirrors were also used to introduce a wavefront tilt, necessary for single-shot wavefront measurements. In the object path, immediately after the cube beam splitter (BS013, Thorlabs), we included an additional lens to ensure that, at the DM pupil, the light from the He–Ne laser is collimated. The light was then reflected from the deformable mirror, which was positioned at the conjugate pupil plane, and reflected back from a flat silver mirror positioned at the sample plane after the objective. The reflected light was picked up by a 50:50 pellicle beam splitter (BP150, Thorlabs) and launched toward the interference camera (XiQ MQ042MG-CM, Ximea), which was positioned at the conjugate pupil plane, where it interfered with the reference beam.
For wavefront sensing, the interferogram was processed with custom fringe analysis software implemented in LabVIEW. Figure 5 illustrates the major wavefront retrieval steps. The recorded interferogram [Fig. 5(b)] was transformed by an FFT [Fig. 5(c)], and the frequencies around the carrier were cropped, masked out, and shifted back to the zero-frequency position [Fig. 5(d)]. The inverse FFT was then computed. The phase profile corresponding to the optical phase of the wavefront was then obtained after phase unwrapping [Fig. 5(e)] by using the Goldstein 2D phase unwrapping algorithm [23]. Finally, Zernike modes were fitted to decompose the wavefront.
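The single-shot fringe analysis above follows the standard Fourier-transform method, which can be sketched with a simulated interferogram. The carrier frequency, crop window, and test wavefront below are assumptions for illustration; the test phase stays below π, so the unwrapping step is not needed in this sketch.

```python
import numpy as np

N = 256
y, x = np.mgrid[0:N, 0:N] / N
# Test wavefront phase (radians); small enough that no unwrapping is needed.
phi = 1.0 * np.exp(-((x - 0.5) ** 2 + (y - 0.5) ** 2) / 0.02)
f0 = 32  # carrier fringes from the reference-beam tilt (assumed)
igram = 1.0 + np.cos(2 * np.pi * f0 * x + phi)  # recorded interferogram

F = np.fft.fftshift(np.fft.fft2(igram))
c, w = N // 2, 16  # spectrum center and crop half-width (assumed)
# Crop the +f0 carrier sideband and shift it to the zero-frequency position.
sideband = np.zeros_like(F)
sideband[c - w:c + w, c - w:c + w] = F[c - w:c + w, c + f0 - w:c + f0 + w]
phase = np.angle(np.fft.ifft2(np.fft.ifftshift(sideband)))
```

In the instrument, Goldstein unwrapping and a Zernike fit would follow this step to handle wavefronts exceeding ±π.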
The interferograms were captured and then used for (1) mirror training to produce Zernike modes and (2) mirror flattening. Mirror training was performed by capturing a set of interferograms obtained after poking individual actuators at various voltages (we used five voltage steps at increments corresponding to 7.5% of the full range). The wavefronts were decomposed into Zernike modes, allowing the contribution of each actuator to a given Zernike mode to be calculated, giving a “poke matrix.” The control matrix is the inverse of this poke matrix. For a detailed description of the training method, please see [22]. Once the control matrix was calculated, we used the feedback from the interferometer to flatten the deformable mirror. This was typically carried out after each alteration or realignment of the optical system.
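The poke/control matrix algebra reduces to a pseudoinverse. The snippet below is a schematic stand-in, not measured data: the matrix sizes are assumptions and random numbers replace the interferometric actuator responses. Each column of the poke matrix holds the Zernike decomposition of one actuator's influence, and inverting it yields the actuator commands that produce a requested modal wavefront.

```python
import numpy as np

rng = np.random.default_rng(1)
n_modes, n_actuators = 5, 12  # assumed sizes for illustration

# Poke matrix: column j = Zernike coefficients of the wavefront measured
# after poking actuator j (random stand-ins for interferometric data).
poke = rng.standard_normal((n_modes, n_actuators))
control = np.linalg.pinv(poke)  # control matrix = pseudoinverse of poke

target = np.array([0.2, 0.0, -0.1, 0.05, 0.0])  # desired mode coefficients
volts = control @ target  # actuator commands reproducing the target modes
```

With more actuators than controlled modes, the pseudoinverse returns the minimum-norm voltage vector, which helps keep actuator strokes small.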
4. OPTICAL SETUP FOR DEMONSTRATION OF IsoSense IN SIM
We chose a SIM setup to demonstrate the IsoSense wavefront sensing method, because the microscope already has the capability to produce the arbitrary structured illumination patterns that are necessary for wavefront sensing. We built an SLM-based structured illumination microscope based on an optical light path design originating from John Sedat at the University of California, San Francisco (UCSF) [25], which will be published elsewhere in detail. A simplified schematic of the setup is presented in Fig. 6.
The light from two lasers, a 488 nm diode laser (LDM488.200.TA, Omicron Laserage), used for fluorescence excitation, and a 543 nm He–Ne laser (R-33361, Research Electro-Optics), used for interferometric wavefront sensing, was combined into a single path. The combined beams were launched at the SLM (HSPDM512-480-640, Boulder Nonlinear Systems, Inc.), which was positioned at the object plane conjugate. The beams diffracted from the SLM formed multiple spots on the deformable mirror (DM-69 BIL103, Alpao), which was positioned at the conjugate to the objective pupil plane. The DM was de-magnified to match the pupil size of the microscope objective (LUMFL N, Olympus). The excitation beams interfered at the object plane to produce full-field structured illumination. Additional passive and active polarization control optics (not shown in the figure) were used to ensure optimal fringe contrast at the object.
The fluorescence light was collected by the objective lens and propagated back along the same path until the dichroic beam splitter, where it was separated from the excitation path and launched toward an EMCCD camera (iXon Ultra 897, Andor).
The image timing and image acquisition were controlled using a custom digital signal processing unit (M62/67, Innovative Integration), driven by the Cockpit microscopy control environment that was developed in conjunction with UCSF. The SIM images were reconstructed using the fairSIM plugin [26] for ImageJ.
5. RESULTS AND DISCUSSION
A. Sample Preparation
FluoCells prepared slide #1 (F36924) containing bovine pulmonary artery endothelial cells stained with red fluorescent MitoTracker Red CMXRos, green fluorescent Alexa Fluor 488 phalloidin (actin), and DAPI was purchased from Invitrogen.
Drosophila macrophages (hemocytes) were isolated from wandering third-instar larvae expressing Jupiter::GFP and Histone::RFP ubiquitously, as described in [27]. To prepare samples for imaging, four to five larvae were collected, washed with gentle vortexing, and the cuticle was pierced to release the hemolymph into 200 μl of Schneider’s culture medium (ThermoFisher) in a 35 mm glass-bottom dish (MatTek). The preparation was allowed to stand in a humid chamber for at least 30 min to allow macrophages to adhere before additional medium was added for imaging. This preparation yielded predominantly plasmatocytes with occasional larger, flatter lamellocytes [28].
B. Imaging and AO Correction
Custom illumination patterns have long been used in SIM to downshift high spatial frequency components so that they fit within the passband of the system. However, using SIM reconstructions for image-quality-based adaptive wavefront sensing is impractical for two reasons: (1) a single reconstruction requires multiple images to be taken, increasing the effects related to phototoxicity and bleaching, and (2) if the microscope OTF is compromised by severe optical aberrations, common reconstructions would fail [29]. Furthermore, as the IsoSense method relies on indirect estimates of OTF quality through the high spatial frequency components of raw images, no additional reconstructions are needed. The image quality metric representing system performance in SIM mode can be calculated from a single image.
IsoSense is particularly well suited for SIM imaging systems that rely on dynamic optical components for creating SIM patterns, as the same illumination control element can be used for both SIM imaging and wavefront sensing. We therefore chose a liquid crystal SLM-based structured illumination microscope for an initial proof-of-concept demonstration to verify the performance of the method.
Prior to imaging, we trained the mirror and flattened the system using interferometer feedback for Zernike modes 5–100, following the Noll notation [30]. To obtain the reference “system flat” correction and to correct non-common-path aberrations, we performed image-based correction on sub-diffraction-sized fluorescent beads (0.1 μm diameter) scattered on the coverslip, using distilled water immersion. Once the system calibration was completed, the resulting control matrix and “system flat” file could be used for further imaging experiments as long as the optical system remained unchanged. Prior to each individual imaging experiment, a sensorless adaptive correction was performed for each isoplanatic patch in the object to correct for sample-induced aberrations. The “system flat” correction was determined for the sake of a fair comparison between AO-corrected images and images from the equivalent non-AO microscope where the deformable mirror was bypassed, as illustrated in Fig. 7. Therefore, the “system flat” includes both the interferometric correction and a sensorless correction of a non-aberrating sample to account for non-common-path aberrations. For the system flat correction, we corrected Zernike modes 5–22 in two iterations. System flat correction using beads is not necessary for day-to-day imaging experiments, as non-common-path aberrations are also detected and corrected during sensorless AO correction of sample-induced aberrations.
Next, we corrected sample and non-common-path aberrations. Sample-induced aberrations are caused by the sample structure and immersion medium, while non-common-path aberrations can occur due to the slightly different optical paths used during system calibration (interferometric path) and imaging (excitation path, fluorescence path), shown in Fig. 6. From prior experiments, we knew that the first two orders of spherical aberration tend to dominate due to the refractive index mismatch between sample and immersion medium. Also, we used Schneider’s Drosophila medium (ThermoFisher) with a water immersion objective, so the immersion medium was not fully compatible with the immersion lens. After correcting two orders of spherical aberration, which were present due to refractive index mismatch, we corrected coma, astigmatism, and trefoil; these were most likely non-common-path aberrations caused by the presence of the pellicle beam splitter in the interferometric wavefront sensing path during interferometric mirror flattening. For the cell images presented in this paper, correcting Zernike modes 5–10 and 22 was sufficient; however, this might differ for other types of cells and tissues. To ensure robust noise performance during sensorless AO correction, we used five images per mode.
The first example, a Drosophila larval macrophage expressing Jupiter::GFP labeling microtubules, demonstrates the effect of AO correction in SIM (Fig. 8). The effect of correction is much less obvious in the corrected [Fig. 8(b)] and system flat [Fig. 8(a)] widefield images, but it is much more prominent in the SIM reconstructions, where AO correction results in reduced reconstruction artifacts [Fig. 8(d)] compared to the case where the system flat was used [Fig. 8(c)].
The second example clearly demonstrates the benefit of the IsoSense wavefront sensing approach. Here the aberrations were first corrected with sensorless AO using widefield illumination [Figs. 9(a) and 9(c)], and then the correction was repeated using the IsoSense approach. Figure 9 represents a typical example where the majority of the object structure has a clear preferential orientation, resulting in insufficient sampling of the OTF. As a result, a better correction was achieved along the direction that is perpendicular to the orientation of the majority of microtubules; along the other direction, the microtubules appear washed out. Again, this is particularly visible in the SIM reconstruction [Fig. 9(c)]. The IsoSense approach led to better overall correction, preserving all the structures in the sample [Fig. 9(d)].
The IsoSense structured-illumination-based sensorless AO approach removes the barriers to accurate image quality metric calculation, commonly caused by anisotropic object structure or lack of sharp and well-defined structures. IsoSense improves the sampling of the microscope OTF by filling it with spatial-frequency-shifted image information in the otherwise poorly populated OTF regions. Consequently, it allows us to detect the aberration components in the corresponding pupil regions. Finally, we have demonstrated the performance of our method in a SIM setup for imaging live Drosophila macrophage cells expressing a GFP tagged microtubule marker, where our method effectively improved the accuracy of AO correction and image quality.
This demonstration used a versatile SLM to generate the patterns. However, the method should be compatible with alternative pattern generation techniques that do not require an active optical element (speckle patterns, a phase mask, transmissive masks). Even though in this paper the IsoSense method was demonstrated using interference patterns, coherent illumination is not inherently required. The method can be easily extended to compatible imaging systems with low-coherence or incoherent illumination, such as light-sheet structured illumination microscopy [31] or lattice light-sheet microscopy [32].
The technique can also be extended to include adaptive pattern generation. For example, when imaging strongly aberrated objects, the carrier frequencies could be positioned at lower frequencies, closer to pupil center, in the first iteration and gradually shifted to higher frequencies during subsequent correction iterations. Alternatively, the pattern generation algorithm can be automated to identify inadequately sampled OTF regions and adjust the phase mask to generate the carrier frequencies that fill these blind spots.
We anticipate that IsoSense will be useful in conjunction with other imaging techniques that rely on AO for aberration correction. AO light-sheet microscopy [33] is particularly well suited to benefit from IsoSense, as these techniques already contain both AO and excitation pattern modulation devices.
MRC/EPSRC/BBSRC (MR/K01577X/1); H2020 European Research Council (ERC) (695140); Wellcome Trust (091911/Z/10/Z, 096144/Z/11/Z, 107457/Z/15/Z, 203285/C/16/Z, 209412/Z/17/Z).
We thank John Sedat for fruitful discussions and for sharing his optical light path design for a SLM-based SIM system with adaptive optics, on which we based our microscope design. The datasets generated during and/or analyzed during the current study are available from the corresponding author on reasonable request.
1. M. J. Booth, “Wavefront sensorless adaptive optics for large aberrations,” Opt. Lett. 32, 5–7 (2007). [CrossRef]
2. S. G. Adie, B. W. Graf, A. Ahmad, P. S. Carney, and S. A. Boppart, “Computational adaptive optics for broadband optical interferometric tomography of biological tissue,” Proc. Natl. Acad. Sci. USA 109, 7175–7180 (2012). [CrossRef]
3. T. DuBose, D. Nankivil, F. LaRocca, G. Waterman, K. Hagan, J. Polans, B. Keller, D. Tran-Viet, L. Vajzovic, A. N. Kuo, C. A. Toth, J. A. Izatt, and S. Farsiu, “Handheld adaptive optics scanning laser ophthalmoscope,” Optica 5, 1027–1036 (2018). [CrossRef]
4. H. J. B. Marroux, A. P. Fidler, D. M. Neumark, and S. R. Leone, “Multidimensional spectroscopy with attosecond extreme ultraviolet and shaped near-infrared pulses,” Sci. Adv. 4, eaau3783 (2018). [CrossRef]
5. W. Lianghua, P. Yang, Y. Kangjian, C. Shanqiu, W. Shuai, L. Wenjing, and B. Xu, “Synchronous model-based approach for wavefront sensorless adaptive optics system,” Opt. Express 25, 20584–20597 (2017). [CrossRef]
6. M. Žurauskas, O. Barnstedt, M. Frade-Rodriguez, S. Waddell, and M. J. Booth, “Rapid adaptive remote focusing microscope for sensing of volumetric neural activity,” Biomed. Opt. Express 8, 4369–4379 (2017). [CrossRef]
7. S. A. Rahman and M. J. Booth, “Direct wavefront sensing in adaptive optical microscopy using backscattered light,” Appl. Opt. 52, 5523–5532 (2013). [CrossRef]
8. B. Thomas, A. Wolstenholme, S. N. Chaudhari, E. T. Kipreos, and P. Kner, “Enhanced resolution through thick tissue with structured illumination and adaptive optics,” J. Biomed. Opt. 20, 026006 (2015). [CrossRef]
9. F. Huang, G. Sirinakis, E. S. Allgeyer, L. K. Schroeder, W. C. Duim, E. B. Kromann, T. Phan, F. E. Rivera-Molina, J. R. Myers, I. Irnov, M. Lessard, Y. Zhang, M. A. Handel, C. Jacobs-Wagner, C. P. Lusk, J. E. Rothman, D. Toomre, M. J. Booth, and J. Bewersdorf, “Ultra-high resolution 3D imaging of whole cells,” Cell 166, 1028–1040 (2016). [CrossRef]
10. D. Débarre, M. J. Booth, and T. Wilson, “Image based adaptive optics through optimisation of low spatial frequencies,” Opt. Express 15, 8176–8190 (2007). [CrossRef]
11. D. Burke, B. Patton, F. Huang, J. Bewersdorf, and M. J. Booth, “Adaptive optics correction of specimen-induced aberrations in single-molecule switching microscopy,” Optica 2, 177–185 (2015). [CrossRef]
12. K. F. Tehrani, J. Xu, Y. Zhang, P. Shen, and P. Kner, “Adaptive optics stochastic optical reconstruction microscopy (AO-STORM) using a genetic algorithm,” Opt. Express 23, 13677–13692 (2015). [CrossRef]
13. J. Antonello, X. Hao, E. S. Allgeyer, J. Bewersdorf, J. Rittscher, and M. J. Booth, “Sensorless adaptive optics for isoSTED nanoscopy,” Proc. SPIE 10502, 1050206 (2018). [CrossRef]
14. M. J. Mlodzianoski, P. J. Cheng-Hathaway, S. M. Bemiller, T. J. McCray, S. Liu, D. A. Miller, B. T. Lamb, G. E. Landreth, and F. Huang, “Active PSF shaping and adaptive optics enable volumetric localization microscopy through brain sections,” Nat. Methods 15, 583–586 (2018). [CrossRef]
15. P. N. Petrov, Y. Shechtman, and W. E. Moerner, “Measurement-based estimation of global pupil functions in 3D localization microscopy,” Opt. Express 25, 7945–7959 (2017). [CrossRef]
16. P. Kner, L. Winoto, D. A. Agar, and J. W. Sedat, “Closed loop adaptive optics for microscopy without a wavefront sensor,” Proc. SPIE 7570, 75709 (2010). [CrossRef]
17. D. Débarre, E. J. Botcherby, M. J. Booth, and T. Wilson, “Adaptive optics for structured illumination microscopy,” Opt. Express 16, 9290–9305 (2008). [CrossRef]
18. M. G. L. Gustafsson, “Surpassing the lateral resolution limit by a factor of two using structured illumination microscopy,” J. Microsc. 198, 82–87 (2000). [CrossRef]
19. L. Schermelleh, P. M. Carlton, S. Haase, L. Shao, L. Winoto, P. Kner, B. Burke, M. C. Cardoso, D. A. Agard, M. G. L. Gustafsson, H. Leonhardt, and J. W. Sedat, “Subdiffraction multicolor imaging of the nuclear periphery with 3D structured illumination microscopy,” Science 320, 1332–1336 (2008). [CrossRef]
20. M. G. Gustafsson, L. Shao, P. M. Carlton, C. J. R. Wang, I. N. Golubovskaya, W. Z. Cande, D. A. Agard, and J. W. Sedat, “Three-dimensional resolution doubling in wide-field fluorescence microscopy by structured illumination,” Biophys. J. 94, 4957–4970 (2008). [CrossRef]
21. M. Arigovindan, J. W. Sedat, and D. A. Agard, “Effect of depth dependent spherical aberrations in 3D structured illumination microscopy,” Opt. Express 20, 6527–6541 (2012). [CrossRef]
22. M. Booth, T. Wilson, H.-B. Sun, T. Ota, and S. Kawata, “Methods for the characterization of deformable membrane mirrors,” Appl. Opt. 44, 5131–5139 (2005). [CrossRef]
23. M. D. Pritt and D. C. Ghiglia, Two-Dimensional Phase Unwrapping: Theory, Algorithms, and Software (Wiley, 1998).
24. J. Antonello, “Optimisation-based wavefront sensorless adaptive optics for microscopy,” Ph.D. thesis (Delft University of Technology, 2014).
25. J. Sedat, University of California, San Francisco (UCSF) (personal communication, 2010).
26. M. Müller, V. Mönkemöller, S. Hennig, W. Hübner, and T. Huser, “Open-source image reconstruction of super-resolution structured illumination microscopy data in ImageJ,” Nat. Commun. 7, 10980 (2016). [CrossRef]
27. R. M. Parton, A. Valles, I. Dobbie, and I. Davis, “Pushing the limits of live cell imaging in Drosophila,” in Live Cell Imaging: A Laboratory Manual, R. Goldman, D. Spector, and J. Swedlow, eds., 2nd ed., (Cold Spring Harbor Laboratory, 2010).
28. M. Stofanko, S. Y. Kwon, and P. Badenhorst, “A misexpression screen to identify regulators of Drosophila larval hemocyte development,” Genetics 180, 253–267 (2008). [CrossRef]
29. J. Demmerle, C. Innocent, A. J. North, G. Ball, M. Müller, E. Miron, A. Matsuda, I. M. Dobbie, Y. Markaki, and L. Schermelleh, “Strategic and practical guidelines for successful structured illumination microscopy,” Nat. Protoc. 12, 988–1010 (2017). [CrossRef]
30. R. J. Noll, “Zernike polynomials and atmospheric turbulence,” J. Opt. Soc. Am. 66, 207–211 (1976). [CrossRef]
31. P. J. Keller, A. D. Schmidt, A. Santella, K. Khairy, Z. Bao, J. Wittbrodt, and E. H. Stelzer, “Fast, high-contrast imaging of animal development with scanned light sheet-based structured-illumination microscopy,” Nat. Methods 7, 637–642 (2010). [CrossRef]
32. B.-C. Chen, W. R. Legant, K. Wang, L. Shao, D. E. Milkie, M. W. Davidson, C. Janetopoulos, X. S. Wu, J. A. Hammer, Z. Liu, B. P. English, Y. Mimori-Kiyosue, D. P. Romero, A. T. Ritter, J. Lippincott-Schwartz, L. Fritz-Laylin, R. D. Mullins, D. M. Mitchell, J. N. Bembenek, A.-C. Reymann, R. Böhme, S. W. Grill, J. T. Wang, G. Seydoux, U. S. Tulu, D. P. Kiehart, and E. Betzig, “Lattice light-sheet microscopy: imaging molecules to embryos at high spatiotemporal resolution,” Science 346, 1257998 (2014). [CrossRef]
33. T.-L. Liu, S. Upadhyayula, D. E. Milkie, V. Singh, K. Wang, I. A. Swinburne, K. R. Mosaliganti, Z. M. Collins, T. W. Hiscock, J. Shea, A. Q. Kohrman, T. N. Medwig, D. Dambournet, R. Forster, B. Cunniff, Y. Ruan, H. Yashiro, S. Scholpp, E. M. Meyerowitz, D. Hockemeyer, D. G. Drubin, B. L. Martin, D. Q. Matus, M. Koyama, S. G. Megason, T. Kirchhausen, and E. Betzig, “Observing the cell in its native state: Imaging subcellular dynamics in multicellular organisms,” Science 360, eaaq1392 (2018). [CrossRef]