Abstract

Aberrations arising from sources such as sample heterogeneity and refractive index mismatches are constant problems in biological imaging. These aberrations reduce image quality and the achievable depth of imaging, particularly in super-resolution microscopy techniques. Adaptive optics (AO) technology has been proven to be effective in correcting for these aberrations, thereby improving the image quality. However, it has not been widely adopted by the biological imaging community due, in part, to difficulty in set-up and operation of AO. The methods for doing so are not novel or unknown, but new users often waste time and effort reimplementing existing methods for their specific set-ups, hardware, sample types, etc. Microscope-AOtools offers a robust, easy-to-use implementation of the essential methods for set-up and use of AO elements and techniques. These methods are constructed in a generalised manner that can utilise a range of adaptive optics elements, wavefront sensing techniques and sensorless AO correction methods. Furthermore, the methods are designed to be easily extensible as new techniques arise, leading to a streamlined pipeline for new AO technology and techniques to be adopted by the wider microscopy community.

Published by The Optical Society under the terms of the Creative Commons Attribution 4.0 License. Further distribution of this work must maintain attribution to the author(s) and the published article's title, journal citation, and DOI.

1. Introduction

Many of the recent innovations in biological imaging have revolved around the quest for greater resolving power, ultimately culminating in the advent of super-resolution microscopy techniques [1–3]. However, there is often a difference between the theoretical resolution and the practical resolution obtained in biological imaging. This is particularly true for live, thick samples, which are interesting to biological researchers for their ability to show dynamic biological processes in situ. How close the theoretical and practical resolutions are to one another is largely dependent on the optical aberrations present, most of which arise from the heterogeneity of the biological sample itself [4,5]. These aberrations compromise image quality, decreasing contrast and resolution, by distorting the optical wavefront [6,7]. Implementing adaptive optics (AO) in microscopy has already been shown to be highly effective at reducing these aberrations and yielding significant improvements to image quality [8,9]. The widespread use of AO in microscopy would therefore be a significant boon to biological research.

Unfortunately, whilst multiple proof of principle systems in AO microscopy have been demonstrated [10–13], use of AO has yet to be widely adopted. This is due, in large part, to the complicated nature of measuring the wavefront deformations (and therefore the aberrations) present in a sample. While methods for directly measuring the wavefront do exist, they carry additional complications such as limiting what kind of biological specimens can be imaged for correction, e.g. only those with point sources present [14]. Therefore indirect wavefront sensing, or sensorless AO, is generally preferred [8,15]. In sensorless AO methods, some quality of the sample images, such as contrast or spatial frequency content, is evaluated and this quality is maximised by varying the wavefront deformation applied to the corrective element. Most proofs of principle AO microscopes implement a single method of correction that is particularly suited to the specific imaging modality and/or sample being used. So far, a robust, generalised, easy-to-use implementation which incorporates multiple AO methods for multiple sample types and imaging modalities has yet to be presented [16].

Microscope-AOtools (https://github.com/MicronOxford/microscope-aotools) provides such a generalised solution. It utilises Python-Microscope, an open-source hardware control software package, to provide control over the physical hardware necessary for AO implementations. It incorporates methods for calibrating an AO element, evaluating the success of the calibration in recreating aberrations, and performing both direct wavefront sensing and so-called sensorless adaptive optics corrections, where aberrations are inferred from sample images. The methods for sensorless AO correction can utilise a number of different image quality metrics. The methods presented are built in such a manner as to enable easy switching between different AO elements, wavefront sensing techniques, and image quality metrics. They are designed to be easily extensible so that new technology and techniques can be readily incorporated.

2. Principles behind Microscope-AOtools

Designing an AO-enabled system follows a predictable workflow, outlined in Fig. 1, consisting of four phases:

  • 1. System Design: A potential user should consider the needs of their imaging modality, system constraints, and desired sample types before deciding on the appropriate AO element to implement.
  • 2. Installation: The user installs the chosen AO element into their beam path.
  • 3. Set-up: The AO element is calibrated to correct for optical aberrations. This calibration is checked and the system aberrations are corrected.
  • 4. Sample Correction: The sample correction routine is designed. This will typically fall into one of two categories: sensorless AO or direct wavefront sensing.


Fig. 1. Flowchart depicting the general process for building a system utilising AO. a) System Design Phase. User decides whether the system needs AO and, if so, what type. b) Installation Phase. The chosen AO elements are added to the system. c) Set-up Phase. The chosen AO element is calibrated. Typically this involves mapping the variable components of the AO element (e.g. deformable mirror actuators) to a useful set of basis functions which represent optical aberrations. d) Sample Correction Phase Here the user designs the methods to be used for correcting their desired sample.


Microscope-AOtools does not contain methods relevant to the System Design or Installation phases, although resources do exist to aid with these [17,18]. Utilising Microscope-AOtools requires that the chosen adaptive element be a Python-Microscope compatible device and that the user has some kind of wavefront sensor installed (e.g. interferometer, Shack-Hartmann sensor). Microscope-AOtools has the following Python package dependencies: numpy [19], scipy [20], scikit-image [21], sympy [22], and AOtools [23]. These packages are installed alongside Microscope-AOtools and are freely available and supported at the time of writing. There are no other requirements for using Microscope-AOtools. Microscope-AOtools provides all the methods necessary for the Set-up and Sample Correction phases, easing the development of AO-enabled microscopes.

3. Methods

3.1 Calibration

The general principle of aberration correction is to measure the overall aberration of the optical wavefront and apply the opposite phase deformation to the adaptive element. An AO device is generally composed of $N$ variable components e.g. a deformable mirror has $N$ actuators which control the shape of the mirror surface. If the phase wavefront can be measured directly, these $N$ variable components can be used to shape the adaptive element to correct for the local phase distortions. However, in many AO devices these components have responses which are coupled, non-linear, or otherwise non-ideal. This is the case for one of the most common AO devices, the continuous membrane deformable mirror, where the membrane shape at one point on the mirror’s surface is affected by the membrane shape at adjacent points [24]. If the phase wavefront is not directly observable, attempting aberration correction by varying individual components of the AO device is prohibitively difficult. Therefore, the use of adaptive elements requires a map between the variable components and the aberrations we wish to correct, allowing the whole of the adaptive element to be configured at once to correct for phase distortions. Constructing this map is the calibration process.

Consider a continuous membrane deformable mirror as our adaptive element. Assuming that the overall mirror shape is the linear superposition of all the individual actuator deflections, we can define the overall mirror shape, $S(x,y)$ as:

$$S(x,y) = \sum_{h=1}^{N} d_{h}\phi_{h}(x,y),$$
where $S(x,y)$ is the change in the deformable mirror shape from its original position, $d_h$ is the $h$-th actuator control signal (an arbitrary value related to applied voltage which determines the position of the $h$-th actuator in its overall movement range) and $\phi _{h}(x,y)$ is the $h$-th influence function, so called as they describe how the elements of the device influence the phase wavefront. We can convert this set of basis functions to a different basis set. An obvious alternative basis set is the Zernike polynomials since they are defined on the unit circle, are orthogonal, and the wavefront distortion can be well approximated by the linear addition of a limited number of Zernike polynomials [25,26]. Describing $\phi _{h}(x,y)$ in terms of Zernike polynomials we obtain:
$$\phi_{h}(x,y) = \sum_{g=1}^{M} b_{g,h}z_{g}(x,y),$$

where $b_{g,h}$ is the coefficient corresponding to the $g$-th Zernike polynomial due to $d_h$, the $h$-th actuator control signal (i.e. the relationship between how the $h$-th actuator position affects the $g$-th Zernike mode amplitude). This leads to:

$$\begin{aligned} S(x,y) & = \sum_{h=1}^{N} d_{h}\left[\sum_{g=1}^{M} b_{g,h}z_{g}(x,y)\right]\\& =\sum_{g=1}^{M} \left(\sum_{h=1}^{N} d_{h} b_{g,h}\right) z_{g}(x,y)\\ & =\sum_{g=1}^{M} a_{g} z_{g}(x,y), \end{aligned}$$
where the new Zernike coefficients, $a_{g}$, are defined as:
$$a_{g} = \sum_{h=1}^{N} b_{g,h} d_{h} \textrm{~for~} g=1,2,\ldots,M.$$
Converting this to a matrix form yields:
$$\begin{aligned} \bar{a} &= \boldsymbol{B} \bar{d}\\ \Rightarrow \bar{d} &= \boldsymbol{C} \bar{a}, \end{aligned}$$

where $\bar {d}$ is a length $N$ vector of the actuator control signals, $\bar {a}$ is the length $M$ vector of the Zernike polynomial amplitudes and $\boldsymbol {B}$ is the $M \times N$ matrix representing the response characteristics of the deformable mirror. However, we actually want its inverse, $\boldsymbol {B}^{-1} =\boldsymbol {C}$, otherwise called the control matrix, in order to convert from Zernike polynomial amplitudes to actuator control signals. Since $\boldsymbol {B}$ is generally non-square, a pseudo-inverse is used in practice.
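The matrix relationship in Eq. (5) can be sketched in a few lines of numpy. The response matrix here is invented purely for illustration (a real $\boldsymbol{B}$ is measured during calibration, as described below), and `numpy.linalg.pinv` stands in for the thresholded pseudo-inversion; this is a sketch of the mathematics, not the AOtools implementation:

```python
import numpy as np

# Invented response matrix B (M Zernike modes x N actuators).
# A real B is measured during the calibration routine.
rng = np.random.default_rng(0)
M, N = 12, 8
B = rng.normal(size=(M, N))

# The control matrix C converts desired Zernike amplitudes into
# actuator control signals. B is generally non-square, so a
# pseudo-inverse is used rather than a true inverse.
C = np.linalg.pinv(B)

# d_bar = C @ a_bar: actuator signals for a desired set of amplitudes.
a_bar = rng.normal(size=M)
d_bar = C @ a_bar
```

For a full-column-rank $\boldsymbol{B}$, `C @ B` is the $N \times N$ identity, so the computed control signals reproduce the requested amplitudes in the least-squares sense.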

Microscope-AOtools implements an automated calibration routine to obtain $\boldsymbol {C}$. Each actuator is moved through $p$ set positions and a wavefront is extracted. Figure 2(a) shows an observed wavefront during the calibration routine. The wavefront is then decomposed into $M$ Zernike modes [23]. A row vector $\boldsymbol {z}$ is obtained for each actuator position containing the computed Zernike mode amplitudes:

$$\boldsymbol{z} = \begin{bmatrix} z_{1} & z_{2} & \cdots & z_{g} & \cdots & z_{M} \end{bmatrix},$$


Fig. 2. (a) An example observed wavefront obtained during the calibration process for an Alpao-69 actuator deformable mirror. The $26^{\textrm{th}}$ actuator is at the first of the 5 positions. (b) A simulated wavefront created from the 69 Zernike mode amplitudes measured in the observed wavefront. (c) The difference in the observed and simulated wavefronts. (d) Plot of the row vector, $\boldsymbol {z_{1}}$, of the 69 Zernike mode amplitudes measured for the observed wavefront. (e) The influence function fitting for Zernike mode 11 (Noll index) for the $26^{\textrm {th}}$ actuator.


where the $g$-th element is the amplitude of the $g$-th Zernike mode. Figure 2(d) shows a plot of the row vector containing the amplitudes of the $M$ Zernike modes measured for Fig. 2(a). Figure 2(b) shows a simulated wavefront constructed using these Zernike mode amplitudes. As previously mentioned, any phase wavefront can be constructed from an infinite linear combination of Zernike modes [26]; using a finite number of Zernike modes therefore produces an approximation of the original wavefront. Figure 2(c) shows the difference between the two, in which a “ringing” analogous to the Gibbs phenomenon seen when recreating periodic signals using a truncated Fourier series is present [27].

By collecting the row vectors of each position for the $h$-th actuator we can obtain:

$$A_h = \begin{bmatrix} \boldsymbol{z_{1}}\\ \boldsymbol{z_{2}}\\ \vdots\\ \boldsymbol{z_{p}} \end{bmatrix} = \begin{bmatrix} z_{1,1} & z_{1,2} & \cdots & z_{1,M} \\ z_{2,1} & z_{2,2} & \cdots & z_{2,M} \\ \vdots & \vdots & \ddots & \vdots \\ z_{p,1} & z_{p,2} & \cdots & z_{p,M} \end{bmatrix}.$$

The row vector shown in Fig. 2(d) was obtained at the first of the $p$ positions i.e. $\boldsymbol {z_{1}}$. Applying linear regression to each column, $\begin {bmatrix} z_{1,i} & z_{2,i} & \cdots & z_{p,i} \end {bmatrix}^{T}$, yields the response characteristics between the $h$-th actuator’s position and the $g$-th Zernike mode, $b_{g,h}$. Figure 2(e) shows the influence function fitting for Zernike mode 11 (Noll index) for the $26^{\textrm {th}}$ actuator. The gradient of the slope is, in this case, $b_{11,26}$. This gradient is acquired for each actuator, for each Zernike mode. In this way, we construct $\boldsymbol {B}$ and then calculate $\boldsymbol {C}$. Figure 3 shows a flowchart of this process as implemented in Microscope-AOtools.
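The per-column regression step can be sketched with numpy. The simulated measurements below stand in for real wavefront data, and the variable names are illustrative rather than AOtools identifiers:

```python
import numpy as np

# One actuator is stepped through p control-signal values, and M
# Zernike amplitudes are measured at each position (cf. Eq. (7)).
p, M = 5, 10
control_signals = np.linspace(-1.0, 1.0, p)

# Simulated measurements: each mode responds linearly with slope
# b_true[g], plus small noise, mimicking the matrix A_h.
rng = np.random.default_rng(1)
b_true = rng.normal(size=M)
A_h = np.outer(control_signals, b_true) + 1e-3 * rng.normal(size=(p, M))

# Linear regression on each column yields b_{g,h}: the slope relating
# this actuator's control signal to the g-th Zernike amplitude.
# polyfit with a 2D y-array fits every column at once; row 0 of the
# returned coefficients holds the slopes.
b_fit = np.polyfit(control_signals, A_h, deg=1)[0]
```

Repeating this fit for every actuator fills in the columns of $\boldsymbol{B}$, from which the control matrix $\boldsymbol{C}$ is computed.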


Fig. 3. (a) Flowchart depicting the generalised calibration routine implemented in Microscope-AOtools (b) Flowchart depicting the process for calibrating the $h$-th actuator of the deformable mirror, the dashed blue process in (a). The influence functions returned are $b_{g,h}$ described in Eq. (2). This process is performed for each of $N$ actuators and used to obtain $\boldsymbol {C}$ described in Eq. (5).


In general, $\boldsymbol {B}$ is singular, or near-singular, and therefore has no true inverse, so we must use a pseudo-inverse, calculated using singular value decomposition (SVD). Actuators that have little influence on particular Zernike modes will have small values in the $\boldsymbol {B}$ matrix. These small values arise from a combination of two factors: actuators sitting in physical positions where they have limited influence over certain Zernike modes, and noise, which produces small perturbations in the measured Zernike mode amplitudes unrelated to the actuator movement. A control matrix calculated without thresholding out these small values before inversion will quickly lead to saturation of the deformable mirror actuators (i.e. actuators at their maximum stroke length) when corrections are calculated [28]. This occurs because small values in $\boldsymbol {B}$ become large values in $\boldsymbol {C}$, producing large actuator signals, $d_{h}$, even at low Zernike mode amplitudes, as certain actuators try to correct modes over which they have minimal influence. Therefore, the calibration method incorporates a threshold by default and the exact threshold can be varied by experienced users.
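The effect of thresholding can be demonstrated with numpy's `pinv`, whose `rcond` argument discards singular values below a chosen fraction of the largest. The matrix below is synthetic, constructed so that one "mode" has almost no influence:

```python
import numpy as np

# Build a synthetic 6x6 response matrix B with a controlled singular
# spectrum; the last singular value (1e-8) represents a mode the
# element has almost no influence over.
U = np.linalg.qr(np.random.default_rng(2).normal(size=(6, 6)))[0]
V = np.linalg.qr(np.random.default_rng(3).normal(size=(6, 6)))[0]
s = np.array([5.0, 3.0, 1.0, 0.5, 0.1, 1e-8])
B = U @ np.diag(s) @ V.T

# Naive pseudo-inverse: the 1e-8 singular value is kept, so its
# reciprocal (1e8) dominates C and produces huge actuator signals.
C_naive = np.linalg.pinv(B, rcond=1e-15)

# Thresholded pseudo-inverse: singular values below rcond * s_max are
# discarded, keeping the actuator signals bounded.
C_thresh = np.linalg.pinv(B, rcond=1e-3)

a = np.ones(6)           # requested Zernike amplitudes
d_naive = C_naive @ a    # saturating control signals
d_thresh = C_thresh @ a  # well-behaved control signals
```

The thresholded control matrix sacrifices exact correction of the weakly-coupled mode in exchange for actuator signals that stay within the element's stroke range.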

Typically, a calibration routine is designed around a particular wavefront sensing method for one specific adaptive element and requires redesign for any new wavefront sensing technique. Microscope-AOtools does not make an assumption about the wavefront sensing technique used. Instead a raw image from the wavefront sensor is passed to one of a suite of phase acquisition methods and a phase image is returned. Which method in the suite is used is defined at start-up and can be changed by the user at any time.

Although the calibration routine has been described in terms of a deformable mirror and its actuators, in principle it can be used to create a control matrix for an arbitrary adaptive element with $N$ variable components (i.e. degrees of freedom). Microscope-AOtools queries the Python-Microscope device for the number of variable components and uses this to calibrate each actuator/variable element in turn. By constructing the calibration workflow in this generalised manner, Microscope-AOtools can be used with any Python-Microscope compatible adaptive element and any wavefront sensing technique.

3.1.1 Characterisation

Feedback on the quality of the calibration process is essential. Ideally, once the adaptive element is calibrated we have a linear map which allows known quantities of Zernike modes to be applied exactly. In practice this linear map is never exact, for a range of reasons: some parameters, such as the number of steps used to calibrate each actuator and the threshold used in the SVD pseudo-inversion, are chosen empirically; the pseudo-inverse is itself an approximation; and discretisation errors (due to discrete sampling of a continuous function) affect the measurement of Zernike modes. It is therefore necessary to have some measure of how well the adaptive element is able to recreate desired Zernike modes. This process is called characterisation. It involves applying a fixed amplitude of a single Zernike mode to the adaptive element, measuring the Zernike modes present in the wavefront, and comparing them to those applied. An automated implementation of this process is present in Microscope-AOtools, with the results returned to the user for interrogation. Figure 4 shows a flowchart of this method in Microscope-AOtools.
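In outline, a characterisation assay loops over the modes, applies each in turn, and records every measured amplitude. A minimal sketch with a simulated adaptive element follows; the function names are placeholders standing in for hardware calls, not the AOtools API:

```python
import numpy as np

def characterise(apply_modes, measure_modes, n_modes, amplitude=1.0):
    """Apply each Zernike mode in turn and record all measured mode
    amplitudes. Row g of the assay is the response to applying mode g;
    an ideal element would return amplitude * identity matrix."""
    assay = np.zeros((n_modes, n_modes))
    for g in range(n_modes):
        target = np.zeros(n_modes)
        target[g] = amplitude
        apply_modes(target)
        assay[g] = measure_modes()
    return assay

# Simulated element with uniform 5% mode coupling, for illustration.
n = 5
coupling = np.eye(n) + 0.05 * np.ones((n, n))
state = {"modes": np.zeros(n)}

def apply_modes(amps):
    state["modes"] = coupling @ amps

def measure_modes():
    return state["modes"]

assay = characterise(apply_modes, measure_modes, n)
```

Off-diagonal entries of the assay quantify mode coupling, which is how plots like Fig. 5(b) are produced.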


Fig. 4. Flowchart depicting the process for characterising an adaptive element as implemented in Microscope-AOtools.


In an ideal situation, where the control matrix provides a perfect linear map from Zernike mode amplitudes to the adaptive element's degrees of freedom, a characterisation assay like Fig. 5(a) is expected, in which only the applied Zernike mode has a non-zero measured amplitude. In practice, the adaptive element recreates some Zernike modes better than others and some Zernike mode coupling is observed, i.e. modes which were not applied to the adaptive element have measurable amplitude. This leads to characterisation plots like Fig. 5(b), which shows a characterisation assay obtained for an Alpao-69 actuator deformable mirror.


Fig. 5. (a) An ideal characterisation assay, measuring the recreation accuracy of 68 Zernike modes with an applied amplitude of 1 for each. (b) An example characterisation assay obtained from a calibrated Alpao-69 actuator deformable mirror, measuring the recreation accuracy of 68 Zernike modes with an applied amplitude of 1 for each.


From this characterisation assay, various measures of calibration accuracy can be extracted, principally the amplitude of the applied Zernike mode and the amplitudes of the other, coupled, Zernike modes. Clearly not all Zernike modes are recreated equally well and different modes exhibit varying degrees of mode coupling. As previously mentioned, this arises from mathematical approximations, computational errors and the physical characteristics of the adaptive element. Microscope-AOtools provides the tools to assess the accuracy of Zernike mode recreation, which can be used to inform which modes should be included in the aberration correction algorithms.

The characterisation routine relies on the same generalised phase acquisition method used in the calibration workflow. Recall that this is a user-selected method from a suite of phase acquisition methods. The number of Zernike modes assessed defaults to the number of modes measured in the calibration step, but this can be varied by the user. Once again this preserves generalisability and allows the characterisation method to be used on any arbitrary adaptive element, calibrated for any number of modes and utilising any desired wavefront sensing technique.

3.2 System aberration correction

Microscope-AOtools implements a method for correcting the system aberrations via direct wavefront sensing, designed to be used after calibration and characterisation. The workflow is shown in Fig. 6. The wavefront is obtained through whichever direct wavefront sensing method has been implemented and selected, a user-determined number of Zernike modes are fitted to the wavefront, and an equal and opposite amplitude of each of these modes is applied to the adaptive element. The RMS wavefront error is then obtained. This process repeats until $N$ iterations have been performed or the RMS wavefront error falls below a user-defined error threshold, $\delta$.
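The iterative loop just described can be sketched as follows. Both callables are placeholders for the wavefront sensor and adaptive element; the simulated system, in which the element achieves only 80% of each requested correction, is invented to illustrate why several iterations are needed:

```python
import numpy as np

def flatten_wavefront(measure_modes, apply_correction,
                      max_iters=20, rms_threshold=0.05):
    """Repeatedly apply the opposite of the measured Zernike amplitudes
    until the RMS error falls below a threshold or the iteration
    budget runs out. Returns the final RMS wavefront error."""
    for _ in range(max_iters):
        modes = measure_modes()
        rms = float(np.sqrt(np.mean(modes ** 2)))
        if rms < rms_threshold:
            return rms
        apply_correction(-modes)
    return float(np.sqrt(np.mean(measure_modes() ** 2)))

# Simulated system: the element only achieves 80% of each requested
# correction, so the residual shrinks by a factor of 5 per iteration.
state = {"aberration": np.array([2.0, -1.5, 0.8, 0.3])}

def measure_modes():
    return state["aberration"]

def apply_correction(delta):
    state["aberration"] = state["aberration"] + 0.8 * delta

final_rms = flatten_wavefront(measure_modes, apply_correction)
```

With a perfect element a single iteration would suffice; the imperfect mode recreation discussed in Section 3.1.1 is what makes the loop necessary.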


Fig. 6. Flowchart depicting the process for correcting a directly measured wavefront as implemented in Microscope-AOtools.


It is necessary to perform this process for a number of iterations to ensure the optimal wavefront is obtained, due to the limitations of Zernike mode recreation accuracy discussed previously. Figure 7 shows an example of one such wavefront correction, performed using the same Alpao-69 actuator deformable mirror as before. The wavefront was obtained by interferometry, and Zernike modes 5–29 (Noll indices) were corrected over 20 iterations; these modes were selected using the characterisation assay presented in Fig. 5. Figure 7(c) shows the Zernike mode amplitudes before and after correction. Both the Zernike mode amplitudes and the RMS wavefront error show a significant improvement in wavefront quality.


Fig. 7. (a) An example aberrated wavefront; RMS wavefront error = 3.818 radians. (b) The example wavefront after 20 iterations of correction; RMS wavefront error = 0.986 radians (0.712 radians over the central 95% of the phase wavefront). (c) The Zernike modes measured in the aberrated (blue) and corrected (red) wavefronts. (a) and (b) are presented on the same colour scale (in radians at the 543 nm HeNe laser wavelength) and were obtained via interferometry.


For system aberration corrections it may be preferable to set a target wavefront error and continue to iterate until this threshold is reached. However, a user may instead wish to spend only $N$ iterations correcting the wavefront. Microscope-AOtools implements both options to ensure generalisability, and which criterion is used can be set by the user. As with the calibration and characterisation methods, the wavefront flattening routine relies on a user-selected phase acquisition technique from the suite of implemented methods. This ensures that the applicability of Microscope-AOtools to any adaptive element with any wavefront sensing technique is preserved throughout all the set-up methods.

4. Sample correction methods

4.1 Direct wavefront sensing correction

Performing AO correction for biological samples by directly measuring the phase wavefront has been well documented. In many cases the phase acquisition methods used are the same as those a user might implement to calibrate the adaptive element, usually a Shack-Hartmann wavefront sensor [29–31]; occasionally, other methods are used [32]. Fortunately, since the wavefront correction workflow shown in Fig. 6 has been kept generalised to allow any phase wavefront sensing technique to be used, the same workflow can be used for correcting sample-induced aberrations as for system aberrations. Critically, the phase acquisition method, number of iterations, and error threshold do not have to be the same in both processes. This is important since correcting for sample-induced aberrations adds additional limitations. Biological samples can suffer damage when exposed to excessive light (phototoxicity). Repeated activation can also cause chemical alteration to fluorophores, leading to inactivation (photobleaching). Microscope-AOtools is designed so a user can correct for as many iterations as required until a desired wavefront flatness is achieved, or for exactly $N$ iterations. The former is designed for system aberration correction, while the latter is designed for correcting sample-induced aberrations in order to limit phototoxicity and photobleaching.

4.2 Sensorless correction

In many biological applications, direct wavefront sensing is not possible and so we rely on sensorless techniques to determine the best correction to apply. The generalised methodology for this is shown in Fig. 8 for a biological specimen. Some metric $S$, which gives a useful measure of the image quality, is chosen. This metric should be a numerical value and should increase to a global maximum as the aberrations present decrease. Often these metrics are related to common measures of image quality, such as sharpness or contrast. For each Zernike mode, $Z_{i}$, a number of amplitudes of that mode, $a_{j}$, are applied and an image of the sample is obtained. The quality of each image, $S_{j}$, is calculated. Assuming that $S$ is a function of the Zernike mode amplitude applied, fitting a Gaussian function to the $S_{j}$ values yields a Zernike mode amplitude, $a_{max}$, which theoretically yields the best image quality, $S_{max}$. The complexity of sensorless AO correction lies in selecting the most appropriate image quality metric; numerous metrics have been developed and shown to be effective on certain sample types or imaging modalities [10,33–35].
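The fitting step can be sketched with scipy. The metric values below are synthetic, generated with the optimum deliberately placed at amplitude 0.3; in a real system each value would come from evaluating the chosen metric on an acquired image:

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(a, s_max, a_max, width):
    """Model for the image quality metric as a function of the
    applied Zernike mode amplitude."""
    return s_max * np.exp(-((a - a_max) ** 2) / (2 * width ** 2))

# Synthetic metric measurements peaking near amplitude 0.3, standing
# in for real image-quality evaluations at each applied amplitude.
amplitudes = np.linspace(-1.0, 1.0, 9)
rng = np.random.default_rng(4)
metrics = gaussian(amplitudes, 1.0, 0.3, 0.5) + 0.01 * rng.normal(size=9)

# Fit the Gaussian; its centre is the amplitude predicted to maximise
# image quality, i.e. the correction applied for this mode.
popt, _ = curve_fit(gaussian, amplitudes, metrics, p0=[1.0, 0.0, 0.5])
a_max_fit = popt[1]
```

Because the fit interpolates between sampled amplitudes, the estimated optimum can be more precise than the sampling step itself, which keeps the number of images (and hence light exposure) low.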


Fig. 8. Principle of sensorless AO correction. The inset images are of a Drosophila neuro-muscular junction (NMJ). For each Zernike mode, $Z_i$, NMJ images are acquired for different amplitudes of the $i$-th Zernike mode. A value of the image quality metric, $S$, is obtained for each (blue dots). A Gaussian function is then fitted to these values and the amplitude, $a_{max}$, corresponding to the maximum image quality, $S_{max}$, is obtained (green dot). The inset image for the green dot shows the NMJ image acquired after the correction for the $i$-th Zernike mode was applied.


An automated sensorless AO routine is not implemented in Microscope-AOtools, as this would require giving it control over the complete imaging system; Python-Microscope, which Microscope-AOtools uses, already fulfils this role. There are three options for sensorless correction workflows. In the first, Fig. 9(a), an amplitude, $a_{j}$, of the $i$-th Zernike mode is applied, an image of the sample is taken, and the image quality metric is evaluated. This process is repeated for $M$ measurements, and then the Zernike mode amplitude corresponding to the maximum image quality, $a_{max}$, is calculated and applied. This process is repeated for $N$ Zernike modes. The second, Fig. 9(b), is broadly similar, with the exception that the image quality metrics for the $M$ images of the current Zernike mode are computed after they have all been acquired, rather than as each image is acquired. The final workflow option, Fig. 9(c), differs further: rather than applying each Zernike mode correction sequentially, the image quality metrics for all $NM$ images are calculated at the end of the imaging routine and the corrections for all $N$ Zernike modes are applied simultaneously.


Fig. 9. Flowcharts depicting the sensorless correction routine options. (a) An image for each amplitude of the $i$-th Zernike mode is taken and the image quality metric is immediately evaluated. Once all the images for the $i$-th Zernike mode have been taken, the best Zernike amplitude is found as described in Fig. 8 and applied. (b) All $M$ images are taken, then the quality metric is obtained for all $M$ images, and the best Zernike amplitude is found and applied. (c) All the images for all $N$ Zernike modes are obtained with no correction applied in between modes. The image quality metric is then measured for every image and the best amplitude for each Zernike mode is found. The corrections for all modes are applied simultaneously at the end of the workflow.


Similar to the set-up methods, a sensorless AO workflow is typically developed for a specific sample type or imaging modality, and any change to these specifics requires redesigning the entire workflow. The only significant difference between these implementations will be the image quality metric used. Microscope-AOtools makes no assumption about the desired image quality metric. Instead, a raw image is passed to one of a suite of image quality metrics and a metric value is returned. The image quality metric used can be easily changed, allowing the user to select a quality metric optimised for their sample type and imaging modality. Microscope-AOtools also implements the methods necessary for all three of the workflows shown in Fig. 9, allowing a user to select their preferred workflow.

4.3 IsoSense

Anisotropies in the sample structure can bias the corrections towards improving the image quality in a non-uniform manner. A technique has recently been developed to overcome this issue: IsoSense, which relies on producing spatially structured light in order to fill empty sections of the image Fourier spectrum [36]. IsoSense is designed to be used in structured illumination microscopy (SIM) set-ups, since they often incorporate spatial light modulators (SLMs) as high-speed, dynamic diffraction gratings and SIM is particularly sensitive to Fourier-space anisotropies.

Microscope-AOtools incorporates the methods necessary to implement IsoSense. Figure 10 shows both the structured illumination pattern used for IsoSense, which is applied to an SLM, and the location of the beams in Fourier space. The illumination pattern shown in Fig. 10(a) is the inverse Fourier transform of the interference pattern shown in Fig. 10(b). The locations of these beams are: $(0,0)$, $(0,\gamma w)$, $(0,-\gamma w)$, $(\gamma w, 0)$, $(-\gamma w, 0)$, $(\frac {\gamma w}{2}, \frac {\gamma w}{2})$, $(-\frac {\gamma w}{2}, \frac {\gamma w}{2})$, $(\frac {\gamma w}{2}, -\frac {\gamma w}{2})$, $(-\frac {\gamma w}{2}, -\frac {\gamma w}{2})$, where $w$ is the Abbe diffraction limit and $\gamma$ is a user-defined fill fraction. This fill fraction controls the positions of the beams in the interference pattern and hence the region of the Fourier spectrum which will be enhanced over normal illumination. Placing these beams is a task for more advanced users; however, the implementation in Microscope-AOtools has a sensible default, and advanced users can further improve their AO correction by adjusting it if needed.
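The relationship between the Fourier-space beam positions and the real-space illumination pattern can be sketched by marking the beam locations listed above on a Fourier grid and inverse-transforming. The grid size and parameter values here are illustrative choices, not AOtools defaults:

```python
import numpy as np

n = 256            # illustrative grid size
gamma, w = 0.5, 60  # fill fraction and diffraction-limit radius, in pixels

# Mark the nine beam positions (DC plus the eight listed offsets) as
# delta functions in a centred Fourier-space array.
fourier = np.zeros((n, n), dtype=complex)
r = gamma * w
beams = [(0, 0), (0, r), (0, -r), (r, 0), (-r, 0),
         (r / 2, r / 2), (-r / 2, r / 2), (r / 2, -r / 2), (-r / 2, -r / 2)]
centre = n // 2
for fx, fy in beams:
    fourier[centre + int(round(fy)), centre + int(round(fx))] = 1.0

# The real-space interference pattern applied to the SLM is the
# inverse Fourier transform of this beam arrangement.
pattern = np.real(np.fft.ifft2(np.fft.ifftshift(fourier)))
```

Increasing $\gamma$ pushes the beams outwards, shifting the enhanced region of the Fourier spectrum towards higher spatial frequencies.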


Fig. 10. (a) A simulated IsoSense pattern created by 4-beam interference. (b) A diagram of a 4-beam interference pattern in Fourier space.


5. Future expansion

So far we have discussed the specific methods implemented in Microscope-AOtools. The Set-up and Sample Correction methods rely on suites of techniques, for wavefront sensing and image quality metric assessment respectively. These are designed to be easily extensible by users as new techniques are developed. The functions defining the existing wavefront sensing and image quality metric assessment techniques are stored in separate files. These files, aoDev, aoAlg and aoMetrics, form a hierarchy, with aoDev dependent on aoAlg and aoAlg dependent on aoMetrics. The wavefront sensing technique to be used is an attribute of the highest level of the code hierarchy and is used to select a technique from the unwrapping method dictionary. Similarly, the image quality metric to be used is an attribute of a lower level of the code hierarchy and is used to select an assessment technique from the dictionary of image quality metrics. A detailed guide to where the appropriate classes and dictionaries are located, and how to add new wavefront sensing and image quality metric assessment techniques, is included in the README.md file for Microscope-AOtools. Briefly, these suites are composed of functions with a defined set of input and output variables. A user creates a new wavefront sensing or image quality metric assessment function with the inputs and outputs defined in the README.md, adds this function to the correct file, and then adds the function as an option in the correct suite dictionary. Microscope-AOtools also allows individual components (i.e. actuators) of the adaptive element to be configured with the send method. This would allow additional types of indirect optimisation, such as pupil segmentation [37], to be performed using Microscope-AOtools, although this would require implementation of additional image segmentation methods.
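The dictionary-dispatch pattern described above can be sketched as follows. The function and key names are illustrative, not the actual AOtools identifiers:

```python
import numpy as np

def contrast_metric(image):
    """Simple contrast metric: normalised intensity range."""
    return float((image.max() - image.min()) / (image.max() + image.min()))

def sharpness_metric(image):
    """Mean squared gradient as a crude sharpness measure."""
    gy, gx = np.gradient(image.astype(float))
    return float(np.mean(gx ** 2 + gy ** 2))

# Suite dictionary: every metric shares the signature image -> float,
# so higher-level code can dispatch on a name alone.
metric_suite = {"contrast": contrast_metric, "sharpness": sharpness_metric}

# A user extends the suite by writing a function with the same
# signature and registering it under a new key:
def new_metric(image):
    return float(image.std())

metric_suite["std"] = new_metric

# Selection by name, as the higher-level code would do it:
image = np.arange(16.0).reshape(4, 4)
value = metric_suite["std"](image)
```

Because each entry is just a callable with a fixed signature, swapping metric (or, analogously, wavefront sensing technique) is a one-line change, and new techniques never require modifying the correction workflows themselves.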

6. Discussion

Microscope-AOtools has been designed so that a user can take an adaptive element in an arbitrary set-up, calibrate it and use it on any sample type in a range of imaging modalities. Since Microscope-AOtools leverages Python-Microscope, it already supports a number of adaptive elements, mostly deformable mirrors, and this support will expand as hardware support in Python-Microscope expands. Adding new devices to Python-Microscope is relatively simple; refer to Python-Microscope (https://www.python-microscope.org/) for more details. Microscope-AOtools only requires that the adaptive element be a Python-Microscope device with an n_actuators attribute defining the number of variable components of the device.
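A minimal sketch of that requirement is shown below. The class is a stand-in for illustration, not a real Python-Microscope driver; the only property the text specifies is the n_actuators attribute.

```python
import numpy as np

class FakeDeformableMirror:
    """Illustrative stand-in for an adaptive element; the one attribute
    Microscope-AOtools requires is n_actuators."""

    def __init__(self, n_actuators):
        self.n_actuators = n_actuators
        self.values = np.zeros(n_actuators)

    def apply_pattern(self, values):
        values = np.asarray(values, dtype=float)
        if values.shape != (self.n_actuators,):
            raise ValueError("expected one value per actuator")
        # Clip to a normalised [0, 1] range, as is typical for DM drivers.
        self.values = np.clip(values, 0.0, 1.0)

dm = FakeDeformableMirror(n_actuators=69)
dm.apply_pattern(np.full(69, 0.5))
```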

The manufacturers of such adaptive elements often supply their own software implementations. In general, these implementations lack sensorless correction capabilities, although this is not always the case. Microscope-AOtools possesses a number of benefits over these implementations. First, since Microscope-AOtools already exists as an extension of the Python-Microscope hardware control software, it is easily incorporated into larger hardware control solutions reliant on Python-Microscope. Second, any method available for one adaptive element is available for all elements, removing a degree of hardware dependency. Third, Microscope-AOtools offers a greater degree of control and flexibility over the precise methods employed for AO set-up and correction and allows a user to easily switch between methods at will. Finally, because the project is open-source, it is far more responsive to developments in the field of adaptive optics: a new image quality metric, wavefront sensing technique or sensorless correction routine can be incorporated as soon as it is public. Such novel or experimental methods can also be developed and refined within Microscope-AOtools.

The process of setting up an adaptive element requires a wavefront sensor to observe the shape of the phase wavefront and calibrate how the variable components of the adaptive element affect this wavefront. By designing the set-up methods in Microscope-AOtools to accept any method from a suite of wavefront sensing techniques, Microscope-AOtools is both generalised and easily extensible. If the desired wavefront sensing technique is not already incorporated, a user only has to add the function necessary to perform the wavefront sensing step rather than reimplement the set-up methods in their entirety. Microscope-AOtools is further generalised in that it allows the control matrix to be acquired by some external method and then set in Microscope-AOtools with the set_controlMatrix method. This ensures that a user with an existing calibration routine who wishes to access the sensorless AO methods in Microscope-AOtools can do so without having to repeat work they have already performed. It also allows control matrices acquired from routines using different phase acquisition techniques to be compared: characterisation assays can be acquired for each method's control matrix and the accuracy of the Zernike mode recreation compared.
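Conceptually, a control matrix maps desired Zernike mode amplitudes to actuator values, which is what makes an externally acquired matrix interchangeable. The sketch below assumes that role; set_controlMatrix is the method named above, but the surrounding class and the random calibration data are illustrative stand-ins, not the Microscope-AOtools implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_actuators, n_modes = 69, 12

# Control matrix from some external calibration routine
# (a random stand-in here, purely for illustration).
external_control_matrix = rng.standard_normal((n_actuators, n_modes))

class AOSketch:
    """Illustrative holder for a control matrix."""

    def set_controlMatrix(self, control_matrix):
        self.controlMatrix = np.asarray(control_matrix)

    def actuator_values(self, zernike_amplitudes):
        # Actuator pattern producing the requested mode amplitudes.
        return self.controlMatrix @ np.asarray(zernike_amplitudes)

ao = AOSketch()
ao.set_controlMatrix(external_control_matrix)
pattern = ao.actuator_values(np.eye(n_modes)[3])  # a single pure Zernike mode
```

Because the matrix is just an actuators-by-modes array, matrices from different calibration routines can be swapped in and their mode-recreation accuracy compared on equal terms.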

The generalised nature of Microscope-AOtools continues into the Sample Correction methods. By allowing the user to swap between wavefront sensing techniques, Microscope-AOtools already possesses all the methods necessary for performing sample correction using direct wavefront sensing, provided the wavefront sensing technique is included in the suite of methods. Similarly, Microscope-AOtools utilises a suite of image quality metrics suited to different sample types and imaging modalities. A user can select a pre-existing metric well suited to their application; if no appropriate metric currently exists, a new one can easily be implemented and added to the suite of metrics. Once implemented, it can be used in any of the sensorless AO analysis methods outlined in Fig. 8. Furthermore, Microscope-AOtools allows Zernike mode amplitudes to be set directly with the set_phase method. This means that if a user has an offline analysis technique, such as a machine learning approach, Microscope-AOtools can be used to calibrate the deformable mirror, the sample-induced aberration calculations can be performed offline and the appropriate correction then applied through Microscope-AOtools.
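The offline workflow can be sketched as follows. An external routine (for example a machine-learning model) estimates the aberration as Zernike mode amplitudes, and the correction, the negative of that estimate, is applied through set_phase. Only the set_phase name comes from the text; the class, the control matrix and the estimator are illustrative stand-ins.

```python
import numpy as np

class AOSketch:
    """Illustrative stand-in: converts Zernike amplitudes to actuator values."""

    def __init__(self, control_matrix):
        self.controlMatrix = np.asarray(control_matrix)

    def set_phase(self, z_mode_amplitudes):
        # Actuator values realising the requested Zernike amplitudes.
        return self.controlMatrix @ np.asarray(z_mode_amplitudes)

def offline_estimate(image):
    # Placeholder for the user's offline/ML aberration estimate (hypothetical):
    # amplitudes for four Zernike modes.
    return np.array([0.0, 0.0, 0.3, -0.1])

ao = AOSketch(np.ones((5, 4)))        # 5 actuators, 4 modes, toy calibration
estimated = offline_estimate(None)
actuators = ao.set_phase(-estimated)  # apply the opposite of the aberration
```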

Microscope-AOtools is free and open-source. It is intended to be a resource for the microscopy community at large and is designed to minimise the time and effort spent replicating work other AO users have already done. As Microscope-AOtools acquires a larger base of users, some adding their own wavefront sensing techniques or image quality metrics to expand the existing suites, future and existing users will have a wider array of options. This will accelerate the adoption of novel techniques by the microscopy community and lower the barrier to entry for setting up an AO system.

Beyond the open-ended task of expanding the existing suites of phase acquisition techniques and image quality metrics, there are a number of future developments that could be made to Microscope-AOtools. No universal image quality metric currently exists, although strides have been made in that direction [38]. Image quality metrics attempt to assign a numerical value to how ‘good’ an image is, but what makes a ‘good’ image varies between imaging modalities, sample types and even users. Most metrics pick some aspect of the image deemed to be significant (e.g. contrast, sharpness, maximum intensity) and optimise it. Since Microscope-AOtools has access to multiple image quality metrics, one development would be a sensorless AO routine which measures several metrics simultaneously, assigns a weight to each measurement and maximises image quality against these combined criteria.
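The weighted multi-metric idea can be sketched in a few lines. The metrics and weights below are illustrative assumptions, not an implemented Microscope-AOtools routine; a real implementation would also need to normalise metrics onto comparable scales.

```python
import numpy as np

def contrast(image):
    # Michelson-style contrast; small epsilon guards against division by zero.
    return float((image.max() - image.min()) / (image.max() + image.min() + 1e-12))

def sharpness(image):
    # Mean squared gradient magnitude as a simple sharpness proxy.
    gy, gx = np.gradient(image.astype(float))
    return float(np.mean(gy ** 2 + gx ** 2))

def combined_quality(image, weights):
    # Weighted sum of individual metric scores; weights are user-chosen.
    metrics = {"contrast": contrast, "sharpness": sharpness}
    return sum(w * metrics[name](image) for name, w in weights.items())

flat = np.full((16, 16), 0.5)   # featureless image
crisp = np.zeros((16, 16))
crisp[4:12, 4:12] = 1.0         # image with strong structure
w = {"contrast": 0.5, "sharpness": 0.5}
```

With these weights, `combined_quality(crisp, w)` exceeds `combined_quality(flat, w)`, so a sensorless routine maximising the combined score would favour the structured image on both criteria at once.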

7. Conclusion

For some time, there has been a call for a robust, generalised implementation of AO in microscopy. Such an implementation should incorporate all the methods needed to set up and operate an AO element for a range of imaging modalities and sample types. Microscope-AOtools includes methods for calibration, direct wavefront correction and sensorless correction. In particular, it already incorporates several image quality metrics suited to sensorless correction in a number of different imaging modalities. It also includes a characterisation method for assessing the accuracy of the calibration step. It has been designed in a modular manner allowing new wavefront sensing techniques and image quality metrics to be added with minimal disruption to the rest of the workflows and, therefore, minimal work duplication. With time and community support, such an implementation has scope to go beyond its current state of “generalised software implementation” and become a universal software implementation for AO in microscopy.

Funding

Medical Research Council (EP/L016052/1, MR/K01577X/1); Engineering and Physical Sciences Research Council (EP/L016052/1); European Research Council (AdOMIS 695140); Wellcome Trust (107457, 096144, 209412).

Acknowledgements

The authors would like to thank Mick Phillips, Mantas Žurauskas, Jacopo Antonello and David Pinto for their helpful comments and suggestions during the development of Microscope-AOtools.

Disclosures

No competing interests were disclosed.

References

1. E. Betzig, G. H. Patterson, R. Sougrat, O. W. Lindwasser, S. Olenych, J. S. Bonifacino, M. W. Davidson, J. Lippincott-Schwartz, and H. F. Hess, “Imaging intracellular fluorescent proteins at nanometer resolution,” Science 313(5793), 1642–1645 (2006). [CrossRef]  

2. S. W. Hell and J. Wichmann, “Breaking the diffraction resolution limit by stimulated emission: stimulated-emission-depletion fluorescence microscopy,” Opt. Lett. 19(11), 780–782 (1994). [CrossRef]  

3. M. G. Gustafsson, L. Shao, P. M. Carlton, C. R. Wang, I. N. Golubovskaya, W. Z. Cande, D. A. Agard, and J. W. Sedat, “Three-dimensional resolution doubling in wide-field fluorescence microscopy by structured illumination,” Biophys. J. 94(12), 4957–4970 (2008). [CrossRef]  

4. M. Schwertner, M. J. Booth, and T. Wilson, “Characterizing specimen induced aberrations for high na adaptive optical microscopy,” Opt. Express 12(26), 6540–6552 (2004). [CrossRef]  

5. M. Schwertner, M. J. Booth, M. A. Neil, and T. Wilson, “Measurement of specimen-induced aberrations of biological samples using phase stepping interferometry,” J. Microsc. 213(1), 11–19 (2004). [CrossRef]  

6. M. J. Booth, “Adaptive optics in microscopy,” Philos. Trans. R. Soc., A 365(1861), 2829–2843 (2007). [CrossRef]  

7. J. C. Wyant and K. Creath, “Basic wavefront aberration theory for optical metrology,” Applied optics and optical engineering 11, 28–39 (1992).

8. M. J. Booth, “Adaptive optical microscopy: the ongoing quest for a perfect image,” Light: Sci. Appl. 3(4), e165 (2014). [CrossRef]  

9. J. M. Girkin, S. Poland, and A. J. Wright, “Adaptive optics for deeper imaging of biological samples,” Curr. Opin. Biotechnol. 20(1), 106–110 (2009). [CrossRef]  

10. D. Burke, B. Patton, F. Huang, J. Bewersdorf, and M. J. Booth, “Adaptive optics correction of specimen-induced aberrations in single-molecule switching microscopy,” Optica 2(2), 177–185 (2015). [CrossRef]  

11. D. E. Milkie, E. Betzig, and N. Ji, “Pupil-segmentation-based adaptive optical microscopy with full-pupil illumination,” Opt. Lett. 36(21), 4206–4208 (2011). [CrossRef]  

12. K. Wang, D. E. Milkie, A. Saxena, P. Engerer, T. Misgeld, M. E. Bronner, J. Mumm, and E. Betzig, “Rapid adaptive optical recovery of optimal resolution over large volumes,” Nat. Methods 11(6), 625–628 (2014). [CrossRef]  

13. P. Kner, Z. Kam, D. Agard, and J. Sedat, “Adaptive optics in wide-field microscopy,” MEMS Adaptive Optics V, vol. 7931 (International Society for Optics and Photonics, 2011), p. 79310K.

14. D. Wilding, P. Pozzi, O. Soloviev, G. Vdovin, and M. Verhaegen, “Adaptive illumination based on direct wavefront sensing in a light-sheet fluorescence microscope,” Opt. Express 24(22), 24896–24906 (2016). [CrossRef]  

15. C. Rodríguez and N. Ji, “Adaptive optical microscopy for neurobiology,” Curr. Opin. Neurobiol. 50, 83–91 (2018). [CrossRef]  

16. N. Ji, “Adaptive optical fluorescence microscopy,” Nat. Methods 14(4), 374–380 (2017). [CrossRef]  

17. M. J. Booth, A basic introduction to adaptive optics for microscopy (University of Oxford, 2019).

18. I. Dobbie, N. Hall, and D. Pinto, “Beamdelta: simple alignment tool for optical systems,” Wellcome Open Res. 4, 194 (2019). [CrossRef]  

19. S. v. d. Walt, S. C. Colbert, and G. Varoquaux, “The numpy array: a structure for efficient numerical computation,” Comput. Sci. Eng. 13(2), 22–30 (2011). [CrossRef]  

20. P. Virtanen, R. Gommers, T. E. Oliphant, M. Haberland, T. Reddy, D. Cournapeau, E. Burovski, P. Peterson, W. Weckesser, and J. Bright, “Scipy 1.0: fundamental algorithms for scientific computing in python,” Nat. Methods 17(3), 261–272 (2020). [CrossRef]  

21. S. Van der Walt, J. L. Schönberger, J. Nunez-Iglesias, F. Boulogne, J. D. Warner, N. Yager, E. Gouillart, and T. Yu, “scikit-image: image processing in python,” PeerJ 2, e453 (2014). [CrossRef]  

22. A. Meurer, C. P. Smith, M. Paprocki, O. Čertík, S. B. Kirpichev, M. Rocklin, A. Kumar, S. Ivanov, J. K. Moore, and S. Singh, “Sympy: symbolic computing in python,” PeerJ Comput. Sci. 3, e103 (2017). [CrossRef]  

23. M. Townson, O. Farley, G. O. de Xivry, J. Osborn, and A. Reeves, “Aotools: a python package for adaptive optics modelling and analysis,” Opt. Express 27(22), 31316–31329 (2019). [CrossRef]  

24. L. Zhu, P.-C. Sun, D.-U. Bartsch, W. R. Freeman, and Y. Fainman, “Adaptive control of a micromachined continuous-membrane deformable mirror for aberration compensation,” Appl. Opt. 38(1), 168–176 (1999). [CrossRef]  

25. F. Zernike, “Diffraction theory of the cutting process and its improved form, the phase contrast method,” Physica 1, 689–704 (1934).

26. R. J. Noll, “Zernike polynomials and atmospheric turbulence,” J. Opt. Soc. Am. 66(3), 207–211 (1976). [CrossRef]  

27. E. W. Weisstein, “Gibbs phenomenon,” https://mathworld.wolfram.com/ (2003).

28. M. Booth, T. Wilson, H.-B. Sun, T. Ota, and S. Kawata, “Methods for the characterization of deformable membrane mirrors,” Appl. Opt. 44(24), 5131–5139 (2005). [CrossRef]  

29. O. Azucena, J. Crest, S. Kotadia, W. Sullivan, X. Tao, M. Reinig, D. Gavel, S. Olivier, and J. Kubby, “Adaptive optics wide-field microscopy using direct wavefront sensing,” Opt. Lett. 36(6), 825–827 (2011). [CrossRef]  

30. X. Tao, B. Fernandez, O. Azucena, M. Fu, D. Garcia, Y. Zuo, D. C. Chen, and J. Kubby, “Adaptive optics confocal microscopy using direct wavefront sensing,” Opt. Lett. 36(7), 1062–1064 (2011). [CrossRef]  

31. X. Tao, J. Crest, S. Kotadia, O. Azucena, D. C. Chen, W. Sullivan, and J. Kubby, “Live imaging using adaptive optics with fluorescent protein guide-stars,” Opt. Express 20(14), 15969–15982 (2012). [CrossRef]  

32. S. A. Rahman and M. J. Booth, “Direct wavefront sensing in adaptive optical microscopy using backscattered light,” Appl. Opt. 52(22), 5523–5532 (2013). [CrossRef]  

33. M. J. Booth, M. A. Neil, R. Juškaitis, and T. Wilson, “Adaptive aberration correction in a confocal microscope,” Proc. Natl. Acad. Sci. 99(9), 5788–5792 (2002). [CrossRef]  

34. J. Fienup and J. Miller, “Aberration correction by maximizing generalized sharpness metrics,” J. Opt. Soc. Am. 20(4), 609–620 (2003). [CrossRef]  

35. D. Débarre, E. J. Botcherby, M. J. Booth, and T. Wilson, “Adaptive optics for structured illumination microscopy,” Opt. Express 16(13), 9290–9305 (2008). [CrossRef]  

36. M. Žurauskas, I. M. Dobbie, R. M. Parton, M. A. Phillips, A. Göhler, I. Davis, and M. J. Booth, “Isosense: frequency enhanced sensorless adaptive optics through structured illumination,” Optica 6(3), 370–379 (2019). [CrossRef]  

37. N. Ji, D. E. Milkie, and E. Betzig, “Adaptive optics via pupil segmentation for high-resolution imaging in biological tissues,” Nat. Methods 7(2), 141–147 (2010). [CrossRef]  

38. J. Antonello, A. Barbotin, E. Z. Chong, J. Rittscher, and M. J. Booth, “Multi-scale sensorless adaptive optics: application to stimulated emission depletion microscopy,” Opt. Express 28(11), 16749–16763 (2020). [CrossRef]  

[Crossref]

Huang, F.

Ivanov, S.

A. Meurer, C. P. Smith, M. Paprocki, O. Čertík, S. B. Kirpichev, M. Rocklin, A. Kumar, S. Ivanov, J. K. Moore, and S. Singh, “Sympy: symbolic computing in python,” PeerJ Comput. Sci. 3, e103e103 (2017).
[Crossref]

Ji, N.

C. Rodríguez and N. Ji, “Adaptive optical microscopy for neurobiology,” Curr. Opin. Neurobiol. 50, 83–91 (2018).
[Crossref]

N. Ji, “Adaptive optical fluorescence microscopy,” Nat. Methods 14(4), 374–380 (2017).
[Crossref]

D. E. Milkie, E. Betzig, and N. Ji, “Pupil-segmentation-based adaptive optical microscopy with full-pupil illumination,” Opt. Lett. 36(21), 4206–4208 (2011).
[Crossref]

N. Ji, D. E. Milkie, and E. Betzig, “Adaptive optics via pupil segmentation for high-resolution imaging in biological tissues,” Nat. Methods 7(2), 141–147 (2010).
[Crossref]

Juškaitis, R.

M. J. Booth, M. A. Neil, R. Juškaitis, and T. Wilson, “Adaptive aberration correction in a confocal microscope,” Proc. Natl. Acad. Sci. 99(9), 5788–5792 (2002).
[Crossref]

Kam, Z.

P. Kner, Z. Kam, D. Agard, and J. Sedat, “Adaptive optics in wide-field microscopy,” MEMS Adaptive Optics V, vol. 7931 (International Society for Optics and Photonics, 2011), p. 79310K.

Kawata, S.

Kirpichev, S. B.

A. Meurer, C. P. Smith, M. Paprocki, O. Čertík, S. B. Kirpichev, M. Rocklin, A. Kumar, S. Ivanov, J. K. Moore, and S. Singh, “Sympy: symbolic computing in python,” PeerJ Comput. Sci. 3, e103e103 (2017).
[Crossref]

Kner, P.

P. Kner, Z. Kam, D. Agard, and J. Sedat, “Adaptive optics in wide-field microscopy,” MEMS Adaptive Optics V, vol. 7931 (International Society for Optics and Photonics, 2011), p. 79310K.

Kotadia, S.

Kubby, J.

Kumar, A.

A. Meurer, C. P. Smith, M. Paprocki, O. Čertík, S. B. Kirpichev, M. Rocklin, A. Kumar, S. Ivanov, J. K. Moore, and S. Singh, “Sympy: symbolic computing in python,” PeerJ Comput. Sci. 3, e103e103 (2017).
[Crossref]

Lindwasser, O. W.

E. Betzig, G. H. Patterson, R. Sougrat, O. W. Lindwasser, S. Olenych, J. S. Bonifacino, M. W. Davidson, J. Lippincott-Schwartz, and H. F. Hess, “Imaging intracellular fluorescent proteins at nanometer resolution,” Science 313(5793), 1642–1645 (2006).
[Crossref]

Lippincott-Schwartz, J.

E. Betzig, G. H. Patterson, R. Sougrat, O. W. Lindwasser, S. Olenych, J. S. Bonifacino, M. W. Davidson, J. Lippincott-Schwartz, and H. F. Hess, “Imaging intracellular fluorescent proteins at nanometer resolution,” Science 313(5793), 1642–1645 (2006).
[Crossref]

Meurer, A.

A. Meurer, C. P. Smith, M. Paprocki, O. Čertík, S. B. Kirpichev, M. Rocklin, A. Kumar, S. Ivanov, J. K. Moore, and S. Singh, “Sympy: symbolic computing in python,” PeerJ Comput. Sci. 3, e103e103 (2017).
[Crossref]

Milkie, D. E.

K. Wang, D. E. Milkie, A. Saxena, P. Engerer, T. Misgeld, M. E. Bronner, J. Mumm, and E. Betzig, “Rapid adaptive optical recovery of optimal resolution over large volumes,” Nat. Methods 11(6), 625–628 (2014).
[Crossref]

D. E. Milkie, E. Betzig, and N. Ji, “Pupil-segmentation-based adaptive optical microscopy with full-pupil illumination,” Opt. Lett. 36(21), 4206–4208 (2011).
[Crossref]

N. Ji, D. E. Milkie, and E. Betzig, “Adaptive optics via pupil segmentation for high-resolution imaging in biological tissues,” Nat. Methods 7(2), 141–147 (2010).
[Crossref]

Miller, J.

J. Fienup and J. Miller, “Aberration correction by maximizing generalized sharpness metrics,” J. Opt. Soc. Am. 20(4), 609–620 (2003).
[Crossref]

Misgeld, T.

K. Wang, D. E. Milkie, A. Saxena, P. Engerer, T. Misgeld, M. E. Bronner, J. Mumm, and E. Betzig, “Rapid adaptive optical recovery of optimal resolution over large volumes,” Nat. Methods 11(6), 625–628 (2014).
[Crossref]

Moore, J. K.

A. Meurer, C. P. Smith, M. Paprocki, O. Čertík, S. B. Kirpichev, M. Rocklin, A. Kumar, S. Ivanov, J. K. Moore, and S. Singh, “Sympy: symbolic computing in python,” PeerJ Comput. Sci. 3, e103e103 (2017).
[Crossref]

Mumm, J.

K. Wang, D. E. Milkie, A. Saxena, P. Engerer, T. Misgeld, M. E. Bronner, J. Mumm, and E. Betzig, “Rapid adaptive optical recovery of optimal resolution over large volumes,” Nat. Methods 11(6), 625–628 (2014).
[Crossref]

Neil, M. A.

M. Schwertner, M. J. Booth, M. A. Neil, and T. Wilson, “Measurement of specimen-induced aberrations of biological samples using phase stepping interferometry,” J. Microsc. 213(1), 11–19 (2004).
[Crossref]

M. J. Booth, M. A. Neil, R. Juškaitis, and T. Wilson, “Adaptive aberration correction in a confocal microscope,” Proc. Natl. Acad. Sci. 99(9), 5788–5792 (2002).
[Crossref]

Noll, R. J.

Nunez-Iglesias, J.

S. Van der Walt, J. L. Schönberger, J. Nunez-Iglesias, F. Boulogne, J. D. Warner, N. Yager, E. Gouillart, and T. Yu, “scikit-image: image processing in python,” PeerJ 2, e453 (2014).
[Crossref]

Olenych, S.

E. Betzig, G. H. Patterson, R. Sougrat, O. W. Lindwasser, S. Olenych, J. S. Bonifacino, M. W. Davidson, J. Lippincott-Schwartz, and H. F. Hess, “Imaging intracellular fluorescent proteins at nanometer resolution,” Science 313(5793), 1642–1645 (2006).
[Crossref]

Oliphant, T. E.

P. Virtanen, R. Gommers, T. E. Oliphant, M. Haberland, T. Reddy, D. Cournapeau, E. Burovski, P. Peterson, W. Weckesser, and J. Bright, “Scipy 1.0: fundamental algorithms for scientific computing in python,” Nat. Methods 17(3), 261–272 (2020).
[Crossref]

Olivier, S.

Osborn, J.

Ota, T.

Paprocki, M.

A. Meurer, C. P. Smith, M. Paprocki, O. Čertík, S. B. Kirpichev, M. Rocklin, A. Kumar, S. Ivanov, J. K. Moore, and S. Singh, “Sympy: symbolic computing in python,” PeerJ Comput. Sci. 3, e103e103 (2017).
[Crossref]

Parton, R. M.

Patterson, G. H.

E. Betzig, G. H. Patterson, R. Sougrat, O. W. Lindwasser, S. Olenych, J. S. Bonifacino, M. W. Davidson, J. Lippincott-Schwartz, and H. F. Hess, “Imaging intracellular fluorescent proteins at nanometer resolution,” Science 313(5793), 1642–1645 (2006).
[Crossref]

Patton, B.

Peterson, P.

P. Virtanen, R. Gommers, T. E. Oliphant, M. Haberland, T. Reddy, D. Cournapeau, E. Burovski, P. Peterson, W. Weckesser, and J. Bright, “Scipy 1.0: fundamental algorithms for scientific computing in python,” Nat. Methods 17(3), 261–272 (2020).
[Crossref]

Phillips, M. A.

Pinto, D.

I. Dobbie, N. Hall, and D. Pinto, “Beamdelta: simple alignment tool for optical systems,” Wellcome Open Res. 4, 194 (2019).
[Crossref]

Poland, S.

J. M. Girkin, S. Poland, and A. J. Wright, “Adaptive optics for deeper imaging of biological samples,” Curr. Opin. Biotechnol. 20(1), 106–110 (2009).
[Crossref]

Pozzi, P.

Rahman, S. A.

Reddy, T.

P. Virtanen, R. Gommers, T. E. Oliphant, M. Haberland, T. Reddy, D. Cournapeau, E. Burovski, P. Peterson, W. Weckesser, and J. Bright, “Scipy 1.0: fundamental algorithms for scientific computing in python,” Nat. Methods 17(3), 261–272 (2020).
[Crossref]

Reeves, A.

Reinig, M.

Rittscher, J.

Rocklin, M.

A. Meurer, C. P. Smith, M. Paprocki, O. Čertík, S. B. Kirpichev, M. Rocklin, A. Kumar, S. Ivanov, J. K. Moore, and S. Singh, “Sympy: symbolic computing in python,” PeerJ Comput. Sci. 3, e103e103 (2017).
[Crossref]

Rodríguez, C.

C. Rodríguez and N. Ji, “Adaptive optical microscopy for neurobiology,” Curr. Opin. Neurobiol. 50, 83–91 (2018).
[Crossref]

Saxena, A.

K. Wang, D. E. Milkie, A. Saxena, P. Engerer, T. Misgeld, M. E. Bronner, J. Mumm, and E. Betzig, “Rapid adaptive optical recovery of optimal resolution over large volumes,” Nat. Methods 11(6), 625–628 (2014).
[Crossref]

Schönberger, J. L.

S. Van der Walt, J. L. Schönberger, J. Nunez-Iglesias, F. Boulogne, J. D. Warner, N. Yager, E. Gouillart, and T. Yu, “scikit-image: image processing in python,” PeerJ 2, e453 (2014).
[Crossref]

Schwertner, M.

M. Schwertner, M. J. Booth, M. A. Neil, and T. Wilson, “Measurement of specimen-induced aberrations of biological samples using phase stepping interferometry,” J. Microsc. 213(1), 11–19 (2004).
[Crossref]

M. Schwertner, M. J. Booth, and T. Wilson, “Characterizing specimen induced aberrations for high na adaptive optical microscopy,” Opt. Express 12(26), 6540–6552 (2004).
[Crossref]

Sedat, J.

P. Kner, Z. Kam, D. Agard, and J. Sedat, “Adaptive optics in wide-field microscopy,” MEMS Adaptive Optics V, vol. 7931 (International Society for Optics and Photonics, 2011), p. 79310K.

Sedat, J. W.

M. G. Gustafsson, L. Shao, P. M. Carlton, C. R. Wang, I. N. Golubovskaya, W. Z. Cande, D. A. Agard, and J. W. Sedat, “Three-dimensional resolution doubling in wide-field fluorescence microscopy by structured illumination,” Biophys. J. 94(12), 4957–4970 (2008).
[Crossref]

Shao, L.

M. G. Gustafsson, L. Shao, P. M. Carlton, C. R. Wang, I. N. Golubovskaya, W. Z. Cande, D. A. Agard, and J. W. Sedat, “Three-dimensional resolution doubling in wide-field fluorescence microscopy by structured illumination,” Biophys. J. 94(12), 4957–4970 (2008).
[Crossref]

Singh, S.

A. Meurer, C. P. Smith, M. Paprocki, O. Čertík, S. B. Kirpichev, M. Rocklin, A. Kumar, S. Ivanov, J. K. Moore, and S. Singh, “Sympy: symbolic computing in python,” PeerJ Comput. Sci. 3, e103e103 (2017).
[Crossref]

Smith, C. P.

A. Meurer, C. P. Smith, M. Paprocki, O. Čertík, S. B. Kirpichev, M. Rocklin, A. Kumar, S. Ivanov, J. K. Moore, and S. Singh, “Sympy: symbolic computing in python,” PeerJ Comput. Sci. 3, e103e103 (2017).
[Crossref]

Soloviev, O.

Sougrat, R.

E. Betzig, G. H. Patterson, R. Sougrat, O. W. Lindwasser, S. Olenych, J. S. Bonifacino, M. W. Davidson, J. Lippincott-Schwartz, and H. F. Hess, “Imaging intracellular fluorescent proteins at nanometer resolution,” Science 313(5793), 1642–1645 (2006).
[Crossref]

Sullivan, W.

Sun, H.-B.

Sun, P.-C.

Tao, X.

Townson, M.

v. d. Walt, S.

S. v. d. Walt, S. C. Colbert, and G. Varoquaux, “The numpy array: a structure for efficient numerical computation,” Comput. Sci. Eng. 13(2), 22–30 (2011).
[Crossref]

Van der Walt, S.

S. Van der Walt, J. L. Schönberger, J. Nunez-Iglesias, F. Boulogne, J. D. Warner, N. Yager, E. Gouillart, and T. Yu, “scikit-image: image processing in python,” PeerJ 2, e453 (2014).
[Crossref]

Varoquaux, G.

S. v. d. Walt, S. C. Colbert, and G. Varoquaux, “The numpy array: a structure for efficient numerical computation,” Comput. Sci. Eng. 13(2), 22–30 (2011).
[Crossref]

Vdovin, G.

Verhaegen, M.

Virtanen, P.

P. Virtanen, R. Gommers, T. E. Oliphant, M. Haberland, T. Reddy, D. Cournapeau, E. Burovski, P. Peterson, W. Weckesser, and J. Bright, “Scipy 1.0: fundamental algorithms for scientific computing in python,” Nat. Methods 17(3), 261–272 (2020).
[Crossref]

Wang, C. R.

M. G. Gustafsson, L. Shao, P. M. Carlton, C. R. Wang, I. N. Golubovskaya, W. Z. Cande, D. A. Agard, and J. W. Sedat, “Three-dimensional resolution doubling in wide-field fluorescence microscopy by structured illumination,” Biophys. J. 94(12), 4957–4970 (2008).
[Crossref]

Wang, K.

K. Wang, D. E. Milkie, A. Saxena, P. Engerer, T. Misgeld, M. E. Bronner, J. Mumm, and E. Betzig, “Rapid adaptive optical recovery of optimal resolution over large volumes,” Nat. Methods 11(6), 625–628 (2014).
[Crossref]

Warner, J. D.

S. Van der Walt, J. L. Schönberger, J. Nunez-Iglesias, F. Boulogne, J. D. Warner, N. Yager, E. Gouillart, and T. Yu, “scikit-image: image processing in python,” PeerJ 2, e453 (2014).
[Crossref]

Weckesser, W.

P. Virtanen, R. Gommers, T. E. Oliphant, M. Haberland, T. Reddy, D. Cournapeau, E. Burovski, P. Peterson, W. Weckesser, and J. Bright, “Scipy 1.0: fundamental algorithms for scientific computing in python,” Nat. Methods 17(3), 261–272 (2020).
[Crossref]

Weisstein, E. W.

E. W. Weisstein, “Gibbs phenomenon,” https://mathworld.wolfram.com/ (2003).

Wichmann, J.

Wilding, D.

Wilson, T.

D. Débarre, E. J. Botcherby, M. J. Booth, and T. Wilson, “Adaptive optics for structured illumination microscopy,” Opt. Express 16(13), 9290–9305 (2008).
[Crossref]

M. Booth, T. Wilson, H.-B. Sun, T. Ota, and S. Kawata, “Methods for the characterization of deformable membrane mirrors,” Appl. Opt. 44(24), 5131–5139 (2005).
[Crossref]

M. Schwertner, M. J. Booth, and T. Wilson, “Characterizing specimen induced aberrations for high na adaptive optical microscopy,” Opt. Express 12(26), 6540–6552 (2004).
[Crossref]

M. Schwertner, M. J. Booth, M. A. Neil, and T. Wilson, “Measurement of specimen-induced aberrations of biological samples using phase stepping interferometry,” J. Microsc. 213(1), 11–19 (2004).
[Crossref]

M. J. Booth, M. A. Neil, R. Juškaitis, and T. Wilson, “Adaptive aberration correction in a confocal microscope,” Proc. Natl. Acad. Sci. 99(9), 5788–5792 (2002).
[Crossref]

Wright, A. J.

J. M. Girkin, S. Poland, and A. J. Wright, “Adaptive optics for deeper imaging of biological samples,” Curr. Opin. Biotechnol. 20(1), 106–110 (2009).
[Crossref]

Wyant, J. C.

J. C. Wyant and K. Creath, “Basic wavefront aberration theory for optical metrology,” Applied optics and optical engineering 11, 28–39 (1992).

Yager, N.

S. Van der Walt, J. L. Schönberger, J. Nunez-Iglesias, F. Boulogne, J. D. Warner, N. Yager, E. Gouillart, and T. Yu, “scikit-image: image processing in python,” PeerJ 2, e453 (2014).
[Crossref]

Yu, T.

S. Van der Walt, J. L. Schönberger, J. Nunez-Iglesias, F. Boulogne, J. D. Warner, N. Yager, E. Gouillart, and T. Yu, “scikit-image: image processing in python,” PeerJ 2, e453 (2014).
[Crossref]

Zernike, F.

F. Zernike, “Diffraction theory of the cutting process and its improved form, the phase contrast method,” Physica 1, 689–704 (1934).

Zhu, L.

Zuo, Y.

Žurauskas, M.

Appl. Opt. (3)

Applied optics and optical engineering (1)

J. C. Wyant and K. Creath, “Basic wavefront aberration theory for optical metrology,” Applied optics and optical engineering 11, 28–39 (1992).

Biophys. J. (1)

M. G. Gustafsson, L. Shao, P. M. Carlton, C. R. Wang, I. N. Golubovskaya, W. Z. Cande, D. A. Agard, and J. W. Sedat, “Three-dimensional resolution doubling in wide-field fluorescence microscopy by structured illumination,” Biophys. J. 94(12), 4957–4970 (2008).
[Crossref]

Comput. Sci. Eng. (1)

S. v. d. Walt, S. C. Colbert, and G. Varoquaux, “The numpy array: a structure for efficient numerical computation,” Comput. Sci. Eng. 13(2), 22–30 (2011).
[Crossref]

Curr. Opin. Biotechnol. (1)

J. M. Girkin, S. Poland, and A. J. Wright, “Adaptive optics for deeper imaging of biological samples,” Curr. Opin. Biotechnol. 20(1), 106–110 (2009).
[Crossref]

Curr. Opin. Neurobiol. (1)

C. Rodríguez and N. Ji, “Adaptive optical microscopy for neurobiology,” Curr. Opin. Neurobiol. 50, 83–91 (2018).
[Crossref]

J. Microsc. (1)

M. Schwertner, M. J. Booth, M. A. Neil, and T. Wilson, “Measurement of specimen-induced aberrations of biological samples using phase stepping interferometry,” J. Microsc. 213(1), 11–19 (2004).
[Crossref]

J. Opt. Soc. Am. (2)

J. Fienup and J. Miller, “Aberration correction by maximizing generalized sharpness metrics,” J. Opt. Soc. Am. 20(4), 609–620 (2003).
[Crossref]

R. J. Noll, “Zernike polynomials and atmospheric turbulence,” J. Opt. Soc. Am. 66(3), 207–211 (1976).
[Crossref]

Light: Sci. Appl. (1)

M. J. Booth, “Adaptive optical microscopy: the ongoing quest for a perfect image,” Light: Sci. Appl. 3(4), e165 (2014).
[Crossref]

Nat. Methods (4)

N. Ji, “Adaptive optical fluorescence microscopy,” Nat. Methods 14(4), 374–380 (2017).
[Crossref]

K. Wang, D. E. Milkie, A. Saxena, P. Engerer, T. Misgeld, M. E. Bronner, J. Mumm, and E. Betzig, “Rapid adaptive optical recovery of optimal resolution over large volumes,” Nat. Methods 11(6), 625–628 (2014).
[Crossref]

P. Virtanen, R. Gommers, T. E. Oliphant, M. Haberland, T. Reddy, D. Cournapeau, E. Burovski, P. Peterson, W. Weckesser, and J. Bright, “Scipy 1.0: fundamental algorithms for scientific computing in python,” Nat. Methods 17(3), 261–272 (2020).
[Crossref]

N. Ji, D. E. Milkie, and E. Betzig, “Adaptive optics via pupil segmentation for high-resolution imaging in biological tissues,” Nat. Methods 7(2), 141–147 (2010).
[Crossref]

Opt. Express (6)

Opt. Lett. (4)

Optica (2)

PeerJ (1)

S. Van der Walt, J. L. Schönberger, J. Nunez-Iglesias, F. Boulogne, J. D. Warner, N. Yager, E. Gouillart, and T. Yu, “scikit-image: image processing in python,” PeerJ 2, e453 (2014).
[Crossref]

PeerJ Comput. Sci. (1)

A. Meurer, C. P. Smith, M. Paprocki, O. Čertík, S. B. Kirpichev, M. Rocklin, A. Kumar, S. Ivanov, J. K. Moore, and S. Singh, “Sympy: symbolic computing in python,” PeerJ Comput. Sci. 3, e103e103 (2017).
[Crossref]

Philos. Trans. R. Soc., A (1)

M. J. Booth, “Adaptive optics in microscopy,” Philos. Trans. R. Soc., A 365(1861), 2829–2843 (2007).
[Crossref]

Physica (1)

F. Zernike, “Diffraction theory of the cutting process and its improved form, the phase contrast method,” Physica 1, 689–704 (1934).

Proc. Natl. Acad. Sci. (1)

M. J. Booth, M. A. Neil, R. Juškaitis, and T. Wilson, “Adaptive aberration correction in a confocal microscope,” Proc. Natl. Acad. Sci. 99(9), 5788–5792 (2002).
[Crossref]

Science (1)

E. Betzig, G. H. Patterson, R. Sougrat, O. W. Lindwasser, S. Olenych, J. S. Bonifacino, M. W. Davidson, J. Lippincott-Schwartz, and H. F. Hess, “Imaging intracellular fluorescent proteins at nanometer resolution,” Science 313(5793), 1642–1645 (2006).
[Crossref]

Wellcome Open Res. (1)

I. Dobbie, N. Hall, and D. Pinto, “Beamdelta: simple alignment tool for optical systems,” Wellcome Open Res. 4, 194 (2019).
[Crossref]

Other (3)

E. W. Weisstein, “Gibbs phenomenon,” https://mathworld.wolfram.com/ (2003).

P. Kner, Z. Kam, D. Agard, and J. Sedat, “Adaptive optics in wide-field microscopy,” MEMS Adaptive Optics V, vol. 7931 (International Society for Optics and Photonics, 2011), p. 79310K.

M. J. Booth, A basic introduction to adaptive optics for microscopy (University of Oxford, 2019).



Figures (10)

Fig. 1. Flowchart depicting the general process for building a system utilising AO. (a) System Design Phase: the user decides whether the system needs AO and, if so, what type. (b) Installation Phase: the chosen AO elements are added to the system. (c) Set-up Phase: the chosen AO element is calibrated. Typically this involves mapping the variable components of the AO element (e.g. deformable mirror actuators) to a useful set of basis functions which represent optical aberrations. (d) Sample Correction Phase: the user designs the methods to be used for correcting aberrations in their desired sample.

Fig. 2. (a) An example observed wavefront obtained during the calibration process for an Alpao 69-actuator deformable mirror. The $26^{\textrm{th}}$ actuator is at the first of the 5 positions. (b) A simulated wavefront created from the 69 Zernike mode amplitudes measured in the observed wavefront. (c) The difference between the observed and simulated wavefronts. (d) Plot of the row vector, $\boldsymbol{z_{1}}$, of the 69 Zernike mode amplitudes measured for the observed wavefront. (e) The influence function fitting for Zernike mode 11 (Noll index) for the $26^{\textrm{th}}$ actuator.

Fig. 3. (a) Flowchart depicting the generalised calibration routine implemented in Microscope-AOtools. (b) Flowchart depicting the process for calibrating the $h$-th actuator of the deformable mirror, the dashed blue process in (a). The influence functions returned are the $b_{g,h}$ described in Eq. (2). This process is performed for each of the $N$ actuators and used to obtain $\boldsymbol{C}$ described in Eq. (5).

Fig. 4. Flowchart depicting the process for characterising an adaptive element as implemented in Microscope-AOtools.

Fig. 5. (a) An ideal characterisation assay, measuring the recreation accuracy of 68 Zernike modes with an applied amplitude of 1 for each. (b) An example characterisation assay obtained from a calibrated Alpao 69-actuator deformable mirror, measuring the recreation accuracy of 68 Zernike modes with an applied amplitude of 1 for each.

Fig. 6. Flowchart depicting the process for correcting a directly measured wavefront as implemented in Microscope-AOtools.

Fig. 7. (a) An example aberrated wavefront; RMS wavefront error = 3.818 radians. (b) The example wavefront after 20 iterations of correction; RMS wavefront error = 0.986 radians, and 0.712 radians over the central 95% of the phase wavefront. (c) The Zernike modes measured in the aberrated (blue) and corrected (red) wavefronts. (a) and (b) are presented on the same colour scale (in radians of a 543 nm HeNe laser) and were obtained via interferometry.

Fig. 8. Principle of sensorless AO correction. The inset images are of a Drosophila neuromuscular junction (NMJ). For each Zernike mode, $Z_i$, NMJ images are acquired for different amplitudes of the $i$-th Zernike mode. A value of the image quality metric, $S$, is obtained for each (blue dots). A Gaussian function is then fitted to these values and the amplitude, $a$, corresponding to the maximum image quality, $S_{max}$, is obtained (green dot). The inset figure for the green dot shows the NMJ image acquired after the correction for the $i$-th Zernike mode was applied.

Fig. 9. Flowcharts depicting the sensorless correction routine options. (a) An image for each amplitude of the $i$-th Zernike mode is taken and the image quality metric is immediately evaluated. Once all the images for the $i$-th Zernike mode have been taken, the best Zernike amplitude is found as described in Fig. 8 and applied. (b) All $M$ images are taken, then the quality metric is obtained for all $M$ images, and the best Zernike amplitude is found and applied. (c) The images for all $N$ Zernike modes are obtained with no correction applied between modes. The image quality metric is then measured for every image and the best amplitude for each Zernike mode is found. The corrections for all modes are applied simultaneously at the end of the workflow.

Fig. 10. (a) A simulated IsoSense pattern created by 4-beam interference. (b) A diagram of a 4-beam interference pattern in Fourier space.
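The per-mode step of the sensorless routine in Fig. 8 — sample the image quality metric $S$ at several applied amplitudes of one Zernike mode, fit a Gaussian, and take the amplitude at the fitted peak — can be sketched as below. This is a minimal illustration only, not the Microscope-AOtools API; the function names and the synthetic metric values are assumptions for the example.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, mu, sigma, offset):
    return amp * np.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) + offset

def best_amplitude(applied, metric):
    """Fit a Gaussian to metric-vs-amplitude samples (blue dots in
    Fig. 8) and return the amplitude at the fitted peak (green dot)."""
    # Initial guesses taken directly from the sampled data.
    p0 = [metric.max() - metric.min(),      # peak height
          applied[metric.argmax()],         # peak position
          np.ptp(applied) / 2,              # width
          metric.min()]                     # background offset
    popt, _ = curve_fit(gaussian, applied, metric, p0=p0)
    return popt[1]  # mu: the amplitude maximising the metric

# Synthetic example: a metric that peaks at an applied amplitude of 0.3.
applied = np.linspace(-1.0, 1.0, 9)
metric = gaussian(applied, 1.0, 0.3, 0.4, 0.1)
print(round(best_amplitude(applied, metric), 3))  # → 0.3
```

In a real correction loop this would be repeated for each Zernike mode, with the metric evaluated on freshly acquired images, following one of the scheduling options in Fig. 9.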

Equations (7)


(1)  $S(x, y) = \sum_{h=1}^{N} d_{h}\,\phi_{h}(x, y)$

(2)  $\phi_{h}(x, y) = \sum_{g=1}^{M} b_{g,h}\,z_{g}(x, y)$

(3)  $S(x, y) = \sum_{h=1}^{N} d_{h}\left[\sum_{g=1}^{M} b_{g,h}\,z_{g}(x, y)\right] = \sum_{g=1}^{M}\left(\sum_{h=1}^{N} d_{h}\,b_{g,h}\right) z_{g}(x, y) = \sum_{g=1}^{M} a_{g}\,z_{g}(x, y)$

(4)  $a_{g} = \sum_{h=1}^{N} b_{g,h}\,d_{h} \quad \textrm{for } g = 1, 2, \ldots, M$

(5)  $\bar{a} = \boldsymbol{B}\bar{d}, \qquad \bar{d} = \boldsymbol{C}\bar{a}$

(6)  $\boldsymbol{z} = [\, z_{1} \;\; z_{2} \;\; \cdots \;\; z_{g} \;\; \cdots \;\; z_{m} \,]$

(7)  $\boldsymbol{A}_{h} = \begin{bmatrix} \boldsymbol{z}_{1} \\ \boldsymbol{z}_{2} \\ \vdots \\ \boldsymbol{z}_{p} \end{bmatrix} = \begin{bmatrix} z_{1,1} & z_{1,2} & \cdots & z_{1,m} \\ z_{2,1} & z_{2,2} & \cdots & z_{2,m} \\ \vdots & \vdots & \ddots & \vdots \\ z_{p,1} & z_{p,2} & \cdots & z_{p,m} \end{bmatrix}$
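The pair of relations $\bar{a} = \boldsymbol{B}\bar{d}$ and $\bar{d} = \boldsymbol{C}\bar{a}$ amounts to computing a control matrix $\boldsymbol{C}$ as a least-squares inverse of the calibrated influence matrix $\boldsymbol{B}$. A minimal sketch of that step, using the Moore-Penrose pseudoinverse and illustrative matrix sizes (the numbers and names here are assumptions, not values from the paper):

```python
import numpy as np

def control_matrix(B):
    """Least-squares inverse of the influence matrix B.

    B maps actuator displacements d to Zernike amplitudes a (a = B d);
    its Moore-Penrose pseudoinverse C maps a target set of Zernike
    amplitudes back to actuator displacements (d = C a).
    """
    return np.linalg.pinv(B)

# Illustrative sizes only: 5 Zernike modes, 3 actuators.
rng = np.random.default_rng(0)
B = rng.normal(size=(5, 3))           # would come from calibration
C = control_matrix(B)                 # shape (3, 5)

d_true = np.array([0.2, -0.1, 0.05])  # a hypothetical actuator pattern
a = B @ d_true                        # the wavefront it produces, in Zernike modes
d_rec = C @ a                         # actuator displacements recovered from a
print(np.allclose(d_rec, d_true))     # True when B has full column rank
```

In practice each column of $\boldsymbol{B}$ holds the Zernike-mode response of one actuator measured during calibration (Fig. 3), and applying $\boldsymbol{C}$ to a desired aberration gives the actuator pattern that best recreates it in the least-squares sense.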
