
Differentiable model-based adaptive optics with transmitted and reflected light

Open Access

Abstract

Aberrations limit optical systems in many situations, for example when imaging in biological tissue. Machine learning offers novel ways to improve imaging under such conditions by learning inverse models of aberrations. Learning requires datasets that cover a wide range of possible aberrations, which, however, becomes limiting for more strongly scattering samples, and does not take advantage of prior information about the imaging process. Here, we show that combining model-based adaptive optics with the optimization techniques of machine learning frameworks can find aberration corrections with a small number of measurements. Corrections are determined in a transmission configuration through a single aberrating layer, as well as in a reflection configuration through two different layers at the same time. Additionally, corrections are not limited by a predetermined model of aberrations (such as combinations of Zernike modes). Focusing in transmission can be achieved based only on reflected light, compatible with an epidetection imaging configuration.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

Introduction

Machine learning offers novel approaches to correct for aberrations encountered when imaging through scattering materials [1–4], from astronomy [5–9] to microscopy with transmitted (for example [10–12]) and reflected light [13]. To find aberration corrections in these situations, machine learning typically relies on large synthetic datasets. Large datasets are required, first, because the many parameters of deep neural networks need to be adjusted to work under a wide range of conditions, and all these conditions need to be covered in the training data. Secondly, machine learning models are typically agnostic about the underlying image-generating process. Therefore, even a priori known information, for example the transformations inside the optical system, needs to be learned from data.

In practice, training datasets are often based on combinations of Zernike polynomials [5–13], which, however, might not accurately capture all aspects of experimentally encountered aberrations. Additionally, for more strongly scattering samples, which require increasingly higher orders of Zernike modes, covering all potential scattering situations by sampling a sufficient number of different mode combinations eventually results in very large datasets. This is in particular the case if aberrations in multiple layers are combined, for example when using reflected light in an epidetection configuration [13].

While finding inverse models through such data-driven strategies is well suited for situations where the underlying physical model is undetermined, the image formation process in an optical system is typically at least partially known. This is the basis of model-based adaptive optics, where a model of the optical system is combined with optimization to find an unknown phase aberration [14–20].

Similar situations where models based on a well known underlying physical process are learned from data are also encountered in other imaging modalities [21], and, more broadly, in many areas of engineering and physics (for example [22–34]). To take advantage of such prior information, methods have been developed that combine physical process models with machine learning optimization.

For such optimization, first a model is described as a differentiable function mapping input to output. Since the function is differentiable, one can take advantage of automatic differentiation, which is more accurate and computationally efficient than finite differences [35–37], and is one of the cornerstones of machine learning frameworks such as TensorFlow. Automatic differentiation is used in these frameworks to compute gradients for optimization of a loss function with respect to parameters of interest. The loss function compares model output to a target output, and the discrepancy is minimized by adjusting model parameters [22–34].
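As a minimal illustration of this optimization pattern in TensorFlow (the model and data here are toy stand-ins, not the setup model used in this work):

```python
import tensorflow as tf

x = tf.constant([0.0, 1.0, 2.0, 3.0])
y_target = tf.constant([1.0, 3.0, 5.0, 7.0])      # data generated by y = 2x + 1
theta = tf.Variable([0.0, 0.0])                   # unknown model parameters

opt = tf.keras.optimizers.Adam(learning_rate=0.1)
for step in range(500):
    with tf.GradientTape() as tape:               # records operations for autodiff
        y_model = theta[0] * x + theta[1]         # differentiable forward model
        loss = tf.reduce_mean((y_model - y_target) ** 2)
    opt.apply_gradients([(tape.gradient(loss, theta), theta)])
# theta converges to approximately [2.0, 1.0]
```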

Here, we employ this model optimization strategy for adaptive optics: we describe light propagation through the optical system, including unknown aberrations represented as parameters, with a differentiable model (Fig. 1). To match the input-output relationship of the computational model to the experimental setup, we record a number of output images resulting from corresponding input phase modulations and optimize model parameters using TensorFlow. We show that this allows extracting an accurate description of the introduced aberrating layer(s), as verified by focusing in transmission through a single layer, as well as in reflection through two layers. In the latter epidetection configuration, only reflected light is used for optimization and transmission focusing.

Fig. 1. Schematic of experimental setup. Light reflected off a spatial light modulator (SLM) passes through an aberration (A) and is focused onto a camera (transmission camera, shown with illustration of imaged light distribution). For experiments in an epidetection configuration, light reflected off the mirror M at the sample plane is additionally recorded with a second camera (reflection camera). BS = beam splitter, L = lens (see main text and Methods for details).

Results

The experimental setup is shown schematically in Fig. 1. An expanded and collimated laser beam is reflected off a spatial light modulator (SLM) via a beam splitter cube (BS$_1$), and part of the beam is imaged onto a camera (transmission camera) via a beam splitter (BS$_3$). For experiments with reflected light, the remaining part of the beam is additionally sent to a mirror at the sample plane (the same focal plane as the transmission camera), which serves as a proxy for a reflecting sample. Light reflected by the mirror is imaged onto a second camera (reflection camera) via a beam splitter (BS$_2$). Aberrations (a layer of nail polish on a microscope slide, see Methods) are introduced between beam splitters BS$_2$ and BS$_3$. The beam passes through the aberration once on the way to the transmission camera, and twice on the way to the reflection camera.

A single transmission pass through the setup is described by the function (see Methods for details)

$$S(\phi_\mathrm{SLM}, \phi_\mathrm{aberration}) = \left|P_{f_1}(\exp\left[i\phi_{\mathrm{aberration}}\right]\exp\left[i\phi_{\mathrm{lens}}\right] P_{f_1}(U_{0}\exp\left[i\phi_\mathrm{SLM}\right]))\right|^{2},$$
where $P_{d}$ is a propagation operator over the distance $d$, $U_{0}$ is the complex amplitude of the unmodulated beam at the SLM, $\phi _{\mathrm {lens}}$ is the phase representation of the lens $L_1$, and $f_1$ is its focal length; $\phi _{\mathrm {SLM}}$ is the (known) SLM phase modulation, and $\phi _{\mathrm {aberration}}$ is the (unknown) introduced aberration.
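To make the forward model concrete, the single transmission pass of Eq. (1) can be written as a short numpy sketch. The grid size matches the $512\times 512$ patterns used in the experiments, but the pixel pitch and beam are illustrative assumptions, and the function names (`angular_spectrum`, `S`) are ours; the actual optimization uses a differentiable TensorFlow version of this model (see Methods).

```python
import numpy as np

N, dx, wavelength, f1 = 512, 8e-6, 640e-9, 0.3   # grid, pixel pitch (assumed), laser, f1
fx = np.fft.fftfreq(N, d=dx)
FX, FY = np.meshgrid(fx, fx)

def angular_spectrum(U, d):
    """Propagation operator P_d (angular spectrum method, see Methods)."""
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    # transfer function H, with the circ() band limit applied where arg <= 0
    H = np.where(arg > 0, np.exp(1j * 2 * np.pi * d / wavelength * np.sqrt(np.abs(arg))), 0.0)
    return np.fft.ifft2(np.fft.fft2(U) * H)

x = (np.arange(N) - N / 2) * dx
X, Y = np.meshgrid(x, x)
phi_lens = -np.pi / (wavelength * f1) * (X ** 2 + Y ** 2)        # thin-lens phase for L1

def S(phi_slm, phi_aberration, U0=1.0):
    """Camera intensity for a given SLM modulation and aberration (Eq. (1))."""
    U = angular_spectrum(U0 * np.exp(1j * phi_slm), f1)          # SLM -> lens plane
    U = U * np.exp(1j * phi_lens) * np.exp(1j * phi_aberration)  # lens and aberration
    return np.abs(angular_spectrum(U, f1)) ** 2                  # lens plane -> camera
```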

For computational efficiency, the aberration is simulated at the same plane as lens $L_1$ (see Methods). The unknown aberration that maximizes the similarity, measured with Pearson’s correlation coefficient $r$, between the simulated camera images $S(\phi _{\mathrm {SLM}}, \phi _{\mathrm {aberration}})$ and the experimentally recorded images $I$ was found in TensorFlow using automatic differentiation and gradient-based optimization (see Methods):

$$\phi_{\mathrm{aberration}} = \operatorname*{arg\,max}_{\phi_{\mathrm{aberration}}} (r\left[S(\phi_{\mathrm{SLM}}, \phi_{\mathrm{aberration}}), I\right]).$$
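A differentiable implementation of Pearson's correlation coefficient, which makes $r$ usable as an optimization objective, could look as follows (a minimal sketch; the function name is ours):

```python
import tensorflow as tf

def pearson_r(a, b):
    """Pearson correlation between two (batches of) images, differentiable in both."""
    a = tf.cast(tf.reshape(a, [-1]), tf.float32)
    b = tf.cast(tf.reshape(b, [-1]), tf.float32)
    a = a - tf.reduce_mean(a)
    b = b - tf.reduce_mean(b)
    return tf.reduce_sum(a * b) / (tf.norm(a) * tf.norm(b) + 1e-12)
```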
To further refine the focus after a first optimization step, a second step was performed with a new set of modulations and corresponding images. In this second step, the correction obtained in the first optimization step was added to all displayed modulations, $\phi _{\mathrm {SLM}}+\phi _{\mathrm {correction_1}}$. The final correction was the sum of the corrections from both steps, $\phi _{\mathrm {correction_1}}+\phi _{\mathrm {correction_2}}$. We used 180 modulations for transmission experiments and 540 modulations for reflection experiments in each of the two iteration steps. Representative examples of the aberrations at the lens plane (unmodified, direct result of model optimization after the first of two optimization steps, see Methods) based on transmitted and reflected light, respectively, are shown in Fig. 2 a to n. Representative examples of corrections displayed at the SLM (after two model optimization steps and transfer of the correction to the SLM as described in Methods) for focusing based on transmitted or reflected light, respectively, are shown in Fig. 3 a to l.

Fig. 2. Model matching and optimization: a-f Transmission-based. a, d Two examples of measured, aberrated transmission light distributions and b, e matching simulated light distributions after first optimization step. c, f Corresponding phase profile (obtained at lens plane, after first optimization step). g-n Reflection-based. g, k Two examples of measured, aberrated reflection light distributions and h, l matching simulated light distributions after first optimization step. i, m Corresponding transmission and j, n reflection phase results after first optimization step at the planes of lenses $L_1$ and $L_2$, respectively. o Transmission, orange: loss function for training of first example (a-c), and validation (dotted). Vertical line separates first and second optimization step, see Methods. Transmission, teal: same as orange for second example (d-f). Reflection, green: same for reflection experiment (g-j). Reflection, violet: same for second example in reflection experiment (k-n). $r$ in b, e, h, l is Pearson’s correlation coefficient (with corresponding image in a, d, g, k), field of view is 1766 $\mu$m by 1766 $\mu$m.

Fig. 3. a-f Focusing in transmission. a, d Two examples of aberrations recorded in transmission. b, e Focus after correction with blow-up of focal spot (white frame and inset). c, f Wavefront correction at SLM after two-step optimization. g-l Focusing in transmission using reflected light. g, j Two examples of aberrations recorded in transmission. h, k Transmission focus after correction only using reflected light, with blow-up of focal spot (white frame and inset). i, l Transmission wavefront correction at SLM after two-step optimization recovered from only reflected light measurements. In each subfigure, max indicates maximum of colorbar, $\eta$ is enhancement (see Methods), field of view is 1766 $\mu$m by 1766 $\mu$m.

As seen for the two examples in Fig. 2 a, b and d, e, respectively, optimization (which yields the corresponding phase profiles in Fig. 2 c, f) leads to closely matching measured and predicted light distributions at the sample (transmission camera; the correlation coefficient $r$ is indicated in Fig. 2 b, e). The similarity is quantified with the loss function $1-r$ in Fig. 2 o. To verify the achieved correction, the optimized aberration found at the plane of lens $L_1$ was propagated back to the SLM (see Methods) and the corresponding correction (Fig. 3 c and f) was displayed. This led to a focus at the sample or camera plane, as shown with two representative examples in Fig. 3 b, e (Fig. 3 a, d shows the focus before correction), together with an increase in enhancement by a factor of 10 and 3.4, respectively (see Methods for the definition of enhancement and details). (Note the difference in color scale between different images, normalized to the maximum (max) values indicated in the subfigures.)

In an epidetection configuration, as is typical for imaging in biological samples, only reflected light can be used for finding a correction. Reflected light, however, accumulates a first aberration in the excitation pass and a (generally different) second aberration in the reflection pass [4,13]. These need to be disentangled, for example to recover the transmission-pass correction required for generating a focus inside a sample. For focusing in transmission using only reflected light, the function $S$ (Eq. (1)) is therefore extended (see Methods) to include the reflection pass from the mirror at the sample plane through the aberration to the second camera (reflection camera), now including an aberration in the transmission as well as in the reflection pass. To additionally constrain the model, we assumed symmetry in terms of distance to the focal plane between the aberrations in transmission and reflection. This model is fitted to match the observed reflected light distributions by simultaneously optimizing two independent aberrations. Optimization is performed as before (see Methods).

Two representative examples of predicted and measured light distributions at the reflection camera are shown in Fig. 2 g, h and k, l, respectively, and the loss function quantifying the similarity is shown in Fig. 2 o (the correlation coefficient $r$ between predicted and measured distributions is indicated in Fig. 2 h, l). The corresponding transmission and reflection phase aberrations at the planes of lenses $L_1$ and $L_2$ are shown in Fig. 2 i, j and m, n, respectively. To verify the correction, we generated a focus at the sample plane by displaying the corresponding transmission correction on the SLM (see Methods). Figure 3 shows two representative examples (g-i and j-l) of aberrated focus, corrected focus, and corresponding correction (resulting in an increase in enhancement by a factor of 10.4 and 8.7, respectively, see Methods). In reflection-based transmission control, the obtained focus was not necessarily centered in the field of view (as for example seen in Fig. 3 j, k), due to tilt introduced by the sample that was not corrected. Importantly, in these experiments focusing in transmission is achieved using only reflected light, compatible with an epidetection configuration.

In order to additionally validate the corrections that result from model optimization, we compared transmission- and reflection-based focusing at the same location of an aberrating sample. Two representative examples are shown in Fig. 4 a to d and e to h, respectively. In a reflection configuration the tilt component of aberrations cancels between the forward and the return pass and is therefore not detected [13,38]. For comparison of the resulting transmission- and reflection-based aberrations (direct result of model optimization), the tilt component of the correction found using reflected light was therefore adjusted manually such that the similarity between the aberrations (Fig. 4 b and d, and f and h, respectively) was clearly visible. As seen in Fig. 4, both transmission- and reflection-based control resulted in similar transmission aberrations.

Fig. 4. Comparison of transmission- and reflection-based control of focusing. a Corrected focus: full field of view and blow-up of focal spot (white frame and inset) and b corresponding result of sample aberration at lens plane after optimization. c Same as a, focusing is achieved only using reflected light at the same field of view (or sample aberration) used in a and d corresponding result of sample aberration at lens plane after optimization. For comparison with b (not for focusing), tilt was adjusted manually. e-h Same as a-d for a second field of view. In each subfigure, max indicates maximum of colorbar, $\eta$ is enhancement (see Methods), field of view is 1766 $\mu$m by 1766 $\mu$m.

Methods

Experimental setup and data acquisition

The laser was from Toptica (iBeam smart, 640 nm), the spatial light modulator from Meadowlark (SLM, ODP512-1064-P8), and the cameras from Basler (acA2040-55um). All optical parts were from Thorlabs: $BS_1$ in Fig. 1 was BS016, $BS_2$ and $BS_3$ were EBS2. Lenses were visible achromats, $L_1$ with focal length 300 mm (AC254-300-A) and $L_2$ with 150 mm (AC254-150-A). Data were collected by placing an aberrating sample in the optical path between $BS_2$ and $BS_3$ (see Fig. 1), displaying random SLM phase modulations, and recording the resulting $512\times 512$ images with the transmission camera, and additionally with the reflection camera for reflection experiments. As indicated in Fig. 1, the aberrating sample was located approximately halfway between lens $L_1$ and the mirror $M$; this location was, however, not specified in the model (see below). Random SLM phase modulations were generated by summing the first 78 Zernike modes with random coefficients drawn from a normal distribution with standard deviation $\pi$ and displayed at a resolution of $512\times 512$ pixels on the SLM.
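For illustration, such random modulations could be generated as follows, where `zernike_mode(j, resolution)` stands in for any Zernike implementation (for example from an optics library) and is an assumed helper, not part of the paper's code:

```python
import numpy as np

N_MODES, RES = 78, 512
rng = np.random.default_rng()

# zernike_mode(j, RES) is an assumed helper returning the j-th Zernike
# phase map on a RES x RES grid
modes = np.stack([zernike_mode(j, RES) for j in range(1, N_MODES + 1)])

def random_modulation():
    coeffs = rng.normal(0.0, np.pi, size=N_MODES)   # standard deviation pi, as in the text
    return np.tensordot(coeffs, modes, axes=1)      # weighted sum of the modes
```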

The light intensity in transmission and reflection can vary by several orders of magnitude, exceeding the dynamic range of the cameras. Therefore, in order to capture the full range of intensities, multiple frames with different exposure times were recorded (each frame with 12-bit per pixel resolution). In transmission, frames were recorded with exposures of 60, 120, and 250 ms, and the resulting image was the sum of the recorded frames weighted by the inverse of the exposure time. Saturated pixels as well as pixels below the noise threshold were discarded. For the reflection camera, images were taken with exposures of 60, 120, 250, 500, and 1000 ms. Additionally, the transmission light intensity was reduced with a neutral density filter wheel (NDM2/M, Thorlabs).
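A sketch of the exposure fusion described above; the saturation and noise thresholds here are illustrative assumptions, not values from the paper:

```python
import numpy as np

def combine_exposures(frames, exposures_ms, saturation=4000, noise_floor=20):
    """Combine 12-bit frames taken at different exposure times into one image."""
    acc = np.zeros(frames[0].shape, dtype=np.float64)
    for frame, t in zip(frames, exposures_ms):
        valid = (frame > noise_floor) & (frame < saturation)   # drop saturated/noisy pixels
        acc += np.where(valid, frame / t, 0.0)                 # weight by inverse exposure time
    return acc
```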

Computational model

Light travelling through the setup is modeled as a complex amplitude $U(x, y, z)$, initialized with $U_0 = U(x, y, 0)$ and propagating through a sequence of planar phase objects separated by free space along the optical axis ($z$-axis; $x$, $y$, $z$ are spatial coordinates). A wavefront $U(x, y, d)$ interacting with a phase object $\phi (x, y, d)$ at plane $d$ is described as a multiplication

$$U(x, y, d)\exp\left[{i\phi(x,y,d)}\right].$$
Free space propagation of the wavefront over a distance $d$ is calculated using the angular spectrum method with the following operator [39]:
$$\begin{aligned}U(x, y, z+d) & = P_d(U(x, y, z)) = \iint {{A(f_X, f_Y; z)}}\,\mathrm{circ}\left(\sqrt{(\lambda f_X)^{2}+(\lambda f_Y)^{2}}\right)\\ & \times H\exp\left[i2\pi(f_Xx+f_Yy)\right] \,\mathrm{d}f_X\,\mathrm{d}f_Y. \end{aligned}$$
Here, $A(f_X, f_Y; z)$ is the Fourier transform of $U(x, y, z)$, $f_X$, $f_Y$ are spatial frequencies, the circ function is 1 inside the circle with the radius in the argument and 0 outside [39], and $H(f_X, f_Y) = \exp \left [i2\pi \frac {d}{\lambda }\sqrt {1-(\lambda f_X)^{2}-(\lambda f_Y)^{2}}\right ]$ is the optical transfer function. The intensity measured by the camera is given by
$$I(x, y, z) = \left|U(x, y, z)\right|^{2}.$$
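For use with automatic differentiation, the propagation operator can be implemented with TensorFlow's FFTs, so that gradients flow through the propagation (a sketch with illustrative sampling; the function names are ours):

```python
import numpy as np
import tensorflow as tf

def make_propagator(N=512, dx=8e-6, wavelength=640e-9, d=0.3):
    """Return a differentiable version of P_d for a fixed grid and distance."""
    fx = np.fft.fftfreq(N, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    # precompute H (Eq. (4)) with the circ() band limit as a constant
    H = np.where(arg > 0, np.exp(1j * 2 * np.pi * d / wavelength * np.sqrt(np.abs(arg))), 0.0)
    H = tf.constant(H.astype(np.complex64))
    def P(U):
        # gradients flow through TensorFlow's FFT ops into the field U
        return tf.signal.ifft2d(tf.signal.fft2d(U) * H)
    return P
```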

Model optimization

We used a Python library for diffractive optics [40] to calculate the known factors of Eq. (1). By providing the focal lengths and setup dimensions, discretized versions of the optical transfer functions for the propagation operators and the phase representation of the lenses were determined. The resulting function, which relates displayed, known SLM phase modulations and unknown sample aberrations to camera images, was then transferred to TensorFlow. The position of the aberrating sample, as seen in Eq. (1), was simulated at the plane of lens $L_1$. This saves computation and memory, since each intermediate plane requires additional wavefront propagation calculations. Similarly, for computational efficiency, a single lens $L_2$ with focal length $f_2=\frac {f_1}{2}$ was used to focus reflected light onto the reflection camera. While the parameters of the optical model for transmission and reflection were adjusted manually to match the setup, they could equally be tuned using the optimization approach described below, for example to obtain a system correction.

The model of light propagation in the setup, Eq. (1), was incorporated in the loss function according to Eq. (2):

$$\mathrm{loss} = 1-r\left[S(\phi_{\mathrm{SLM}}, \phi_{\mathrm{aberration}}), I\right],$$
where $\phi _{\mathrm {SLM}}$ and $\phi _{\mathrm {aberration}}$ are the phase modulations at the SLM and due to the introduced aberration, respectively. All variables are $512\times 512$ real-valued tensors, and $\phi _{\mathrm {aberration}}$ is the optimization variable.

Similar to the training of neural networks, we used batches and split the data into training and validation sets. The former is used to fit the model, while the latter is only passed through the model in the validation step for calculating the validation loss. This ensures that the model generalizes well and gives correct predictions not only on training data, but also on previously unseen data. We used the Adam optimizer with learning rate 0.1 and batch size 30. Optimization with the loss function resulted in matching simulated and experimentally recorded images and yielded the phase profile of the aberration in the setup. The quality of the solution was quantified with the correlation between modelled and recorded images in the validation part of the dataset, which was used as the criterion for stopping the optimization. Convergence of the optimization process depended on the magnitude and spatial frequencies of the aberration and on the number of samples. Typically, a solution with $r>0.6$ was sufficient for focusing.
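Putting the pieces together, a possible optimization loop following this description might look as below. Here `forward` stands for the differentiable setup model, `pearson_r` for a differentiable correlation (as sketched in Results), and `train_batches`, `val_phis`, `val_images` for the prepared data; all of these names are assumptions, not the paper's code.

```python
import tensorflow as tf

phi_ab = tf.Variable(tf.zeros([512, 512]))                # unknown aberration phase
opt = tf.keras.optimizers.Adam(learning_rate=0.1)         # settings from the text

for epoch in range(200):
    for phi_slm_batch, image_batch in train_batches:      # batches of 30 modulations
        with tf.GradientTape() as tape:
            sim = forward(phi_slm_batch, phi_ab)          # simulated camera images
            loss = 1.0 - pearson_r(sim, image_batch)      # loss of Eq. (6)
        opt.apply_gradients([(tape.gradient(loss, phi_ab), phi_ab)])
    if pearson_r(forward(val_phis, phi_ab), val_images) > 0.6:
        break                                             # typical stopping criterion
```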

For experiments with reflected light, the simulation of the setup was extended to include the reflected light pass,

$$\begin{aligned}S(\phi_{\mathrm{SLM}}, \phi_{\mathrm{trans}}, \phi_{\mathrm{refl}}) & = \left|\right.P_{f_1}(\exp\left[i\phi_{\mathrm{lens_2}}\right]\exp\left[i\phi_{\mathrm{refl}}\right]\\ & \times P_{2\cdot f_1}(\exp\left[i\phi_{\mathrm{trans}}\right]\exp\left[i\phi_{\mathrm{lens}}\right] P_{f_1}(U_{0}\exp\left[i\phi_\mathrm{SLM}\right])))\left.\right|^{2}, \end{aligned}$$
and the loss function (Eq. (6)) is optimized with respect to the variables $\phi _{\mathrm {trans}}$ and $\phi _{\mathrm {refl}}$.
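As a sketch, the extended forward model of Eq. (7) can reuse the helpers from the transmission sketch in Results (`angular_spectrum`, `phi_lens`, the grids `X`, `Y`, and `f1` are assumed from there; the second lens phase uses $f_2 = f_1/2$):

```python
import numpy as np

# assumes angular_spectrum, phi_lens, X, Y, wavelength, f1 from the earlier sketch
phi_lens2 = -np.pi / (wavelength * (f1 / 2)) * (X ** 2 + Y ** 2)   # lens L2, f2 = f1/2

def S_refl(phi_slm, phi_trans, phi_refl, U0=1.0):
    """Reflection camera intensity for SLM, transmission, and reflection phases (Eq. (7))."""
    U = angular_spectrum(U0 * np.exp(1j * phi_slm), f1)            # SLM -> lens plane
    U = U * np.exp(1j * phi_lens) * np.exp(1j * phi_trans)         # lens L1 and transmission aberration
    U = angular_spectrum(U, 2 * f1)                                # to the mirror and back
    U = U * np.exp(1j * phi_refl) * np.exp(1j * phi_lens2)         # reflection aberration and lens L2
    return np.abs(angular_spectrum(U, f1)) ** 2                    # onto the reflection camera
```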

Evaluation

After $\phi _{\mathrm {aberration}}$ is found through optimization at the lens plane, the corresponding correction at the SLM is obtained by propagating the conjugate phase of the aberration backwards to the SLM plane, $\phi _{\mathrm {correction}}= \mathrm {arg}(P_{-f_1}(\exp \left [-i\phi _{\mathrm {aberration}}\right ]))$. The solution found by model optimization contains high-frequency noise, which arises from discretization in the light propagation calculation and from overfitting of the noise present in the actual system. Therefore, we additionally smooth the found correction with a low-pass spatial frequency filter: a discrete Fourier transform is applied to $\mathrm {exp}\left [i\phi _{\mathrm {correction}}\right ]$ and frequencies exceeding $0.1$ of the pattern resolution are discarded. Displaying $\phi _{\mathrm {correction}}$ on the SLM then compensates the aberration.
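In code, the transfer and filtering steps might look as follows, reusing the `angular_spectrum` helper from the sketch in Results (the cutoff of 0.1 of the pattern resolution is from the text; variable names are ours):

```python
import numpy as np

# conjugate phase of the optimized aberration, propagated back to the SLM plane
U_corr = angular_spectrum(np.exp(-1j * phi_aberration), -f1)
phi_correction = np.angle(U_corr)

# low-pass filter: discard spatial frequencies above 0.1 of the pattern resolution
F = np.fft.fftshift(np.fft.fft2(np.exp(1j * phi_correction)))
n = F.shape[0]
yy, xx = np.indices(F.shape) - n // 2
F[np.sqrt(xx ** 2 + yy ** 2) > 0.1 * n] = 0
phi_correction = np.angle(np.fft.ifft2(np.fft.ifftshift(F)))
```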

Aberrations were introduced with a thin layer of transparent nail polish distributed on a microscope slide (inserted between $BS_2$ and $BS_3$). Two different aberrating samples were used for transmission and reflection experiments. A weaker aberrating sample was used in reflection-based focusing experiments to ensure that sufficient signal reached the reflection camera. A weaker aberrating sample was therefore also used for comparing transmission- and reflection-based control in Fig. 4. In this case only one optimization step was needed for transmission-based focusing, whereas two optimization steps were used for reflection-based focusing. Generally, the strength of aberrations varies depending on sample positioning. Optimization parameters (such as number of phase modulations or learning rate) were only adjusted once for transmission experiments, and once for reflection experiments. As a simple measure for quantifying the shape of the uncorrected light distribution, we use its maximum extension as measured by the length of the first principal component of the pixels above a 30 % intensity threshold, $\sigma$. To quantify the change in the distribution before and after correction, we compared the uncorrected and corrected distributions, $\sigma _{\mathrm {rel}} = \sigma _{\mathrm {u}}/\sigma _{\mathrm {c}}$. To additionally quantify the quality of the aberration correction, we also used an enhancement metric defined as the ratio of maximum intensity to mean intensity in the frame, $\eta =\mathrm {max}(I)/\mathrm {mean}(I)$, comparing it before and after correction, $\eta _{\mathrm {rel}}=\eta _{\mathrm {c}}/\eta _{\mathrm {u}}$. The distribution of these values ($\eta _{\mathrm {rel}}, \sigma _{\mathrm {rel}}$) for a series of 7 transmission experiments was: (10.0, 25.3), (1.7, 6.5), (3.4, 12.1), (2.6, 10.4), (0.8, 1.2), (16.2, 25.4), (3.8, 9.8), $\langle \eta _{\mathrm {rel}}\rangle =5.5\pm 5.2$, $\langle \sigma _{\mathrm {rel}}\rangle =13.0\pm 8.5$; and for a series of 7 reflection-based transmission control experiments: (10.5, 17.6), (10.4, 32.6), (11.6, 11.1), (7.2, 13.0), (8.7, 4.3), (4.4, 12.3), (1.9, 2.6), $\langle \eta _{\mathrm {rel}}\rangle =7.8\pm 3.3$, $\langle \sigma _{\mathrm {rel}}\rangle =13.4\pm 9.2$.
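Possible implementations of these metrics (sketches only; in particular, the exact definition of $\sigma$ via the first principal component may differ from the paper's code):

```python
import numpy as np

def enhancement(I):
    """eta = max(I) / mean(I)."""
    return I.max() / I.mean()

def extension(I, threshold=0.3):
    """Spread of the distribution along its first principal axis."""
    ys, xs = np.nonzero(I > threshold * I.max())     # pixels above 30% of maximum
    pts = np.stack([xs, ys], axis=1).astype(float)
    pts -= pts.mean(axis=0)
    cov = pts.T @ pts / len(pts)                     # 2x2 covariance of pixel positions
    return float(np.sqrt(np.linalg.eigvalsh(cov)[-1]))
```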

Discussion

Differentiable model-based approaches for image reconstruction have been introduced in several domains of imaging [21], for example in ptychography [31]. Instead of directly optimizing model parameters, an additional deep neural network (a deep image prior, DIP [41,42]) has also been introduced, for example for phase imaging [32,33] or ptychography [34]. Even without such additional regularization, the optimization converged reliably to smooth phase patterns (Fig. 2 c, f, i, m, j, n and Fig. 3 c, f, i, l), but a DIP could, for example, also be combined with the introduced method to further reduce the number of modulations used for optimization.

The similarity of the transmission corrections (Fig. 4) obtained with transmitted or with reflected light suggests that the optimization process is sufficiently constrained to converge to the actual correction in both cases. For imaging with reflected light, it was additionally assumed that the excitation and detection aberrations are located symmetrically, that is, at an equal distance from the focal plane.

For reflection-based transmission control, aberrations in two different focal planes are computed independently at the same time. This is similar to multiconjugate adaptive optics, where, however, typically an additional SLM is used to correct for a second focal plane [43–45]. Additionally, while we generated single focal spots, arbitrary other focal distributions could be generated as well (for example for applications in optogenetics).

The number of required samples depends on the magnitude and spatial frequencies of the aberrations, with stronger aberrations requiring more samples. This can be compared to the training of deep neural networks, where the number of samples required for model training similarly increases with aberration strength. Compared to deep neural networks, including a physical model of the imaging process allows finding aberration corrections with a small number of samples (albeit only for a single field of view at a time). Unlike neural networks, which are trained on a predetermined distribution of aberrations (for example based on Zernike polynomials), optimization is achieved independently in each pixel without prior assumptions about the aberrations.

Similar to other techniques that require multiple measurements for finding a correction [4,14], a limitation of the presented approach for dynamic samples is the time it takes to find a correction. In addition to reducing the number of modulations required for optimization as discussed above, the optimization time, several minutes on a single GPU, could also be reduced by using multiple GPUs. Generally, the gap between optimization and control (corresponding to rapidly changing corrections in response to aberrations) is expected to narrow with increasing computational power [46].

Thanks to advanced computational frameworks [40,47], the introduced model-based optimization can easily be combined with any optical setup equipped with a spatial light modulator and a camera, without requiring additional hardware such as wavefront sensors or interferometers. For example, the described technique could be combined with imaging through scattering materials in a microscope with a high numerical aperture objective in an epidetection configuration [13]. In summary, we expect that the developed method will be useful in many situations that can benefit from correcting aberrations through single and multiple layers.

Funding

research center caesar; Max-Planck-Gesellschaft.

Disclosures

The authors declare no conflicts of interest.

References

1. J. N. Kerr and W. Denk, “Imaging in vivo: watching the brain in action,” Nat. Rev. Neurosci. 9(3), 195–205 (2008).

2. C. Rodríguez and N. Ji, “Adaptive optical microscopy for neurobiology,” Curr. Opin. Neurobiol. 50, 83–91 (2018).

3. S. Rotter and S. Gigan, “Light fields in complex media: Mesoscopic scattering meets wave control,” Rev. Mod. Phys. 89(1), 015005 (2017).

4. S. Yoon, M. Kim, M. Jang, Y. Choi, W. Choi, S. Kang, and W. Choi, “Deep optical imaging within complex scattering media,” Nat. Rev. Phys. 2(3), 141–158 (2020).

5. J. R. P. Angel, P. Wizinowich, M. Lloyd-Hart, and D. Sandler, “Adaptive optics for array telescopes using neural-network techniques,” Nature 348(6298), 221–224 (1990).

6. S. W. Paine and J. R. Fienup, “Machine learning for improved image-based wavefront sensing,” Opt. Lett. 43(6), 1235–1238 (2018).

7. R. Swanson, M. Lamb, C. Correia, S. Sivanandam, and K. Kutulakos, “Wavefront reconstruction and prediction with convolutional neural networks,” in Adaptive Optics Systems VI, vol. 10703 (International Society for Optics and Photonics, 2018), pp. 107031F1–10.

8. T. Andersen, M. Owner-Petersen, and A. Enmark, “Neural networks for image-based wavefront sensing for astronomy,” Opt. Lett. 44(18), 4618–4621 (2019).

9. T. Andersen, M. Owner-Petersen, and A. Enmark, “Image-based wavefront sensing for astronomy using neural networks,” J. Astron. Telesc. Instrum. Syst. 6(3), 1 (2020).

10. Y. Jin, Y. Zhang, L. Hu, H. Huang, Q. Xu, X. Zhu, L. Huang, Y. Zheng, H.-L. Shen, W. Gong, and K. Si, “Machine learning guided rapid focusing with sensor-less aberration corrections,” Opt. Express 26(23), 30162–30171 (2018).

11. L. Hu, S. Hu, W. Gong, and K. Si, “Learning-based Shack–Hartmann wavefront sensor for high-order aberration detection,” Opt. Express 27(23), 33504–33517 (2019).

12. S. Cheng, H. Li, Y. Luo, Y. Zheng, and P. Lai, “Artificial intelligence-assisted light control and computational imaging through scattering media,” J. Innovative Opt. Health Sci. 12(04), 1930006 (2019).

13. I. Vishniakou and J. D. Seelig, “Wavefront correction for adaptive optics with reflected light and deep neural networks,” Opt. Express 28(10), 15459–15471 (2020).

14. R. A. Gonsalves, “Perspectives on phase retrieval and phase diversity in astronomy,” in Adaptive Optics Systems IV, vol. 9148 (International Society for Optics and Photonics, 2014), pp. 91482P1–10.

15. S. M. Jefferies, M. Lloyd-Hart, E. K. Hege, and J. Georges, “Sensing wave-front amplitude and phase with phase diversity,” Appl. Opt. 41(11), 2095–2102 (2002).

16. B. M. Hanser, M. G. Gustafsson, D. Agard, and J. W. Sedat, “Phase-retrieved pupil functions in wide-field fluorescence microscopy,” J. Microsc. 216(1), 32–48 (2004).

17. H. Song, R. Fraanje, G. Schitter, H. Kroese, G. Vdovin, and M. Verhaegen, “Model-based aberration correction in a closed-loop wavefront-sensor-less adaptive optics system,” Opt. Express 18(23), 24070–24084 (2010).

18. H. Linhai and C. Rao, “Wavefront sensorless adaptive optics: a general model-based approach,” Opt. Express 19(1), 371–379 (2011).

19. H. Yang, O. Soloviev, and M. Verhaegen, “Model-based wavefront sensorless adaptive optics system for large aberrations and extended objects,” Opt. Express 23(19), 24587–24601 (2015).

20. J. Antonello and M. Verhaegen, “Modal-based phase retrieval for adaptive optics,” J. Opt. Soc. Am. A 32(6), 1160–1170 (2015).

21. G. Ongie, A. Jalal, C. A. Metzler, R. G. Baraniuk, A. G. Dimakis, and R. Willett, “Deep learning techniques for inverse problems in imaging,” IEEE J. Sel. Areas Inf. Theory 1(1), 39–56 (2020).

22. M. M. Loper and M. J. Black, “OpenDR: an approximate differentiable renderer,” in European Conference on Computer Vision (Springer, 2014), pp. 154–169.

23. T.-M. Li, M. Aittala, F. Durand, and J. Lehtinen, “Differentiable Monte Carlo ray tracing through edge sampling,” ACM Trans. Graph. 37(6), 1–11 (2019).

24. J. Degrave, M. Hermans, J. Dambre, and F. Wyffels, “A differentiable physics engine for deep learning in robotics,” Front. Neurorobot. 13, 1–9 (2019).

25. M. Giftthaler, M. Neunert, M. Stäuble, M. Frigerio, C. Semini, and J. Buchli, “Automatic differentiation of rigid body dynamics for optimal control and estimation,” Adv. Robotics 31(22), 1225–1237 (2017).

26. F. de Avila Belbute-Peres, K. Smith, K. Allen, J. Tenenbaum, and J. Z. Kolter, “End-to-end differentiable physics for learning and control,” in Advances in Neural Information Processing Systems (2018), pp. 7178–7189.

27. E. Heiden, D. Millard, H. Zhang, and G. S. Sukhatme, “Interactive differentiable simulation,” arXiv preprint arXiv:1905.10706 (2019).

28. C. Schenck and D. Fox, “SPNets: differentiable fluid dynamics for deep neural networks,” arXiv preprint arXiv:1806.06094 (2018).

29. D. Vilsmeier, M. Bai, and M. Sapinski, “Transfer line optics design using machine learning techniques,” in 10th Int. Particle Accelerator Conf. (IPAC’19), Melbourne, Australia, 19–24 May 2019 (JACOW Publishing, Geneva, Switzerland, 2019), pp. 139–142.

30. E. Heiden, Z. Liu, R. K. Ramachandran, and G. S. Sukhatme, “Physics-based simulation of continuous-wave lidar for localization, calibration and tracking,” arXiv preprint arXiv:1912.01652 (2019).

31. M. Kellman, E. Bostan, M. Chen, and L. Waller, “Data-driven design for Fourier ptychographic microscopy,” in 2019 IEEE International Conference on Computational Photography (ICCP) (IEEE, 2019), pp. 1–8.

32. F. Wang, Y. Bian, H. Wang, M. Lyu, G. Pedrini, W. Osten, G. Barbastathis, and G. Situ, “Phase imaging with an untrained neural network,” Light: Sci. Appl. 9(1), 77 (2020).

33. E. Bostan, R. Heckel, M. Chen, M. Kellman, and L. Waller, “Deep phase decoder: self-calibrating phase microscopy with an untrained deep neural network,” Optica 7(6), 559–562 (2020).

34. K. C. Zhou and R. Horstmeyer, “Diffraction tomography with a deep image prior,” Opt. Express 28(9), 12872–12896 (2020).

35. A. G. Baydin, B. A. Pearlmutter, A. A. Radul, and J. M. Siskind, “Automatic differentiation in machine learning: a survey,” J. Mach. Learn. Res. 18, 5595–5637 (2017).

36. C. C. Margossian, “A review of automatic differentiation and its efficient implementation,” Wiley Interdiscip. Rev. Data Min. Knowl. Discov. 9, e1305 (2019).

37. L. Lu, X. Meng, Z. Mao, and G. E. Karniadakis, “DeepXDE: a deep learning library for solving differential equations,” arXiv preprint arXiv:1907.04502 (2019).

38. M. J. Booth, “Adaptive optics in microscopy,” in Optical and Digital Image Processing: Fundamentals and Applications (2011), pp. 295–322.

39. J. W. Goodman, Introduction to Fourier Optics (Roberts and Company Publishers, 2005).

40. L. M. Sanchez Brea, “Diffractio, python module for diffraction and interference optics,” https://pypi.org/project/diffractio/ (2019).

41. D. Ulyanov, A. Vedaldi, and V. Lempitsky, “Deep image prior,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2018), pp. 9446–9454.

42. R. Heckel and M. Soltanolkotabi, “Denoising and regularization via exploiting the structural bias of convolutional generators,” arXiv preprint arXiv:1910.14634 (2019).

43. F. Rigaut and B. Neichel, “Multiconjugate adaptive optics for astronomy,” Annu. Rev. Astron. Astrophys. 56(1), 277–314 (2018).

44. Z. Kam, P. Kner, D. Agard, and J. W. Sedat, “Modelling the application of adaptive optics to wide-field microscope live imaging,” J. Microsc. 226(1), 33–42 (2007).

45. J. Thaung, P. Knutsson, Z. Popovic, and M. Owner-Petersen, “Dual-conjugate adaptive optics for wide-field high-resolution retinal imaging,” Opt. Express 17(6), 4454–4467 (2009).

46. S. L. Brunton, B. R. Noack, and P. Koumoutsakos, “Machine learning for fluid mechanics,” Annu. Rev. Fluid Mech. 52(1), 477–508 (2020).

47. M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean, M. Devin, S. Ghemawat, I. Goodfellow, A. Harp, G. Irving, M. Isard, Y. Jia, R. Jozefowicz, L. Kaiser, M. Kudlur, J. Levenberg, D. Mané, R. Monga, S. Moore, D. Murray, C. Olah, M. Schuster, J. Shlens, B. Steiner, I. Sutskever, K. Talwar, P. Tucker, V. Vanhoucke, V. Vasudevan, F. Viégas, O. Vinyals, P. Warden, M. Wattenberg, M. Wicke, Y. Yu, and X. Zheng, “TensorFlow: Large-scale machine learning on heterogeneous systems,” (2015). Software available from tensorflow.org.
