
Machine learning guided rapid focusing with sensor-less aberration corrections


Abstract

Non-invasive, real-time imaging deep into tissue is in high demand in biomedical research. However, the aberrations introduced by the refractive index inhomogeneity of biological tissue stand in the way. This paper demonstrates rapid focusing with sensor-less aberration correction based on machine learning. The proposed method applies a Convolutional Neural Network (CNN) which, after training, rapidly calculates the low-order aberrations, expressed in Zernike modes, from point spread function images. The results show that approximately 90% correction accuracy can be achieved. The average mean square error of each Zernike coefficient over 200 repetitions is 0.06. Furthermore, the aberrations induced by 1-mm-thick phantom samples and 300-µm-thick mouse brain slices can be efficiently compensated by loading a compensation phase on an adaptive element placed at the back-pupil plane. The phase reconstruction requires less than 0.2 s. Therefore, this method offers great potential for in vivo real-time imaging in biological science.

© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

In recent years, the development of biological imaging has focused on real-time, high-resolution, and deep in vivo imaging [1,2]. Over the past two decades, researchers have overcome the diffraction limit and provided new insights into subcellular structures, yet the spatial resolution was improved at the cost of temporal resolution. Adaptive optics (AO) has become a valuable technique for high-resolution microscopy: it compensates the aberrations introduced by the specimen and recovers high-resolution images deep in biological tissue [3]. AO was originally developed for telescopes to overcome atmospheric distortions, which degrade the image quality of extraterrestrial objects. Recently, it has been applied in optical microscopy to recover diffraction-limited imaging deep in biological tissue [4–6] by using an active element such as a deformable mirror (DM) or a spatial light modulator (SLM). However, the imaging speed is fundamentally limited by the refresh rate of the active element. Moreover, the total fluorescent photon budget is also limited, which means that to obtain a higher signal-to-background ratio, fewer photons should be spent as the feedback signal for measuring the wavefront aberrations. Traditional adaptive optics systems utilize a wavefront sensor, such as a Shack-Hartmann wavefront sensor, to measure the aberrations [7,8]. For example, Wang et al. used de-scanned, laser-guided stars and direct wavefront detection to achieve rapid adaptive optical recovery [9]. However, a wavefront sensor is costly, complicated to implement, and may introduce measurement errors [10]. An alternative approach is to use model-based, wavefront sensor-less schemes [11]. Liu et al. proposed the multidither coherent optical adaptive technique (mCOAT) to reconstruct an accurate wavefront, even a discontinuous one, and to improve the correction speed with parallel measurements [12]. However, when the number of pupil segments is increased for finer wavefront corrections, mCOAT becomes considerably more time-consuming.

In this paper, we demonstrate a sensor-less AO method based on machine learning, which employs a Convolutional Neural Network (CNN) to learn the intricate non-linear mapping from distorted point spread function images to the wavefront aberrations expressed as Zernike coefficients. The magnified point spread function images are collected by a CMOS camera. This method is capable of rapidly compensating the wavefront aberrations at high speed with reduced photobleaching and photodamage. Although machine learning has previously been combined with microscopy to enable faster and gentler high-throughput and live-cell super-resolution imaging [13,14], this is a new attempt to combine a machine learning algorithm with AO for aberration compensation. Because the wavefront is expressed in the general Zernike basis, our method is compatible with any AO system. To examine its effectiveness, we applied our method to correct the aberrations induced by 1-mm-thick phantoms and 300-µm-thick mouse brain slices.

2. Methods

2.1 Experimental setup

The schematic diagram of the machine learning guided fast AO system is illustrated in Fig. 1. A laser beam (OBIS 637 nm LX 140 mW, Coherent) is filtered by a pinhole before being expanded. The combination of a half-wave plate and a polarizing beam splitter (PBS) is used to decrease the power of the laser beam to stay within the exposure limits of the Complementary Metal-Oxide Semiconductor (CMOS, DMK 23UV024, 640 × 480 (0.3 MP) Y800 @ 115 fps, The Imaging Source) camera. The PBS only transmits horizontally polarized light. To further ensure that the polarization direction of the light is consistent with the direction required by the SLM, we placed a polarizer after the PBS. The laser beam is then phase-modulated by a spatial light modulator (SLM, PLUTO-NIR-011-A, pure phase, 60 Hz, Holoeye Photonics), on which the compensation patterns are loaded. A beam splitter (BS) directs the laser beam at normal incidence onto the SLM and passes the reflection. The reflected beam focuses on the focal plane after passing through the relay lenses L5 and L6, objective OBJ1 (RMS4X, Olympus, 4X / 0.10 NA), and the sample. To detect the point spread function, another objective OBJ2 (RMS4X, Olympus, 4X / 0.10 NA) and a relay lens L7 are mounted, and the intensity information is collected by the CMOS camera. The SLM was calibrated before the experiments: we used the interferometric method to calibrate the phase modulation [15] and set the SLM for a linear 2π phase response over the full 8-bit gray-level range to keep the phase response as stable as possible. Furthermore, the pixel count and usable area of the SLM match the pixel pitch of the CMOS camera.

Fig. 1 The schematic diagram of the machine learning guided fast AO system. A 637 nm laser beam is filtered by a pinhole and expanded by the telescope system L3-L4 before passing through a half-wave plate, a PBS, and a polarizer sequentially. After that, the laser beam is directed at normal incidence onto the SLM plane by a beam splitter (BS) mounted in front of the SLM. L5-L6 form a relay system that conjugates the SLM to the back-pupil plane of the objective OBJ1. The objective OBJ2 collects the light, and the point spread function is detected by a CMOS camera through a relay lens L7. (L, lens; M, mirror; OBJ1 and OBJ2, objective lenses (RMS4X, Olympus, 4X / 0.10 NA); PH, pinhole; AP, aperture; HWP, half-wave plate; PBS, polarizing beam splitter; BS, non-polarizing beam splitter; P, linear polarizer; SLM, spatial light modulator).


2.2 Machine learning guided fast AO compensation algorithm

The aberration of a wavefront can be quantified as the deviation of its phase or optical path length from the ideal (e.g., spherical or planar) form. Mathematically, it can be described as a summation of the Zernike polynomials, a set of basis functions that are orthogonal within a unit circle [16] (in this case, the objective back pupil). The phase distribution can be expressed as:

$$\psi_{\text{phase}}(x,y) = a_1 Z_1(x,y) + a_2 Z_2(x,y) + a_3 Z_3(x,y) + \cdots + a_{10} Z_{10}(x,y) + \cdots \tag{1}$$
where $\psi_{\text{phase}}(x,y)$ is the phase distribution on the pupil plane, $Z_n(x,y)$ $(n = 1, 2, 3, \ldots)$ are the Zernike modes, and $a_n$ $(n = 1, 2, 3, \ldots)$ are the Zernike coefficients. Low-order Zernike modes are related to the primary aberrations such as spherical aberration, coma, and astigmatism. Different combinations of Zernike coefficients in the phase distribution at the back-pupil plane give distinguishable point spread functions at the focal plane. In other words, if we can reconstruct the phase distribution with the proper Zernike modes and coefficients at the back-pupil plane, the aberrations induced by scattering can be compensated. Therefore, the aim is to establish a mapping between the Zernike coefficients and the point spread function.
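To make Eq. (1) concrete, the following sketch evaluates a handful of low-order Zernike modes on a unit-disk grid and sums them with given coefficients. The single-index ordering (piston first, tip-tilt next, defocus as the 5th mode, consistent with the sign test in Step 3 below) and the normalization constants are our assumptions; in practice they must match the convention of Fig. 2(a) and the SLM software.

```python
import numpy as np

def low_order_zernikes(n_pix=256):
    """Evaluate Zernike modes Z1-Z6 on a unit disk.

    Assumed ordering: Z1 piston, Z2/Z3 tip-tilt, Z4/Z6 astigmatism,
    Z5 defocus (matching the paper's treatment of the 5th coefficient
    as defocus). Normalization constants are an assumption.
    """
    y, x = np.mgrid[-1:1:1j * n_pix, -1:1:1j * n_pix]
    r, theta = np.hypot(x, y), np.arctan2(y, x)
    disk = (r <= 1.0).astype(float)
    modes = [
        np.ones_like(r),                          # Z1: piston
        2.0 * r * np.cos(theta),                  # Z2: tip
        2.0 * r * np.sin(theta),                  # Z3: tilt
        np.sqrt(6) * r**2 * np.sin(2 * theta),    # Z4: oblique astigmatism
        np.sqrt(3) * (2 * r**2 - 1),              # Z5: defocus
        np.sqrt(6) * r**2 * np.cos(2 * theta),    # Z6: vertical astigmatism
    ]
    return [m * disk for m in modes]

def phase_from_coefficients(coeffs, n_pix=256):
    """Sum a_n * Z_n(x, y) as in Eq. (1); coeffs[0] multiplies Z1."""
    modes = low_order_zernikes(n_pix)
    return sum(a * z for a, z in zip(coeffs, modes))

# Example: a phase pattern dominated by defocus with mild astigmatism.
psi = phase_from_coefficients([0.0, 0.1, -0.2, 0.0, 0.8, 0.3])
```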

Zernike mode $Z_1$ is the piston term, whose coefficient has no effect on the shape of the point spread function at the focal plane; the phase reconstruction therefore disregards mode $Z_1$. The remaining Zernike modes are divided into two categories: the tip-tilt modes (Zernike modes $Z_2$, $Z_3$) and the high-order modes (Zernike modes $Z_i$, $i = 4, 5, 6, \ldots$). The tip-tilt aberrations have a linear relationship with their coefficients, which can be calculated directly from Eqs. (2)–(3):

$$a_2 = \frac{\pi d_x}{\lambda f}\,\frac{D}{2} \tag{2}$$
$$a_3 = \frac{\pi d_y}{\lambda f}\,\frac{D}{2} \tag{3}$$
where λ is the wavelength of the laser beam, f is the focal length of lens L7, and D is the beam diameter on the SLM; $d_x$ and $d_y$ are the displacements of the center of the point spread function in the horizontal and vertical directions, respectively, as illustrated in Fig. 3(a). For the high-order modes, a machine learning based reconstruction is proposed: a non-linear mapping between the Zernike coefficients and the point spread function is learned by training on an experimental data set. Note that when the defocus mode dominates, coefficients of equal magnitude but opposite sign produce nearly identical point spread functions; the sign of the defocus coefficient must therefore be determined separately. The proposed method can compensate more complicated aberrations after training on sufficient data; for simplicity, only low-order aberration compensation is presented in this paper. The machine learning guided fast AO compensation algorithm is illustrated in Fig. 2(b). The point spread function captured by the CMOS camera is the input of the algorithm, and the first ten Zernike coefficients, which reconstruct the phase distribution for distortion compensation, are the output. The specific procedure is as follows (a code sketch is given after this paragraph). Step 1: calculate the tip-tilt coefficients from the point spread function using the method above, and load the corresponding compensation phase onto the SLM for tip-tilt correction. Step 2: feed the point spread function obtained after tip-tilt correction into the trained neural network to reconstruct the 4th–10th Zernike coefficients. Step 3: determine the sign of the defocus mode by loading compensation phases with negative and positive values of the 5th Zernike coefficient onto the SLM in turn, and selecting the one that yields the better point spread function. Step 4: recalculate the tip-tilt coefficients from the point spread function and generate the final compensation phase pattern.
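The sketch below illustrates this four-step procedure under stated assumptions: `grab_psf`, `load_phase`, and `cnn_predict` are hypothetical wrappers for the camera, the SLM, and the trained network; `zernike_phase` builds a phase pattern from a {mode index: coefficient} mapping (analogous to the sketch after Eq. (1)); and peak intensity is used as the quality metric in the defocus sign test.

```python
import numpy as np

def tip_tilt_coefficients(psf, wavelength, f, D, pixel_size):
    """Estimate a2, a3 from the PSF centroid displacement, Eqs. (2)-(3)."""
    iy, ix = np.indices(psf.shape)
    total = psf.sum()
    # centroid displacement from the ideal on-axis spot, in meters
    dx = ((ix * psf).sum() / total - psf.shape[1] / 2) * pixel_size
    dy = ((iy * psf).sum() / total - psf.shape[0] / 2) * pixel_size
    a2 = np.pi * dx * D / (2 * wavelength * f)
    a3 = np.pi * dy * D / (2 * wavelength * f)
    return a2, a3

def compensate(grab_psf, load_phase, cnn_predict, zernike_phase, cal):
    # Step 1: tip-tilt correction from the raw PSF.
    a2, a3 = tip_tilt_coefficients(grab_psf(), **cal)
    coeffs = {2: -a2, 3: -a3}
    load_phase(zernike_phase(coeffs))
    # Step 2: CNN reconstructs the 4th-10th coefficients.
    for n, a in zip(range(4, 11), cnn_predict(grab_psf())):
        coeffs[n] = -a
    # Step 3: resolve the defocus (5th mode) sign by trying both.
    def peak_with_sign(s):
        load_phase(zernike_phase({**coeffs, 5: s * abs(coeffs[5])}))
        return grab_psf().max()
    coeffs[5] = abs(coeffs[5]) * max((+1, -1), key=peak_with_sign)
    # Step 4: re-estimate residual tip-tilt and load the final pattern.
    a2, a3 = tip_tilt_coefficients(grab_psf(), **cal)
    coeffs[2] -= a2
    coeffs[3] -= a3
    load_phase(zernike_phase(coeffs))
    return coeffs
```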

Fig. 2 The principle of the machine learning guided fast AO correction method. (a) Expression and name of each Zernike mode from 1st to 15th. (b) Flowchart of the machine learning algorithm.


Fig. 3 (a) Description of tip-tilt correction. The transparent red point spread function is at the ideal position; the solid red point spread function is displaced by dx and dy in the horizontal and vertical directions relative to the ideal position. (b) The network architecture of the training model based on AlexNet. (c) Radar map of the root mean square error (RMSE) of the calculated Zernike coefficients for KNN, ELM, MLP, and CNN, respectively.


The network architecture we used is based on the convolutional neural network (CNN), a type of deep learning neural network. Deep learning is a class of machine learning techniques that uses multilayered artificial neural networks to analyze signals or data [17]. In contrast to fully connected networks, a CNN contains several convolutional layers. Convolutional layers apply a convolution operation to the input, emulating the response of an individual neuron to visual stimuli [18]. The convolution filters in these layers are randomly initialized and trained to extract the features specific to the visual task. This means that the network automatically learns the feature extraction that was hand-engineered in traditional techniques; a major advantage is that prior knowledge and human effort in feature design are dispensable. CNNs form a rapidly growing research field with a variety of applications, including computed tomography [19], magnetic resonance imaging [20], and photoacoustic tomography [21]. In this paper, the aberration compensation is treated as a Zernike coefficient regression: the input of the network is the point spread function, and the output is the 4th–10th Zernike coefficient vector. Several types of learning techniques are compared in this paper (Fig. 3(c)), and CNN is chosen for the reconstruction according to its experimental performance. The network (Fig. 3(b)) is based on AlexNet [22]. It contains five convolutional and three fully connected layers. The input point spread function is first encoded into a dense feature representation through two 5 × 5 convolutional layers, each followed by a 2 × 2 max-pooling layer, and then three 3 × 3 convolutional layers followed by one 2 × 2 max-pooling layer. The result is an encoded representation of the image data. Afterward, in the regression stage, three fully connected layers map the encoded features to the parameter space. All layers, both convolutional and fully connected, are followed by a dropout operation to avoid overfitting and then a Rectified Linear Unit (ReLU) nonlinearity. ReLU is an activation function for artificial neural networks defined as:

$$f(x) = \max(0, x), \tag{4}$$
where x is the input to a neuron. The activation function introduces nonlinearity into the neuron's response. Mean square error is chosen as the loss function.
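Written out, the layer sequence above translates directly into code. The following PyTorch sketch fixes only what the text specifies (two 5 × 5 and three 3 × 3 convolutional layers, three 2 × 2 max-pooling layers, three fully connected layers, dropout followed by ReLU, MSE loss); the channel widths, the dropout rate, and the linear final layer (needed for regression) are our assumptions.

```python
import torch
import torch.nn as nn

class ZernikeNet(nn.Module):
    """AlexNet-style regressor from a 128x128 PSF image to the 4th-10th
    Zernike coefficients. Kernel and pooling sizes follow the text;
    channel counts and dropout rate are assumptions."""
    def __init__(self, n_coeffs=7, p_drop=0.5):
        super().__init__()
        def conv(cin, cout, k):
            # conv -> dropout -> ReLU, as described in the text
            return nn.Sequential(nn.Conv2d(cin, cout, k, padding=k // 2),
                                 nn.Dropout2d(p_drop), nn.ReLU())
        self.features = nn.Sequential(
            conv(1, 32, 5), nn.MaxPool2d(2),     # 128 -> 64
            conv(32, 64, 5), nn.MaxPool2d(2),    # 64 -> 32
            conv(64, 128, 3),
            conv(128, 128, 3),
            conv(128, 128, 3), nn.MaxPool2d(2),  # 32 -> 16
        )
        self.regressor = nn.Sequential(
            nn.Flatten(),
            nn.Linear(128 * 16 * 16, 1024), nn.Dropout(p_drop), nn.ReLU(),
            nn.Linear(1024, 256), nn.Dropout(p_drop), nn.ReLU(),
            nn.Linear(256, n_coeffs),            # linear output for regression
        )

    def forward(self, psf):
        return self.regressor(self.features(psf))

model = ZernikeNet()
loss_fn = nn.MSELoss()                            # mean square error loss
out = model(torch.randn(1, 1, 128, 128))          # one normalized PSF image
```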

The training image set is built from a mimetic experiment in which 18,000 128 × 128-pixel images of the point spread function under random phase patterns are collected. The random phase patterns are generated from randomly assigned 4th–15th Zernike coefficients; each Zernike mode has an empirical coefficient range chosen according to the system properties and distortion type. The result is a set of 18,000 pairs, each consisting of a point spread function (128 × 128 pixels) and the Zernike coefficient vector that defines the corresponding phase pattern. 81% of the data set is randomly chosen for training, 9% for validation, and 10% for testing. The point spread functions are normalized using the minimum and maximum of the data set, without additional data augmentation. A sketch of this preparation is given below.
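A minimal sketch of the normalization and the 81/9/10 split, assuming the images and labels are held as NumPy arrays (the array names and fixed seed are illustrative):

```python
import numpy as np

# Assumed shapes: psfs (18000, 128, 128); coeffs (18000, 12) holding the
# randomly assigned 4th-15th Zernike coefficients of each phase pattern.
def normalize(psfs):
    """Min-max normalization over the whole data set, as in the text."""
    lo, hi = psfs.min(), psfs.max()
    return (psfs - lo) / (hi - lo)

def split_indices(n, seed=0):
    """Random 81% / 9% / 10% train / validation / test split."""
    idx = np.random.default_rng(seed).permutation(n)
    n_train, n_val = int(0.81 * n), int(0.09 * n)
    return (idx[:n_train],
            idx[n_train:n_train + n_val],
            idx[n_train + n_val:])

train_idx, val_idx, test_idx = split_indices(18000)
```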

3. Results and discussion

The proposed machine learning guided fast AO method was first applied to a phase-mask, i.e., a phase pattern composed of a set of random Zernike coefficients (1st–15th) and loaded on the SLM at the back-pupil plane. This scatterer introduces low-order aberrations, leading to a distorted point spread function at the focal plane, which is collected through objective OBJ2 and detected by the CMOS camera. After the compensation phase pattern was obtained, it was superimposed onto the phase-mask and loaded onto the SLM. We conducted 200 repeated experiments, changing only the phase-mask, and obtained the statistical result that more than 80% of the distorted point spread functions were improved. Figure 4 shows four groups of compensation results from the 200 repetitions. Figure 4(a) records the point spread functions before (left) and after (right) compensation. Comparing the intensity profiles through the center of the point spread functions in Fig. 4(d), we find that the center of the point spread function moves to the ideal spot location after tip-tilt correction, and that compensation both increases the intensity and decreases the full width at half maximum (FWHM). The reconstructed first ten Zernike coefficients (orange bars) and the corresponding 1st–15th Zernike coefficients of the phase-mask (blue bars) are illustrated in Fig. 4(b). The mean square error (MSE) of the coefficients between the reconstructed phase and the phase-mask quantifies the ability to reconstruct the aberration (a sketch of this metric follows below): the MSEs of these four groups are 0.034, 0.14, 0.011, and 0.015, and the average MSE over 200 repetitions is 0.060. Figure 4(c) shows the compensation phase pattern applied to the SLM, consisting of the reconstructed first ten Zernike modes. From the experimental performance, we infer that when the dominant coefficients are calculated correctly, the distortion can be compensated even if some minor ones are inaccurate.
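As a small illustration of this metric (how the two coefficient vectors of unequal length were compared is our assumption; zero-padding is one natural choice):

```python
import numpy as np

def coefficient_mse(mask_coeffs, reconstructed_coeffs):
    """MSE between the phase-mask coefficients (1st-15th) and the
    reconstructed coefficients (first ten). The shorter vector is
    zero-padded, which is an assumption about the comparison."""
    n = max(len(mask_coeffs), len(reconstructed_coeffs))
    t = np.pad(np.asarray(mask_coeffs, dtype=float),
               (0, n - len(mask_coeffs)))
    r = np.pad(np.asarray(reconstructed_coeffs, dtype=float),
               (0, n - len(reconstructed_coeffs)))
    return float(np.mean((t - r) ** 2))
```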

Fig. 4 Four groups of results from 200 repetitions compensating for random phase-masks. (a) Four groups of point spread functions before (left) and after (right) correction at the focal plane, acquired by the CMOS camera; the intensity is normalized. (b) Comparison of Zernike coefficient amplitudes between the phase-mask (blue bars) and the reconstructed phase (orange bars). (c) The reconstructed phase pattern loaded on the SLM for each group. (d) Intensity profiles through the center of Airy (ideal spot), NO AO (point spread function without AO correction), T-T corr (tip-tilt corrected point spread function), and ML-AO (machine learning guided AO corrected point spread function) for the four groups in (a).


To verify the performance of our method in real scattering media, 1-mm-thick phantoms and 300-µm-thick mouse brain slices were used. We mounted the scattering medium directly on a vertical stage between the two objectives, and the system calculated the proper Zernike coefficients from the distorted point spread function.

Figure 5 shows the compensation results for a 1-mm-thick phantom. The distorted point spread functions depicted in Figs. 5(a)–5(c) are more irregular than those induced by the phase-mask, and the corrected point spread functions are not as smooth as those in Fig. 4(a). This is because the phantom induces multiply scattered light, which contains high-order aberrations. The intensity profiles through the point spread functions in Fig. 5(d) illustrate that the proposed AO method dramatically improves the point spread function quality, with the intensity increased by 3–5 times.

Fig. 5 Experimental compensation results for the 1-mm-thick phantom slice. (a)–(c) Point spread functions scattered by three different areas of a 1-mm-thick phantom sample (top) and corrected by our machine learning guided AO system (bottom). The colored dotted boxes contain enlarged views of each point spread function. (d) Intensity profiles through the point spread functions without correction (NO AO), after tip-tilt correction (T-T corr), and after full machine learning guided correction (ML-AO). The scale bar in (a)–(c) is 100 μm.


The 300-µm-thick brain tissue slices were prepared as follows. Mice were rapidly anesthetized with chloral hydrate (5%, w/v, 0.01 ml/g, i.p.) and transcardially perfused with ice-cold 0.01 M phosphate-buffered saline (PBS, Solarbio) and paraformaldehyde (4% in PBS w/v, Sinopharm Chemical Reagent Co., Ltd.). Brain tissues were collected and incubated in the paraformaldehyde solution at 4 °C overnight for uniform fixation throughout the sample. To remove the water remaining in the brain tissue, the sample was incubated in sucrose (30% (w/v) in PBS) at 4 °C for 24–48 hours until the specimen sank to the bottom of the tube. After that, 300-µm-thick brain slices were sectioned using a cryostat (CM3050S, Leica). Sections were embedded into optical clearing reagent within 2 minutes and mounted in the holder. Figure 6 presents two typical compensation results within 300-µm-thick mouse brain slices. The distortions induced by mouse brain slices are more complicated than those induced by the phase-mask and phantoms. As shown in Figs. 6(c)–6(d), although our method only compensates the first ten orders, it still improves the intensity and FWHM of the point spread function for these samples, whose aberrations contain Zernike modes beyond the 10th order.

Fig. 6 Experimental compensation results for 300-µm-thick mouse brain slices. (a)–(b) Two typical scattered (NO AO) and corrected (ML-AO) point spread functions. The corresponding blue-dashed and magenta-dashed ROIs are enlarged below. (c)–(d) Intensity profiles through the center of the point spread function before and after correction (indicated with blue and magenta arrows, respectively). (e)–(f) Amplitude distributions of the Zernike coefficients calculated with our method. The insets show the compensation phase patterns loaded on the SLM. The scale bar in (a)–(b) is 100 μm.


4. Conclusion

We proposed a rapid AO aberration compensation method based on machine learning. The time consumption for each phase reconstruction is less than 0.2 s (CPU: Intel Xeon E5-2667 v4; GPU: NVIDIA Tesla P4). The experimental correction accuracy is larger than 85%, and each compensation cycle, including CMOS camera (DMK 23UV024, 640 × 480 (0.3 MP) Y800 @ 115 fps) signal collection and SLM (PLUTO-NIR-011-A, pure phase, 60 Hz) pattern loading, takes approximately 0.08 s. The method is also capable of compensating low-order aberrations from both 1-mm-thick phantoms and 300-µm-thick mouse brain slices. With GPU acceleration or FPGA control, the order of the Zernike modes in the training set could be further expanded, enabling more complex aberration corrections and deeper imaging. Although building the network requires a few hours of training, the illumination time required for the corrections in the experiment is very short, dramatically reducing photobleaching and photodamage.

In conclusion, we achieve high-speed wavefront aberration corrections with machine learning and recover near diffraction-limited focal spots. With these advantages, our method has great potential for rapid deep-tissue imaging in biological science.

Funding

National Natural Science Foundation of China (81771877, 61735016, 31571110, 81527901, 61427818); National Basic Research Program of China (973 Program) (2015CB352005); Natural Science Foundation of Zhejiang Province (LY16F050002, LZ17F050001); Non-profit Central Research Institute Fund of Chinese Academy of Medical Sciences (2017PT31038, 2018PT31041).

Acknowledgments

We thank Sanhua Fang and Qiaoling Ding (Core Facilities, Zhejiang University Institute of Neuroscience) for guidance on the imaging systems, and Shuangshuang Liu and Junli Xuan (Imaging Facility, Core Facilities, Zhejiang University School of Medicine) for imaging technical assistance.

References

1. N. Ji, "Adaptive optical fluorescence microscopy," Nat. Methods 14(4), 374–380 (2017).

2. W. Yang and R. Yuste, "In vivo imaging of neural activity," Nat. Methods 14(4), 349–359 (2017).

3. R. Horstmeyer, H. Ruan, and C. Yang, "Guidestar-assisted wavefront-shaping methods for focusing light into biological tissue," Nat. Photonics 9(9), 563–571 (2015).

4. C. Rodríguez and N. Ji, "Adaptive optical microscopy for neurobiology," Curr. Opin. Neurobiol. 50, 83–91 (2018).

5. M. J. Booth, "Adaptive optical microscopy: the ongoing quest for a perfect image," Light Sci. Appl. 3(4), e165 (2014).

6. J. W. Hardy and L. Thompson, "Adaptive optics for astronomical telescopes," Phys. Today 53(4), 69 (2000).

7. W. H. Southwell, "Wave-front estimation from wave-front slope measurements," J. Opt. Soc. Am. 70(8), 998–1006 (1980).

8. J.-W. Cha, J. Ballesta, and P. T. C. So, "Shack-Hartmann wavefront-sensor-based adaptive optics system for multiphoton microscopy," J. Biomed. Opt. 15(4), 046022 (2010).

9. K. Wang, W. Sun, C. T. Richie, B. K. Harvey, E. Betzig, and N. Ji, "Direct wavefront sensing for high-resolution in vivo imaging in scattering tissue," Nat. Commun. 6(1), 7276 (2015).

10. P. Yang, Y. Liu, M. Ao, S. Hu, and B. Xu, "A wavefront sensor-less adaptive optical system for a solid-state laser," Opt. Lasers Eng. 46(7), 517–521 (2008).

11. N. Ji, D. E. Milkie, and E. Betzig, "Adaptive optics via pupil segmentation for high-resolution imaging in biological tissues," Nat. Methods 7(2), 141–147 (2010).

12. R. Liu, D. E. Milkie, A. Kerlin, B. MacLennan, and N. Ji, "Direct phase measurement in zonal wavefront reconstruction using multidither coherent optical adaptive technique," Opt. Express 22(2), 1619–1628 (2014).

13. Y. Rivenson, Z. Göröcs, H. Günaydin, Y. Zhang, H. Wang, and A. Ozcan, "Deep learning microscopy," Optica 4(11), 1437–1443 (2017).

14. W. Ouyang, A. Aristov, M. Lelek, X. Hao, and C. Zimmer, "Deep learning massively accelerates super-resolution localization microscopy," Nat. Biotechnol. 36(5), 460–468 (2018).

15. J. L. M. Fuentes, E. J. Fernández, P. M. Prieto, and P. Artal, "Interferometric method for phase calibration in liquid crystal spatial light modulators using a self-generated diffraction-grating," Opt. Express 24(13), 14159–14171 (2016).

16. M. A. Neil, M. J. Booth, and T. Wilson, "Closed-loop aberration correction by use of a modal Zernike wave-front sensor," Opt. Lett. 25(15), 1083–1085 (2000).

17. J. Schmidhuber, "Deep learning in neural networks: an overview," Neural Netw. 61, 85–117 (2015).

18. Y. Lecun, L. Bottou, Y. Bengio, and P. Haffner, "Gradient-based learning applied to document recognition," Proc. IEEE 86(11), 2278–2324 (1998).

19. K. H. Jin, M. T. McCann, E. Froustey, and M. Unser, "Deep convolutional neural network for inverse problems in imaging," IEEE Trans. Image Process. 26(9), 4509–4522 (2017).

20. S. Wang, Z. Su, L. Ying, X. Peng, S. Zhu, F. Liang, D. Feng, and D. Liang, "Accelerating magnetic resonance imaging via deep learning," in Proceedings of the IEEE International Symposium on Biomedical Imaging (IEEE, 2016), pp. 514–517.

21. S. Antholzer, M. Haltmeier, and J. Schwab, "Deep learning for photoacoustic tomography from sparse data," Inverse Probl. Sci. Eng. 1, 1–19 (2018).

22. A. Krizhevsky, I. Sutskever, and G. E. Hinton, "ImageNet classification with deep convolutional neural networks," in Advances in Neural Information Processing Systems (2012), pp. 1097–1105.
