Current microscopy demands the visualization of large three-dimensional samples with increased sensitivity, higher resolution, and faster speed. Several imaging techniques based on widefield, point-scanning, and light-sheet strategies have been designed to tackle some of these demands. Although successful, all of these require the illuminated volumes to be tightly coupled with the detection optics to accomplish efficient optical sectioning. Here, we break this paradigm and produce optical sections from out-of-focus planes. This is done by extending the depth of field of the detection optics in a light-sheet microscope using wavefront-coding techniques. This passive technique allows accommodation of the light sheet at any place within the extended axial range. We show that this enables quick scanning of the light sheet across a volumetric sample. As a consequence, imaging speeds faster than twice the volumetric video rate can be achieved without needing to move the sample. These capabilities are demonstrated for volumetric imaging of fast dynamics in vivo as well as for fast, three-dimensional particle tracking.
© 2015 Optical Society of America
Many modern, high-resolution microscopy techniques require the careful coupling of illuminated sections with detection optics in order to accomplish efficient optical sectioning. In other words, the illuminated point/plane and the collection point/plane must coincide. For example, confocal, multiphoton, and even super-resolution microscopes share the same objective lens for illumination and detection, facilitating perfect coupling. This is such a natural and well-established condition that building an imaging system in a different way has not even been considered. For this reason, optical systems have always been built under strict alignment constraints, limiting the versatility of high-resolution microscopy. An example of such a limitation is the need to move the sample relative to the objective in order to generate a three-dimensional (3D) image. Here, mechanical contact through the immersion medium may introduce unwanted artifacts in both the imaging (aberrations) and the biological specimen (stress). In this Letter, we aim to show that breaking this illumination-detection coupling constraint is in fact possible and that, by doing so, a new degree of freedom is gained. A good starting point to show this is light-sheet fluorescence microscopy (LSFM) [3]. To fully exploit the advantages of the technique (efficient illumination, low excitation levels, reduced photobleaching and phototoxicity, etc.) without compromising resolution, alignment of the optical elements is critical. In addition, fast volumetric imaging in LSFM requires moving the sample or synchronizing the movement of the illumination plane with that of the objective focal plane [4–7], adding complexity and limiting the versatility of the technique.
Ideally, one would like to scan only the light sheet to obtain a 3D image while keeping the sample still. However, by doing so, the light sheet will eventually lie outside the depth of field (DOF) of the collection objective, reducing the quality of the image. An alternative would be to use a collection objective with a DOF large enough that the light sheet could be moved and still remain in focus. Objectives with low numerical aperture (NA) may be used, but then the resolution and optical throughput of the system are compromised. Another way to extend the DOF while keeping the full NA of the collection objective is by means of wavefront coding (WFC). In this case, the light sheet is no longer required to sit at a predefined fixed position or to present a certain orientation, geometry, or shape, effectively resulting in a decoupled illumination-detection (DID) system (see Fig. 1). This provides an extra degree of freedom, opening up new and imaginative applications. In this Letter we use this new degree of freedom to show how the light sheet can be quickly scanned to obtain fast volumetric, high-resolution images of the sample without moving or modifying any other element in the microscope. Since typical light-scanning devices can run at several kHz, 3D volumetric acquisition speeds are only limited by the integration time set on the camera or by the signal-to-noise ratio (SNR). The only requirement to implement WFC in a microscope is to place an appropriate phase mask (PM) at the exit pupil of the objective lens. This PM modifies the optical transfer function (OTF) of the system, reducing its sensitivity to defocus, i.e., extending its DOF. Thus, all the objects of a 3D sample within the extended DOF are projected onto the image plane (i.e., depth discrimination is lost). Although this approach minimizes the impact on the optical throughput, a deconvolution is required, due to the modified OTF, to restore the final 2D image.
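The defocus insensitivity introduced by a pupil phase mask can be illustrated numerically. The following sketch (Python/NumPy; the 1D pupil model and the parameter values are illustrative choices, not those of the actual instrument) compares the OTF magnitude of a clear pupil and of a cubic-phase pupil at one spatial frequency, with and without a quadratic defocus term:

```python
import numpy as np

def otf_1d(f, alpha, psi, n=4001):
    """|OTF| at normalized frequency f (0 < f < 2) for the 1D pupil
    P(u) = exp(i*(alpha*u**3 + psi*u**2)) on [-1, 1]; psi models defocus.
    Computed as the (normalized) autocorrelation of the pupil."""
    half = 1.0 - f / 2.0                      # overlap of the shifted pupils
    u = np.linspace(-half, half, n)
    p_plus = np.exp(1j * (alpha * (u + f/2)**3 + psi * (u + f/2)**2))
    p_minus = np.exp(1j * (alpha * (u - f/2)**3 + psi * (u - f/2)**2))
    du = u[1] - u[0]
    return np.abs(np.sum(p_plus * np.conj(p_minus)) * du) / 2.0  # OTF(0) = 1

# Clear pupil: the OTF collapses under defocus.
clear_focus = otf_1d(0.5, 0.0, 0.0)
clear_defocus = otf_1d(0.5, 0.0, 10.0)
# Cubic-phase pupil: nearly defocus-invariant, at reduced contrast.
cpm_focus = otf_1d(0.5, 30.0, 0.0)
cpm_defocus = otf_1d(0.5, 30.0, 10.0)
```

The cubic-phase OTF barely changes with defocus, which is what makes a depth-independent (or, here, per-plane) deconvolution able to restore in-focus detail across the extended DOF.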
Several PM designs are suitable to extend the DOF. Among these, the cubic phase mask (CPM) is the most widely used [see inset in Fig. 1(a)]. This mask can be described by the phase function φ(u, v) = α(u³ + v³), where u and v are the coordinates in the normalized spatial-frequency space. Conveniently, this function depends on a single parameter, α, that is related to the amplitude of the phase modulation and can be used as a single metric for achieving any desired extension of the DOF. Figure 1(a) shows a schematic representation of the DID concept in an LSFM (DID–LSFM). Conceptually, it is implemented by placing a PM at the exit pupil of the collection objective lens. In this case, in contrast to conventional WFC, a 3D image can be created by collecting 2D images at different positions of the light sheet as it is scanned along the sample [Figs. 1(b) and 1(d)]. To practically implement the DID–LSFM system (see Supplement 1, Section 1), we use a deformable mirror (DM) lying in a plane conjugate to the exit pupil of the objective lens. The traditional CPM was the target wavefront produced on the DM. A Shack–Hartmann wavefront sensor, connected to the DM in a closed loop, was used to monitor the DM shape.
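As a concrete illustration, the CPM and the corresponding coded pupil can be generated in a few lines (a minimal NumPy sketch; the grid size and the value of α are arbitrary illustrative choices):

```python
import numpy as np

def cubic_phase_mask(n, alpha):
    """Cubic phase mask phi(u, v) = alpha*(u**3 + v**3), sampled on an
    n x n grid of normalized pupil coordinates u, v in [-1, 1]."""
    u = np.linspace(-1.0, 1.0, n)
    uu, vv = np.meshgrid(u, u)
    return alpha * (uu**3 + vv**3)

def coded_pupil(n, alpha):
    """Complex pupil function: circular aperture times the CPM phase term."""
    u = np.linspace(-1.0, 1.0, n)
    uu, vv = np.meshgrid(u, u)
    aperture = (uu**2 + vv**2) <= 1.0
    return aperture * np.exp(1j * cubic_phase_mask(n, alpha))
```

Because the mask is antisymmetric (φ(-u, -v) = -φ(u, v)) and governed by the single amplitude α, one knob controls the extent of the DOF extension.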
To show the capabilities of our method, we started by comparing a standard LSFM with our DID–LSFM system by imaging fluorescent beads embedded in agar (see Supplement 1, Section 1). The active area of the camera was set to 2048 pixels in width, a typical value in microscopy applications. Standard 3D LSFM was performed by setting the DM flat (i.e., with α = 0) and moving the sample along the detection axis, obtaining a stack of images 170 μm in depth [Fig. 2(a)]. This 3D image was kept as a ground-truth reference for calibration purposes. Then, keeping the DM flat, we scanned only the light sheet. As mentioned before, the resulting images are only seen clearly within the objective DOF [see Fig. 2(b)].
We then set the DM for DID–LSFM (i.e., with the target CPM wavefront), in which case the whole volume can be imaged by only scanning the light sheet [Fig. 2(c)]. Although the beads are clearly seen along the whole volume, their images are both deformed, due to the modified OTF, and shifted according to a quadratic factor that depends on the axial position of the light sheet (see Supplement 1, Section 2). The deformation is corrected by applying a deconvolution algorithm, whereas the quadratic shift is easily compensated for using a calibration procedure (see Supplement 1, Section 1). This consists in fitting a quadratic space transformation that relates the positions of the beads imaged with the DID–LSFM system to those obtained with conventional LSFM [Fig. 2(a)]. Figure 2(d) shows the transformed image. The root-mean-square (rms) distance between the centroids of corresponding beads [Figs. 2(a) and 2(d)] was calculated to be 0.3 μm, confirming the similarity between the two techniques. Notice that this quadratic shift is characteristic of the introduced CPM (determined by α), and therefore the same transformation can be used for any other image acquired with the same CPM settings.
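The calibration step can be sketched as a least-squares fit of a quadratic shift model to paired bead positions (illustrative Python; the function names and synthetic numbers below are ours, not taken from the actual calibration code of the Letter):

```python
import numpy as np

def fit_quadratic_shift(z, shift):
    """Least-squares fit of shift(z) = a*z**2 + b*z + c, relating bead
    positions imaged through the CPM to ground-truth LSFM positions."""
    A = np.column_stack([z**2, z, np.ones_like(z)])
    coeffs, *_ = np.linalg.lstsq(A, shift, rcond=None)
    return coeffs                       # (a, b, c)

def correct_positions(z, x_measured, coeffs):
    """Remove the calibrated quadratic shift from measured positions."""
    a, b, c = coeffs
    return x_measured - (a * z**2 + b * z + c)
```

Since the shift depends only on the CPM settings, the fitted coefficients can be reused for every image acquired with the same mask.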
By taking a closer look at the point spread functions (PSFs) shown in Fig. 2(c) (Supplement 1, Figs. S2 and S3), it is possible to see that they are not perfectly invariant along the optical axis. This has two important implications. First, the WFC deconvolution process has to be performed using a plane-by-plane strategy (see Supplement 1, Section 1). While this can be easily done in our DID–LSFM system (as the PSF can be measured at each plane), it is not possible in standard WFC. Second, due to the PSF variation along the axis, the resolution of the system is not expected to be constant throughout the different imaged planes. To assess this change in resolution, the OTF of the system was characterized and compared to that of the standard LSFM (flat DM), taking its incoherent resolution limit as a reference. We found that larger α values result in stronger attenuation of the high-spatial-frequency components (up to 25% loss). This shows that brighter samples should be used if resolution is an issue (see Supplement 1, Fig. S4). Conversely, such large α values also result in a larger and more homogeneous DOF (Supplement 1, Fig. S5).
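The plane-by-plane restoration can be sketched with a Wiener filter, applying to each plane the PSF measured at that depth (a minimal Python sketch; the actual deconvolution algorithm used in the Letter may differ, and `nsr` is a hypothetical regularization constant):

```python
import numpy as np

def wiener_deconvolve(plane, psf, nsr=1e-2):
    """Wiener deconvolution of one light-sheet plane with the PSF measured
    at that depth (psf must have the same shape as the plane and be
    centered in the array)."""
    otf = np.fft.fft2(np.fft.ifftshift(psf))
    filt = np.conj(otf) / (np.abs(otf)**2 + nsr)
    return np.real(np.fft.ifft2(np.fft.fft2(plane) * filt))

def deconvolve_stack(stack, psfs, nsr=1e-2):
    """Plane-by-plane strategy: each plane gets its own measured PSF."""
    return np.stack([wiener_deconvolve(p, h, nsr)
                     for p, h in zip(stack, psfs)])
```

The per-plane PSFs make this strategy possible here, whereas standard WFC must assume a single, depth-invariant PSF.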
We assessed the performance of our DID–LSFM system at scanning rates ranging from 2 Hz up to 400 Hz. At the fastest imaging speeds, we found that up to 70% of the high-spatial-frequency components are masked by noise due to the short integration time (see Supplement 1, Fig. S6). Note, however, that this masking effect is not related to the DID–LSFM system itself but to other conditions such as the illumination intensity, the sensitivity of the camera, or the brightness of the sample. Importantly, the DID–LSFM system is robust against photobleaching and phototoxicity. Therefore, higher illumination intensities, desirable for achieving a higher SNR, are feasible.
To further show the potential of the DID–LSFM system for fast imaging, we tracked the Brownian motion, under drift conditions, of 0.2 μm microspheres freely floating in a saline buffer. In this case, imaging was performed using scattered light rather than fluorescence. The active area (FOV) of the sCMOS camera was reduced so that it could operate at its maximum speed (see Supplement 1, Table S2 for the available frame rates). By defining an image volume containing 22 planes (volume depth of 44 μm, with a light-sheet thickness of 4 μm), an effective volumetric imaging speed of more than twice the volumetric video rate could be achieved. To localize the beads, the main lobe of the PSF was assigned to the center of the best fit of a 3D Gaussian function, and then the quadratic calibration routine was applied (see Supplement 1, Section 1). The resulting 3D tracks for the microspheres are shown in Fig. 2(e); Visualization 1 shows the full evolution of the beads' trajectories. With these data, we calculated the average drift speed of the beads. In addition, the diffusion coefficient of the beads in the saline buffer was calculated; the resulting value is in agreement with that reported previously for beads of the same size in aqueous media, confirming the validity of our technique.
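After removing the drift, the diffusion coefficient follows from the mean-squared displacement of the tracks, MSD(Δt) = 6DΔt in 3D. A simplified one-lag estimator is sketched below (real track analysis typically fits several lags; the variable names are ours):

```python
import numpy as np

def diffusion_coefficient(track, dt):
    """Estimate the 3D diffusion coefficient D from one particle track of
    shape (n_frames, 3), after removing the constant drift component.
    Uses the one-frame mean-squared displacement: MSD = 6*D*dt."""
    disp = np.diff(track, axis=0)            # per-frame displacements
    disp = disp - disp.mean(axis=0)          # subtract the mean drift
    msd1 = np.mean(np.sum(disp**2, axis=1))  # one-lag MSD
    return msd1 / (6.0 * dt)
```

Checking the estimator on simulated Brownian steps with a known D confirms that it recovers the input value.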
The DID–LSFM system was then evaluated for in vivo imaging using genetically modified C. elegans worms expressing yellow Cameleon protein in the pharynx (see Supplement 1, Section 1). Figure 3(a) illustrates the results obtained by regular LSFM, and Fig. 3(b) shows those obtained with the DID–LSFM system after deconvolution and space transformation. As can be seen, the main features of the pharynx are retrieved. The similarity of the two images was quantified by performing a 2D cross-correlation, obtaining a coefficient of 0.95.
We then tested the fast capabilities of our DID system for 3D in vivo imaging. For this, the whole pharynx of a moving worm recovering from anesthesia was imaged at 50 planes per volume. This was enough to reconstruct the 3D+time pharynx movements [see Fig. 3(c) and Visualization 2 and Visualization 3]. Video-rate volumetric imaging was also achieved (see Visualization 4). In this case, the worm was imaged at 24 planes per volume. Here, although some deconvolution artifacts are present (due to the low SNR) in the form of the characteristic L-shaped PSF, the movements are smoothly displayed and the main features of the worm pharynx can be recovered.
Finally, we proceeded to track the rapid movements of larvae inside a C. elegans worm undergoing endotokia matricida [11]. In this case, larvae transiting along the host worm body result in a complex dynamic scenario that can be studied using the presented method. Visualization 5 shows the 3D+time images obtained after deconvolution. The fluorescent cell bodies of two different larvae were tracked. Figure 3(d) shows their 3D trajectories and velocity vectors, revealing the dynamics of this process. In this case, larval locomotion speeds could be measured. Here, the imaging speed was set to 10 planes per volume.
Among the different benefits resulting from our DID concept, here we provide a powerful technique that allows fast visualization in 3D at the microscopic level. We have performed a set of experiments that provide a detailed analysis of the expected resolution and of the effects that different levels of SNR may introduce into our DID concept. In particular, at 400 Hz and using fluorescent beads, we measured a maximum frequency loss of 70% (see Supplement 1, Fig. S6). Even in this extreme case, besides the applications presented here, our technique can still be accurately applied for fast imaging in several situations (see Ref. [12], examples 13 and 14 of Table II). In fact, in those cases, the reported resolution requirement is more relaxed than that obtained with our current experimental conditions. Naturally, if a higher resolution is required, objective lenses with higher NAs can also be used.
Under low-SNR conditions, e.g., as a result of a short integration time, strong scattering, or a weak fluorescent signal, noise will degrade the resolution. This is a consequence of the attenuation of the higher spatial frequencies in the OTF. Under such conditions, any recovery algorithm will be directly affected. This can be overcome by increasing the excitation power. In our DID technique, this is in fact possible thanks to the superior performance of light-sheet methods: efficient excitation and collection and reduced photobleaching as compared to in-line fluorescence microscopy. If excitation power is an issue, the DOF, SNR, and OTF can all be traded off by modifying the mask so that the potential frequency loss is adjusted. As an example, for a CPM, the OTF magnitude falls off approximately as (α|u|)^(-1/2), where u represents the normalized frequency. Therefore, by reducing the amplitude α of the CPM, noise effects can be mitigated, provided the associated reduction of the DOF can be accepted. Vettenburg et al. [13] have presented a methodology to optimize WFC masks based on the fidelity of the restored images and a realistic noise model. This is fully compatible with our DID–LSFM.
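This α tradeoff can be checked numerically: the magnitude of a 1D CPM OTF decreases as α grows, roughly following the inverse-square-root scaling (a direct-integration sketch with illustrative parameters; the 1D model is a simplification of the real 2D pupil):

```python
import numpy as np

def cpm_otf_1d(f, alpha, n=20001):
    """|OTF| of a 1D cubic-phase pupil exp(i*alpha*u**3) on [-1, 1] at
    normalized frequency f, via the pupil autocorrelation; the phase
    difference reduces to 3*alpha*f*u**2 plus a constant."""
    half = 1.0 - f / 2.0
    u = np.linspace(-half, half, n)
    du = u[1] - u[0]
    return np.abs(np.sum(np.exp(1j * 3.0 * alpha * f * u**2)) * du) / 2.0
```

Comparing two α values at the same frequency shows that a 16-fold reduction of α raises the OTF magnitude by roughly a factor of 4, consistent with the (α|u|)^(-1/2) behavior, which is why a smaller mask amplitude mitigates noise at the cost of DOF.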
Our technique requires a calibration procedure based on measuring the PSF at different depths. PSF characterization is a standard procedure and, as long as the optical setup remains unchanged, there is no need to repeat it. Furthermore, as our method uses a standard, optimized LSFM image as a ground-truth reference, any unexpected distortion originating from our DID–LSFM technique is compensated for by means of a space transformation. This is quantified either by calculating the rms difference between the positions of the fluorescent beads [Figs. 2(a) and 2(d)] or by calculating a direct 2D correlation between standard LSFM and DID–LSFM images [Figs. 3(a) and 3(b), respectively]. A 0.3 μm rms distance was found between corresponding bead centroids in Figs. 2(a) and 2(d), and a 0.95 2D correlation factor between the images of Figs. 3(a) and 3(b).
For the demonstration presented here, a DM was used to produce the desired wavefront, and therefore no chromatic aberrations are introduced. However, in multicolor imaging, care has to be taken when samples with different fluorescence emission spectra are used. In such cases, a new PSF characterization for each color should be performed. Similarly, if a large fluorescence bandwidth is collected around the design wavelength, a narrower bandpass filter would result in better performance. If the DM is replaced by a refractive element (a phase plate), this element should be specifically designed and selected according to the fluorescence emission wavelength of the sample. As noted above, for each PM a new PSF characterization should be performed.
In closing, the DID concept integrates LSFM and WFC techniques, resulting in a powerful imaging system with a new degree of freedom, able to produce optical sections from out-of-focus planes. We have presented a complete characterization of our DID–LSFM and found performance equivalent to that of a traditional LSFM. We have shown that using WFC makes it possible to extend the DOF by more than one order of magnitude. This enables fast axial scanning of the light sheet and allows fast 3D imaging in vivo of large samples such as C. elegans. In addition, we have been able to track fast dynamics happening inside the body of a nematode at high resolution and under low-fluorescence conditions. Finally, we have taken our camera to its speed limits to allow particle tracking at more than twice the volumetric video rate. Altogether, DID–LSFM provides unprecedented capabilities that can be used to obtain a holistic view of fast biological dynamics in their own environment, addressing the need for fast 3D imaging and tracking in large samples.
Although we have focused on fast 3D imaging to demonstrate the DID concept in LSFM, its consequences and applications are vast. For example, since only the observation path is modified, compatibility with other LSFM modalities is maintained. Furthermore, the light sheet can be positioned anywhere within the extended DOF, with any tilt or any engineered shape that better adapts to the sample, and still be correctly imaged. Finally, as our technique requires knowledge of the PSF, the reconstruction algorithms required in several structured-illumination strategies [15,16] can be naturally integrated. All of these may open up new possibilities in fast particle velocimetry [17], optogenetics [18], and instantaneous 3D sensing [19].
Fundació Cellex Barcelona; LaserLab Europe (EU-FP7 284464).
See Supplement 1 for supporting content.
1. J. B. Pawley, ed., Handbook of Biological Confocal Microscopy (Springer, 2006).
2. L. N. Vandenberg, C. Stevenson, and M. Levin, PloS One 7, e51473 (2012). [CrossRef]
3. J. Huisken, J. Swoger, F. Del Bene, J. Wittbrodt, and E. H. K. Stelzer, Science 305, 1007 (2004). [CrossRef]
4. M. B. Bouchard, V. Voleti, C. S. Mendes, C. Lacefield, W. B. Grueber, R. S. Mann, R. M. Bruno, and E. M. C. Hillman, Nat. Photonics 9, 113 (2015). [CrossRef]
5. T. F. Holekamp, D. Turaga, and T. E. Holy, Neuron 57, 661 (2008). [CrossRef]
6. Y. Wu, A. Ghitani, R. Christensen, A. Santella, Z. Du, G. Rondeau, Z. Bao, D. Colón-Ramos, and H. Shroff, Proc. Natl. Acad. Sci. USA 108, 17708 (2011). [CrossRef]
7. F. O. Fahrbach, F. F. Voigt, B. Schmid, F. Helmchen, and J. Huisken, Opt. Express 21, 21010 (2013). [CrossRef]
8. E. R. Dowski, Jr., and W. T. Cathey, Appl. Opt. 34, 1859 (1995). [CrossRef]
9. O. E. Olarte, J. Licea-Rodriguez, J. A. Palero, E. J. Gualda, D. Artigas, J. Mayer, J. Swoger, J. Sharpe, I. Rocha-Mendoza, R. Rangel-Rojo, and P. Loza-Alvarez, Biomed. Opt. Express 3, 1492 (2012). [CrossRef]
10. B. Rieger, H. R. C. Dietrich, L. R. Van Den Doel, and L. J. Van Vliet, Microsc. Res. Tech. 65, 218 (2004). [CrossRef]
11. J. Chen and E. P. Caswell-Chen, Nematology 5, 641 (2003).
12. J. Vermot, S. E. Fraser, and M. Liebling, Human Front. Sci. Prog. J. 2, 143 (2008). [CrossRef]
13. T. Vettenburg, N. Bustin, and A. R. Harvey, Opt. Express 18, 9220 (2010). [CrossRef]
14. T. Vettenburg, H. I. C. Dalgarno, J. Nylk, C. Coll-Lladó, D. E. K. Ferrier, T. Čižmár, F. J. Gunn-Moore, and K. Dholakia, Nat. Methods 11, 541 (2014). [CrossRef]
15. B. Judkewitz and C. Yang, Opt. Express 22, 11001 (2014). [CrossRef]
16. B.-C. Chen, W. R. Legant, K. Wang, L. Shao, D. E. Milkie, M. W. Davidson, C. Janetopoulos, X. S. Wu, J. A. Hammer, Z. Liu, B. P. English, Y. Mimori-Kiyosue, D. P. Romero, A. T. Ritter, J. Lippincott-Schwartz, L. Fritz-Laylin, R. D. Mullins, D. M. Mitchell, J. N. Bembenek, A.-C. Reymann, R. Boehme, S. W. Grill, J. T. Wang, G. Seydoux, U. S. Tulu, D. P. Kiehart, and E. Betzig, Science 346, 1257998 (2014). [CrossRef]
17. C. Skupsch and C. Brücker, Opt. Express 21, 1726 (2013). [CrossRef]
18. A. B. Arrenberg, D. Y. R. Stainier, H. Baier, and J. Huisken, Science 330, 971 (2010). [CrossRef]
19. S. Quirin, D. S. Peterka, and R. Yuste, Opt. Express 21, 16007 (2013). [CrossRef]