Strategies for volumetric imaging with a fluorescence microscope

Open Access

Abstract

Three-dimensional fluorescence imaging has been a longstanding goal for microscopists, made all the more challenging when aiming for a trifecta of resolution, speed, and field of view. The purpose of this review is to summarize some current strategies in volumetric microscopy, both camera- and scanning-based.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. INTRODUCTION

Microscopes can be naturally divided into two categories: those based on parallelized data acquisition with a camera, and those based on sequential data acquisition with a scanner. Accordingly, this review is divided into two parts, involving camera-based and scanning-based microscope strategies. Several redundancies will become apparent, where strategies appear quite similar across both categories. These redundancies should come as no surprise, since they stem directly from the formal equivalence between camera- and scanning-based microscopes resulting from the Helmholtz reciprocity theorem [1]. In other words, a component in the detection path of a camera-based microscope very often need only be transposed to the illumination path in a scanning-based microscope (or vice versa) to achieve the same result. And what at first glance might appear to be very different techniques are, upon further examination, simply variations of one another. An effort has been made here to cluster the various techniques to make their similarities more apparent.

2. CAMERA-BASED STRATEGIES

The function of a camera-based fluorescence microscope is almost always to project the fluorescence from a 2D plane in the sample (object plane) onto the 2D plane of the camera sensor (image plane), presumably with some magnification. Imaging is said to be “good” when the object and image planes are conjugate to one another, and the image reveals high-resolution information about the sample that is in focus. But this information is in a transverse 2D plane. What about the third axial dimension? How is it possible to retrieve high-resolution image information from sample structures that are out of focus?

A. Scanned Focus

The most obvious way to perform volumetric imaging is to acquire many 2D images in sequence, each revealing in-focus information from a different object plane. This can be achieved, for example, by physically translating the sample itself in the axial direction, or the camera, or an optical element in between such as the objective. But such physical translation of large objects is inherently slow because of the inertia involved. Recent advances in this strategy have come from reducing or eliminating altogether the inertia associated with focus scanning. Example optical elements that can perform fast scanning are electrically tunable lenses (ETLs) [2], which, when miniaturized, can operate at kilohertz rates [3]. Even faster electrical tuning can be achieved with a tunable acoustic gradient index of refraction (or TAG) lens [4], though at the cost of a reduced aperture size with increasing speed. As an alternative to a transmissive lens, a reflecting component can be used, such as a liquid crystal spatial light modulator (SLM) [5], or a deformable microelectromechanical system (MEMS) mirror [6,7]. To maintain system telecentricity (i.e., magnification independent of defocus), the above elements should be placed in a pupil plane of the microscope, which generally involves the addition of relay optics. Nevertheless, imaging aberrations can arise, especially as the microscope objective is operated farther and farther from its nominal working distance. In principle, SLMs or MEMS mirrors can be programmed to compensate for such aberrations. A simpler solution involves directly canceling these aberrations by a technique of remote focusing [8], which makes use of relay optics to duplicate an object plane with unit magnification. In this last technique, a scanning reflector is placed in an object plane, instead of a pupil plane. Because the scanning reflector can be small in size (typically a voice-coil-mounted micromirror), this technique can also be fast, though it does suffer from a loss in fluorescence collection efficiency caused by the use of a beam splitter.
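To get a feel for the focal range such a tunable element can provide, the following sketch evaluates a rough paraxial estimate, Δz ≈ f_obj² / (M² f_ETL), where f_obj is the objective focal length, f_ETL the tunable-lens focal length, and M the magnification of the relay that images the tunable element onto the objective pupil (immersion-index and high-NA corrections are ignored). All numerical values are illustrative assumptions, not parameters from any cited system.

```python
# Paraxial estimate of the axial refocusing range provided by a tunable lens
# imaged onto the objective pupil. Illustrative numbers only (assumptions).

def refocus_shift_mm(f_obj_mm, f_etl_mm, relay_mag=1.0):
    """Approximate focal shift in the sample, ignoring the immersion index
    and high-NA corrections: dz ~ f_obj^2 / (M^2 * f_etl)."""
    return f_obj_mm**2 / (relay_mag**2 * f_etl_mm)

f_obj = 9.0          # e.g., a 20x objective with a 180 mm tube lens (assumed)
for f_etl in (1000.0, 500.0, 200.0, -200.0):   # tunable-lens focal lengths (mm)
    dz_um = 1e3 * refocus_shift_mm(f_obj, f_etl, relay_mag=1.0)
    print(f"f_ETL = {f_etl:7.1f} mm  ->  focal shift ~ {dz_um:+7.1f} um")
```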

The scanned-focus techniques described above provide multiple 2D images obtained from different focal planes. A drawback of these techniques, which is endemic to all standard fluorescence microscopes, is that the 2D images are not optically sectioned. That is, they are not devoid of out-of-focus background. Such background can be numerically removed post hoc using techniques such as structured illumination microscopy (SIM; see [9] for a recent review). However, SIM requires the acquisition of even more raw images, which typically reduces acquisition speed by an order of magnitude or more and can be prohibitively time consuming when applied to volumetric imaging.
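As a concrete illustration of the post hoc background removal mentioned above, the following sketch applies the classic three-phase demodulation used in grid-illumination (optical-sectioning) SIM, in which three raw images taken with the illumination grid shifted by one third of a period are combined into a sectioned image. The arrays `I1`, `I2`, `I3` are hypothetical raw frames; this is a minimal sketch of the demodulation step only, not of any specific cited implementation.

```python
import numpy as np

def sim_section(I1, I2, I3):
    """Optically sectioned image from three grid-illumination frames whose
    illumination phases are shifted by 0, 2*pi/3, and 4*pi/3 (square-law
    demodulation); the mean of the three frames recovers the widefield image."""
    I1, I2, I3 = (np.asarray(I, dtype=float) for I in (I1, I2, I3))
    sectioned = np.sqrt((I1 - I2)**2 + (I2 - I3)**2 + (I3 - I1)**2)
    widefield = (I1 + I2 + I3) / 3.0
    return sectioned, widefield

# Hypothetical raw frames (in practice loaded from the camera); random data here.
rng = np.random.default_rng(0)
frames = [rng.poisson(100, size=(256, 256)).astype(float) for _ in range(3)]
sectioned, widefield = sim_section(*frames)
```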

An alternative to numerically removing background post hoc is to prevent the generation of background in the first place. This is the key idea behind light sheet fluorescence microscopy (see [10] for a recent review). Provided the sample is optically clear, light sheet microscopy can provide high-resolution background-free 2D images over a large field of view (FOV) [11]. Scanned-focus imaging can then be performed by axially translating the sample (which is slow) or by axially scanning the light sheet (which can be fast). However, this last approach involves the synchronized co-registration of the detection focal plane with the light sheet plane, which, while amenable to the use of ETLs [12] or TAG lenses [13], becomes technically involved when high accuracy is required [14]. Of course, a difficulty with light sheet microscopy is that it requires side-on illumination, which, depending on the sample, may not always be feasible. To circumvent this problem, geometries have been devised where the same objective is used to both deliver the light sheet and collect the resulting fluorescence [15,16]. Moreover, the co-registration of the light sheet and focal plane becomes automatically synchronized when using the same scanning optics [17–19] (e.g., Fig. 1), achieving volumetric imaging rates of the order of 10 Hz. Finally, the drawback of pupil clipping in the detection optics can be mitigated by the use of very high NA relay optics [20].

Fig. 1. By scanning an oblique light sheet (blue), and descanning the resultant fluorescence (green, yellow orange) with the same mechanical scanner, the light sheet and (oblique) focal planes are automatically co-registered with no other moving parts. The stationary intermediate oblique image plane is then projected onto a downstream camera. Reproduced with permission from [18], 2015, Springer Nature. Alternative scan strategies that maintain a fixed light sheet tilt are described in [17,19].

A light sheet can also be generated by two-photon excitation [21], which becomes advantageous if the sample is not quite optically clear. Indeed, two-photon excitation even provides an alternative method for generating a light sheet based on temporal focusing [22,23]. This more esoteric method is power inefficient, generally restricting it to small FOVs. On the other hand, it provides essentially instantaneous transverse scanning and has the convenience of operating in a head-on illumination configuration. Fast, volumetric imaging can then be achieved by focus scanning, for example with an ETL [24].

B. Multi-Focus

The scanned focus approaches described above involve acquiring a stack of images, typically a z-stack, sequentially in time. We turn now to a multi-focus approach where these images are acquired simultaneously. The most straightforward way to achieve this is with multiple cameras, each conjugated to a different object plane [25]. With the use of an oblique sample geometry, this simple approach can be extended to light sheet imaging [26]. Alternatively, different focal planes can be distributed onto the same camera, which can be achieved by combining lenses of different focusing strengths into the same pupil. Such a technique requires the splitting of the light field into multiple components, which can be conveniently achieved with a single diffractive optical element (DOE) that performs both the splitting and focusing of the field components [27] (or, for more versatility, an SLM [28]). Additional steps can even be taken to correct for the chromatic aberrations that are inherent to DOEs [29] (see Fig. 2). This same DOE-based approach can also be applied to light sheet microscopy [30].

Fig. 2. Axially distributed focal planes in the primary image space (conjugate to the sample) are transversely distributed onto a camera plane (final image) with the use of a multi-focus grating (MFG). An additional chromatic correction grating (CDG) and prism assembly compensate for chromatic dispersion, leading to an aberration corrected multi-focus microscope. Reproduced with permission from [29], 2013, Springer Nature.

The above techniques involve the use of multiple lenses in the fluorescence detection path, either refractive or diffractive. But there is a different way to perform simultaneous multi-focus imaging by exploiting parallax. This involves not only detecting the positional distribution of the fluorescence light, but also its directional distribution. A four-dimensional function that encapsulates both distributions is the light field [31], or equivalently the plenoptic function [32] (ray-optic variants of the Wigner distribution [33]). The projection of the light field onto a single camera frame necessarily entails a compromise in spatial resolution to accommodate some degree of directional resolution. The most common implementation of light field microscopy involves placing a microlens array in a plane conjugate to the object, followed by a camera in a pupil plane (or Fourier plane). In this manner, the camera sensor is segmented into a grid of “super-pixels,” each of which records the local directional distribution of the detected fluorescence. The price paid is a loss in spatial resolution, which is degraded to the size of a super-pixel (typically the size of a microlens). The gain that comes from directional resolution, however, is an ability to numerically synthesize images that are focused at different depths in the sample. In other words, what was achieved directly with the multiple-lens approaches above can now be achieved post hoc. The numerical refocusing of images can be performed with a simple shearing algorithm [31], or through more sophisticated approaches involving deconvolution [34], which can be aided by wavefront coding [35,36]. Light field microscopy is effective at providing volumetric images in optically clear samples [37], where it has been generalized to multi-view imaging for isotropic resolution [38] (see Fig. 3). In samples that are not so clear, image reconstruction becomes more difficult and requires a priori information, such as an assumption of sample sparsity [39]. A miniaturized version of this approach has even been applied to freely moving mice [40] (with the caveat that its accuracy was confirmed only with planar fluorescent source distributions rather than volumetric distributions).
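The “simple shearing algorithm” mentioned above amounts to shifting each directional view of the light field in proportion to its angular coordinate and then averaging, which synthesizes a focus at a depth set by the shear. The sketch below assumes the light field has already been reorganized into a hypothetical array `L[u, v, y, x]` of sub-aperture views; shifts are kept integer-valued for simplicity.

```python
import numpy as np

def shear_refocus(L, shear):
    """Shift-and-add refocusing of a light field L[u, v, y, x].
    Each angular view (u, v) is shifted by shear*(u - u0, v - v0) pixels,
    then all views are averaged; 'shear' selects the synthetic focal depth."""
    U, V, H, W = L.shape
    u0, v0 = (U - 1) / 2.0, (V - 1) / 2.0
    out = np.zeros((H, W), dtype=float)
    for u in range(U):
        for v in range(V):
            dy = int(round(shear * (u - u0)))
            dx = int(round(shear * (v - v0)))
            out += np.roll(np.roll(L[u, v], dy, axis=0), dx, axis=1)
    return out / (U * V)

# Hypothetical 9x9-view light field; sweeping the shear parameter produces a
# synthetic focal stack on either side of the nominal focal plane.
L = np.random.rand(9, 9, 128, 128)
focal_stack = [shear_refocus(L, s) for s in np.linspace(-2, 2, 5)]
```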

Fig. 3. Objective configuration of an orthogonal-view light field microscope (blue, excitation; green, fluorescence detection; microlens arrays not shown). Each view separately leads to poor axial resolution, whereas both views together lead to isotropic resolution (PSF) upon image reconstruction. Adapted with permission from [38], 2019, Springer Nature.

But other variants of light field microscopy are possible (see [41] for review). For example, the microlens array can be placed in a Fourier plane while the camera is placed in an object plane (sometimes called integral imaging), more or less inverting their roles in encoding positional versus directional information (e.g., [42]). Finally, it should be mentioned that pseudo-3D visualizations of objects can be obtained directly by techniques of view synthesis [43,44], circumventing the need for volumetric object reconstruction altogether.

C. Extended Focus

So far, we have considered approaches that provide images focused at different depths, acquired sequentially or simultaneously. We turn now to a different approach that does not provide focused imaging at any depth at all. Such an approach involves extending the focus of a microscope in the axial direction, rather than scanning it. In other words, such a microscope produces 2D images where all depths are superposed upon one another (as opposed to distributed), and all axial resolution is lost. Such extended depth of focus (EDOF) imaging can be achieved by taking the scanned focus approach to its extreme such that the focal sweep is performed within a camera exposure time rather than across multiple exposure times [45] (which presumes, of course, that the sweep mechanism is faster still than the camera frame rate [7,13]). The resulting EDOF images are peculiar in that they are simultaneously in- and out-of-focus, with an attendant point spread function (PSF) that is nearly space invariant in all three dimensions and amenable to deconvolution [46–48]. Nevertheless, a problem with EDOF imaging is that the contribution of out-of-focus background increases with increasing EDOF range, leading to increased background noise that hampers effective deconvolution. This problem can be alleviated by reducing the generation of background with 3D targeted illumination [49].
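Because the swept-focus PSF is nearly depth invariant, the EDOF image can be treated, to a good approximation, as an ordinary 2D convolution of the depth-projected object with a single effective PSF and restored with a standard Wiener filter. The sketch below is generic; `edof_image` and `psf_2d` are hypothetical arrays (the PSF would be measured or modeled for an actual system), and the regularization constant is an ad hoc stand-in for a proper noise estimate.

```python
import numpy as np

def wiener_deconvolve(image, psf, nsr=1e-2):
    """Wiener deconvolution of a 2D image with a shift-invariant PSF.
    'nsr' is the assumed noise-to-signal power ratio (regularization)."""
    psf = psf / psf.sum()                       # normalize PSF energy
    # Embed the centered PSF in a full-size frame, then move its peak to the
    # origin so the convolution kernel carries no phase (i.e., image) offset.
    kernel = np.zeros(image.shape)
    ky, kx = psf.shape
    kernel[:ky, :kx] = psf
    kernel = np.roll(kernel, (-(ky // 2), -(kx // 2)), axis=(0, 1))
    H = np.fft.rfft2(kernel)
    G = np.fft.rfft2(image)
    F_hat = np.conj(H) * G / (np.abs(H)**2 + nsr)
    return np.fft.irfft2(F_hat, s=image.shape)

# Hypothetical data: a swept-focus (EDOF) frame and its effective 2D PSF.
edof_image = np.random.rand(512, 512)
yy, xx = np.mgrid[-16:16, -16:16]
psf_2d = np.exp(-(xx**2 + yy**2) / (2 * 4.0**2))   # placeholder Gaussian PSF
restored = wiener_deconvolve(edof_image, psf_2d, nsr=5e-3)
```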

An extended focus can also be generated with no moving parts, by modulating the wavefront of the fluorescence in either amplitude or phase. The resulting PSF is thus “engineered” to be axially extended. To preserve PSF translational invariance, the wavefront modulator should be placed in the detection pupil plane. Example modulators are annular masks [50], (incoherent) superpositions of annular masks [51,52], Fresnel zone plates [53], logarithmic masks [54,55], etc. These example masks are radially symmetric, meaning that they lead to radially symmetric PSFs (as does the focal sweep method described above). Such PSFs have the advantage that they produce images that are readily interpretable even in their raw form. The same cannot be said for asymmetric pupil masks, such as cubic phase masks [56,57], or even random diffusers [58,59], which require the additional step of deconvolution. But asymmetric masks have their own advantages. Because of asymmetries in the resultant engineered PSFs, there can be a coupling between the axial position of an object and the transverse position of its image, which can facilitate depth ranging from a single image [60]. PSFs optimized for this purpose can be very strange indeed [61,62], and even made broadband [63]. Note that such depth ranging can be rendered more robust with two-shot imaging with different wavefront-coded pupils [64,65] (generalized to multi-shot [66]), or even with opposing illumination ramps [67], but this undermines somewhat the speed advantage of EDOF imaging.
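The effect of such pupil masks can be previewed with a scalar Fourier-optics model: the incoherent PSF at a given defocus is the squared magnitude of the Fourier transform of the pupil function multiplied by a paraxial defocus phase. The sketch below compares a clear pupil with a cubic phase mask over a range of defocus values; the mask strength `alpha` and the defocus values are arbitrary illustrative assumptions.

```python
import numpy as np

N = 256
x = np.linspace(-1, 1, N)                      # normalized pupil coordinates
X, Y = np.meshgrid(x, x)
R2 = X**2 + Y**2
pupil = (R2 <= 1.0).astype(float)              # clear circular aperture

def psf(defocus_waves, cubic_alpha=0.0):
    """Incoherent PSF for a circular pupil with an optional cubic phase mask.
    'defocus_waves' is the defocus aberration (in waves) at the pupil edge;
    'cubic_alpha' is the cubic-mask strength (in waves), 0 for a clear pupil."""
    phase = 2 * np.pi * (defocus_waves * R2 + cubic_alpha * (X**3 + Y**3))
    field = pupil * np.exp(1j * phase)
    amp = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(field)))
    return np.abs(amp)**2

# Through-focus comparison: the clear-pupil PSF degrades rapidly with defocus,
# whereas the cubic-mask PSF changes far less (and is later deconvolved away).
for dz in (0.0, 1.0, 2.0):
    clear = psf(dz)
    coded = psf(dz, cubic_alpha=10.0)
    print(f"defocus {dz:.0f} waves: peak(clear) = {clear.max():.3g}, "
          f"peak(coded) = {coded.max():.3g}")
```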

EDOF approaches can also be applied to light sheet microscopy. For example, phase/amplitude masks can be placed in the illumination pupil to extend the range of a light sheet [68], or to produce what is known as a lattice light sheet of exceptionally fine resolution [69] (see Fig. 4). Alternatively, masks can be placed in the detection pupil, allowing the light sheet to be broadened (and hence also extended in range) [70,71]. This last strategy was used in [72], where the detection path was aberrated by a simple refractive-index block.

Fig. 4. Different masks applied to the illumination pupil of a (swept-beam) light sheet microscope. A, a Gaussian beam at the pupil leads to a Gaussian beam at the sample that, when swept, becomes a light sheet. B, an annular beam at the pupil leads to a Bessel beam light sheet. C, a square lattice at the pupil optimizes the confinement to the central Bessel plane. D, a hexagonal lattice optimizes the overall light sheet axial resolution. Reproduced with permission from [69], 2014, AAAS.

A final EDOF approach is mentioned here because it is as intriguing as it is difficult to classify [73,74]. By imaging the Fourier plane (as opposed to the object plane) of a phase-shifting rotational shear interferometer, and performing SIM-like computations, an image with, in principle, limitless DOF can be numerically reconstructed.

3. SCANNING-BASED STRATEGIES

We turn now to volumetric imaging strategies based on scanning microscopes. One of the key advantages of scanning microscopes is that they enable optical sectioning even in head-on illumination configurations, either by physically blocking fluorescence background (as in confocal microscopy), or by not generating background in the first place (as in multi-photon microscopy), thereby providing high signal contrast. However, such microscopes generally rely on the scanning of a single focal probe volume throughout the sample, which can be slow, particularly in three dimensions. An obvious way to address this problem is to develop faster scanning mechanisms, and indeed this direct approach is perhaps the most promising for the advancement of scanning-based volumetric microscopy. However, before considering this approach we turn to an alternative compromise strategy, which, while not providing volumetric imaging per se, provides quasi-volumetric imaging. Specifically, we turn to EDOF approaches based on PSF engineering.

A. PSF Engineering

As before, the point of PSF engineering is to increase acquisition speed. In the case of camera-based microscopes, quasi-volumetric images are obtained from a single camera frame, without the need for axial scanning. In the same manner, in the case of scanning-based microscopes, one fewer scan dimension is required. Several examples of EDOF imaging can be found in multi-photon microscopy, where the laser focus is stretched from a Gaussian to a Bessel focus either dynamically [75] or statically [76–80] (e.g., Fig. 5). Another example makes use of simultaneous multi-focus excitation distributed axially [81,82]. In these examples, because of the symmetric nature of the engineered PSF, axial resolution is lost. On the other hand, for samples that are fixed in space and vary only in time, it can be argued that such axial resolution is not necessary, since it can be obtained separately either pre- or post hoc. Moreover, the segmentation of dynamic objects (e.g., active neurons) can in practice be performed post hoc using statistical methods [83], provided one has acquired sufficiently long time sequences.
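To see why an annular pupil stretches the focus axially, one can evaluate the on-axis field as a function of defocus for a filled versus an annular illumination pupil (the same paraxial pupil-defocus model as above), and then square the intensity for the two-photon signal. The sketch below is a minimal numerical illustration with assumed values of NA, wavelength, and annulus width; it is not a model of any specific cited system.

```python
import numpy as np

NA, wavelength_um = 0.8, 0.94        # assumed excitation NA and wavelength (um)
rho = np.linspace(0.0, 1.0, 2000)    # normalized radial pupil coordinate
z_um = np.linspace(-100, 100, 4001)  # defocus range (um)
drho = rho[1] - rho[0]

def on_axis_two_photon(mask):
    """On-axis two-photon signal vs defocus for a radially symmetric pupil mask.
    Paraxial model: E(z) = sum over rho of mask * exp(i*pi*z*NA^2*rho^2/lambda) * 2*rho."""
    phase = np.pi * z_um[:, None] * NA**2 * rho[None, :]**2 / wavelength_um
    E = np.sum(mask[None, :] * np.exp(1j * phase) * 2 * rho[None, :], axis=1) * drho
    signal = np.abs(E)**4            # two-photon excitation ~ intensity squared
    return signal / signal.max()

filled = np.ones_like(rho)                              # standard filled pupil
annulus = ((rho > 0.9) & (rho <= 1.0)).astype(float)    # thin annulus -> Bessel-like focus

def fwhm_um(signal):
    above = z_um[signal >= 0.5]
    return above[-1] - above[0]

print("axial FWHM, filled pupil :", round(fwhm_um(on_axis_two_photon(filled)), 1), "um")
print("axial FWHM, annular pupil:", round(fwhm_um(on_axis_two_photon(annulus)), 1), "um")
```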

Fig. 5. A, scanning a standard Gaussian focus (yellow) in the x–y plane probes structures in a thin optical section, whereas scanning a Bessel focus (orange) in the x–y plane probes structures throughout a 3D volume. B, a Bessel beam can be created by inserting an SLM and mask to achieve annular illumination in the pupil plane of a two-photon microscope. Reproduced with permission from [78], 2017, Springer Nature.

Nevertheless, in some applications, some degree of axial resolution may be desirable, even if only a capacity for depth ranging. To this end, asymmetric PSFs that couple axial to transverse spatial information can be useful. For example, two-photon microscopy, where the PSF is defined by the excitation beam, can achieve depth ranging with twin-beam stereoscopy [84,85]. Similarly, confocal microscopy, where the PSF is defined by both excitation and detection, can achieve depth ranging with combined wavefront engineering and the use of an array detector [86,87] (which can be parallelized for higher speed [88]). A drawback of these methods is that they all require some form of image deconvolution to disentangle the depth information. A more straightforward approach is to perform multi-plane imaging directly (i.e., computation-free) with the use of a micromirror array [89] or axially distributed reflecting pinholes [90] (see Fig. 6). These last techniques are confocal microscopes that provide optically sectioned image stacks rather than single images, using the same laser power and with no penalty in speed.

Fig. 6. Multi-Z confocal imaging can be achieved by underfilling the illumination pupil while utilizing the full detection pupil and detecting the resultant fluorescence with a series of axially distributed reflecting pinholes. Here, four optically sectioned images are acquired simultaneously. More can be acquired sequentially with the addition of an ETL. Adapted with permission from [90], 2019, OSA.

B. Fast 3D Scanning

We turn now to strategies where axial information is acquired not instantaneously, as it was above, but instead sequentially in time, allowing time to serve as the dimension into which spatial information is encoded. In other words, we revert to the standard strategy of focal spot scanning, but here in 3D rather than simply 2D. The most recent developments in this regard have been based on increases in scanning speed.

One approach comes from recognizing that fluorescent objects of interest are often sparsely distributed, meaning that fast scanning can be achieved by visiting only those objects, while skipping over the spaces in between. Rapid switching from one target to another can be achieved, for example, with acousto-optic deflectors (AODs), which are free of inertia. By exploiting the fact that AODs can be used to focus laser beams in addition to deflecting them [91], such targeted scanning can even be performed in 3D with two-photon excitation [92–95]. However, a drawback of this approach is that it is restricted to objects (here, neurons) that remain fixed in space over time, making it susceptible to motion artifacts that cannot be corrected a posteriori. Moreover, it requires an initial calibration step to determine where the objects of interest are located in the first place.
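The speed benefit of random-access scanning follows from simple bookkeeping: the cycle rate is set by the number of targets times the per-target access and dwell times, rather than by the number of voxels in the volume. The sketch below uses purely illustrative numbers (target count, AOD access time, dwell time) that are assumptions rather than values from the cited systems.

```python
# Illustrative timing budget for random-access (targeted) scanning.
# All numbers are assumptions chosen for the sake of the arithmetic.

n_targets   = 200        # neurons (or points) visited per cycle
t_access_us = 20.0       # AOD settling/access time per jump (us), assumed
t_dwell_us  = 10.0       # signal-integration dwell per target (us), assumed

cycle_ms = n_targets * (t_access_us + t_dwell_us) * 1e-3
print(f"cycle time ~ {cycle_ms:.1f} ms -> sampling rate ~ {1e3 / cycle_ms:.0f} Hz")

# Compare with raster-scanning the same volume at, say, 512 x 512 x 100 voxels
# and a 1 us pixel dwell: the full space-filling scan is orders of magnitude slower.
n_voxels = 512 * 512 * 100
raster_s = n_voxels * 1e-6
print(f"space-filling raster of {n_voxels:.1e} voxels ~ {raster_s:.1f} s per volume")
```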

For object distributions that move or distort over time, or are unknown a priori, space-filling scanning strategies are preferred. An early attempt at performing this at high speed involved transverse spiral scanning and sweeping the objective of a two-photon microscope as fast as possible [96]. More recently, much faster axial scanning has been demonstrated with ETLs [97] or TAG [98] lenses, or voice-coil-based remote focusing [99,100]. But space filling at high resolution quickly becomes problematic, since it is intrinsically time consuming and leads to the proliferation of unwieldy amounts of data. For fast, large-scale imaging, an intentional sacrifice in resolution may be required, and even perfectly acceptable (e.g., [90,101,102]). Alternatively, one can perform fast scanning that is not exactly space filling by resorting once more to discrete multi-plane imaging, as demonstrated in confocal microscopy with a TAG lens [103], or in two-photon microscopy with an ETL [104] or fast AOD switching [105].
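To make the data-proliferation point concrete, the following back-of-the-envelope estimate converts an assumed volume size, voxel size, volume rate, and bit depth into a sustained data rate; every number is an illustrative assumption.

```python
# Back-of-the-envelope data rate for space-filling volumetric scanning.
# All parameter values are illustrative assumptions.

fov_um        = (500.0, 500.0, 200.0)   # imaged volume (x, y, z) in um
voxel_um      = (0.5, 0.5, 2.0)         # sampling (x, y, z) in um
volumes_per_s = 5.0                     # volume rate (Hz)
bytes_per_vox = 2                       # 16-bit samples

voxels_per_volume = 1
for extent, step in zip(fov_um, voxel_um):
    voxels_per_volume *= int(extent / step)

rate_bytes = voxels_per_volume * volumes_per_s * bytes_per_vox
print(f"{voxels_per_volume:.2e} voxels per volume")
print(f"sustained data rate ~ {rate_bytes / 1e9:.1f} GB/s, "
      f"or ~ {rate_bytes * 3600 / 1e12:.1f} TB/hour")
```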

Finally, with the advent of fast detector technologies, a new technique for multi-plane imaging has emerged that is near-instantaneous while nevertheless continuing to allow the separation of different planes in time. This technique, applicable to multi-photon microscopy, involves splitting the excitation laser beam into independently time-delayed beamlets that can be focused to different depths within the sample. The lower limit on the time delays between the beamlets is not even imposed by the detector speeds (which nowadays are fast), but rather by the finite lifetime of the fluorescent indicators, which is typically a few nanoseconds. Since its original demonstration [106], this technique has gained popularity [107–109], which will likely grow with the continued development of low-repetition-rate lasers for ultradeep imaging (e.g., [110]). For example, this technology has been applied with combined two- and three-photon microscopy, to rapidly probe volumes of more than a cubic millimeter in mouse brain [111] (see Fig. 7). But the complexity of these devices increases with the number of imaging planes. To solve this problem, a “reverberation loop” can be used to split the excitation beam into an arbitrary number of increasingly time-delayed beamlets [112] (see Fig. 8). In addition to being simple, this last technique has the benefit of automatically matching the beamlet power to the imaging depth in tissue.
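The depth matching mentioned in the last sentence can be illustrated with a toy calculation: each pass through a 50:50 loop halves the beamlet power, while the two-photon signal generated at depth z falls off roughly as the square of exp(−z/l_s) for an excitation scattering length l_s. Under that assumed exponential-attenuation model, a factor-of-two power ladder yields equal signal per plane when the planes are spaced by l_s·ln 2. The numbers below (scattering length, surface power, plane count) are illustrative assumptions only.

```python
import numpy as np

l_s_um   = 150.0                 # assumed excitation scattering length (um)
n_planes = 4
dz_um    = l_s_um * np.log(2)    # plane spacing that balances a factor-2 power ladder
p0_mw    = 100.0                 # assumed total power entering the loop (mW)

print(f"plane spacing for balanced signal ~ {dz_um:.0f} um")
for n in range(n_planes):
    power_mw = p0_mw / 2**(n + 1)            # each loop pass halves the beamlet power
    depth_um = (n_planes - 1 - n) * dz_um    # first (strongest) beamlet goes deepest
    # Two-photon signal ~ (ballistic power at focus)^2, with power ~ exp(-z/l_s)
    signal = (power_mw * np.exp(-depth_um / l_s_um))**2
    print(f"beamlet {n}: {power_mw:5.1f} mW at depth {depth_um:5.0f} um -> "
          f"relative 2P signal {signal:.1f}")
```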

Fig. 7. Schematic of a hybrid microscope that provides simultaneous two- (red) and three- (blue) photon excitation, using a custom-built laser. Resulting fluorescence is in green. This microscope includes a time multiplexing module for near-instantaneous four-plane two-photon imaging, a temporal focusing module to tailor the spatial resolution for video-rate scanning, and a remote focusing module for axial scanning of the four-plane stack. Reproduced with permission from [111], 2019, Elsevier.

Fig. 8. An arbitrary number of focal planes can be near-instantaneously captured by inserting a reverberation loop in the excitation beam of a multi-photon microscope [112]. A lens (or equivalent) in the loop causes each subsequent beamlet to focus to increasingly shallower depths all the way to the sample surface. With a 50:50 loop beam splitter, half the laser power targets the deepest plane and the other half targets all the other planes, while producing roughly equal fluorescence power per plane.

4. OUTLOOK

This review of volumetric microscopy strategies will most certainly become obsolete within a few years, given the rapid progress in new camera and scanning technologies. For example, the point of the PSF engineering approaches described above is that they obviate the need for axial scanning. This advantage becomes less compelling with the development of ever faster scanning techniques. Hardware is already available that enables volumetric microscopy at scales up to a cubic millimeter and speeds up to video rate in a direct manner (i.e., computation-free). At these scales, another problem usually intervenes that is much more fundamental than the hardware itself, which is the problem of signal-to-noise ratio (SNR). Indeed, the shorter the amount of time one can allocate to the detection of signal, the greater the relative noise, shot or otherwise. This problem of SNR is particularly restrictive in the case of fluorescence, where maximum signals are limited by fluorescence lifetimes, or, as is more often the case, by constraints related to photobleaching or photodamage. Ultimately, these are the issues that will have to be addressed if progress in volumetric imaging is to continue. Computational techniques based on a priori sample information will likely be useful in this regard, as will the development of indicators that are brighter and more robust, or not based on fluorescence at all.
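The SNR argument can be quantified with a shot-noise budget: dividing the volume acquisition time among the voxels gives the dwell time per voxel, the detected photon rate times that dwell gives the expected photon count N, and the shot-noise-limited SNR is √N. The detected-photon rate below is an arbitrary assumption standing in for whatever the indicator, excitation power, and collection efficiency allow.

```python
import numpy as np

# Shot-noise budget for sequential (single-focus) volumetric scanning.
# All parameter values are illustrative assumptions.

volume_rate_hz = 30.0            # video-rate volumes
voxels_per_vol = 512 * 512 * 50
photon_rate_hz = 1e8             # assumed detected-photon rate while on a voxel

dwell_s  = 1.0 / (volume_rate_hz * voxels_per_vol)
photons  = photon_rate_hz * dwell_s
snr_shot = np.sqrt(photons)      # shot-noise-limited SNR per voxel

print(f"dwell per voxel ~ {dwell_s * 1e9:.1f} ns")
print(f"expected photons per voxel ~ {photons:.2f} -> shot-noise SNR ~ {snr_shot:.2f}")
```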

Funding

National Science Foundation Directorate for Engineering (EEC-0812056); National Institutes of Health - National Eye Institute (R21-EY027549).

REFERENCES

1. C. J. R. Sheppard and T. Wilson, “On the equivalence of scanning and conventional microscopes,” Optik 73, 39–43 (1986).

2. M. Martínez-Corral, P.-Y. Hsieh, A. Doblas, E. Sánchez-Ortiga, G. Saavedra, and Y.-P. Huang, “Fast axial-scanning widefield microscopy with constant magnification and resolution,” J. Display Technol. 11, 913–920 (2015). [CrossRef]  

3. H. Oku, K. Hashimoto, and M. Ishikawa, “Variable-focus lens with 1-kHz bandwidth,” Opt. Express 12, 2138–2149 (2004). [CrossRef]  

4. A. Mermillod-Blondin, E. McLeod, and C. B. Arnold, “High-speed varifocal imaging with a tunable acoustic gradient index of refraction lens,” Opt. Lett. 33, 2146–2148 (2008). [CrossRef]  

5. M. D. Maschio, A. M. D. Stasi, F. Benfenati, and T. Fellin, “Three-dimensional in vivo scanning microscopy with inertia-free focus control,” Opt. Lett. 36, 3503–3505 (2011). [CrossRef]  

6. M. J. Moghimi, K. N. Chattergoon, C. R. Wilson, and D. L. Dickensheets, “High speed focus control MEMS mirror with controlled air damping for vital microscopy,” J. Microelectromech. Syst. 22, 938–948 (2013). [CrossRef]  

7. W. J. Shain, N. A. Vickers, B. B. Goldberg, T. Bifano, and J. Mertz, “Extended depth-of-field microscopy with a high-speed deformable mirror,” Opt. Lett. 42, 995–998 (2017). [CrossRef]  

8. E. J. Botcherby, R. Juskaitis, M. J. Booth, and T. Wilson, “Aberration-free optical refocusing in high numerical aperture microscopy,” Opt. Lett. 32, 2007–2009 (2007). [CrossRef]  

9. Y. Wu and H. Shroff, “Faster, sharper, and deeper: structured illumination microscopy for biological imaging,” Nat. Methods 15, 1011–1019 (2018). [CrossRef]  

10. R. M. Power and J. Huisken, “A guide to light-sheet fluorescence microscopy for multiscale imaging,” Nat. Methods 14, 360–373 (2017). [CrossRef]  

11. F. F. Voigt, D. Kirschenbaum, E. Platonova, S. Pagès, R. A. A. Campbell, R. Kästli, M. Schaettin, L. Egolf, A. van der Bourg, P. Bethge, K. Haenraets, N. Frézel, T. Topilko, P. Perin, D. Hillier, S. Hildebrand, A. Schueth, A. Roebroeck, B. Roska, E. Stoeckli, R. Pizzala, N. Renier, H. U. Zeilhofer, T. Karayannis, U. Ziegler, L. Batti, A. Holtmaat, C. Lüscher, A. Aguzzi, and F. Helmchen, “The mesoSPIM initiative: open-source light-sheet mesoscopes for imaging in cleared tissue,” bioRxiv 577122 (2019).

12. F. O. Fahrbach, F. F. Voigt, B. Schmid, F. Helmchen, and J. Huisken, “Rapid 3D light-sheet microscopy with a tunable lens,” Opt. Express 21, 21010–21026 (2013). [CrossRef]  

13. M. Duocastella, G. Sancataldo, P. Saggau, P. Ramoino, P. Bianchini, and A. Diaspro, “Fast inertia-free volumetric light-sheet microscope,” ACS Photonics 4, 1797–1804 (2017). [CrossRef]  

14. R. K. Chhetri, F. Amat, Y. Wan, B. Höckendorf, W. C. Lemon, and P. J. Keller, “Whole-animal functional and developmental imaging with isotropic spatial resolution,” Nat. Methods 12, 1171–1178 (2015). [CrossRef]  

15. C. Dunsby, “Optically sectioned imaging by oblique plane microscopy,” Opt. Express 16, 20306–20316 (2008). [CrossRef]  

16. T. Li, S. Ota, J. Kim, Z. J. Wong, Y. Wang, X. Yin, and X. Zhang, “Axial plane optical microscopy,” Sci. Rep. 4, 7253 (2014). [CrossRef]  

17. S. Kumar, D. Wilding, M. B. Sikkel, A. R. Lyon, K. T. MacLeod, and C. Dunsby, “Application of oblique plane microscopy to high speed live cell imaging,” Proc. SPIE 8086, 80860V (2011). [CrossRef]  

18. M. B. Bouchard, V. Voleti, C. S. Mendes, C. Lacefield, W. B. Grueber, R. S. Mann, R. M. Bruno, and E. M. C. Hillman, “Swept confocally-aligned planar excitation (SCAPE) microscopy for high-speed volumetric imaging of behaving organisms,” Nat. Photonics 9, 113–119 (2015). [CrossRef]  

19. M. Kumar, S. Kishore, J. Nasenbeny, D. L. McLean, and Y. Kozorovitskiy, “Integrated one- and two-photon scanned oblique plane illumination (SOPi) microscopy for rapid volumetric imaging,” Opt. Express 26, 13027–13041 (2018). [CrossRef]  

20. B. Yang, X. Chen, Y. Wang, S. Feng, V. Pessino, N. Stuurman, N. H. Cho, K. W. Cheng, S. J. Lord, L. Xu, D. Xie, R. D. Mullins, M. D. Leonetti, and B. Huang, “Epi-illumination SPIM for volumetric imaging with high spatial-temporal resolution,” Nat. Methods 16, 501–504 (2019). [CrossRef]  

21. T. V. Truong, W. Supatto, D. S. Koos, J. M. Choi, and S. E. Fraser, “Deep and fast live imaging with two-photon scanned light-sheet microscopy,” Nat. Methods 8, 757–760 (2011). [CrossRef]  

22. D. Oron, E. Tal, and Y. Silberberg, “Scanningless depth-resolved microscopy,” Opt. Express 13, 1468–1476 (2005). [CrossRef]  

23. G. Zhu, J. van Howe, M. Durst, W. Zipfel, and C. Xu, “Simultaneous spatial and temporal focusing of femtosecond pulses,” Opt. Express 13, 2153–2159 (2005). [CrossRef]  

24. J. Jiang, D. Zhang, S. Walker, C. Gu, Y. Ke, W. H. Yung, and S.-C. Chen, “Fast 3-D temporal focusing microscopy using an electrically tunable lens,” Opt. Express 23, 24362–24368 (2015). [CrossRef]  

25. P. Prabhat, S. Ram, E. S. Ward, and R. J. Ober, “Simultaneous imaging of different focal planes in fluorescence microscopy for the study of cellular dynamics in three dimensions,” IEEE Trans. Nanobiosci. 3, 237–242 (2004). [CrossRef]  

26. K. M. Dean, P. Roudot, E. S. Welf, T. Pohlkamp, G. Garrelts, J. Herz, and R. Fiolka, “Imaging subcellular dynamics with fast and light-efficient volumetrically parallelized microscopy,” Optica 4, 263–271 (2017). [CrossRef]  

27. P. M. Blanchard and A. H. Greenaway, “Simultaneous multiplane imaging with a distorted diffraction grating,” Appl. Opt. 38, 6692–6699 (1999). [CrossRef]  

28. C. Maurer, S. Khan, S. Fassl, S. Bernet, and M. Ritsch-Marte, “Depth of field multiplexing in microscopy,” Opt. Express 18, 3023–3034 (2010). [CrossRef]  

29. S. Abrahamsson, J. Chen, B. Hajj, S. Stallinga, A. Y. Katsov, J. Wisniewski, G. Mizuguchi, P. Soule, F. Mueller, C. D. Darzacq, X. Darzacq, C. Wu, C. I. Bargmann, D. A. Agard, M. Dahan, and M. G. L. Gustafsson, “Fast multicolor 3D imaging using aberration-corrected multifocus microscopy,” Nat. Methods 10, 60–63 (2013). [CrossRef]  

30. Q. Ma, B. Khademhosseinieh, E. Huang, H. Qian, M. A. Bakowski, E. R. Troemel, and Z. Liu, “Three-dimensional fluorescent microscopy via simultaneous illumination and detection at multiple planes,” Sci. Rep. 6, 31445 (2016). [CrossRef]  

31. M. Levoy, “Light fields and computational imaging,” Computer 39, 46–55 (2006). [CrossRef]  

32. E. H. Adelson and J. Y. A. Wang, “Single lens stereo with a plenoptic camera,” IEEE Trans. Pattern Anal. Mach. Intell. 14, 99–106 (1992). [CrossRef]  

33. Z. Zhang and M. Levoy, “Wigner distributions and how they relate to the light field,” in IEEE International Conference on Computational Photography (ICCP) (2009), pp. 1–10.

34. M. Broxton, L. Grosenick, S. Yang, N. Cohen, A. Andalman, K. Deisseroth, and M. Levoy, “Wave optics theory and 3-D deconvolution for the light field microscope,” Opt. Express 21, 25418–25439 (2013). [CrossRef]  

35. N. Cohen, S. Yang, A. Andalman, M. Broxton, L. Grosenick, K. Deisseroth, M. Horowitz, and M. Levoy, “Enhancing the performance of the light field microscope using wavefront coding,” Opt. Express 22, 24817–24839 (2014). [CrossRef]  

36. N. C. Pégard, H.-Y. Liu, N. Antipa, M. Gerlock, H. Adesnik, and L. Waller, “Compressive light-field microscopy for 3D neural activity recording,” Optica 3, 517–524 (2016). [CrossRef]  

37. L. Cong, Z. Wang, Y. Chai, W. Hang, C. Shang, W. Yang, L. Bai, J. Du, K. Wang, and Q. Wen, “Rapid whole brain imaging of neural activity in freely behaving larval zebrafish (Danio rerio),” eLife 6, e28158 (2017). [CrossRef]  

38. N. Wagner, N. Norlin, J. Gierten, G. de Medeiros, B. Balázs, J. Wittbrodt, L. Hufnagel, and R. Prevedel, “Instantaneous isotropic volumetric imaging of fast biological processes,” Nat. Methods 16, 497–500 (2019). [CrossRef]  

39. H.-Y. Liu, E. Jonas, L. Tian, J. Zhong, B. Recht, and L. Waller, “3D imaging in volumetric scattering media using phase-space measurements,” Opt. Express 23, 14461–14471 (2015). [CrossRef]  

40. O. Skocek, T. Nöbauer, L. Weilguny, F. M. Traub, C. N. Xia, M. I. Molodtsov, A. Grama, M. Yamagata, D. Aharoni, D. D. Cox, P. Golshani, and A. Vaziri, “High-speed volumetric imaging of neuronal activity in freely moving rodents,” Nat. Methods 15, 429–432 (2018). [CrossRef]  

41. M. Martínez-Corral and B. Javidi, “Fundamentals of 3D imaging and displays: a tutorial on integral imaging, light-field, and plenoptic systems,” Adv. Opt. Photonics 10, 512–566 (2018). [CrossRef]  

42. H. Li, C. Guo, D. Kim-Holzapfel, W. Li, Y. Altshuller, B. Schroeder, W. Liu, Y. Meng, J. B. French, K.-I. Takamaru, M. A. Frohman, and S. Jia, “Fast, volumetric live-cell imaging using high-resolution light-field microscopy,” Biomed. Opt. Express 10, 29–49 (2019). [CrossRef]  

43. A. Orth and K. B. Crozier, “Light field moment imaging,” Opt. Lett. 38, 2666–2668 (2013). [CrossRef]  

44. J.-C. Baritaux, C. R. Chan, J. Li, and J. Mertz, “View synthesis with a partitioned-aperture microscope,” Opt. Lett. 39, 685–688 (2014). [CrossRef]  

45. G. Häusler, “A method to increase the depth of focus by two step image processing,” Opt. Commun. 6, 38–42 (1972). [CrossRef]  

46. H. Nagahara, S. Kuthirummal, C. Zhou, and S. K. Nayar, “Flexible depth of field photography,” in Computer Vision–ECCV 2008 (D. Forsyth, P. Torr, and A. Zisserman, eds.) (Springer, 2008), pp. 60–73.

47. S. Liu and H. Hua, “Extended depth-of-field microscopic imaging with a variable focus microscope objective,” Opt. Express 19, 353–362 (2011). [CrossRef]  

48. W. J. Shain, N. A. Vickers, A. Negash, T. Bifano, A. Sentenac, and J. Mertz, “Dual fluorescence-absorption deconvolution applied to extended-depth-of-field microscopy,” Opt. Lett. 42, 4183–4186 (2017). [CrossRef]  

49. S. Xiao, H. Tseng, H. Gritton, X. Han, and J. Mertz, “Video-rate volumetric neuronal imaging using 3D targeted illumination,” Sci. Rep. 8, 7921 (2018). [CrossRef]  

50. W. T. Welford, “Use of annular apertures to increase focal depth,” J. Opt. Soc. Am. 50, 749–753 (1960). [CrossRef]  

51. S. Abrahamsson, S. Usawa, and M. Gustafsson, “A new approach to extended focus for high-speed high-resolution biological microscopy,” Proc. SPIE 6090, 60900N (2006). [CrossRef]  

52. K. Chu, N. George, and W. Chi, “Extending the depth of field through unbalanced optical path difference,” Appl. Opt. 47, 6895–6903 (2008). [CrossRef]  

53. G. Indebetouw and H. Bai, “Imaging with Fresnel zone pupil masks: extended depth of field,” Appl. Opt. 23, 4299–4302 (1984). [CrossRef]  

54. W. Chi and N. George, “Electronic imaging using a logarithmic asphere,” Opt. Lett. 26, 875–877 (2001). [CrossRef]  

55. S. Mezouari and A. R. Harvey, “Phase pupil functions for reduction of defocus and spherical aberrations,” Opt. Lett. 28, 771–773 (2003). [CrossRef]  

56. E. R. Dowski and W. T. Cathey, “Extended depth of field through wave-front coding,” Appl. Opt. 34, 1859–1866 (1995). [CrossRef]  

57. A. Castro and J. Ojeda-Castañeda, “Asymmetric phase masks for extended depth of field,” Appl. Opt. 43, 3474–3479 (2004). [CrossRef]  

58. E. E. García-Guerrero, E. R. Méndez, H. M. Escamilla, T. A. Leskova, and A. A. Maradudin, “Design and fabrication of random phase diffusers for extending the depth of focus,” Opt. Express 15, 910–923 (2007). [CrossRef]  

59. O. Cossairt, C. Zhou, and S. Nayar, “Diffusion coded photography for extended depth of field,” in ACM SIGGRAPH 2010 Papers (Association for Computing Machinery, 2010), p. 31.

60. Y. Zhou, P. Zammit, G. Carles, and A. R. Harvey, “Computational localization microscopy with extended axial range,” Opt. Express 26, 7563–7577 (2018). [CrossRef]  

61. Y. Shechtman, S. J. Sahl, A. S. Backer, and W. E. Moerner, “Optimal point spread function design for 3D imaging,” Phys. Rev. Lett. 113, 133902 (2014). [CrossRef]  

62. Y. Shechtman, L. E. Weiss, A. S. Backer, S. J. Sahl, and W. E. Moerner, “Precise three-dimensional scan-free multiple-particle tracking over large axial ranges with tetrapod point spread functions,” Nano Lett. 15, 4194–4199 (2015). [CrossRef]  

63. Y. Shechtman, L. E. Weiss, A. S. Backer, M. Y. Lee, and W. E. Moerner, “Multicolour localization microscopy by point-spread-function engineering,” Nat. Photonics 10, 590–594 (2016). [CrossRef]  

64. S. Quirin and R. Piestun, “Depth estimation and image recovery using broadband, incoherent illumination with engineered point spread functions [Invited],” Appl. Opt. 52, A367–A376 (2013). [CrossRef]  

65. P. Zammit, A. R. Harvey, and G. Carles, “Extended depth-of-field imaging and ranging in a snapshot,” Optica 1, 209–216 (2014). [CrossRef]  

66. H.-Y. Liu, J. Zhong, and L. Waller, “Multiplexed phase-space imaging for 3D fluorescence microscopy,” Opt. Express 25, 14986–14995 (2017). [CrossRef]  

67. W. J. Shain, N. A. Vickers, J. Li, X. Han, T. Bifano, and J. Mertz, “Axial localization with modulated-illumination extended-depth-of-field microscopy,” Biomed. Opt. Express 9, 1771–1782 (2018). [CrossRef]  

68. T. Vettenburg, H. I. C. Dalgarno, J. Nylk, C. Coll-Lladó, D. E. K. Ferrier, T. Čižmár, F. J. Gunn-Moore, and K. Dholakia, “Light-sheet microscopy using an Airy beam,” Nat. Methods 11, 541–544 (2014). [CrossRef]  

69. B.-C. Chen, W. R. Legant, K. Wang, L. Shao, D. E. Milkie, M. W. Davidson, C. Janetopoulos, X. S. Wu, J. A. Hammer III, Z. Liu, B. P. English, Y. Mimori-Kiyosue, D. P. Romero, A. T. Ritter, J. Lippincott-Schwartz, L. Fritz-Laylin, R. D. Mullins, D. M. Mitchell, J. N. Bembenek, A.-C. Reymann, R. Böhme, S. W. Grill, J. T. Wang, G. Seydoux, U. S. Tulu, D. P. Kiehart, and E. Betzig, “Lattice light-sheet microscopy: imaging molecules to embryos at high spatiotemporal resolution,” Science 346, 1257998 (2014). [CrossRef]  

70. O. E. Olarte, J. Andilla, D. Artigas, and P. Loza-Alvarez, “Decoupled illumination detection in light sheet microscopy for fast volumetric imaging,” Optica 2, 702–705 (2015). [CrossRef]  

71. S. Quirin, N. Vladimirov, C.-T. Yang, D. S. Peterka, R. Yuste, and B. M. Ahrens, “Calcium imaging of neural circuits with extended depth-of-field light-sheet microscopy,” Opt. Lett. 41, 855–858 (2016). [CrossRef]  

72. R. Tomer, M. Lovett-Barron, I. Kauvar, A. S. Andalman, V. M. Burns, S. Sankaran, L. Grosenick, M. Broxton, S. Yang, and K. Deisseroth, “SPED light sheet microscopy: fast mapping of biological system structure and function,” Cell 163, 1796–1806 (2015). [CrossRef]  

73. P. Potuluri, M. R. Fetterman, and D. J. Brady, “High depth of field microscopic imaging using an interferometric camera,” Opt. Express 8, 624–630 (2001). [CrossRef]  

74. D. Weigel, H. Babovsky, A. Kiessling, and R. Kowarschik, “Widefield microscopy with infinite depth of field and enhanced lateral resolution based on an image inverting interferometer,” Opt. Commun. 342, 102–108 (2015). [CrossRef]  

75. N. Olivier, A. Mermillod-Blondin, C. B. Arnold, and E. Beaurepaire, “Two-photon microscopy with simultaneous standard and extended depth of field using a tunable acoustic gradient-index lens,” Opt. Lett. 34, 1684–1686 (2009). [CrossRef]  

76. E. J. Botcherby, R. Juškaitis, and T. Wilson, “Scanning two photon fluorescence microscopy with extended depth of field,” Opt. Commun. 268, 253–260 (2006). [CrossRef]  

77. G. Thériault, Y. D. Koninck, and N. McCarthy, “Extended depth of field microscopy for rapid volumetric two-photon imaging,” Opt. Express 21, 10095–10104 (2013). [CrossRef]  

78. R. Lu, W. Sun, Y. Liang, A. Kerlin, J. Bierfeld, J. D. Seelig, D. E. Wilson, B. Scholl, B. Mohar, M. Tanimoto, M. Koyama, D. Fitzpatrick, M. B. Orger, and N. Ji, “Video-rate volumetric functional imaging of the brain at synaptic resolution,” Nat. Neurosci. 20, 620–628 (2017). [CrossRef]  

79. B. Chen, X. Huang, D. Gou, J. Zeng, G. Chen, M. Pang, Y. Hu, Z. Zhao, Y. Zhang, Z. Zhou, H. Wu, H. Cheng, Z. Zhang, C. Xu, Y. Li, L. Chen, and A. Wang, “Rapid volumetric imaging with Bessel-Beam three-photon microscopy,” Biomed. Opt. Express 9, 1992–2000 (2018). [CrossRef]  

80. C. Rodríguez, Y. Liang, R. Lu, and N. Ji, “Three-photon fluorescence microscopy with an axially elongated Bessel focus,” Opt. Lett. 43, 1914–1917 (2018). [CrossRef]  

81. W. Yang, J. K. Miller, L. Carrillo-Reid, E. Pnevmatikakis, L. Paninski, R. Yuste, and D. S. Peterka, “Simultaneous multi-plane imaging of neural circuits,” Neuron 89, 269–284 (2016). [CrossRef]  

82. S. Han, W. Yang, and R. Yuste, “Two-color volumetric imaging of neuronal activity of cortical columns,” Cell Rep. 27, 2229–2240 (2019). [CrossRef]  

83. E. A. Pnevmatikakis, D. Soudry, Y. Gao, T. A. Machado, J. Merel, D. Pfau, T. Reardon, Y. Mu, C. Lacefield, W. Yang, M. Ahrens, R. Bruno, T. M. Jessell, D. S. Peterka, R. Yuste, and L. Paninski, “Simultaneous denoising, deconvolution, and demixing of calcium imaging data,” Neuron 89, 285–299 (2016). [CrossRef]  

84. Y. Yang, B. Yao, M. Lei, D. Dan, R. Li, M. Van Horn, X. Chen, Y. Li, and T. Ye, “Two-photon laser scanning stereomicroscopy for fast volumetric imaging,” PLOS ONE 11, e0168885 (2016). [CrossRef]  

85. A. Song, A. S. Charles, S. A. Koay, J. L. Gauthier, S. Y. Thiberge, J. W. Pillow, and D. W. Tank, “Volumetric two-photon imaging of neurons using stereoscopy (vTwINS),” Nat. Methods 14, 420–426 (2017). [CrossRef]  

86. C. Roider, R. Piestun, and A. Jesacher, “3D image scanning microscopy with engineered excitation and detection,” Optica 4, 1373–1381 (2017). [CrossRef]  

87. C. Roider, R. Heintzmann, R. Piestun, and A. Jesacher, “Deconvolution approach for 3D scanning microscopy with helical phase engineering,” Opt. Express 24, 15456–15467 (2016). [CrossRef]  

88. S. Li, J. Wu, H. Li, D. Lin, B. Yu, and J. Qu, “Rapid 3D image scanning microscopy with multi-spot excitation and double-helix point spread function detection,” Opt. Express 26, 23585–23593 (2018). [CrossRef]  

89. C. Yang, K. Shi, M. Zhou, S. Zheng, S. Yin, and Z. Liu, “Z-microscopy for parallel axial imaging with micro mirror array,” Appl. Phys. Lett. 101, 231111 (2012). [CrossRef]  

90. A. Badon, S. Bensussen, H. J. Gritton, M. R. Awal, C. V. Gabel, X. Han, and J. Mertz, “Video-rate large-scale imaging with Multi-Z confocal microscopy,” Optica 6, 389–395 (2019). [CrossRef]  

91. A. Kaplan, N. Friedman, and N. Davidson, “Acousto-optic lens with very fast focus scanning,” Opt. Lett. 26, 1078–1080 (2001). [CrossRef]  

92. R. G. Duemani, K. Kelleher, R. Fink, and P. Saggau, “Three-dimensional random access multiphoton microscopy for functional imaging of neuronal activity,” Nat. Neurosci. 11, 713–720 (2008). [CrossRef]  

93. Y. Otsu, V. Bormuth, J. Wong, B. Mathieu, G. P. Dugué, A. Feltz, and S. Dieudonné, “Optical monitoring of neuronal activity at high frame rate with a digital random-access multiphoton (RAMP) microscope,” J. Neurosci. Methods 173, 259–270 (2008). [CrossRef]  

94. K. M. N. S. Nadella, H. Roš, C. Baragli, V. A. Griffiths, G. Konstantinou, T. Koimtzis, G. J. Evans, P. A. Kirkby, and R. A. Silver, “Random-access scanning microscopy for 3D imaging in awake behaving animals,” Nat. Methods 13, 1001–1004 (2016). [CrossRef]  

95. G. Katona, G. Szalay, P. Maák, A. Kaszás, M. Veress, D. Hillier, B. Chiovini, E. S. Vizi, B. Roska, and B. Rózsa, “Fast two-photon in vivo imaging with three-dimensional random-access scanning in large tissue volumes,” Nat. Methods 9, 201–208 (2012). [CrossRef]  

96. W. Göbel, B. M. Kampa, and F. Helmchen, “Imaging cellular network dynamics in three dimensions using fast 3D laser scanning,” Nat. Methods 4, 73–79 (2007). [CrossRef]  

97. J. M. Jabbour, B. H. Malik, C. Olsovsky, R. Cuenca, S. Cheng, J. A. Jo, Y.-S. L. Cheng, J. M. Wright, and K. C. Maitland, “Optical axial scanning in confocal microscopy using an electrically tunable lens,” Biomed. Opt. Express 5, 645–652 (2014). [CrossRef]  

98. L. Kong, J. Tang, J. P. Little, Y. Yu, T. Lämmermann, C. P. Lin, R. N. Germain, and M. Cui, “Continuous volumetric imaging via an optical phase-locked ultrasound lens,” Nat. Methods 12, 759–762 (2015). [CrossRef]  

99. P. Rupprecht, A. Prendergast, C. Wyart, and R. W. Friedrich, “Remote z-scanning with a macroscopic voice coil motor for fast 3D multiphoton laser scanning microscopy,” Biomed. Opt. Express 7, 1656–1671 (2016). [CrossRef]  

100. N. J. Sofroniew, D. Flickinger, J. King, and K. Svoboda, “A large field of view two-photon mesoscope with subcellular resolution for in vivo imaging,” eLife 5, e14472 (2016). [CrossRef]  

101. R. Prevedel, A. J. Verhoef, A. J. Pernía-Andrade, S. Weisenburger, B. S. Huang, T. Nöbauer, A. Fernández, J. E. Delcour, P. Golshani, A. Baltuska, and A. Vaziri, “Fast volumetric calcium imaging across multiple cortical layers using sculpted light,” Nat. Methods 13, 1021–1028 (2016). [CrossRef]  

102. P. Rupprecht, R. Prevedel, F. Groessl, W. E. Haubensak, and A. Vaziri, “Optimizing and extending light-sculpting microscopy for fast functional imaging in neuroscience,” Biomed. Opt. Express 6, 353–368 (2015). [CrossRef]  

103. M. Duocastella, G. Vicidomini, and A. Diaspro, “Simultaneous multiplane confocal microscopy using acoustic tunable lenses,” Opt. Express 22, 19293–19301 (2014). [CrossRef]  

104. B. F. Grewe, F. F. Voigt, M. van’t Hoff, and F. Helmchen, “Fast two-layer two-photon imaging of neuronal cell populations using an electrically tunable lens,” Biomed. Opt. Express 2, 2035–2046 (2011). [CrossRef]  

105. E. Z. Chong, M. Panniello, I. Barreiros, M. M. Kohl, and M. J. Booth, “Quasi-simultaneous multiplane calcium imaging of neuronal circuits,” Biomed. Opt. Express 10, 267–282 (2019). [CrossRef]  

106. W. Amir, R. Carriles, E. E. Hoover, T. A. Planchon, C. G. Durfee, and J. A. Squier, “Simultaneous imaging of multiple focal planes using a two-photon scanning microscope,” Opt. Lett. 32, 1731–1733 (2007). [CrossRef]  

107. A. Cheng, J. T. Gonçalves, P. Golshani, K. Arisaka, and C. Portera-Cailliau, “Simultaneous two-photon calcium imaging at different depths with spatiotemporal multiplexing,” Nat. Methods 8, 139–142 (2011). [CrossRef]  

108. J. L. Chen, F. F. Voigt, M. Javadzadeh, R. Krueppel, and F. Helmchen, “Long-range population dynamics of anatomically defined neocortical networks,” eLife 5, e14679 (2016). [CrossRef]  

109. J. N. Stirman, I. T. Smith, M. W. Kudenov, and S. L. Smith, “Wide field-of-view, multi-region, two-photon imaging of neuronal activity in the mammalian brain,” Nat. Biotechnol. 34, 857–862 (2016). [CrossRef]  

110. K. Charan, B. Li, M. Wang, C. P. Lin, and C. Xu, “Fiber-based tunable repetition rate source for deep tissue two-photon fluorescence microscopy,” Biomed. Opt. Express 9, 2304–2311 (2018). [CrossRef]  

111. S. Weisenburger, F. Tejera, J. Demas, B. Chen, J. Manley, F. T. Sparks, F. M. Traub, T. Daigle, H. Zeng, A. Losonczy, and A. Vaziri, “Volumetric Ca2+ imaging in the mouse brain using hybrid multiplexed sculpted light microscopy,” Cell 177, 1050–1066 (2019). [CrossRef]  

112. D. R. Beaulieu, I. G. Davison, T. G. Bifano, and J. Mertz, “Simultaneous multiplane imaging with reverberation multiphoton microscopy,” arXiv:1812.05162 [physics] (2018).
