## Abstract

We present a new two-snapshot structured light illumination (SLI) reconstruction algorithm for fast image acquisition. The new algorithm, which only requires two mutually *π* phase-shifted raw structured images, is implemented on a custom-built temporal focusing fluorescence microscope (TFFM) to enhance its axial resolution via a digital micromirror device (DMD). First, the orientation of the modulated sinusoidal fringe patterns is automatically identified via spatial frequency vector detection. Subsequently, the modulated in-focal-plane images are obtained via rotation and subtraction. Lastly, a parallel amplitude demodulation method, derived based on the Hilbert transform, is applied to complete the decoding process. To demonstrate the new SLI algorithm, a TFFM is custom-constructed, where a DMD replaces the generic blazed grating in the system and simultaneously functions as a diffraction grating and a programmable binary mask, generating arbitrary fringe patterns. The experimental results show promising depth-discrimination capability with an axial resolution enhancement factor of 1.25, which matches well with the theoretical estimation, i.e., 1.27. Imaging experiments on pollen grain and mouse kidney samples have been performed. The results indicate that the two-snapshot algorithm presents contrast reconstruction and optical cross-sectioning capability comparable to those of the conventional root-mean-square (RMS) reconstruction method. The two-snapshot method can be readily applied to any sinusoidally modulated illumination system to realize high-speed 3D imaging, as fewer frames are required for each in-focal-plane image restoration; i.e., the image acquisition speed is improved by 2.5 times for any two-photon system.

© 2017 Optical Society of America

## 1. Introduction

Two-photon excitation (TPE) microscopy has been widely used in the field of bio-imaging and noninvasive diagnosis since its first introduction in the early 1990s [1]. Importantly, TPE microscopy presents distinctive advantages in deep tissue and *in vivo* imaging due to its long (near-infrared) excitation wavelength and low photobleaching/phototoxicity effects, respectively [2]. A generic TPE system employs a scanning system that raster-scans the laser focus through a specimen; the emissions from each pixel are then collected to form an optical cross-section. To increase the imaging speed, i.e., to video rate or higher, one direct approach is to use high-speed scanners, e.g., resonant galvanometric scanners, polygonal scanners, and acousto-optic deflectors (AODs) [3–5]. However, in these systems high scanning speeds are achieved at the expense of pixel dwell time and may compromise the signal-to-noise ratio (SNR). To address this issue, one may excite or illuminate multiple pixels in the specimen simultaneously. Good examples include temporal focusing fluorescence microscopy (TFFM) and line-scanning or multifocal multiphoton microscopy [6–14]. Among the aforementioned methods, TFFM has a higher degree of parallelization, i.e., higher speed. TFFM is realized by spatially and temporally focusing a femtosecond laser in the focal region [15]. Accordingly, the excitation speed can be as fast as the laser repetition rate [16]; the imaging speed is practically limited by the frame rate of cameras, i.e., 1 – 10 kHz.

Compared with a point-scanning system, the axial resolution of the wide-field TFFM is slightly compromised [17,18], which prevents its broad adoption and commercialization. Structured light illumination (SLI) has been applied in TFFM to improve its optical cross-sectioning capability [19]. In addition, the SLI algorithm also helps to suppress the effect of tissue scattering by acting as a virtual pinhole that computationally filters out the scattered photons outside the focal plane based on a set of raw structured images [20]. However, for any two-photon imaging system, at least five equally phase-stepped images are required to remove the sinusoidal modulation terms and obtain the in-focus image when the conventional root-mean-square (RMS) method is applied [21,22], which compromises the speed of TFFM. One way to reduce the number of required raw structured images is to apply the HiLo technique [23–25], which applies a high-pass filter to a uniform image and a low-pass filter to a structured image to form a fused HiLo image that contains all frequencies that can be collected by the imaging system. However, the optimal filter cut-off frequency and scaling factor (η) need to be empirically determined in the image reconstruction process; this prevents the HiLo technique from being fully automated, especially when switching among different biological specimens.

In this paper, we present a new two-snapshot SLI reconstruction algorithm for fast image acquisition. The new algorithm, which only requires two mutually *π* phase-shifted raw structured images, is implemented on a custom-designed TFFM to enhance its optical cross-sectioning capability via a digital micromirror device (DMD), which functions as a programmable blazed grating that carries the designed structured patterns. Mathematical models of the new SLI algorithm have been derived; imaging experiments are performed to verify the predicted performance. The new SLI algorithm can be implemented in any sinusoidally modulated illumination system in a fully automatic, adaptive, and robust fashion that enables high-resolution and high-speed 3D imaging. In the following sections, we report the (1) principle of generating structured patterns in a temporal focusing system via a DMD, (2) theoretical development of the two-snapshot SLI algorithm, and (3) SLI TFFM imaging results on pollen grain and mouse kidney samples.

## 2. Methods

#### 2.1 Generation of temporal focusing and structured patterns via a DMD

Figure 1 illustrates the principle of generating structured images in a TFFM based on a DMD. A DMD consists of several millions of binary micromirrors; each mirror has two stable angular positions, i.e., ± 12°, and typically operates at a speed of 4.2 – 32.5 kHz. As shown in Fig. 1(a), the femtosecond laser beam is first spatially dispersed by the DMD, which functions as a programmable blazed grating, a lens then collimates these beams, and lastly the objective lens recombines the beams in the focal region. As shown in Fig. 1(b), the dispersion causes the laser pulses to be broadened after the DMD and everywhere before and after the front focal plane of the objective due to spectral-temporal relationship. Temporal focusing is thus achieved at the focal region, where all frequency components spatially overlap, leading to the shortest laser pulse and the capability of optical cross-sectioning [9, 15]. Note that to ensure high diffraction efficiency, the incident angle of the laser beam and the DMD micromirror array should satisfy the blazing condition.

To encode sinusoidal fringe patterns to the image, one may recall the inverse Fourier transform of the sinusoidal fringes is a set of spatial frequencies of opposite polarity, i.e., parallel stripes on the DMD. Accordingly, the sinusoidal fringes can be generated at the focal region by spatially selecting the 0th and ± 1st order diffractions, which form the periodic fringes at the front focal plane of the objective lens, as shown in Fig. 1(c). Observing the results in Figs. 1(b) and 1(c), one may find the plane of temporal focusing and sinusoidal fringes coincide, and accordingly structured images encoded with the desired fringe patterns can be directly generated by controlling the dimension of the “parallel stripes” on the DMD. In addition, since out-of-focus fluorescent signals do not contain the periodic fringe patterns, we can computationally remove the out-of-focus emissions and demodulate the sinusoidal fringe patterns from the in-focus images to enhance the axial resolution of the temporal focusing images. Next, we present the new two-snapshot SLI method and its application for in-focus signal decoding.
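For illustration, the "parallel stripes" on the DMD can be sketched as a binary mask of mirror states; inverting every state yields the *π* phase-shifted pattern used later in the two-snapshot acquisition. A minimal Python sketch (the function name, array sizes, and period are illustrative, not from the actual instrument software):

```python
import numpy as np

def dmd_stripe_pattern(rows, cols, period_px, invert=False):
    """Binary stripe mask of 50% duty cycle for the DMD micromirrors.

    Each entry is a mirror state (1 = on, 0 = off); inverting every
    state yields the pi phase-shifted pattern (cf. Section 3.2).
    """
    x = np.arange(cols)
    stripes = (x % period_px) < (period_px // 2)  # on for half of each period
    if invert:
        stripes = ~stripes                        # reversed states -> pi shift
    return np.tile(stripes.astype(np.uint8), (rows, 1))

p1 = dmd_stripe_pattern(4, 20, 10)               # phase 0 pattern
p2 = dmd_stripe_pattern(4, 20, 10, invert=True)  # pi-shifted pattern
```

Since the two masks are exact complements, displaying them sequentially produces the two mutually *π* phase-shifted structured images the algorithm requires.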

#### 2.2 Fast two-snapshot SLI for in-focus emission demodulation

We first mathematically describe the imaging field. The image captured by the camera, $D(\overrightarrow{r})$, can be expressed as:

To derive the two-snapshot algorithm, first the orientation of the modulated fringe patterns is automatically identified via spatial frequency vector detection. By rotation and subtraction, the modulated in-focus images are obtained. Lastly, a parallel amplitude demodulation method, derived based on the Hilbert transform, is applied to complete the decoding process via two phase-shifted structured images, i.e., ${D}_{1}(\overrightarrow{r})$ and ${D}_{2}(\overrightarrow{r})$. The process flow is illustrated in Fig. 2(a). In the following analysis, the phase difference of ${D}_{1}(\overrightarrow{r})$ and ${D}_{2}(\overrightarrow{r})$ is set to $\pi $ to simplify the process, i.e., ${\phi}_{1}-{\phi}_{2}=\left(2l+1\right)\pi $, where $l$ is an integer. First, we consider the frequency vector, ${\overrightarrow{p}}_{\theta}$, in the reciprocal space. The position of ${\overrightarrow{p}}_{\theta}$ can be determined in the reciprocal space by applying local maxima detection. This allows us to identify the skew angle $\theta $ of the fringe pattern, shown in Fig. 2(c), via $\theta =\mathrm{arctan}\left({p}_{v}/{p}_{u}\right)$, where $\left({p}_{u},{p}_{v}\right)$ are the coordinates of ${\overrightarrow{p}}_{\theta}$ as illustrated in Fig. 2(e). Next, we perform a planar rotation to ensure the fringe pattern is parallel to the y-axis, as shown in Fig. 2(f); the angle of rotation is set to $\pi /2-\theta $. Image padding is applied before the rotation step in order to avoid introducing artifacts in the reconstructed images. The rotated structured images can be mathematically expressed as:
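The frequency-vector detection in step 1 amounts to locating the strongest non-DC peak in the 2D FFT magnitude and taking $\theta = \arctan(p_v/p_u)$. A minimal sketch under that reading (the function name and the synthetic fringe image are illustrative):

```python
import numpy as np

def detect_fringe_angle(img):
    """Estimate the skew angle of sinusoidal fringes from the FFT peak.

    Finds the strongest non-DC spatial-frequency component and returns
    theta = arctan2(p_v, p_u); the fringe orientation is defined mod pi.
    """
    F = np.abs(np.fft.fftshift(np.fft.fft2(img)))
    cy, cx = F.shape[0] // 2, F.shape[1] // 2
    F[cy, cx] = 0.0                       # suppress the DC term
    py, px = np.unravel_index(np.argmax(F), F.shape)
    pu, pv = px - cx, py - cy             # frequency-vector coordinates
    return np.arctan2(pv, pu)

# Synthetic fringes tilted ~30 degrees from the x-axis
y, x = np.mgrid[0:256, 0:256]
theta = np.deg2rad(30.0)
img = 1 + np.cos(2 * np.pi * 0.1 * (x * np.cos(theta) + y * np.sin(theta)))
est = detect_fringe_angle(img)
```

The estimated angle agrees with the ground truth modulo *π* (the ±1st order peaks are point-symmetric, so either one determines the fringe orientation); the subsequent rotation by $\pi/2-\theta$ could then be done with, e.g., `scipy.ndimage.rotate` on a padded image.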

*n* = 1 or 2; ${S}^{\text{'}}(\overrightarrow{r}){I}_{0}^{\text{'}2}(\overrightarrow{r})$ represents the rotated in-focus fluorescent signal; and ${D}_{p2}^{\text{'}}(\overrightarrow{r})$ represents the rotated out-of-focus fluorescent signal. Accordingly, the non-modulated term, ${D}_{p2}^{\text{'}}(\overrightarrow{r})$, can be removed by a subtraction step, i.e., ${D}_{d}^{\text{'}}\left(\overrightarrow{r}\right)={D}_{1}^{\text{'}}\left(\overrightarrow{r}\right)-{D}_{2}^{\text{'}}\left(\overrightarrow{r}\right).$ The result is expressed in Eq. (3):
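The cancellation of the non-modulated background in this subtraction step can be checked with a one-dimensional toy example; `S`, `B`, and `m` below are hypothetical stand-ins for the in-focus signal, out-of-focus background, and modulation depth:

```python
import numpy as np

x = np.linspace(0, 10 * np.pi, 512)
S, B, m = 2.0, 5.0, 0.8
# Two-photon signal ~ (excitation intensity)^2; pi-shifted fringe pair:
D1 = B + S * (1 + m * np.cos(x)) ** 2   # phase 0
D2 = B + S * (1 - m * np.cos(x)) ** 2   # phase pi
Dd = D1 - D2                            # subtraction step
# (1 + m*c)^2 - (1 - m*c)^2 = 4*m*c, so B cancels and only the
# modulated in-focus term 4*m*S*cos(x) survives.
```

This is why a *π* phase difference is chosen: the squared-modulation cross terms cancel pairwise, leaving a single carrier term for the Hilbert demodulation that follows.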

Next, a set of complex vectors is constructed based on ${D}_{d}^{\text{'}}\left(\overrightarrow{r}\right)$ for decoding. First, the 2D image, ${D}_{d}^{\text{'}}\left(\overrightarrow{r}\right)$, is decomposed into row vectors, i.e., ${V}_{1}({\overrightarrow{e}}_{1})$, ${V}_{2}({\overrightarrow{e}}_{2})$, …, ${V}_{k}({\overrightarrow{e}}_{k})$, …, ${V}_{s}\left({\overrightarrow{e}}_{s}\right)$, where *s* represents the total number of rows of ${D}_{d}^{\text{'}}\left(\overrightarrow{r}\right)$; ${V}_{k}\left({\overrightarrow{e}}_{k}\right)$ represents the ${k}^{\text{th}}$ row vector ($k=1,2,3,\dots ,s$); and ${\overrightarrow{e}}_{1}$, ${\overrightarrow{e}}_{2}$, …, ${\overrightarrow{e}}_{k}$, …, ${\overrightarrow{e}}_{s}$ are the orthogonal bases of ${D}_{d}^{\text{'}}\left(\overrightarrow{r}\right)$. ${V}_{k}\left({\overrightarrow{e}}_{k}\right)$ is mathematically expressed as:

Accordingly, we can process the 1D complex vectors in a parallel fashion in the computer. To achieve this, we define $V{A}_{k}({\overrightarrow{e}}_{k})$ as:

Next, we perform amplitude demodulation in parallel by recombining the Euclidean norms of the complex vectors. Substituting the expressions of ${V}_{k}({\overrightarrow{e}}_{k})$ and $V{H}_{k}({\overrightarrow{e}}_{k})$ into Eq. (5), we have:

By taking the Euclidean norm of the complex vectors, we arrive at:

Importantly, from Eq. (8), $\left|V{A}_{k}\left({\overrightarrow{e}}_{k}\right)\right|$ is proportional to ${S}_{k}^{\text{'}}({\overrightarrow{e}}_{k}){I}_{0}^{\text{'}2}({\overrightarrow{e}}_{k})H(\overrightarrow{r}),$ where all the sine and cosine modulation terms in the expression are removed. Thus, simply by combining $\left|V{A}_{1}\left({\overrightarrow{e}}_{1}\right)\right|$, $\left|V{A}_{2}\left({\overrightarrow{e}}_{2}\right)\right|$, …, $\left|V{A}_{k}\left({\overrightarrow{e}}_{k}\right)\right|$, …, $\left|V{A}_{s}\left({\overrightarrow{e}}_{s}\right)\right|$, the rotated in-focus image can be decoded and expressed as:
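Assuming the standard analytic-signal construction $V{A}_{k}={V}_{k}+i\,\mathcal{H}\{{V}_{k}\}$ implied by the derivation, the row-parallel demodulation can be sketched with `scipy.signal.hilbert`, which returns exactly that analytic signal for every row at once:

```python
import numpy as np
from scipy.signal import hilbert

def demodulate_rows(Dd):
    """Row-parallel amplitude demodulation.

    hilbert() returns the analytic signal VA_k = V_k + i*H{V_k} for each
    row; its magnitude removes the cosine carrier, leaving the envelope.
    """
    return np.abs(hilbert(Dd, axis=1))

# Synthetic row: slowly varying envelope on a 10-cycle fringe carrier
n = np.arange(1024)
envelope = 1.0 + 0.5 * np.sin(2 * np.pi * n / 1024)   # "in-focus image"
carrier = np.cos(2 * np.pi * 10 * n / 1024)           # fringe modulation
Dd = (envelope * carrier)[None, :]
rec = demodulate_rows(Dd)                              # recovers envelope
```

Because the envelope spectrum lies entirely below the carrier frequency here, the recovered magnitude matches the envelope essentially exactly; real images with sharp features deviate near edges, which is one motivation for the padding step.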

The decoded in-focus image is illustrated in Fig. 2(i). To fully restore the in-focus fluorescent signals, $S(\overrightarrow{r}){I}_{0}^{2}(\overrightarrow{r})$, we lastly rotate $S\text{'}(\overrightarrow{r}){I}_{0}^{\text{'}2}(\overrightarrow{r})$ by an angle of $\theta -\pi /2$ to align the resultant image to its original orientation, as shown in Fig. 2(j). Note that since the fringes are parallel to the *y*-axis, it is unnecessary for our method to implement the bidimensional empirical mode decomposition (BEMD) and spiral phase Hilbert transform [28] in the amplitude demodulation step, making the algorithm simple and efficient. It is worthwhile to note that if the optical system is perfectly aligned such that the modulation fringes are parallel to the y-axis, steps 1 and 2, i.e., the detection of the spatial frequency vector as well as the image padding and rotation, may be entirely omitted, leading to shorter calculation times.
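In that well-aligned case, the whole two-snapshot reconstruction collapses to subtraction followed by row-wise envelope detection. A self-contained sketch on synthetic two-photon data (the sample `S`, background `B`, modulation depth `m`, and fringe frequency `k` are made up for illustration, not measured values):

```python
import numpy as np
from scipy.signal import hilbert

def two_snapshot(D1, D2):
    """Two-snapshot reconstruction when the fringes are parallel to the
    y-axis, so the detection/rotation steps can be omitted. Sketch only.
    """
    Dd = D1 - D2                          # removes the out-of-focus term
    return np.abs(hilbert(Dd, axis=1))    # removes the sinusoidal carrier

# Synthetic pi-shifted structured images of a Gaussian "sample" feature
y, x = np.mgrid[0:64, 0:512]
S = 1.0 + 0.3 * np.exp(-((x - 256.0) ** 2) / (2 * 60.0 ** 2))  # in-focus
B, m, k = 4.0, 0.6, 2 * np.pi / 32.0      # background, depth, frequency
D1 = B + S * (1 + m * np.cos(k * x)) ** 2     # phase 0
D2 = B + S * (1 - m * np.cos(k * x)) ** 2     # phase pi
rec = two_snapshot(D1, D2)                # ~ 4*m*S away from the edges
```

The reconstruction is proportional to the in-focus signal (here with factor $4m$), while the uniform background $B$ is fully rejected, mirroring the behavior reported for the pollen and kidney experiments below.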

## 3. Experimental results

#### 3.1 Experimental setup

Figure 3 presents the optical configuration of the DMD-based TFFM. The laser source is a Ti:sapphire regenerative laser amplifier (Spitfire Pro, Spectra-Physics) with an average power of 4 W; pulse width of 100 fs; repetition rate of 1 kHz; center wavelength of 800 nm; and output beam diameter of ~12 mm. A mechanical shutter is included in the system to control the excitation time. A half-wave plate and a polarizing beam splitter together control the laser power. Two high reflectance mirrors, M1 and M2, are used to guide the laser beam to the DMD (DLP 4500, 912 $\times $ 1140 pixels; pixel size: 7.6 × 7.6 μm, Texas Instruments), where the DMD replaces the blazed grating in a generic temporal focusing system and simultaneously functions as a diffraction grating and a programmable binary mask. To achieve high efficiency, the incident angle of the laser beam is set to 24.8° in reference to the DMD surface so that the collected 5th order diffraction satisfies the blazing condition. After the collimating lens (CL), an objective lens (CFI Apo LWD 40X; NA 1.15, Nikon) recombines the dispersed spectral components on the x-z plane, shown in Fig. 1(a); and temporal focusing is achieved at the focal plane of the objective lens. Note that the objective lens simultaneously recombines the 0th and ± 1st order diffractions on the y-z plane, shown in Fig. 1(c) and Fig. 3, to form the sinusoidal fringes at the focal plane. The field of view is ~300 × 250 μm^{2}, where each DMD pixel maps to an area of 270 × 270 nm^{2}. The sample is mounted on a precision xyz stage (MP-285, Sutter Instrument) for positioning. The emissions are collected by the objective lens, separated from the excitation signals via a dichroic mirror (ET780lp, Chroma) and a band-pass filter, and lastly imaged to a high sensitivity electron-multiplying charge-coupled device (EMCCD) camera (ProEM, Princeton Instruments, USA).
A camera zoom lens (EF 70-200 mm f/2.8L IS II USM, Canon) is installed before the EMCCD camera for the ease of adjusting the focal plane. A data acquisition (DAQ) card (USB-6363, National Instruments) collects and processes the signals to form images.

#### 3.2 Generation of sinusoidal fringes via the DMD

Before implementing the two-snapshot algorithm, we first verify that the DMD can precisely generate and control the designed sinusoidal fringe patterns. A thin layer of Rhodamine 6G (Rh6G) mixed in polymethyl-methacrylate (PMMA) is prepared [29] to illustrate the capability of the DMD. In the experiment, the on-off states of the DMD micromirrors are controlled electronically to generate parallel stripes of 50% duty cycle, as discussed in Section 2.1. The period of the stripe pattern is set to 2.70 µm, i.e., 10 DMD pixels.

Note that *π*-phase shifted patterns can be generated by applying binary patterns of reversed states. Figures 4(a) and 4(b) present the binary patterns on the DMD that have opposite states; the imaging results of the Rh6G sample are presented in Figs. 4(c) and 4(d), respectively. The fluorescent intensity profiles along the red and blue lines in Figs. 4(c) and 4(d) are plotted in Fig. 4(e), where the circles represent the measured fluorescent intensities, and solid lines represent the least-squares fitted results using ${[1+\mathrm{cos}(2\pi \left|{\overrightarrow{p}}_{\theta}\right|x+{\phi}_{n})]}^{2}$, where *n* = 1 or 2. From the results, one can conclude that the measured intensities have sinusoidal profiles; and the two curves in Fig. 4(e) have a precise phase shift of 180°. Figure 4(f) presents the Fourier transform results of the measured intensity data in Fig. 4(e), where one can clearly observe the color-coded ± 1st peaks have opposite signs, confirming again the 180° phase shift. Importantly, from Fig. 4(f), we also observe the 2nd order peaks, as indicated by the black arrows, which represent the second harmonic frequency component, i.e., the $\frac{{m}^{2}}{2}\mathrm{cos}\left[2\left(2\pi \left|{\overrightarrow{p}}_{\theta}\right|x+{\phi}_{n}\right)\right]$ term in Eq. (2).
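The least-squares fit and the 180° phase-shift check can be reproduced numerically; the sketch below fits the squared-sinusoid model to synthetic noisy profiles (all amplitudes, periods, phases, and the noise level are hypothetical, standing in for the measured data of Fig. 4):

```python
import numpy as np
from scipy.optimize import curve_fit

def fringe_model(x, a, p, phi):
    """Squared-sinusoid intensity profile, I ~ a*[1 + cos(p*x + phi)]^2."""
    return a * (1 + np.cos(p * x + phi)) ** 2

# Hypothetical measured profiles: two pi-shifted fringe scans with noise
rng = np.random.default_rng(0)
x = np.linspace(0, 20, 400)                    # position in um
true_p = 2 * np.pi / 2.70                      # 2.70 um fringe period
I1 = fringe_model(x, 1.0, true_p, 0.3) + 0.02 * rng.standard_normal(x.size)
I2 = fringe_model(x, 1.0, true_p, 0.3 + np.pi) + 0.02 * rng.standard_normal(x.size)

(_, _, phi1), _ = curve_fit(fringe_model, x, I1, p0=[1.0, true_p, 0.0])
(_, _, phi2), _ = curve_fit(fringe_model, x, I2, p0=[1.0, true_p, 3.0])
# The recovered phase difference should be ~180 degrees (mod 2*pi)
```

The same fit applied to the experimental profiles yields the two curves of Fig. 4(e) with a recovered phase difference of 180°.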

#### 3.3 Axial resolution enhancement

In this section, we experimentally characterize the axial resolution of the SLI TFFM based on the two-snapshot algorithm via the thin Rh6G sample. The axial resolution is determined by axially scanning the Rh6G sample with 1 μm steps over a range of 20 μm, where the fluorescent intensities are recorded at every step. The results are presented in Fig. 5, where the red and blue circles represent the measured fluorescent intensities of the TFFM “without” and “with” the two-snapshot algorithm respectively. The solid lines are the least-squares fitted curves of a Gaussian. Analyzing the results, one can conclude that the axial resolution, i.e., full width at half maximum (FWHM), of the TFFM “without” and “with” the two-snapshot algorithm is 5.84 μm and 4.30 μm respectively. Accordingly, the axial resolution enhancement factor, defined in [30], is calculated to be 1.25.
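The FWHM extraction from the Gaussian fit follows from $\mathrm{FWHM}=2\sqrt{2\ln 2}\,\sigma$. A small sketch on a hypothetical axial scan (1 µm steps over 20 µm, with the 4.30 µm figure used only as the synthetic ground truth):

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(z, a, z0, sigma):
    """Gaussian model for the axial fluorescence response."""
    return a * np.exp(-((z - z0) ** 2) / (2 * sigma ** 2))

# Hypothetical axial scan: 1-um steps over a 20-um range
z = np.arange(-10, 11, 1.0)
I = gaussian(z, 1.0, 0.0, 4.30 / 2.3548)          # synthetic, FWHM = 4.30 um
popt, _ = curve_fit(gaussian, z, I, p0=[1.0, 0.0, 2.0])
fwhm = 2 * np.sqrt(2 * np.log(2)) * abs(popt[2])  # FWHM = 2*sqrt(2 ln 2)*sigma
```

Applying the same fit to the "without" and "with" data sets gives the 5.84 µm and 4.30 µm values quoted above, whose ratio is the reported enhancement factor of 1.25 (after the definition in [30]).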

Note that the theoretical prediction of the axial resolution of a TFFM is derived and reported in [31,32]. By substituting our system characteristics, i.e., *NA* of objective = 1.15, refractive index of the immersion fluid *n* = 1.33; and magnification of the system *M* = 40, into the equation, the optimal axial resolution of our system is calculated to be 3.79 μm. Next, based on Stokseth’s approximation [33], we can calculate the axial resolution for structured light illumination systems. By substituting the system characteristics again, the axial resolution is found to be 2.73 μm. Accordingly, the predicted axial resolution enhancement factor is 1.27, which matches well with the experimental result. It is worthwhile to note that the theoretical axial resolution of the two-snapshot algorithm is identical to that of the conventional RMS algorithm; in other words, the two-snapshot algorithm improves the data acquisition speed by a factor of 2.5 without sacrificing the image resolution.

#### 3.4 Imaging experiments on pollen grain samples

Imaging experiments on pollen grain samples (Item# 304264, Carolina Biological Supply) have been performed to characterize the performance of the TFFM with the two-snapshot algorithm. Figures 6(a) - 6(c) present the imaging results of a pollen grain (~30 × 30 μm^{2}, cropped from the original image) at three different depths, i.e., 0 μm, 6 μm, and 12 μm without SLI algorithms. From the images, one can observe the presence of out-of-focus fluorescent signals, which blur the image. Figures 6(d) - 6(f) present the reconstructed images via two snapshots in the same imaging field. From the results, we find the out-of-focus emissions have been effectively suppressed by the two-snapshot algorithm; and low contrast features become evident and clear, i.e., an improved signal-to-background ratio (SBR). The effect may be clearly observed when one compares the contours of the pollen in Figs. 6(c) and 6(f). Figure 6(g) presents normalized intensity profiles along the green and blue dashed lines in Figs. 6(c) and 6(f), respectively. The results confirm the strong reduction in mean background intensity, i.e., from 0.69 to 0.36 (calculated along the dashed lines), and an increase of the SBR value from 1.5 to 2.7. Overall, the imaging results demonstrate the capability of the two-snapshot method on (1) out-of-focus fluorescent emission rejection and (2) axial resolution enhancement.
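The background and SBR figures along a line profile can be computed with a small helper; the function, its mask convention, and the toy profile below are illustrative, chosen so the numbers echo the ~0.36 background and ~2.7 SBR reported for Fig. 6:

```python
import numpy as np

def profile_stats(profile, signal_mask):
    """Mean background and signal-to-background ratio along a profile.

    `signal_mask` flags which samples belong to the feature; the rest
    are treated as background. Hypothetical helper, not from the paper.
    """
    signal = profile[signal_mask].mean()
    background = profile[~signal_mask].mean()
    return background, signal / background

# Toy normalized intensity profile: a bright feature over background
profile = np.array([0.36, 0.35, 0.37, 1.0, 0.97, 0.99, 0.35, 0.36])
mask = np.zeros(profile.size, dtype=bool)
mask[3:6] = True                      # samples belonging to the feature
bg, sbr = profile_stats(profile, mask)
```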

#### 3.5 Imaging experiments on mouse kidney slices

Next, we perform imaging experiments on mouse kidney slices, i.e., fluorescently labeled glomeruli and convoluted tubule sections with a thickness of 16 µm (F24630, Invitrogen, USA). In this experiment, we compare the raw cross-sectional images with the conventional RMS algorithm as well as the two-snapshot algorithm. Figure 7 presents the TFFM optical cross-sectional imaging results “without” and “with” the SLI algorithms. Each image recorded from the EMCCD camera has 512 × 384 pixels, mapping to an imaging field of 204 × 154 µm^{2}. Figures 7(a) - 7(e) present the raw images captured by the TFFM at different depths, i.e., −6 μm, −3 μm, 0 μm, 3 μm and 6 μm. Figures 7(f) - 7(j) present the reconstructed images based on the RMS algorithm at the five corresponding depths. (Note that the RMS algorithm requires five $2\pi /5$ phase-shifted raw images for image reconstruction.) Figures 7(k) - 7(o) present the reconstructed images based on the two-snapshot algorithm at the five corresponding depths, i.e., each column in Fig. 7 corresponds to the same depth and field of view. From the results, one may observe the strong presence of out-of-focus emissions in the raw images at different depths, i.e., Figs. 7(a) - 7(e). Comparing the RMS and two-snapshot algorithms in Figs. 7(f) - 7(j) and Figs. 7(k) - 7(o), one may find that when the raw images are collected with the same exposure time, i.e., 20 ms, both algorithms can effectively suppress the out-of-focus emissions and improve the axial resolution, and the reconstructed images have comparable noise levels. This can be confirmed by calculating the normalized background intensities of Figs. 7(b), 7(d), 7(g), 7(i) and 7(l), 7(n) inside the randomly selected dashed boxes, which are found to be 0.49, 0.46, 0.33, 0.26, 0.31, and 0.24, respectively. A video demonstration of the results in Fig. 7 is presented in Visualization 1. It is worthwhile to note that in Figs. 6 and 7, some artifacts have been observed in the reconstructed images for both the RMS and two-snapshot algorithms. This is due to the low normalized modulation frequency, and may be removed by (1) using an objective lens of larger aperture; or (2) using a photodetector of better resolution, i.e., smaller pixels.
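The normalized background comparison of Fig. 7 reduces to averaging a box region of interest and normalizing by the image maximum; a minimal helper (the name, box convention, and toy image are illustrative):

```python
import numpy as np

def normalized_background(img, box):
    """Mean intensity inside a (row0, row1, col0, col1) box ROI,
    normalized to the image maximum, as used to compare residual
    background levels between reconstructions. Illustrative helper."""
    r0, r1, c0, c1 = box
    return float(img[r0:r1, c0:c1].mean() / img.max())

# Toy reconstruction: one bright in-focus pixel, residual background ROI
img = np.zeros((8, 8))
img[4, 4] = 2.0          # bright in-focus feature
img[0:2, 0:4] = 0.5      # residual out-of-focus background
bg = normalized_background(img, (0, 2, 0, 4))
```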

In summary, compared with the conventional RMS algorithm, the two-snapshot algorithm has the following advantages: (1) the image acquisition speed is increased, i.e., by a factor of 2.5 in two-photon microscopy, as only two raw images are required versus five for the RMS method; and (2) at the same imaging speed, the two-snapshot method requires fewer raw images, which effectively increases the exposure time by a factor of 2.5, resulting in images with better signal-to-noise ratios. One limitation of the two-snapshot algorithm is that the Hilbert transform is a relatively time-consuming operation; thus the reconstructed images may not be displayed in real time (note that the raw images can still be displayed at the acquisition speed), whereas the RMS algorithm may permit real-time display when paired with a high-performance computer. This drawback may be addressed by more powerful computers with parallel computing techniques [34]. Compared with other algorithms that require two raw images, i.e., the HiLo technique [23–25] and BEMD [28], our new two-snapshot algorithm has the advantage of being completely adaptive, without the need for manual inspection and selection of critical operation parameters such as the scaling factor, cut-off frequency, or bidimensional intrinsic mode functions. Note that the two-snapshot algorithm has superior computation speed versus both the HiLo method and the BEMD method.

## 4. Conclusion

We have presented a new two-snapshot SLI algorithm for fast image acquisition that only requires two mutually *π* phase-shifted raw images. The new algorithm is implemented on a custom-built DMD-based TFFM to demonstrate its unique characteristics, including enhanced axial resolution and background rejection capability. Mathematical models of the two-snapshot method have been derived. The axial resolution enhancement has been experimentally characterized; the results show an enhancement factor of 1.25, which matches well with the theoretical prediction, i.e., 1.27. Imaging experiments have been devised and performed on pollen grains as well as mouse kidney slices; the experimental results show clean optical cross-sectional images, confirming the strong background rejection capability and improved axial resolution. Importantly, these results have been achieved with only two raw images in a completely adaptive and automated fashion, promising 2.5 times higher image acquisition speed. The new two-snapshot algorithm can be readily adapted to any sinusoidally modulated imaging system, e.g., LED- or continuous wave laser-based SLI microscopes [35, 36], multifocal structured illumination microscopes [37], light sheet microscopy [38], or electron microscopy, and can generate significant impact on the field of biomedical imaging.

## Funding

HKSAR Research Grants Council (RGC); General Research Fund (GRF) (CUHK 14202815); Innovation and Technology Commission (ITC), Innovation and Technology Fund (ITF), ITS/007/15P.

## References and links

**1. **W. Denk, J. H. Strickler, and W. W. Webb, “Two-photon laser scanning fluorescence microscopy,” Science **248**(4951), 73–76 (1990). [CrossRef] [PubMed]

**2. **F. Helmchen and W. Denk, “Deep tissue two-photon microscopy,” Nat. Methods **2**(12), 932–940 (2005). [CrossRef] [PubMed]

**3. **K. H. Kim, C. Buehler, and P. T. So, “High-speed, two-photon scanning microscope,” Appl. Opt. **38**(28), 6004–6009 (1999). [CrossRef] [PubMed]

**4. **J. D. Lechleiter, D.-T. Lin, and I. Sieneart, “Multi-photon laser scanning microscopy using an acoustic optical deflector,” Biophys. J. **83**(4), 2292–2299 (2002). [CrossRef] [PubMed]

**5. **V. Iyer, T. M. Hoogland, and P. Saggau, “Fast Functional Imaging of Single Neurons Using Random-Access Multiphoton (RAMP) Microscopy,” J. Neurophysiol. **95**(1), 535–545 (2006). [CrossRef] [PubMed]

**6. **D. Oron, E. Tal, and Y. Silberberg, “Scanningless depth-resolved microscopy,” Opt. Express **13**(5), 1468–1476 (2005). [CrossRef] [PubMed]

**7. **E. Papagiakoumou, F. Anselmi, A. Bègue, V. de Sars, J. Glückstad, E. Y. Isacoff, and V. Emiliani, “Scanless two-photon excitation of channelrhodopsin-2,” Nat. Methods **7**(10), 848–854 (2010). [CrossRef] [PubMed]

**8. **E. Block, M. Greco, D. Vitek, O. Masihzadeh, D. A. Ammar, M. Y. Kahook, N. Mandava, C. Durfee, and J. Squier, “Simultaneous spatial and temporal focusing for tissue ablation,” Biomed. Opt. Express **4**(6), 831–841 (2013). [CrossRef] [PubMed]

**9. **J.-N. Yih, Y. Y. Hu, Y. D. Sie, L.-C. Cheng, C.-H. Lien, and S.-J. Chen, “Temporal focusing-based multiphoton excitation microscopy via digital micromirror device,” Opt. Lett. **39**(11), 3134–3137 (2014). [CrossRef] [PubMed]

**10. **G. J. Brakenhoff, J. Squier, T. Norris, A. C. Bliton, M. H. Wade, and B. Athey, “Real-time two-photon confocal microscopy using a femtosecond, amplified Ti:sapphire system,” J. Microsc. **181**(3), 253–259 (1996). [CrossRef] [PubMed]

**11. **J. Bewersdorf, R. Pick, and S. W. Hell, “Multifocal multiphoton microscopy,” Opt. Lett. **23**(9), 655–657 (1998). [CrossRef] [PubMed]

**12. **A. H. Buist, M. Muller, J. Squier, and G. J. Brakenhoff, “Real time two-photon absorption microscopy using multi point excitation,” J. Microsc. **192**(2), 217–226 (1998). [CrossRef]

**13. **T. Nielsen, M. Fricke, D. Hellweg, and P. Andresen, “High efficiency beam splitter for multifocal multiphoton microscopy,” J. Microsc. **201**(3), 368–376 (2001). [CrossRef] [PubMed]

**14. **K. H. Kim, C. Buehler, K. Bahlmann, T. Ragan, W.-C. A. Lee, E. Nedivi, E. L. Heffer, S. Fantini, and P. T. C. So, “Multifocal multiphoton microscopy based on multianode photomultiplier tubes,” Opt. Express **15**(18), 11658–11678 (2007). [CrossRef] [PubMed]

**15. **G. Zhu, J. van Howe, M. Durst, W. Zipfel, and C. Xu, “Simultaneous spatial and temporal focusing of femtosecond pulses,” Opt. Express **13**(6), 2153–2159 (2005). [CrossRef] [PubMed]

**16. **J. Jiang, D. Zhang, S. Walker, C. Gu, Y. Ke, W. H. Yung, and S. C. Chen, “Fast 3-D temporal focusing microscopy using an electrically tunable lens,” Opt. Express **23**(19), 24362–24368 (2015). [CrossRef] [PubMed]

**17. **H. Dana and S. Shoham, “Numerical evaluation of temporal focusing characteristics in transparent and scattering media,” Opt. Express **19**(6), 4937–4948 (2011). [CrossRef] [PubMed]

**18. **A. Straub, M. E. Durst, and C. Xu, “High speed multiphoton axial scanning through an optical fiber in a remotely scanned temporal focusing setup,” Biomed. Opt. Express **2**(1), 80–88 (2010). [CrossRef] [PubMed]

**19. **H. Choi, E. Y. S. Yew, B. Hallacoglu, S. Fantini, C. J. R. Sheppard, and P. T. C. So, “Improvement of axial resolution and contrast in temporally focused widefield two-photon microscopy with structured light illumination,” Biomed. Opt. Express **4**(7), 995–1005 (2013). [CrossRef] [PubMed]

**20. **M. A. Neil, R. Juškaitis, and T. Wilson, “Method of obtaining optical sectioning by using structured light in a conventional microscope,” Opt. Lett. **22**(24), 1905–1907 (1997). [CrossRef] [PubMed]

**21. **K. Isobe, T. Takeda, K. Mochizuki, Q. Song, A. Suda, F. Kannari, H. Kawano, A. Kumagai, A. Miyawaki, and K. Midorikawa, “Enhancement of lateral resolution and optical sectioning capability of two-photon fluorescence microscopy by combining temporal-focusing with structured illumination,” Biomed. Opt. Express **4**(11), 2396–2410 (2013). [CrossRef] [PubMed]

**22. **L.-C. Cheng, C.-H. Lien, Y. Da Sie, Y. Y. Hu, C.-Y. Lin, F.-C. Chien, C. Xu, C. Y. Dong, and S.-J. Chen, “Nonlinear structured-illumination enhanced temporal focusing multiphoton excitation microscopy with a digital micromirror device,” Biomed. Opt. Express **5**(8), 2526–2536 (2014). [CrossRef] [PubMed]

**23. **D. Lim, K. K. Chu, and J. Mertz, “Wide-field fluorescence sectioning with hybrid speckle and uniform-illumination microscopy,” Opt. Lett. **33**(16), 1819–1821 (2008). [CrossRef] [PubMed]

**24. **E. Y. S. Yew, H. Choi, D. Kim, and P. T. C. So, “Wide-field two-photon microscopy with temporal focusing and HiLo background rejection,” Proc. SPIE **7903**, 79031O (2011). [CrossRef]

**25. **C.-Y. Chang, Y. Y. Hu, C.-Y. Lin, C.-H. Lin, H.-Y. Chang, S.-F. Tsai, T.-W. Lin, and S.-J. Chen, “Fast volumetric imaging with patterned illumination via digital micro-mirror device-based temporal focusing multiphoton microscopy,” Biomed. Opt. Express **7**(5), 1727–1736 (2016). [CrossRef] [PubMed]

**26. **D. Karadaglić and T. Wilson, “Image formation in structured illumination wide-field fluorescence microscopy,” Micron **39**(7), 808–818 (2008). [CrossRef] [PubMed]

**27. **S. L. Hahn, *Hilbert Transforms in Signal Processing* (Artech Print on Demand, 1996).

**28. **K. Patorski, M. Trusiak, and T. Tkaczyk, “Optically-sectioned two-shot structured illumination microscopy with Hilbert-Huang processing,” Opt. Express **22**(8), 9517–9527 (2014). [CrossRef] [PubMed]

**29. **M. A. Lauterbach, E. Ronzitti, J. R. Sternberg, C. Wyart, and V. Emiliani, “Fast Calcium Imaging with Optical Sectioning via HiLo Microscopy,” PLoS One **10**(12), e0143681 (2015). [CrossRef] [PubMed]

**30. **M. G. L. Gustafsson, “Surpassing the lateral resolution limit by a factor of two using structured illumination microscopy,” J. Microsc. **198**(2), 82–87 (2000). [CrossRef] [PubMed]

**31. **M. E. Durst, G. Zhu, and C. Xu, “Simultaneous spatial and temporal focusing in nonlinear microscopy,” Opt. Commun. **281**(7), 1796–1805 (2008). [CrossRef] [PubMed]

**32. **E. Y. S. Yew, C. J. R. Sheppard, and P. T. C. So, “Temporally focused wide-field two-photon microscopy: paraxial to vectorial,” Opt. Express **21**(10), 12951–12963 (2013). [CrossRef] [PubMed]

**33. **P. A. Stokseth, “Properties of a Defocused Optical System,” J. Opt. Soc. Am. **59**(10), 1314–1321 (1969). [CrossRef]

**34. **Z. Yang, Y. Zhu, and Y. Pu, “Parallel Image Processing Based on CUDA,” in *2008 International Conference on Computer Science and Software Engineering* (IEEE, 2008), pp. 198–201. [CrossRef]

**35. **T. C. Schlichenmeyer, M. Wang, K. N. Elfer, and J. Q. Brown, “Video-rate structured illumination microscopy for high-throughput imaging of large tissue areas,” Biomed. Opt. Express **5**(2), 366–377 (2014). [CrossRef] [PubMed]

**36. **M. A. Neil, A. Squire, R. Juskaitis, P. I. Bastiaens, and T. Wilson, “Wide-field optically sectioning fluorescence microscopy with laser illumination,” J. Microsc. **197**(1), 1–4 (2000). [CrossRef] [PubMed]

**37. **A. G. York, S. H. Parekh, D. Dalle Nogare, R. S. Fischer, K. Temprine, M. Mione, A. B. Chitnis, C. A. Combs, and H. Shroff, “Resolution doubling in live, multicellular organisms via multifocal structured illumination microscopy,” Nat. Methods **9**(7), 749–754 (2012). [CrossRef] [PubMed]

**38. **P. J. Keller, A. D. Schmidt, A. Santella, K. Khairy, Z. Bao, J. Wittbrodt, and E. H. Stelzer, “Fast, high-contrast imaging of animal development with scanned light sheet-based structured-illumination microscopy,” Nat. Methods **7**(8), 637–642 (2010). [CrossRef] [PubMed]