
Deep-computer-generated holography with temporal-focusing and a digital propagation matrix for rapid 3D multiphoton stimulation


Abstract

Deep learning-based computer-generated holography (DeepCGH) can generate three-dimensional multiphoton stimulation nearly 1,000 times faster than conventional CGH approaches such as the Gerchberg-Saxton (GS) iterative algorithm. However, existing DeepCGH methods cannot achieve axial confinement at the several-micron scale. Moreover, they suffer from an extended inference time as the number of stimulation locations at different depths (i.e., the number of input layers in the neural network) increases. Accordingly, this study proposes an unsupervised U-Net DeepCGH model enhanced with temporal focusing (TF), which currently achieves an axial resolution of around 5 µm. The proposed model employs a digital propagation matrix (DPM) in the data preprocessing stage, which enables stimulation at arbitrary depth locations and reduces the computation time by more than 35%. Through physical constraint learning using an improved loss function related to the TF excitation efficiency, the axial resolution and excitation intensity of the proposed TF-DeepCGH with DPM rival those of the optimal GS with TF method but with greatly increased computational efficiency.

© 2024 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Computer-generated holography (CGH) is widely applied in various fields nowadays, including AR/VR displays [1–3], optogenetics [4], and microfabrication [5]. CGH operates by encoding precise patterns onto deformable mirrors or spatial light modulators (SLMs), which then modulate the intensity or wavefront of the incoming light to achieve customized light distributions. Conventional CGH methods utilize the iterative Fourier transform algorithm (IFTA) proposed by Gerchberg and Saxton (GS) [6]. The GS algorithm synthesizes an optimized hologram by iteratively calculating the complex fields of the imaging plane and the Fourier plane. However, owing to its iterative nature, IFTA suffers from an excessive computation time and may converge to a suboptimal solution. It is consequently poorly suited to precise real-time applications. Although several CGH algorithms have been proposed to address this problem by performing the optimization with an explicit loss function, such as non-convex optimization with gradient descent [7] or Wirtinger derivatives [8,9], they incur a tradeoff between the accuracy of the customized illumination patterns and the computation time. Consequently, the feasibility of utilizing deep-learning (DL) methods for CGH has attracted increased attention in recent years. Convolutional neural networks (CNNs) appear to provide a particularly promising approach for addressing the challenges of CGH [10]. Through their nonlinear mapping capabilities [11,12] and inference based on pretrained networks, CNNs provide a feasible means of rapidly synthesizing optimal holograms. Horisaki et al. [13] proposed a noniterative DL approach for calculating CGHs using a supervised learning procedure involving random hologram patterns and the corresponding illumination patterns generated by the Fresnel propagation formula. Furthermore, several multi-depth algorithms based on U-Net have been proposed to synthesize customized illumination patterns at different depths [14,15]. However, supervised learning methods require extensive training datasets and may struggle to identify approximate solutions, resulting in blurred inferred light distributions [16]. Several unsupervised learning strategies have been proposed to overcome this limitation. For example, CNN models for CGH trained with an unsupervised strategy predict the complex field on the imaging plane and then synthesize optimal holograms by reverse propagating the complex field from the imaging plane to the SLM plane [17,18]. However, while unsupervised DL CGH (DeepCGH) methods achieve rapid and accurate hologram generation, they cannot reach the level of axial confinement required by optogenetic applications, namely cellular-scale confinement on the order of a few microns [19], which remains challenging for current DeepCGH approaches. Additionally, the computation time of DeepCGH approaches increases with the number of stimulation points at different depth locations. Finally, pretrained models are only applicable to stimulation locations at previously planned depths. Accordingly, there is a need for more rapid, accurate, and arbitrary neuron stimulation techniques to support the real-time observation of neural activity.
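For orientation, the following minimal NumPy sketch shows one possible form of the GS-style iteration described above for a single target plane. It is a simplified textbook illustration rather than the exact algorithm of the cited works; the function and variable names are our own.

```python
import numpy as np

def gs_hologram(target_amp, n_iters=100, seed=0):
    """Minimal single-plane Gerchberg-Saxton sketch: iterate between the
    SLM (Fourier) plane and the image plane, enforcing the phase-only
    constraint on one side and the target amplitude on the other."""
    rng = np.random.default_rng(seed)
    # Initialize the image-plane field with the target amplitude and a random phase.
    field = target_amp * np.exp(1j * rng.uniform(0, 2 * np.pi, target_amp.shape))
    for _ in range(n_iters):
        slm_phase = np.angle(np.fft.ifft2(field))          # back to the SLM plane, keep phase only
        image = np.fft.fft2(np.exp(1j * slm_phase))        # forward through the lens (Fourier transform)
        field = target_amp * np.exp(1j * np.angle(image))  # re-impose the target amplitude
    return slm_phase                                       # phase-only hologram in [-pi, pi]
```

Each pass trades the amplitude error at the image plane against the phase-only constraint at the SLM, which is why the computation time scales directly with the iteration count.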

Temporal focusing (TF) provides exceptional depth resolution in multiphoton microscopy applications [20,21]. In the TF approach, the beam is dispersed by a diffractive optical element and refocused at the imaging plane by a 4f relay lens system. At the imaging plane, all the spectral components, initially dispersed by the diffractive optical element, interfere constructively since they are all in phase. The pulse width at the imaging plane is therefore narrowest, yielding the highest peak power. To date, TF has mainly been integrated with imaging approaches, such as light-field microscopy [22], functional imaging, and photodynamic therapy [23]. In these applications, TF enables scan-free, simultaneous, and axially resolved imaging over a large accessible area. Moreover, when using a light source with a near-infrared wavelength, TF exhibits an exceptional axial sectioning capacity and a deep penetration depth. It thus has significant potential for precise micropattern excitation [24]. TF has also shown great promise for optogenetic stimulation. By leveraging the axial precision of TF and the light modulation capabilities of CGH, researchers have achieved holographic illumination with several-micron precision, making highly precise neuron excitation possible at a single depth [25]. However, this approach can only generate precise illumination patterns on the imaging plane, since the phase modulation introduced by an SLM placed in front of the grating disrupts the coherence of the incident ultrafast laser beam. Yet neurons are randomly distributed in organisms, and hence three-dimensional (3D) photostimulation is required to observe neural activity and connectomes. To address this challenge, CGH methods require the ability to precisely modulate light at different depths and positions. Pégard et al. [26] proposed an integrated TF and CGH approach wherein the light was modulated at arbitrary 3D locations by positioning the SLM at the Fourier plane of the grating. Since the beam would otherwise be spread as a focused line along the blaze direction of the grating on the SLM, a lens was placed in front of the grating to illuminate the full SLM with a spherical phase pattern and thereby avoid damaging the SLM crystal. However, the lens generated a secondary focus of the illumination patterns and thus caused unwanted photostimulation. Accanto et al. [27] synthesized customized illumination patterns with various shapes and depths by placing the grating at the Fourier plane of two SLMs, where the first SLM was responsible for shaping the illumination patterns and the second SLM was responsible for projecting the 3D patterns. Although both methods offer solutions for 3D photostimulation, their extended computation times preclude their use in real-time applications. Accordingly, there remains a pressing need for innovative approaches to achieve more rapid 3D photostimulation.

In response to this need, the present study proposes a TF-DeepCGH algorithm, which leverages both the depth-resolving capabilities of TF and the efficiency of DL for hologram synthesis. During the early development stage of the proposed method, it was found that the hologram inference time increased greatly with an increasing number of input layers (i.e., an increasing number of stimulation depths). Moreover, the necessity to retrain the neural network model each time the number of input layers was changed increased the computational time and effort. Ryu et al. [28] previously addressed this retraining issue by introducing a depth embedding block that converted the target depth into a depth embedding vector, which was then injected into the neural network. However, their approach was limited to producing 2D projections at different depths, and thus creating a full 3D projection required the sequential generation and aggregation of multiple 2D projections in a layer-by-layer manner. Wu et al. [29] presented a method, designated as Deep-Z, in which single 2D fluorescence images were appended with a digital propagation matrix (DPM). A neural network was then employed to refocus the images at various depths to create a corresponding 3D representation. In the present study, the DPM technique is employed to convert all the stimulation spots at different depth locations into lateral position and depth matrices, thereby reducing the required number of input layers to just two. In this way, it is possible not only to synthesize multi-depth 3D stimulation holograms without the need to retrain the neural network but also to reduce the computation time. It is shown that the TF-DeepCGH with DPM method successfully synthesizes holograms for arbitrary 3D multiphoton stimulation with few-micron axial confinement. Moreover, to enhance the stimulation efficiency, physical constraint learning is performed with an improved loss function related to the efficiency of TF excitation. The experimental results confirm that TF-DeepCGH with DPM and the improved loss function enable arbitrary 3D multiphoton stimulation with cellular resolution in just several milliseconds using only a pretrained network model.

2. Methods and system

2.1 Configuration and simulation of TF-DeepCGH with DPM and improved loss function

Figure 1 illustrates the configuration of the proposed TF-DeepCGH with DPM method. The integration of a neural network, numerical optical simulations, and the DPM enables real-time hologram synthesis with micron-level accuracy. The method commences by converting randomly generated 3D stimulation patterns into a DPM, which is then taken as the input to the neural network. The DPM representation comprises two channels, one for lateral position information and the other for axial position (i.e., depth) information. The lateral position channel contains the positions and intensity distributions of the illumination patterns, while the depth channel contains the relative axial displacement between these illumination patterns and the front focal plane (FFP), indicated by the corresponding colors in Fig. 1. The fixed number of input channels (i.e., two) not only ensures a consistent computation time irrespective of the number of layers but also avoids the need to retrain the network each time a new illumination pattern is generated at a different depth.
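As a concrete illustration of this two-channel representation, the sketch below encodes a list of stimulation spots into a lateral intensity channel and a depth channel. The exact encoding used in the released code may differ; the names (build_dpm, fwhm_px) and the half-maximum stamping rule are illustrative assumptions.

```python
import numpy as np

def build_dpm(spots, shape=(256, 256), fwhm_px=10):
    """Illustrative DPM encoding: channel 0 holds the lateral positions and
    intensities of the target beads; channel 1 holds each bead's axial
    displacement (in um) relative to the front focal plane (FFP)."""
    lateral = np.zeros(shape)
    depth = np.zeros(shape)
    sigma = fwhm_px / (2.0 * np.sqrt(2.0 * np.log(2.0)))   # Gaussian bead width from FWHM
    rows, cols = np.mgrid[0:shape[0], 0:shape[1]]
    for r, c, z_um, amp in spots:
        bead = amp * np.exp(-((rows - r) ** 2 + (cols - c) ** 2) / (2 * sigma ** 2))
        lateral += bead
        depth = np.where(bead > 0.5 * amp, z_um, depth)     # stamp the axial offset under each bead
    return np.stack([lateral, depth], axis=-1)              # (H, W, 2) network input

# Example: three beads at -20, 0, and +10 um from the FFP.
dpm = build_dpm([(64, 64, -20.0, 1.0), (128, 128, 0.0, 1.0), (200, 96, 10.0, 1.0)])
```

However many depth planes the target occupies, the result is always a two-channel array, which is what keeps the network input size, and hence the inference time, fixed.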

Fig. 1. Configuration of TF-DeepCGH with DPM.

In the proposed method, the two-photon excited fluorescence distribution of the illumination patterns is generated using a numerical simulation model based on Fourier optics theory [30]. To support the use of TF in the proposed method, it is necessary to derive the bandwidth of the incident beam in order to accurately simulate the complex fields of all the spectral components. In doing so, the utilized light source is assumed to be an ultrafast laser with a central wavelength of 1,030 nm and a pulse duration of 228 fs (PHAROS, PH1-10, Light Conversion). The bandwidth, B, of the incident beam at the full width at half maximum (FWHM) position can be derived as follows [31]:

$$ {B = \frac{{4\ln 2\cdot \lambda _0^2}}{{2\pi \cdot \tau \cdot c}},\; }$$
where c is the speed of light, ${\lambda _0}$ is the central wavelength, and $\tau $ is the pulse duration. For the assumed source parameters (${\lambda _0}$ = 1,030 nm, $\tau $ = 228 fs), Eq. (1) yields a FWHM bandwidth of approximately 6.8 nm, which determines the specific frequency components contained within the incident light. The amplitudes of the frequency components, ${A_G}({\xi ,\eta ;\omega } )$, can then be obtained from the following Gaussian distribution in relation to ${\omega _0}$ [31]:
$$ {{A_G}({\xi ,\eta ;\omega } )= {A_G}({\xi ,\eta ;{\omega_0}} )\sqrt {\frac{\pi }{{2\ln 2}}} \tau \cdot {e^{ - \frac{{{\tau ^2}{{({\omega - {\omega_0}} )}^2}}}{{8\ln 2}}}},\; }$$
where $\omega \; \in {\omega _0} \pm B/2.$ In the TF setup, a diffraction grating is used to angularly disperse the ultrafast laser, as shown in Fig. 1. The proposed TF-DeepCGH with DPM method utilizes the first-order diffracted light and assumes that this light is aligned with the optical axis of the stimulation system. Thus, the incident angle is set to 38.17° based on the grating equation (for the 600 lines/mm grating used here, requiring the central wavelength to diffract along the grating normal gives $\sin {\theta _i} = {\lambda _0}/d \approx 0.618$, i.e., ${\theta _i} = 38.17^\circ$). The complex field after the grating can be expressed as [32,33]
$$ {{U_G}({\xi ,\eta ;\omega } )= {A_G}({\xi ,\eta ;\omega } ){e^{j2\pi \xi \sin {\theta _\omega }/\lambda }},}$$
where ${\theta _\omega }$ is the diffraction angle corresponding to $\omega $. As shown in Fig. 1, the diffracted beam propagates through a Fourier lens and converges at the Fourier plane, coinciding with the position of the SLM. The complex field before the SLM plane can be determined via the Fourier transform of ${U_G}({\xi ,\eta ;\omega } )$ as
$$ {{U_S}({x,y;\omega } )= {\mathrm{{\cal F}}_{grating \to SLM}}\{{{U_G}({\xi ,\eta ;\omega } )} \}.}$$

The holograms displayed on the SLM modulate the wavefront of ${U_S}({x,y;\omega } )$ with the phase ${\phi _m}({x,y} )$ to generate customized illumination patterns at the FFP of the objective lens. The objective lens then Fourier transforms the modulated field to yield the complex field at the FFP. The 3D field distributions of the illumination patterns can be acquired using the angular spectrum method [30]. In particular, the complex field of the illumination patterns can be derived as

$$ {{U_P}({u,v,\; z;\omega } )= {\mathrm{{\cal F}}_{SLM \to FFP}}\{{{U_S}({x,y;\omega } )\cdot {e^{j{\phi_m}({x,y} )}}\cdot P({x,y} )} \}\cdot H({{f_x},{f_y}} ),\; }$$
where $P({x,y} )$ is the pupil function of the back aperture of the objective and $H({{f_x},{f_y}} )= {e^{j2\pi \frac{z}{\lambda }\sqrt {1 - {{({\lambda {f_x}} )}^2} - {{({\lambda {f_y}} )}^2}} }}$ is the transfer function for light propagation in the z (i.e., axial) direction with respect to the FFP. The simulation procedure described above yields the complex fields of all the spectral components. However, the field must be Fourier transformed from the frequency domain to the time domain to evaluate the temporal focusing of the beam. The two-photon excited fluorescence is proportional to the time-integrated fourth power of the temporal amplitude (i.e., the square of the instantaneous intensity) [34,35]. Hence, the two-photon excited fluorescence can be derived as
$$ {{I_{2p}}({u,v,\mathrm{\Delta }z} )\propto \mathop \sum \limits_t {{|{{\mathrm{{\cal F}}_{freq \to time}}\{{{U_P}({u,v,\mathrm{\Delta }z;\omega } )} \}} |}^4}.}$$
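Before summarizing, the following condensed NumPy sketch shows one way Eqs. (4)–(6) could be realized numerically. It simplifies the full model (the pupil function and grating geometry are omitted, and the spectral sampling from Eqs. (1)–(3) is left to the caller), so it should be read as an illustration of the simulation flow under those assumptions, not as the authors' implementation.

```python
import numpy as np

def two_photon_fluorescence(hologram, us_spectral, wavelengths_um, dz_um, dx_um):
    """Sketch of Eqs. (4)-(6): modulate each spectral component with the
    hologram, Fourier transform to the FFP (Eq. (5)), propagate by dz using
    the angular spectrum transfer function H, then transform frequency ->
    time and accumulate |E(t)|^4 (Eq. (6)). The pupil P(x, y) is omitted.

    us_spectral: complex SLM-plane fields U_S, shape (W, N, N), one per
    spectral component within omega_0 +/- B/2 (amplitudes from Eq. (2))."""
    n = hologram.shape[0]
    fx = np.fft.fftfreq(n, d=dx_um)
    fxx, fyy = np.meshgrid(fx, fx)
    fields = []
    for us, lam in zip(us_spectral, wavelengths_um):
        ffp = np.fft.fft2(us * np.exp(1j * hologram))          # lens Fourier transform to the FFP
        arg = np.maximum(1.0 - (lam * fxx) ** 2 - (lam * fyy) ** 2, 0.0)
        h = np.exp(1j * 2.0 * np.pi * dz_um / lam * np.sqrt(arg))
        fields.append(np.fft.ifft2(np.fft.fft2(ffp) * h))      # angular spectrum step to depth dz
    e_time = np.fft.ifft(np.array(fields), axis=0)             # frequency -> time
    return np.sum(np.abs(e_time) ** 4, axis=0)                 # two-photon signal at plane dz
```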

Equations (1) through (6) collectively describe the passage of the light through the optical system and the generation of the resulting two-photon fluorescence. This forward model provides a foundation for the neural network to learn the optical characteristics of the photostimulation system and hence solve the inverse problem of synthesizing optimized holograms.

Essentially, the 3D stimulation pattern is transformed into one lateral position layer and one axial position layer. The two layers are then taken as the inputs to the neural network, which, in the present case, has the form of a 2D U-Net architecture with two levels [36]. An interleaving method is first used to rearrange the input data through periodic sampling in order to reduce its size and enhance the ability of the network to generate different spatial frequency patterns [37]. The hologram synthesis network contains encoder and decoder blocks. The encoder consists of two identical blocks, each comprising two sets of convolution layers, ReLU functions [38], and batch normalization layers [39], followed by maxpooling. Similarly, the decoder consists of two identical blocks, each combining two sets of convolution layers and ReLU functions with upsampling. The network then splits into two convolutional branches, where one branch infers the amplitude of the light and the other predicts its phase. The amplitude branch incorporates a ReLU activation function after the final convolution layer. In the phase branch, however, this activation function is omitted since the phases are relative and need not be restricted. The feature maps produced by the two branches are deinterleaved, combined into a complex field, and inverse Fourier transformed to obtain the modulating complex field, ${U_m}$, at the SLM plane. The model structure is oriented toward achieving coherence among all spectral components so as to induce a temporal focusing effect; accordingly, the network is streamlined to learn a mapping that aligns the target pattern with the complex field at the FFP of the objective for all spectral components.

Since only phase modulation occurs at the SLM, the phase term, ${\phi _m}({x,y} )$, of ${U_m}$ is taken as the hologram, where the phase values are restricted to the range of –π to π. The hologram, with a phase transmittance of ${e^{j{\phi _m}({x,y} )}}$, is multiplied by ${U_S}({x,y;\omega } )$ at the SLM plane, and the result, i.e., $U_S^{\prime}({x,y;\omega } )= {U_S}({x,y;\omega } )\cdot {e^{j{\phi _m}({x,y} )}}$, then generates the two-photon excited fluorescence associated with the customized illumination patterns through the objective lens. Through the training process described above, the neural network learns the photostimulation system information and thus gains the ability to solve the inverse problem between the hologram and the illumination pattern. Once the model has been trained, it can be used to synthesize holograms in real time, enabling the generation of customized illumination patterns with single-cell precision for photostimulation purposes. All networks in the proposed architecture were trained on a single Nvidia Tesla V100 GPU with 32 GB of memory.
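A compact Keras sketch of the architecture just described is given below (TensorFlow 2.x / tf.keras assumed). It reproduces the overall structure, namely a two-channel DPM input, interleaving via space-to-depth, a two-level encoder-decoder, separate amplitude and phase branches, and an inverse Fourier transform back to the SLM plane, but compresses details such as filter counts and the per-spectral-component handling; these details are assumptions here, not the released configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers

def conv_block(x, filters):
    # Two sets of convolution + ReLU + batch normalization, as in each U-Net block.
    for _ in range(2):
        x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
        x = layers.BatchNormalization()(x)
    return x

def build_tf_deepcgh(size=256, base=32):
    inp = tf.keras.Input(shape=(size, size, 2))              # DPM: lateral + depth channels
    x = tf.nn.space_to_depth(inp, 2)                         # interleaving by periodic sampling
    e1 = conv_block(x, base)                                 # encoder block 1
    e2 = conv_block(layers.MaxPooling2D()(e1), base * 2)     # encoder block 2
    d1 = conv_block(layers.Concatenate()([layers.UpSampling2D()(e2), e1]), base)
    amp = layers.Conv2D(4, 3, padding="same", activation="relu")(d1)  # amplitude branch (ReLU)
    phs = layers.Conv2D(4, 3, padding="same")(d1)                     # phase branch (unbounded)
    amp = tf.nn.depth_to_space(amp, 2)[..., 0]               # deinterleave back to full size
    phs = tf.nn.depth_to_space(phs, 2)[..., 0]
    ffp_field = tf.complex(amp, tf.zeros_like(amp)) * tf.exp(tf.complex(tf.zeros_like(phs), phs))
    slm_field = tf.signal.ifft2d(ffp_field)                  # reverse-propagate FFP -> SLM plane
    hologram = tf.math.angle(slm_field)                      # phase-only output in [-pi, pi]
    return tf.keras.Model(inp, hologram)
```

Because the hologram output feeds the differentiable forward model of Eqs. (1)–(6), the whole pipeline can be trained end to end from target patterns alone, without paired hologram labels.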

When developing the proposed TF-DeepCGH with DPM method, the loss function for the neural network training process was originally defined as the Euclidean mean distance between the ground truth and the predicted data, so as to maximize their correlation during training [17]. However, it was found that the intensity of the two-photon excited fluorescence reconstructed by TF-DeepCGH with DPM was considerably lower than that obtained with the GS algorithm with TF (denoted as TF-GS). It was thus inferred that the proposed method failed to achieve the same degree of temporal focusing at the target positions as the GS algorithm. Accordingly, the loss function was modified to enhance the intensity of the modulated light while preserving the axial confinement ability. In particular, the loss function was adjusted by incorporating the fourth root of the two-photon excited fluorescence (the resulting method is denoted as TF-iDeepCGH w/ DPM). This quantity is positively correlated with the temporal amplitude and thus encourages an improved TF effect of the excitation light during the training process. By multiplying this term with the target patterns, the aim was to maximize the temporal amplitude and concentrate it with the narrowest pulse width at the illumination patterns, resulting in a higher two-photon excited fluorescence. The improved loss function was written as

$$ {\textrm{loss} = 1 - \frac{{\sqrt[4]{{\hat{I}}} \ast I}}{a},}$$
where $\hat{I}$ is the simulated two-photon excited fluorescence, $I$ is the target pattern, and $a$ is a normalization constant determined by the optical settings, set to $10^4$ in the present study. The network trained with this loss function demonstrates enhanced fluorescence intensity and superior axial confinement compared with the network trained using the Euclidean mean loss.
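In code, the improved loss of Eq. (7) can be sketched as follows, interpreting the product in Eq. (7) as an inner product between the fourth root of the simulated two-photon intensity and the target pattern; since the fourth root is proportional to the temporal amplitude, maximizing this overlap rewards tight temporal focusing at the target sites. The function name, the assumed (batch, H, W, depth) tensor layout, and the batch reduction are our own assumptions.

```python
import tensorflow as tf

def tf_excitation_loss(i_target, i_pred, a=1.0e4):
    """Eq. (7) as a differentiable training loss:
    loss = 1 - (I_pred^(1/4) . I_target) / a, where a is a normalization
    constant tied to the optical settings (10^4 in this study)."""
    # Small epsilon keeps the fourth-root gradient finite where i_pred == 0.
    overlap = tf.reduce_sum(tf.pow(i_pred + 1e-12, 0.25) * i_target, axis=[1, 2, 3])
    return tf.reduce_mean(1.0 - overlap / a)
```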

2.2 Overall optical system setup

Figure 2 illustrates the overall setup of the TF-based 3D multiphoton stimulation system with layer-by-layer fluorescence imaging. The illumination light was provided by a Yb:KGW femtosecond laser (PHAROS, PH1-10, Light Conversion) with a central wavelength of 1,030 nm, a pulse width of 228 fs, a repetition rate of 200 kHz, and a maximum power of 10 W. A power control unit composed of a half-wave plate (HWP) and a polarization beam splitter (PBS) was used to attenuate the laser power and ensure that the input light was horizontally polarized. A lens system (f1 = 250 mm, f2 = –150 mm) was then employed to reduce the incident beam size in accordance with the diameter of the illumination patterns. The laser beam was directed onto a blazed reflective diffraction grating (600 lines/mm, blazed at 1,250 nm, #49574, Edmund Optics), which angularly dispersed the incident beam along the ruling direction, separating each frequency component into a distinct angle. The first-order diffracted light was used, with the central frequency component aligned along the optical axis, perpendicular to the blazed grating. The dispersed frequency components were then collected by a Fourier lens (f3 = 750 mm) and focused onto the SLM plane, forming a narrow elliptical area measuring 6.3 mm in length and 0.4 mm in width. A reflective SLM (LCOS-SLM JD5552, Jasper Display, 1,920 × 1,080 pixels, 6.4 µm pixel size), positioned at this Fourier plane, displayed the calculated hologram to modulate the beam. The elliptical area encompassed approximately 61,523 pixels on the SLM, and the maximum laser power focused on this area was approximately 100 mW. This intensity is likely adequate for effectively stimulating 20 to 30 neurons within a volume of around 100 × 100 × 50 µm³ in the Drosophila brain. However, caution is required since excessively high power could damage the SLM. The modulated beam was conjugated to the back focal plane of the objective (Plan-Apochromat 40X/1.0 NA, water immersion, Carl Zeiss) and thereafter focused at arbitrary positions within the volume of interest.

A layer-by-layer fluorescence imaging system was used to verify the precision of the photostimulation system. To capture the 3D two-photon excited fluorescence images generated by the system, the sample of interest was placed on motorized stage 1 (H101A ProScan, Prior Scientific) incorporating a 3-axis encoder and a rapid piezo stage (NanoScanZ 200, Prior Scientific) with a maximum travel range of 200 µm. The 3D fluorescence images were conjugated to the required imaging planes and scanned along the z-axis with motorized stage 2 (Z812 12 mm motorized actuator, Thorlabs). The images were finally captured by an sCMOS camera (ORCA-Fusion BT Digital CMOS camera, C15440-20UP, Hamamatsu Photonics). Through this process, volumetric images of the illumination patterns were captured by scanning along different depths.

Fig. 2. Schematic illustration of photostimulation system setup with excited fluorescence imaging. The right-hand side of the schematic shows the TF-based 3D multiphoton stimulation system, with the different colors indicating the different frequency components separated by the grating. The left-hand side shows the layer-by-layer fluorescence imaging system, in which the green path represents the two-photon excited fluorescence.

3. Results and discussion

3.1 Simulation results

Figure 3 shows the simulation results obtained from the TF-DeepCGH with DPM method with the improved loss function (i.e., TF-iDeepCGH w/ DPM). The network was trained using a dataset consisting of 1,000 randomly generated 3D fluorescence distributions, where each distribution comprised 256 × 256 × n pixels, with n representing the number of depth planes. Figure 3(a) shows the target illumination patterns, consisting of multiple randomly distributed fluorescent patterns at five distinct depths with a separation of 10 µm between them. Note that the dark gray layer represents the FFP (i.e., the five layers are located at depths of –20, –10, 0, 10, and 20 µm relative to the FFP). Moreover, each target within the illumination patterns represents a Gaussian bead with a FWHM of 10 µm. The target illumination patterns were transformed into a DPM as the input of the neural network. During the training process, the neural network generated a hologram with dimensions of 256 × 256 pixels and a phase modulation range of –π to π. The hologram was subsequently utilized in numerical optical simulations to reconstruct the corresponding 3D two-photon excited fluorescence distributions, as shown in Fig. 3(b). It is seen that the illumination patterns are accurately projected to their respective positions at different depths. In other words, the reconstructed distribution accurately reproduces both the pattern size and the intensity distribution, confirming the effectiveness of the proposed method (TF-iDeepCGH w/ DPM). It is noted that the coupling effect between the patterns at the different depths is significantly reduced. This is of practical benefit since it enables the simultaneous manipulation of the light distribution across three dimensions and thus achieves cellular-scale axial confinement. Figures 3(c) and 3(d) show the axial beam profiles and the corresponding vertical cross-section image, respectively, at the different depths. It is seen in Fig. 3(c) that the two-photon excited fluorescence is well confined, with average axial confinements of the target patterns of 4.78, 4.70, 4.61, 4.58, and 4.77 µm at FWHM for the layers located at distances of –20, –10, 0, 10, and 20 µm from the FFP, respectively. The results thus confirm the ability of TF-iDeepCGH w/ DPM to achieve axial confinement with a several-micron level of precision and demonstrate the performance improvement gained by incorporating TF into the proposed approach.
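A sketch of how such a random training set might be generated is shown below; it samples random bead positions at the five depth planes and is intended to feed the illustrative build_dpm encoder from Section 2.1. The sampling margins, spot count, and function name are assumptions rather than the released data pipeline.

```python
import numpy as np

def random_targets(n_samples=1000, n_spots=20, shape=(256, 256),
                   depths_um=(-20.0, -10.0, 0.0, 10.0, 20.0), seed=0):
    """Generate random 3D stimulation targets as lists of
    (row, col, depth_um, intensity) spots, ready for DPM conversion."""
    rng = np.random.default_rng(seed)
    samples = []
    for _ in range(n_samples):
        spots = [(int(rng.integers(20, shape[0] - 20)),   # keep beads away from the border
                  int(rng.integers(20, shape[1] - 20)),
                  float(rng.choice(depths_um)),           # one of the five depth planes
                  1.0)
                 for _ in range(n_spots)]
        samples.append(spots)
    return samples
```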

Fig. 3. Simulation results obtained from TF-iDeepCGH w/ DPM: (a) target illumination patterns, arranged from top to bottom and separated by 10-µm plane spacings, representing patterns above the FFP, at the FFP (shown in dark gray), and below the FFP; (b) reconstructed 3D distribution of two-photon excited fluorescence; (c) axial intensity profiles fitted using the Lorentzian function [40]; and (d) vertical cross-sectional view of (b).

Figure 4 provides a comparative analysis of the reconstruction performance of three methods: (1) the GS algorithm with TF (TF-GS), (2) DeepCGH with the Euclidean mean loss function and TF (TF-DeepCGH), and (3) the proposed TF-iDeepCGH w/ DPM method. Figure 4(a) shows the reconstructed fluorescence intensity and computation time of each method for the five-layer configuration shown in Fig. 3(a). (Note that the fluorescence intensity is normalized by the mean intensity of the illumination patterns at the FFP in every case.) Owing to its enhanced loss function, TF-iDeepCGH w/ DPM achieves a comparable or even higher two-photon excited fluorescence intensity than TF-GS and TF-DeepCGH, whether in focus or out of focus. In conventional iterative CGH algorithms such as GS, the computation time increases with the number of iterations, which poses significant challenges for in-vivo bio-applications. In contrast, utilizing a pretrained neural network for hologram synthesis enables a rapid computation time measured in tens of milliseconds. However, the computation time of conventional DL methods increases with an increasing number of input layers (i.e., different depths). Moreover, the model must be retrained each time a new layer is added to the target illumination pattern. In the present study, this problem is addressed by transforming the target pattern into a DPM such that the input to the network is limited to just two channels (i.e., just two layers). As shown in Fig. 4(a), the average computation times of the TF-GS algorithm with 100 iterations, the TF-DeepCGH method, and the TF-iDeepCGH w/ DPM method when synthesizing 20 randomly distributed target points at five distinct depths (see Fig. 3(a)) are 12 s, 21 ms, and 13 ms, respectively. The results thus show that the DPM reduces the computation time by almost three orders of magnitude compared with the traditional iterative algorithm. Moreover, the computation time of the proposed TF-iDeepCGH w/ DPM method (13 ms) is shorter than the refresh period of the SLM in the present optical setup (60 Hz, i.e., approximately 16.7 ms). Consequently, the computation time is reduced from seconds (as in the conventional TF-GS method) to just over ten milliseconds with no significant loss in the fluorescence intensity. In other words, the proposed method appears to provide a feasible approach for enabling real-time neural stimulation in living specimens.

Fig. 4. (a) Fluorescence intensity and computation time comparison among TF-GS, TF-DeepCGH, and TF-iDeepCGH w/ DPM. The fluorescence intensity was evaluated for 10 randomly distributed target patterns, and the computation time was measured for 100 randomly generated target patterns. (b) Statistical analysis of axial confinement in FWHM at five different depths (i.e., –20, –10, 0, 10, and 20 µm from the FFP) for the three methods. The analysis was based on 10 randomly generated illumination patterns in each case.

Another challenge is the degradation of the axial confinement as the illumination patterns move away from the FFP. Figure 4(b) shows the axial confinement of the illumination patterns generated by the three approaches at the five different depths. It is seen that the axial confinement performance of the TF-DeepCGH method weakens with an increasing distance from the FFP. In contrast, through the use of its improved loss function, the axial confinement performance of TF-iDeepCGH with DPM remains consistent in all of the out-of-focus planes. In particular, TF-iDeepCGH with DPM achieves average axial resolutions of 4.76, 4.74, 4.55, 4.51, and 4.83 µm in FWHM at distances of –20, –10, 0, 10, and 20 µm from the FFP, respectively; this performance is comparable to that of TF-GS despite the significantly faster computation time of TF-iDeepCGH with DPM. In addition, the average computation time of the three methods was evaluated for four different numbers of layers (3, 5, 7, and 9), with 100 inferences in each case. The computation times of both TF-GS and TF-DeepCGH increase with the number of layers, measuring 12.9, 20.1, 23.9, and 43.2 s and 0.017, 0.018, 0.020, and 0.023 s, respectively. Notably, the computation time of TF-iDeepCGH with DPM remains approximately 0.013 s across all four cases. However, it is essential to note that the application of the DPM involves a trade-off, as it compromises the stimulation uniformity across identical x and y positions in different layers.

3.2 Experimental results

The performance of the proposed TF-iDeepCGH with DPM method was further evaluated through experimental trials performed using a rhodamine 6G sample with a thickness of several microns. As shown in Fig. 2, the rhodamine sample was placed on a motorized stage to facilitate z-axis scanning. As described previously, the 3D fluorescence distribution was conjugated to a second motorized stage housing an objective and an sCMOS camera, which was synchronized to capture the two-photon excited fluorescence distribution along the z-axis. Figures 5(a)-5(c) show the 3D distributions and corresponding lateral views of the two-photon excited fluorescence at the plane located 10 µm from the FFP in Fig. 3(a) obtained by the TF-GS, TF-DeepCGH, and TF-iDeepCGH w/ DPM methods, respectively. As shown, the TF-iDeepCGH w/ DPM method yields a significantly more intense two-photon excited fluorescence than the TF-DeepCGH method, both in the on-focus plane and in the out-of-focus planes. In other words, the experimental results are consistent with the simulation findings. Moreover, the fluorescence intensity of the hologram synthesized by TF-iDeepCGH w/ DPM is comparable to that produced by TF-GS. Figure 5(d) shows the axial confinement abilities of the three methods. The mean axial confinements of TF-GS and TF-DeepCGH are 6.61, 6.25, 6.79, 6.09, and 5.63 µm and 10.07, 7.46, 6.30, 7.44, and 8.79 µm at FWHM for depths of –20, –10, 0, 10, and 20 µm, respectively. By comparison, the equivalent mean axial confinements of TF-iDeepCGH w/ DPM are 7.83, 7.40, 6.76, 5.69, and 6.57 µm, respectively. These values are around 45% higher than the corresponding simulation results (4.78, 4.70, 4.61, 4.58, and 4.77 µm). Notably, TF-iDeepCGH w/ DPM avoids the deteriorating out-of-focus resolution observed with TF-DeepCGH and remains close to TF-GS at all depths. Overall, the experimental results confirm the ability of the proposed method to efficiently stimulate target patterns with temporally concentrated pulses, ultimately providing precise and rapid holographic stimulation capabilities.

Fig. 5. Simulated 3D multiphoton distribution and corresponding experimental layer-by-layer fluorescence images obtained using (a) TF-GS, (b) TF-DeepCGH, and (c) TF-iDeepCGH with DPM. (d) Quantification analysis of the axial confinement performance of the three methods at five different depths.

4. Conclusions

This study has presented a method that integrates a DeepCGH neural model with TF and a DPM to synthesize 3D illumination patterns with axial confinement down to the several-micron level for optogenetic stimulation purposes. Significantly, the model is trained in an unsupervised manner, thereby enabling it to adapt to the DPM light propagation model and TF mechanism without the need for a large amount of paired training data. Furthermore, through the use of the DPM, the target illumination patterns are transformed into just two channels, one capturing the lateral information of the target pattern and the other capturing the depth information. The two channels are then provided as inputs to the neural model. Notably, this approach ensures a fixed computation time, irrespective of the number of depth layers considered in the target illumination pattern. It is thus more efficient than the TF-DeepCGH method, in which the computation time increases significantly with an increasing number of layers. In addition, the traditional Euclidean mean loss function was replaced with a temporal amplitude loss function that is physically meaningful for two-photon excitation. The simulation results highlight the effectiveness of the proposed loss function in enhancing the intensity of the two-photon excited fluorescence to a level comparable with that of TF-GS. The simulation results additionally show that TF-iDeepCGH with DPM achieves an excellent axial confinement of the illumination patterns of approximately 4.7 µm at FWHM. The experimental results confirm the ability of TF-iDeepCGH with DPM to confine the axial resolution to just several microns and to enhance the intensity accordingly. In summary, the simulation and experimental results show that TF-iDeepCGH with DPM holds significant promise for the precise manipulation of light at a cellular resolution and millisecond timescale.

Funding

National Science and Technology Council (110-2221-E-A49-009, 110-2221-E-A49-059-MY3).

Disclosures

The authors declare no conflicts of interest.

Data availability

The source code presented in this paper is provided at [41].

References

1. L. Shi, B. Li, C. Kim, et al., “Towards real-time photorealistic 3D holography with deep neural networks,” Nature 591(7849), 234–239 (2021). [CrossRef]  

2. Y. Peng, S. Choi, N. Padmanaban, et al., “Neural holography with camera-in-the-loop training,” ACM Trans. Graph. 39(6), 1–14 (2020). [CrossRef]  

3. L. Shi, B. Li, and W. Matusik, “End-to-end learning of 3D phase-only holograms for holographic display,” Light: Sci. Appl. 11(1), 247 (2022). [CrossRef]  

4. I. Reutsky-Gefen, L. Golan, N. Farah, et al., “Holographic optogenetic stimulation of patterned neuronal activity for vision restoration,” Nat. Commun. 4(1), 1509 (2013). [CrossRef]  

5. H. Takahashi and Y. Hayasaki, “Three-dimensional structure formed by holographic two-photon microfabrication of photoresist,” in 2007 Conference on Lasers and Electro-Optics - Pacific Rim (2007), pp. 1–2.

6. R. W. Gerchberg and W. O. Saxton, “A practical algorithm for the determination of phase from image and diffraction plane pictures,” Optik 35, 237–246 (1972).

7. J. Zhang, N. Pégard, J. Zhong, et al., “3D computer-generated holography by non-convex optimization,” Optica 4(10), 1306–1313 (2017). [CrossRef]  

8. P. Chakravarthula, Y. Peng, J. Kollin, et al., “Computing high quality phase-only holograms for holographic displays,” in Optical Architectures for Displays and Sensing in Augmented, Virtual, and Mixed Reality (AR, VR, MR), (SPIE, 2020), 47–62.

9. P. Chakravarthula, Y. Peng, J. Kollin, et al., “Wirtinger holography for near-eye displays,” ACM Trans. Graph. 38, 213 (2019). [CrossRef]  

10. Y. LeCun, L. Bottou, Y. Bengio, et al., “Gradient-based learning applied to document recognition,” Proc. IEEE 86(11), 2278–2324 (1998). [CrossRef]  

11. Y. Kiarashinejad, S. Abdollahramezani, and A. Adibi, “Deep learning approach based on dimensionality reduction for designing electromagnetic nanostructures,” npj Comput. Mater. 6(1), 12 (2020). [CrossRef]  

12. Y. Kiarashinejad, M. Zandehshahvar, S. Abdollahramezani, et al., “Knowledge discovery in nanophotonics using geometric deep learning,” Adv. Intell. Syst. 2(2), 1900132 (2020). [CrossRef]  

13. R. Horisaki, R. Takagi, and J. Tanida, “Deep-learning-generated holography,” Appl. Opt. 57(14), 3859–3863 (2018). [CrossRef]  

14. J. Lee, J. Jeong, J. Cho, et al., “Deep neural network for multi-depth hologram generation and its training strategy,” Opt. Express 28(18), 27137–27154 (2020). [CrossRef]  

15. H. Zheng, J. Hu, C. Zhou, et al., “Computing 3D phase-type holograms based on deep learning method,” Photonics 8(7), 280 (2021). [CrossRef]  

16. H. Bo, “Deep learning approach for computer-generated holography,” in 2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC), (2021), 1–5.

17. M. Hossein Eybposh, N. W. Caira, M. Atisa, et al., “DeepCGH: 3D computer-generated holography using deep learning,” Opt. Express 28(18), 26636–26650 (2020). [CrossRef]  

18. J. Wu, K. Liu, X. Sui, et al., “High-speed computer-generated holography using an autoencoder-based deep neural network,” Opt. Lett. 46(12), 2908–2911 (2021). [CrossRef]  

19. E. Papagiakoumou, E. Ronzitti, and V. Emiliani, “Scanless two-photon excitation with temporal focusing,” Nat. Methods 17(6), 571–581 (2020). [CrossRef]  

20. D. Oron, E. Tal, and Y. Silberberg, “Scanningless depth-resolved microscopy,” Opt. Express 13(5), 1468–1476 (2005). [CrossRef]  

21. G. Zhu, J. V. Howe, M. Durst, et al., “Simultaneous spatial and temporal focusing of femtosecond pulses,” Opt. Express 13(6), 2153–2159 (2005). [CrossRef]  

22. F.-C. Hsu, C.-Y. Lin, Y. Y. Hu, et al., “Light-field microscopy with temporal focusing multiphoton illumination for scanless volumetric bioimaging,” Biomed. Opt. Express 13(12), 6610–6620 (2022). [CrossRef]  

23. P. T. C. So, H. Choi, E. Yew, et al., eds. (De Gruyter, Berlin, Boston, 2018), 103–140.

24. N. Wijethilake, M. Anandakumar, C. Zheng, et al., “DEEP2: Deep learning powered de-scattering with excitation patterning (DEEP),” in Biophotonics Congress: Optics in the Life Sciences 2023 (OMA, NTM, BODA, OMP, BRAIN), Technical Digest Series (Optica Publishing Group, 2023), BW3B.3.

25. E. Papagiakoumou, V. De Sars, D. Oron, et al., “Patterned two-photon illumination by spatiotemporal shaping of ultrashort pulses,” Opt. Express 16(26), 22039–22047 (2008). [CrossRef]  

26. N. C. Pégard, A. R. Mardinly, I. A. Oldenburg, et al., “Three-dimensional scanless holographic optogenetics with temporal focusing (3D-SHOT),” Nat. Commun. 8(1), 1228 (2017). [CrossRef]  

27. N. Accanto, I. W. Chen, E. Ronzitti, et al., “Multiplexed temporally focused light shaping through a gradient index lens for precise in-depth optogenetic photostimulation,” Sci. Rep. 9(1), 7603 (2019). [CrossRef]  

28. W. J. Ryu, J. S. Lee, and Y. H. Won, “Continuous depth control of phase-only hologram with depth embedding block,” IEEE Photonics J. 14, 1–7 (2022). [CrossRef]  

29. Y. Wu, Y. Rivenson, H. Wang, et al., “Three-dimensional virtual refocusing of fluorescence microscopy images using deep learning,” Nat. Methods 16(12), 1323–1331 (2019). [CrossRef]  

30. J. W. Goodman, Introduction to Fourier Optics (Roberts and Company publishers, 2005).

31. U. Keller and R. Paschotta, Ultrafast Lasers (Springer, 2021).

32. C.-H. Lien, C.-Y. Lin, C.-Y. Chang, et al., “Simulation design of wide-field temporal-focusing multiphoton excitation with a tunable excitation wavelength,” OSA Continuum 2(4), 1174–1187 (2019). [CrossRef]  

33. C.-Y. Chang, C.-Y. Lin, Y. Hu, et al., “Temporal focusing multiphoton microscopy with optimized parallel multiline scanning for fast biotissue imaging,” J. Biomed. Opt. 26, 016501 (2021). [CrossRef]  

34. M. E. Durst, G. Zhu, and C. Xu, “Simultaneous spatial and temporal focusing for axial scanning,” Opt. Express 14(25), 12243–12254 (2006). [CrossRef]  

35. B. D. Guenther and D. Steel, Encyclopedia of Modern Optics (Academic Press, 2018).

36. O. Ronneberger, P. Fischer, and T. Brox, “U-net: Convolutional networks for biomedical image segmentation,” in Medical Image Computing and Computer-Assisted Intervention–MICCAI 2015: 18th International Conference, Munich, Germany, October 5-9, 2015, Proceedings, Part III 18, (Springer, 2015), 234–241.

37. W. Shi, J. Caballero, F. Huszár, et al., “Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, (2016), 1874–1883.

38. V. Nair and G. E. Hinton, “Rectified linear units improve restricted boltzmann machines,” in Proceedings of the 27th international Conference on Machine Learning (ICML-10), (2010), 807–814.

39. S. Ioffe and C. Szegedy, “Batch normalization: Accelerating deep network training by reducing internal covariate shift,” in International Conference on Machine Learning (PMLR, 2015), 448–456.

40. H. Dana and S. Shoham, “Numerical evaluation of temporal focusing characteristics in transparent and scattering media,” Opt. Express 19(6), 4937–4948 (2011). [CrossRef]  

41. W. Chen, “DeepCGH-with-temporal-focusing-digital-propagation-matrix,” GitHub (2023). https://github.com/baronlwchen/DeepCGH-with-temporal-focusing-digital-propagation-matrix.git
