
Overcoming the field-of-view to diameter trade-off in microendoscopy via computational optrode-array microscopy

Open Access

Abstract

High-resolution microscopy of deep tissue with a large field-of-view (FOV) is critical for elucidating the organization of cellular structures in plant biology. Microscopy with an implanted probe offers an effective solution. However, there exists a fundamental trade-off between the FOV and probe diameter arising from aberrations inherent in conventional imaging optics (typically, FOV < 30% of the diameter). Here, we demonstrate the use of microfabricated non-imaging probes (optrodes) that, when combined with a trained machine-learning algorithm, achieve a FOV of 1× to 5× the probe diameter. A further increase in FOV is obtained by using multiple optrodes in parallel. With a 1 × 2 optrode array, we demonstrate imaging of fluorescent beads (including 30 FPS video), stained plant-stem sections, and stained living stems. Our demonstration lays the foundation for fast, high-resolution microscopy with a large FOV in deep tissue via microfabricated non-imaging probes and advanced machine learning.

© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Deep-tissue imaging is important for biological research because the organization of cells and sub-cellular organelles can depend on their natural cellular environment, and this is especially true for plants. The most common method for deep-tissue imaging is multi-photon microscopy (MPM) [1–4], which has the advantages of non-invasiveness, high resolution, and rejection of background signals. However, the achievable imaging depth is typically limited to several hundred micrometers from the surface due to scattering. Phototoxicity further limits the practical excitation power. Most implementations of MPM require scanning and are relatively slow, although recent advances can alleviate this issue [5]. If a minimally invasive procedure is acceptable, then implanting a microendoscope (probe) to the desired depth in tissue overcomes these limitations [6,7]. Furthermore, it has recently been shown that the application of machine learning can improve image quality and speed [8,9]. Similarly, a miniscope system based on a masked gradient-index (GRIN) lens was used to demonstrate 3D imaging with high resolution [10,11]. In microendoscopy, a critical challenge is to minimize trauma due to implantation. The obvious solution is to minimize the size of the implanted probe. The most common implanted probe for high-resolution microscopy is the GRIN lens, typically in the form of a cylinder with diameter between 0.5 mm and 1 mm [6]. The GRIN-lens probe is an imaging system whose field-of-view (FOV) is typically less than 30% of the probe diameter, limited primarily by its aberrations [12]. Therefore, scaling the GRIN-lens probe down also significantly reduces the FOV.

An alternate approach uses a short segment of multi-mode fiber (MMF, typical diameter < 0.22 mm) as a non-imaging element to deliver the excitation and collect the emitted light. In this case, computational post-processing of acquired images is generally required to reconstruct anthropocentric images [13]. By placing a scattering medium next to the distal end of the fiber, 2D imaging could be achieved by recording the spectrum [14]; however, the achievable resolution is limited. Other studies have explored the application of machine learning for image reconstruction through an MMF [15,16]. Most importantly, it is possible to achieve a FOV that is almost 100% of the probe diameter. This approach is therefore more amenable to scaling to smaller probe diameters to reduce the amount of tissue displacement. In prior work, we first characterized the space-variant point-spread function of the MMF, and then applied regularized matrix inversion to reconstruct images [13,17,18]. In recent work, we collected pairs of images from the same region of the sample (one from the MMF and another from a conventional microscope) simultaneously, and then trained a machine-learning algorithm to perform image reconstructions [19–21]. We note that there are two alternate approaches for imaging using MMFs. The first uses adaptive optics to scan a focused spot across the distal plane, which ensures that the collected signal originates preferentially from the excitation focus [22]. Although no computation is required for imaging, this approach is generally slow due to scanning. The second method uses speckle patterns and the optical-memory effect to computationally reconstruct images [23]. Although this method is simple and fast, it is limited to small FOVs and fails for incoherent or partially coherent signals. In contrast, our machine-learning-enabled approach is relatively agnostic to the spatial and temporal coherence of the signal [24]. As in all computational microscopy methods, low coherence and image sparsity will affect the signal-to-noise ratio and the quality of reconstructed images. In this work, we report an experimental demonstration of Computational Optrode-Array Microscopy (COAM) via a 1 × 2 array of microfabricated glass probes (optrodes) [25]. In COAM, images from both optrodes are acquired and reconstructed simultaneously, thereby doubling the total FOV. The diameter of each optrode (averaged over its length, since there is a small taper at the tip) is ∼80 µm, its length is ∼1.2 mm, and the center-to-center spacing between the two optrodes is 400 µm. We demonstrate imaging of fluorescent beads with spatial resolution better than 4 µm, a FOV from one optrode as large as 400 µm, and imaging speeds as fast as 30 Hz (limited by our camera sensor). We applied this microscope to imaging of fluorescently stained cell walls in plant-stem sections, and also to in vivo imaging of a live stem. Other contributions of this work include the collection of ground-truth images from the same surface of the sample (allowing thick samples to be imaged), and the use of contrast enhancement in the ground-truth data (see details in Supplement 1) to reduce out-of-focus fluorescence and improve the resolution of the reconstructed images (by almost a factor of 2).

2. Experiments

2.1 Optrode-array microscope setup

We modified a conventional epi-fluorescence widefield microscope by placing the optrode array between the objective lens (working distance = 1.2 mm, Olympus PLN 20x) and the sample. In order to obtain training data, we used a pair of matched objective lenses to create a 4F relay system, and inserted a beamsplitter to acquire images with a conventional (reference) microscope. The system is illustrated in Fig. 1(a). The light source is a light-emitting diode (LED) with center wavelength = 470 nm and bandwidth (FWHM) = 28 nm (model number M470L5, Thorlabs), and an excitation filter (center wavelength = 472 nm, bandwidth = 30 nm, FF02-472/30-25, BrightLine) was used. The optrode array was uniformly illuminated, and a machined metal plate (stainless steel, thickness = 0.1 mm) was used to block excitation light in the regions between the optrodes. The illumination area under each optrode depends upon the distance between the optrode and the sample, as discussed later. Unless specified otherwise, the distance between the optrode and the sample was made as small as possible, and the illuminated region was approximately a circle of diameter ∼72 µm (see Fig. S8(a) in Supplement 1). Each optrode collected fluorescence from within its FOV, and the pattern formed on its proximal end was recorded by an sCMOS camera (2048 × 2048 pixels, pixel size = 6.5 µm, Hamamatsu C11440) through a tube lens and an emission filter. Ground-truth images were captured by the reference image sensor (1280 × 1024 pixels, pixel size = 3.6 µm, AmScope MU130). The slide samples were mounted on a 3-axis motorized stage (two 1-axis stages, PT1-Z8, Thorlabs) and a high-load vertical translation stage (MLJ150, Thorlabs). Note that images from both optrodes are collected simultaneously. A calibration step is used to align the region-of-interest of each optrode to its corresponding ground-truth image (details in Supplement 1).
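As a concrete illustration of the calibration and cropping step, the sketch below shows how a pair of aligned regions-of-interest could be cut from each camera frame and resized to the 128 × 128 inputs used by the networks. This is a minimal sketch assuming hypothetical ROI coordinates and the NumPy/OpenCV libraries; the actual coordinates and alignment procedure are given in Supplement 1.

```python
import numpy as np
import cv2  # OpenCV, assumed available for resizing

# Hypothetical ROIs (top row, left column, side length) for the two optrodes on each
# sensor; the real coordinates come from the calibration step described in Supplement 1.
OPTRODE_ROIS = {"left": (620, 480, 350), "right": (620, 1180, 350)}      # sCMOS frame
GROUND_TRUTH_ROIS = {"left": (400, 300, 128), "right": (400, 740, 128)}  # reference camera

def crop_and_downsample(frame, roi, out_size=128):
    """Crop a square ROI from a raw frame, resize to out_size x out_size, normalize to [0, 1]."""
    r, c, s = roi
    patch = frame[r:r + s, c:c + s].astype(np.float32)
    patch = cv2.resize(patch, (out_size, out_size), interpolation=cv2.INTER_AREA)
    return (patch - patch.min()) / (patch.ptp() + 1e-8)

# Split one simultaneously acquired frame pair into per-optrode image pairs
optrode_frame = np.random.rand(2048, 2048)   # stand-in for an sCMOS frame
gt_frame = np.random.rand(1024, 1280)        # stand-in for a reference-camera frame
pairs = {side: (crop_and_downsample(optrode_frame, OPTRODE_ROIS[side]),
                crop_and_downsample(gt_frame, GROUND_TRUTH_ROIS[side]))
         for side in ("left", "right")}
```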


Fig. 1. Sketch of the experimental system. (a) The system is comprised of an epifluorescence microscope coupled to the proximal end of an optrode array. A 4-F system consisting of two objectives is used to relay the image from the sample to the distal end of the optrode array. A beamsplitter was used to collect the ground-truth images with a conventional microscope. The inset shows a photograph of a large optrode array. Note that we used a 1 × 2 array in our experiments in order to fit the region-of-interest within the FOV of the objective above the optrode array. (b) Imaging process. The regions-of-interest are used to crop both the ground-truth and optrode image frames to separate the data for the two optrodes. The images (128 × 128 pixels after downsampling) are then processed by their corresponding U-nets to produce the reconstructed images. U-net-L is the U-net trained with the dataset from the left optrode; U-net-R is the U-net trained with the dataset from the right optrode. Details of the U-nets are included in Supplement 1 (Fig. S2).


A 4 × 4 array of glass optrodes was fabricated on a 400 µm-thick glass backplane, with each probe measuring ∼1.2 mm in length and spaced on a 400 µm pitch. The probe diameter is 80–85 µm, with a pyramidal tip (height ∼40 µm) at the distal end. Details of the fabrication process have been reported elsewhere [26] and are also summarized in Supplement 1.

2.2 Deep-neural network

The image transmitted through each optrode is related to the sample via a space-variant point-spread function. As we have shown previously [19–21], an auto-encoder network such as the U-net is a very effective architecture for learning and inverting such a transformation. Here, we modified the standard U-net, which comprises encoder and decoder sections connected by skip connections. The encoder and decoder sections include dense blocks, each consisting of two convolutional layers with ReLU activation functions and a batch-normalization layer. Compared to prior work, here we added two more layers and adjusted the filter size in each layer to improve the mean-absolute error (MAE) and structural similarity index measure (SSIM) of the reconstructed images. A comparison of these metrics against the prior U-net architecture is summarized in Table S1. Furthermore, we combined the pixel-wise cross-entropy and SSIM to create a multi-objective loss function for training. The loss function is defined as:

$$\mathrm{L} = \frac{1}{N}\sum_{i}\left[ -g_i \log p_i - (1 - g_i)\log(1 - p_i) \right] + \Big( 1 - \mathrm{mean}\big(\mathrm{SSIM}(g, p)\big) \Big),$$
where N is the number of pixels in each image, g and p represent the ground-truth and predicted images, and gi and pi represent the corresponding ground-truth and predicted pixel intensities, respectively. The U-net was trained and evaluated separately for each optrode. The frames captured by the sCMOS and reference cameras contain information from both optrodes simultaneously. Regions-of-interest (ROIs) were defined for each camera and aligned to one another (see details in Supplement 1). These ROIs were then used to crop each frame into a pair of images (one for each optrode; see Fig. 1(b)). Each optrode image was down-sampled from 350 × 350 pixels to 128 × 128 pixels in order to match the size of the ground-truth images. The U-net was trained and tested on the following hardware: Intel Core i7-4790 CPU and NVIDIA GeForce GTX 970, resulting in an average image-reconstruction time of 2.3 ms.
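A minimal PyTorch sketch of this multi-objective loss is shown below. For simplicity it computes a single global SSIM per image rather than the windowed SSIM used by standard implementations, so it illustrates the structure of the loss function above rather than reproducing the released training code.

```python
import torch
import torch.nn.functional as F

def global_ssim(g, p, c1=0.01**2, c2=0.03**2):
    """Single global SSIM per image (no sliding window), for intensities in [0, 1]."""
    g, p = g.flatten(1), p.flatten(1)
    mu_g, mu_p = g.mean(dim=1), p.mean(dim=1)
    var_g, var_p = g.var(dim=1, unbiased=False), p.var(dim=1, unbiased=False)
    cov = ((g - mu_g[:, None]) * (p - mu_p[:, None])).mean(dim=1)
    return ((2 * mu_g * mu_p + c1) * (2 * cov + c2)) / \
           ((mu_g**2 + mu_p**2 + c1) * (var_g + var_p + c2))

def combined_loss(pred, target):
    """Pixel-wise binary cross-entropy plus (1 - mean SSIM), following the loss above."""
    bce = F.binary_cross_entropy(pred, target)          # mean over all pixels
    ssim_term = 1.0 - global_ssim(target, pred).mean()  # averaged over the batch
    return bce + ssim_term

# Usage: pred is the network output (after a sigmoid) and target the ground-truth
# image, both (batch, 1, 128, 128) tensors with values in [0, 1].
pred = torch.rand(4, 1, 128, 128, requires_grad=True)
target = torch.rand(4, 1, 128, 128)
loss = combined_loss(pred, target)
loss.backward()
```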

2.3 Preparation of plant samples

Branches from Populus nigra × deltoides genotype GWR_50_102 poplar hybrid cuttings (4 months after rooting) were removed at the 24th internode. Cut stem ends were immediately placed in water and cut a second time while submerged. Branches were then transferred to 0.5× Murashige and Skoog solution. Images were taken within 24 h of branch removal. For preparation of sections, a portion of stem from the 12th internode was sectioned (thickness ∼40 µm) on a Leica VT1000S vibratome. Sections were stored in water until staining with 200 µg mL−1 Auramine O for 5 min, followed by rinsing twice in water for 5 min. For the incised stem, a transverse cut was made with a sharp razor blade just prior to imaging and the apical portion of the stem was removed. The exposed surface was stained by adding 500 µL of 200 µg mL−1 Auramine O solution onto the surface for 10 s, followed by rinsing with water and immediate imaging.

3. Results

We used fluorescent beads (2% solids, 4 µm-diameter green FluoSpheres sulfate microspheres; see details in Supplement 1) sandwiched between a coverslip and a microscope slide to evaluate the performance of our system. A dataset containing 22,860 image-pairs was acquired for each optrode (note that the images from the two optrodes are acquired simultaneously). The U-net for each optrode was trained with 19,860 images using an Adam optimizer with a learning rate of 1 × 10−4. Of the remaining images, 2,000 were used for validation and 1,000 for testing the trained network. Figure 2 shows three exemplary results from each optrode. The trained networks are able to reconstruct the images remarkably well. The average SSIM and MAE of the test images are summarized in Table S2. By evaluating intensity cross-sections, we estimate that each optrode is able to resolve beads spaced by ∼8 µm, and that the full-width at half-maximum (FWHM) of isolated beads is 3.9 µm. It is noteworthy that the resolution of both optrodes is almost identical. The FOV of the reconstructed images is ∼72 µm (diameter). We also computed the apparent bead diameter over a large number of bead images, obtaining a mean of 4.2 µm (standard deviation = 0.42 µm; see sections 13 and 14 in Supplement 1).
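The FWHM values quoted above are obtained from intensity cross-sections through isolated beads. As an illustrative sketch (not the authors' exact analysis script), the half-maximum crossings can be located by linear interpolation along a 1D cross-section, assuming the 128-pixel reconstruction grid spans the ∼72 µm FOV:

```python
import numpy as np

def fwhm_um(profile, pixel_size_um=72.0 / 128):
    """Estimate the FWHM (in µm) of a 1D intensity cross-section through a bead."""
    profile = np.asarray(profile, dtype=float)
    profile -= profile.min()
    half = profile.max() / 2.0
    above = np.where(profile >= half)[0]
    left, right = above[0], above[-1]
    # Linearly interpolate the two half-maximum crossings
    if left > 0:
        y0, y1 = profile[left - 1], profile[left]
        x_left = (left - 1) + (half - y0) / (y1 - y0)
    else:
        x_left = float(left)
    if right < len(profile) - 1:
        y0, y1 = profile[right], profile[right + 1]
        x_right = right + (y0 - half) / (y0 - y1)
    else:
        x_right = float(right)
    return (x_right - x_left) * pixel_size_um

# Example: a Gaussian-like bead profile (sigma = 3 pixels) on the reconstruction grid
x = np.arange(128)
profile = np.exp(-0.5 * ((x - 64) / 3.0) ** 2)
print(f"FWHM ≈ {fwhm_um(profile):.2f} µm")  # 2.355 * 3 px * 0.5625 µm/px ≈ 4.0 µm
```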


Fig. 2. Imaging fluorescent beads with the 1 × 2 optrode array. (a) Left optrode. (b) Right optrode. SSIM values for reconstructed images are labelled. Cross-sections through single beads and closely-spaced bead pairs were used to estimate resolution. The nominal bead diameter is 4 µm and we resolved a mean diameter of 4.2 µm (standard deviation = 0.42 µm). Also see Figs. S12–S15.


The excitation beam diverges after it exits the optrode (see illustration in Fig. 3(a)). We confirmed this by recording the excitation-light distribution beyond the optrode, as shown in Fig. S8(a). In order to characterize the increase in FOV, we captured 29,814 and 29,829 pairs of images of fluorescent beads at distances of 300 µm and 400 µm, respectively. The FOV was estimated by averaging all the ground-truth images in each plane (see Fig. S8(b)). The FOV (diameter) versus the distance between the optrode tip and the sample is plotted in Fig. 3(b), which indicates an average divergence full-angle of 49.4° [25]. This corresponds to a numerical aperture (in air) of 0.42, which is consistent with the optrode geometry based on previous calculations [25]. We trained a separate pair of U-nets at each plane (and for each optrode). In each case we used 2,000 pairs of images for validation and set aside 2,000 pairs of images for testing the trained networks (additional details in Supplement 1). As shown in Fig. 3(c), the reconstructed results show excellent agreement with the ground-truth images in all cases, with average SSIM/MAE of 0.92/0.01 and 0.90/0.01 at distances of 300 µm and 400 µm, respectively (see Table S2). At a distance of 400 µm, the FOV of one optrode slightly overlaps with that of its neighbor, which will allow the FOVs of the two optrodes in the array to be combined using stitching algorithms in the future.
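These numbers can be cross-checked with a simple geometric model (our assumption: the illuminated FOV diameter grows as the diameter at the tip plus twice the distance times the tangent of the divergence half-angle):

```python
import math

full_angle_deg = 49.4   # measured divergence full-angle
tip_fov_um = 72.0       # illuminated diameter at (near-)zero distance from the tip
half_angle = math.radians(full_angle_deg / 2)

# Effective numerical aperture in air: NA = n * sin(theta_half) with n = 1
print(f"NA ≈ {math.sin(half_angle):.2f}")   # ≈ 0.42, matching the reported value

# Geometric estimate of the FOV diameter at a distance d from the optrode tip
for d_um in (0, 300, 400):
    fov = tip_fov_um + 2 * d_um * math.tan(half_angle)
    print(f"d = {d_um:3d} µm -> FOV ≈ {fov:.0f} µm")
# d = 300 µm -> ≈ 348 µm; d = 400 µm -> ≈ 440 µm, i.e. slightly exceeding the
# 400 µm optrode pitch, consistent with the overlap noted above.
```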


Fig. 3. Increasing the field-of-view beyond the optrode diameter. (a) Excitation light diverges beyond the optrode with an approximate full angle of 49.4°. This corresponds to an effective NA of 0.42 (in air), which is consistent with prior work [18]. By increasing the distance of the sample from the optrode tip (d), the excitation area is increased (see Fig. S8). Example ground-truth images at various distances are shown. (b) The FOV is averaged over ∼30,000 ground-truth images at each d. (c) By training separate networks for each plane (and each optrode), image reconstructions at the larger FOV are demonstrated. When d = 400 µm, the FOV is slightly larger than 400 µm, the spacing between the optrodes, which could enable image stitching.


Since image reconstruction is very fast, the imaging speed is limited primarily by the frame rate of the sensor and the brightness of the fluorophores. Experiments with stationary plant samples indicate that video frame rates are feasible (see Fig. S5 in Supplement 1). In order to explore imaging of dynamic events, we fabricated a fluidic channel (see details in Supplement 1) and imaged the motion of fluorescent beads through this channel via capillary action. Video data at 30 frames per second (FPS) were acquired and reconstructed (see Visualization 1; a few frames from the video are shown in Fig. 4, with arrows indicating the motion of beads). This frame rate is limited by the sensor. The ground-truth video is included in Visualization 2 for comparison; the ground-truth sensor limited the achievable frame rate to 25 FPS. We also analyzed the sensor and temporal noise, and concluded that shorter exposure times reduce the SNR (see section 12 in Supplement 1). This can be partially mitigated by using sensor pixels with lower dark noise.
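Because each 128 × 128 reconstruction takes only a few milliseconds, video can be reconstructed frame by frame. The sketch below shows one way to do this with a trained PyTorch model; the interface and the placeholder network are our assumptions for illustration, not the released code.

```python
import time
import torch

@torch.no_grad()
def reconstruct_video(model, frames, device="cpu"):
    """Reconstruct a stack of per-optrode frames of shape (N, 128, 128), one at a time."""
    model.eval().to(device)
    out, t0 = [], time.perf_counter()
    for frame in frames:
        x = frame.to(device).float().unsqueeze(0).unsqueeze(0)  # (1, 1, 128, 128)
        out.append(model(x).squeeze().cpu())
    fps = len(frames) / (time.perf_counter() - t0)
    print(f"reconstruction throughput ≈ {fps:.0f} frames/s")
    return torch.stack(out)

# Usage with a placeholder network standing in for the trained U-net
dummy_model = torch.nn.Conv2d(1, 1, kernel_size=3, padding=1)
video = torch.rand(30, 128, 128)   # one second of 30 FPS optrode frames
recon = reconstruct_video(dummy_model, video)
```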


Fig. 4. Dynamic imaging of capillary flow. Consecutive frames from the reconstructed and ground-truth videos of fluorescent beads in a fluidic channel undergoing capillary flow. Note that the regions-of-interest in the reference camera do not perfectly match those in the optrode-acquired frame. The frame rates for the U-net and ground-truth images were 30 FPS and 25 FPS, respectively. See Visualization 1 and Visualization 2. Also see section 12 in Supplement 1 for the temporal-noise analysis.


We next applied this technique to visualize plant tissue structure in stem sections from poplar hybrids (Populus nigra × deltoides genotype GWR_50_102). The stem from a plant was sliced into 40 µm transverse sections and bathed in Auramine O solution to stain the lignin and suberin cell-wall components [27]. The sections were mounted in water on slides, and the coverslips were sealed with nail polish. A dataset containing over 23,000 images was collected from these samples. As before, 2,000 images were set aside for testing and a separate 2,000 for validation, while the rest were used for training. Three exemplary images from each optrode are summarized in Figs. 5(a) and 5(b). The reconstructed results agree well with the ground-truth images (average SSIM/MAE for the left and right optrodes were 0.79/0.05 and 0.78/0.05, respectively; see Table S2). Since the sections are ∼40 µm thick, there is a strong background from out-of-focus fluorescence, which reduces the overall image contrast. In order to mitigate this, we first digitally adjusted the contrast of the ground-truth images, then retrained the deep neural networks and tested their performance (details of each step are included in Supplement 1). The results are summarized in Fig. 5(c). The network trained using digitally enhanced ground-truth images (labelled U-net-ce) clearly outperforms that trained using unenhanced ground-truth images (labelled U-net). We note that the contrast of the output images from U-net-ce is significantly higher, background fluorescence is almost completely eliminated, and the plant-cell wall (thickness ∼2.3 µm) is clearly resolved. In addition, this resolution is consistently achieved across the full FOV (diameter = 72 µm), as summarized in Fig. S7 (Supplement 1).
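The contrast-enhancement step applied to the ground-truth images before retraining is detailed in Supplement 1. As an illustration of the kind of operation involved (our stand-in, not necessarily the authors' exact procedure), a percentile stretch followed by a gamma > 1 suppresses the dim, diffuse out-of-focus background while preserving the bright cell-wall features:

```python
import numpy as np

def enhance_contrast(img, low_pct=5.0, high_pct=99.5, gamma=1.5):
    """Percentile stretch plus gamma > 1: clips diffuse background, keeps bright structure.

    Illustrative stand-in for the contrast enhancement described in Supplement 1,
    not the exact procedure used to generate the U-net-ce training data.
    """
    img = img.astype(np.float32)
    lo, hi = np.percentile(img, [low_pct, high_pct])
    stretched = np.clip((img - lo) / (hi - lo + 1e-8), 0.0, 1.0)
    return stretched ** gamma

# Example: enhance one 128 x 128 ground-truth image before adding it to the training set
gt = np.random.rand(128, 128).astype(np.float32)
gt_ce = enhance_contrast(gt)
```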


Fig. 5. Imaging sections of stems. Three example images from (a) the left and (b) the right optrode. Note that the images in the same row were acquired simultaneously (same frame). The fluorescence from the cell wall is well resolved. (c) Contrast enhancement of ground-truth images prior to training (U-net-ce) improves the contrast and resolution of the reconstructed images. The full-width at half-maximum of the resolved cell wall is ∼4 µm for U-net and ∼2 µm for U-net-ce. SSIM values are labelled in the reconstructed images. Inset: photograph of an exemplary stem section (thickness ∼40 µm).


In order to demonstrate imaging with a living plant, we carefully made an incision on a detached branch from a poplar hybrid (4.5 months old, rooted from a cutting). The cut stem was fed with 0.2× Murashige and Skoog solution throughout the experiment to sustain the cells. Two types of incisions were attempted: one transverse and another diagonal. Since the incised region was not flat, only one optrode was in focus. We recorded 106 image-pairs. The networks previously trained on plant sections (Fig. 5) were used for the image reconstructions. The reconstructed results from 4 example images are shown in Fig. 6 and show good agreement with the ground-truth images. Note that no retraining was performed for these experiments.


Fig. 6. Imaging live stems. We imaged the transverse and diagonal cut surfaces of separate detached (but live) stems. Only one optrode was in focus; its U-net, trained on stem sections, was used. No re-training was performed.


4. Discussion

The FOV of any lens (including a microscope objective) is limited primarily by off-axis aberrations to a small fraction (< 1/3) of its diameter. This trade-off can be overcome if we relax the imaging condition. Here, we use a non-imaging optic, an optrode, to transport light into and out of a sample. The acquired image bears little resemblance to the object, since this transport represents a space-variant transformation. By collecting sufficient training data, we show that a machine-learning algorithm can learn to effectively invert this transformation, producing anthropocentric images with high resolution. In this case, the FOV is limited by the excitation and collection efficiency (i.e., the relative power in the space-variant point-spread functions), and with an appropriate choice of geometries can be 1× to 5× the probe diameter itself. Here, we demonstrated imaging with the following metrics: resolved feature-widths ranging from 2.1 µm to 3.9 µm, FOV from 72 µm to ∼400 µm (with a probe diameter of 80 µm), and imaging speeds up to 30 FPS. Our demonstration included imaging of fluorescent beads, stained stem sections, and incised living stems. We did not demonstrate implantation of the optrodes here, although studies of chronic implantation of similar devices have been reported before [28,29]. Recently, we demonstrated computational microscopy using a commercial dual-cannula probe [30]. However, due to the cylindrical shape of the cannula, the FOV of each cannula was smaller than its diameter. Furthermore, the divergence angle of the excitation light limited the achievable FOV with multiple cannulae. Compared to this prior work, here we report the most rigorous and exhaustive set of ex vivo experiments to date.

Finally, we note one important limitation of our approach that requires additional work. The reconstruction of non-sparse images (and those with significant background fluorescence) is constrained by the signal-to-noise ratio of the acquired frames (see, for example, the images of longitudinal sections in Supplement 1). Although this is a common problem for all optical microscopies, it is especially critical for a computational microscope; we note, however, that this problem may be mitigated by acquiring, and contrast-enhancing (as illustrated in Fig. 5(c)), training data that is closely representative of the samples of interest. Acquiring large numbers of ground-truth images from deep tissue remains an open challenge, which may potentially be addressed with accurate simulation models [31]. Last, but not least, scaling to larger arrays of optrodes requires a high-NA, wide-FOV objective (not implanted) such that the larger optrode array can fit within its FOV. This challenge may be overcome by utilizing an array of high-NA microlenses, for example [32], where one microlens is matched to each optrode.

Funding

U.S. Department of Energy (55801063); National Institutes of Health (1R21EY030717-01).

Disclosures

RM: Univ. of Utah (P). The use of trade or firm names in this publication is for reader information and does not imply endorsement by the U.S. Department of Agriculture of any product or service.

Data availability

All data are available in the main text or the supplementary materials. Code and representative data are available in Ref. [33].

Supplemental document

See Supplement 1 for supporting content.

References

1. W. Denk, J. H. Strickler, and W. W. Webb, “Two-photon laser scanning fluorescence microscopy,” Science 248(4951), 73–76 (1990). [CrossRef]  

2. K. Takasaki, R. Abbasi-Asl, and J. Waters, “Superficial bound of the depth limit of 2-photon imaging in mouse brain,” eNeuro 7(1), ENEURO.0255-19.2019 (2020). [CrossRef]  

3. N. G. Horton, K. Wang, D. Kobat, C. G. Clark, F. W. Wise, C. B. Schaffer, and C. Xu, “In vivo three-photon microscopy of subcortical structures within an intact mouse brain,” Nat. Photonics 7(3), 205–209 (2013). [CrossRef]  

4. T. Wang and C. Xu, “Three-photon neuronal imaging in deep mouse brain,” Optica 7(8), 947–960 (2020). [CrossRef]  

5. B. Li, C. Wu, M. Wang, K. Charan, and C. Xu, “An adaptive excitation source for high speed multiphoton microscopy,” Nat. Methods 17(2), 163–166 (2020). [CrossRef]  

6. S. Malvaut, V.-S. Constantinescu, H. Dehez, S. Dodric, and A. Saghatelyan, “Deciphering brain function by miniaturized fluorescence microscopy in freely behaving animals,” Front. Neurosci. 14, 819 (2020). [CrossRef]  

7. S. Chen, Z. Wang, D. Zhang, A. Wang, L. Chen, H. Cheng, and R. Wu, “Miniature fluorescence microscopy for imaging brain activity in freely-behaving animals,” Neurosci. Bull. 36(10), 1182–1190 (2020). [CrossRef]  

8. A. Sinha, J. Lee, S. Li, and G. Barbastathis, “Lensless computational imaging through deep learning,” Optica 4(9), 1117–1125 (2017). [CrossRef]  

9. G. Barbastathis, A. Ozcan, and G. Situ, “On the use of deep learning for computational imaging,” Optica 6(8), 921–943 (2019). [CrossRef]  

10. K. Yanny, N. Antipa, W. Liberti, S. Dehaeck, K. Monakhova, F. L. Liu, K. Shen, R. Ng, and L. Waller, “Miniscope3D: optimized single-shot miniature 3D fluorescence microscopy,” Light: Sci. Appl. 9(1), 171 (2020). [CrossRef]  

11. K. Yanny, K. Monakhova, R. W. Shuai, and L. Waller, “Deep learning for fast spatially varying deconvolution,” Optica 9(1), 96–99 (2022). [CrossRef]  

12. M. Sato, S. Sano, H. Watanabe, Y. Kudo, and J. Nakai, “An aspherical microlens assembly for deep brain fluorescence microendoscopy,” Biochem. Biophys. Res. Commun. 527(2), 447–452 (2020). [CrossRef]  

13. G. Kim and R. Menon, “An ultra-small three dimensional computational microscope,” Appl. Phys. Lett. 105(6), 061114 (2014). [CrossRef]  

14. S. M. Kolenderska, O. Katz, M. Fink, and S. Gigan, “Scanning-free imaging through a single fiber by random spatio-spectral encoding,” Opt. Lett. 40(4), 534–537 (2015). [CrossRef]  

15. B. Rahmani, D. Loterie, G. Konstantinou, D. Psaltis, and C. Moser, “Multimode optical fiber transmission with a deep learning network,” Light: Sci. Appl. 7(1), 69 (2018). [CrossRef]  

16. N. Borhani, E. Kakkava, C. Moser, and D. Psaltis, “Learning to see through multimode fibers,” Optica 5(8), 960–966 (2018). [CrossRef]  

17. G. Kim, N. Nagarajan, M. Capecchi, and R. Menon, “Cannula-based computational fluorescence microscopy,” Appl. Phys. Lett. 106(26), 261111 (2015). [CrossRef]  

18. G. Kim, N. Nagarajan, E. Pastuzyn, K. Jenks, M. Capecchi, J. Shepherd, and R. Menon, “Deep-brain imaging via epi-fluorescence Computational Cannula Microscopy,” Sci. Rep. 7(1), 44791 (2017). [CrossRef]  

19. R. Guo, Z. Pan, A. Taibi, J. Shepherd, and R. Menon, “Computational cannula microscopy of neurons using neural networks,” Opt. Lett. 45(7), 2111–2114 (2020). [CrossRef]  

20. R. Guo, Z. Pan, A. Taibi, J. Shepherd, and R. Menon, “3D computational cannula fluorescence microscopy enabled by artificial neural networks,” Opt. Express 28(22), 32342–32348 (2020). [CrossRef]  

21. R. Guo, S. Nelson, M. Regier, M. W. Davis, E. M. Jorgensen, J. Shepherd, and R. Menon, “Scan-less machine-learning-enabled incoherent microscopy for minimally-invasive deep-brain imaging,” Opt. Express 30(2), 1546–1554 (2022). [CrossRef]  

22. E. R. Andresen, S. Sivankutty, V. Tsvirkun, G. Bouwmans, and H. Rigneault, “Ultrathin endoscopes based on multicore fibers and adaptive optics: a status review and perspectives,” J. Biomed. Opt. 21(12), 121506 (2016). [CrossRef]  

23. O. Katz, P. Heidmann, M. Fink, and S. Gigan, “Non-invasive single-shot imaging through scattering layers and around corners via speckle correlations,” Nat. Photonics 8(10), 784–790 (2014). [CrossRef]  

24. R. Guo, S. Nelson, and R. Menon, “A needle-based deep-neural-network camera,” Appl. Opt. 60(10), B135–B140 (2021). [CrossRef]  

25. T. V. F. Abaya, M. Diwekar, S. Blair, P. Tathireddy, L. Reith, and F. Solzbacher, “Deep-tissue light delivery via optrode arrays,” J. Biomed. Opt. 19(1), 015006 (2014). [CrossRef]  

26. R. Scharf, C. F. Reiche, N. McAlinden, Y. Cheng, E. Xie, R. Sharma, P. Tathireddy, L. Rieth, K. Mathieson, and S. Blair, “A compact integrated device for spatially selective optogenetic neural stimulation based on the Utah Optrode Array,” Proc. SPIE 10482, 104820M (2018). [CrossRef]

27. R. Ursache, T. G. Andersen, P. Marhavý, and N. Geldner, “A Protocol for Combining Fluorescent Proteins with Histological Stains for Diverse Cell Wall Components,” Plant J. 93(2), 399–412 (2018). [CrossRef]  

28. L. Wang, K. Huang, C. Zhong, L. Wang, and Y. Lu, “Fabrication and modification of implantable optrode arrays for in vivo optogenetic applications,” Bio. Phys. Rep. 4(2), 82–93 (2018). [CrossRef]  

29. A. M. Clark, A. Ingold, C. F. Reiche, D. Cundy III, J. L. Balsor, F. Federer, N. McAlinden, Y. Cheng, J. D. Rolston, L. Rieth, M. D. Dawson, K. Mathieson, S. Blair, and A. Angelucci, “An Optrode Array for Spatiotemporally Precise Large-Scale Optogenetic Stimulation of Deep Cortical Layers in Non-human Primates,” bioRxiv 2022.02.09.479779 (2022). [CrossRef]

30. E. Mitra, R. Guo, S. Nelson, N. Nagarajan, and R. Menon, “Computational microscopy for fast widefield deep-tissue fluorescence imaging using a commercial dual-cannula probe,” Opt. Continuum 1(9), 2091–2099 (2022). [CrossRef]  

31. M. Weigert, U. Schmidt, T. Boothe, et al., “Content-aware image restoration: pushing the limits of fluorescence microscopy,” Nat. Methods 15(12), 1090–1097 (2018). [CrossRef]  

32. R. Menon, D. Gil, and H. I. Smith, “Experimental characterization of focusing by high-numerical-aperture zone plates,” J. Opt. Soc. Am. A 23(3), 567–571 (2006). [CrossRef]  

33. https://github.com/theMenonlab/CCM_project/.

Supplementary Material (3)

Name      Description
Supplement 1       Supplementary document
Visualization 1       Reconstructed video
Visualization 2       Ground-truth video

