
Fluorescence fluctuation-based super-resolution microscopy using multimodal waveguided illumination

Open Access

Abstract

Photonic chip-based total internal reflection fluorescence microscopy (c-TIRFM) is an emerging technology enabling a large TIRF excitation area decoupled from the detection objective. Additionally, due to the inherent multimodal nature of wide waveguides, it is a convenient platform for introducing temporal fluctuations in the illumination pattern. The fluorescence fluctuation-based nanoscopy technique multiple signal classification algorithm (MUSICAL) does not assume stochastic independence of the emitter emission and can therefore exploit fluctuations arising from other sources, such as multimodal illumination patterns. In this work, we demonstrate and verify the utilization of fluctuations in the illumination for super-resolution imaging using MUSICAL on actin in salmon keratocytes. The resolution improvement was measured to be 2.2–3.6-fold compared to the corresponding conventional images.

© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Total internal reflection fluorescence microscopy (TIRFM) is a method used to obtain optical sectioning and sharp contrast of the sample close to the substrate layer [1]. The high-contrast images resulting from the elimination of out-of-focus signal of TIRFM have been exploited in 2D implementations of essentially all fluorescence-based super-resolution microscopy techniques, e.g. structured illumination microscopy (SIM) [2], stimulated emission depletion (STED) [3] and single molecule localization microscopy (SMLM) [4].

The typical lens-based implementation for TIRFM leads to disadvantages such as inhomogeneous illumination field (Gaussian-like intensity profile) and lack of flexibility in the choice of objective lens, limiting the associated sample illumination area and field-of-view (FOV). Approaches to achieve a more homogeneous illumination in lens-based TIRFM have been developed [5,6], and greater experimental flexibility—like a larger FOV and excitation area—can be achieved using prism-based illumination [7]. Nevertheless, the typical lens-based drawbacks still seem to limit most TIRFM systems.

Mass-producible photonic waveguide chips have been introduced as an illumination source for TIRFM (c-TIRFM). Transcending the main limitations of the lens-based approach and allowing for the possibilities within integrated optical systems, c-TIRFM has been demonstrated for super-resolution applications such as SMLM [8-10] and SIM [11]. By using high refractive index materials, waveguides achieve higher spatial frequencies for illumination than far-field optics, allowing the achievable resolution to be pushed beyond that of conventional implementations [8,11].

Experimental setup adaptations have also extended the gentle and arbitrarily large c-TIRFM illumination area towards live-cell imaging. For example, c-TIRFM imaging of living cells was demonstrated by Tinguely et al. [12] on cell lines (Merkel cell carcinoma (MCC13) and human trophoblast (HTR-8)) and by Opstad et al. [13] on primary neurons (from rat hippocampus and Xenopus retina). The natural next step of development is super-resolution c-TIRFM for live-cell imaging applications. In terms of power and acquisition time, waveguide-based SIM can be seen as the most promising implementation. However, the necessity of multiple waveguide arms, inputs, splitters and phase controllers adds steps to the fabrication process and parts to the experimental setup [11]. A simpler avenue is the exploitation of multimodal waveguide illumination patterns for fluorescence fluctuation-based super-resolution microscopy (FF-SRM) techniques. Similar to SIM and in contrast to SMLM techniques, FF-SRM techniques allow for high data acquisition speed together with an almost free choice of imaging media and (non-blinking) fluorophores. Photonic chip-based FF-SRM has been explored by Diekmann et al. [8] for entropy-based super-resolution imaging (ESI) [14], by Priyadarshi et al. [15] for super-resolution radial fluctuations (SRRF) [16], and by Jayakumar et al. for balanced super-resolution optical fluctuation imaging (bSOFI) in combination with Haar wavelet kernel (HAWK) analysis [17].

In particular, the use of computational approaches for breaking the correlation of the illumination patterns, which otherwise enhances illumination artefacts in these methods, was explored in [17]. In essence, the multimoded illumination presented an obstacle rather than an opportunity in a straightforward application of these techniques. The underlying cause is that the temporal auto-correlation operation of these techniques correlates the illumination patterns relative to the non-illuminated regions, artificially enhancing the contrast of the illumination patterns; this enhancement is often not removed by a low-pass operation such as averaging or summation over time.

FF-SRM techniques typically rely on the intrinsically low levels of signal fluctuations commonly exhibited by fluorophores under a temporally stable illumination field. Most of them work under the assumption that the temporal fluctuations in the fluorescence emissions from different fluorophores are independent of each other. This condition is violated when using the illumination as a mechanism for introducing the signal fluctuations. Different from other FF-SRM methods, MUSICAL [18] utilizes eigenanalysis to extract super-resolution details from the fluctuation image sequence, allowing it to exploit other sources of fluctuations, such as those within the illumination pattern. Image reconstruction from the intrinsic intensity fluctuations of fluorophores has previously been found challenging and the cause of deleterious reconstruction artifacts for both the MUSICAL and SOFI [19] techniques [20]. The difficulty, and the need for long image sequences, when reconstructing images from the random and often faint intrinsic fluctuations of fluorophores make it particularly interesting to explore whether engineered illumination can be exploited for super-resolution techniques beyond SIM and STED.
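
To make the role of the eigenanalysis concrete, the following is a minimal, illustrative Python sketch of a MUSICAL-style computation on a single local window of a fluctuation stack. It is not the reference implementation of [18]: the window extraction, the PSF model, the hard subspace split via an integer n_signal (replaced by the soft thresholding of [21] in practice) and the contrast exponent alpha are simplifying assumptions.

import numpy as np

def musical_indicator_window(window_stack, psf_at_testpoints, n_signal, alpha=4):
    """Toy MUSICAL-style indicator for one local window.

    window_stack: (T, P) array; T frames, P pixels of the window (vectorized).
    psf_at_testpoints: (Q, P) array; PSF sampled at the window pixels for Q
        sub-pixel test points.
    n_signal: number of eigenimages assigned to the signal subspace.
    """
    # Eigenimages of the fluctuations: the right singular vectors span pixel space.
    _, _, vt = np.linalg.svd(window_stack, full_matrices=False)
    signal_subspace = vt[:n_signal]   # strong fluctuations (structure + illumination)
    noise_subspace = vt[n_signal:]    # weak fluctuations (noise)

    indicator = np.empty(len(psf_at_testpoints))
    for q, g in enumerate(psf_at_testpoints):
        g = g / np.linalg.norm(g)
        d_signal = np.linalg.norm(signal_subspace @ g)  # projection onto signal subspace
        d_noise = np.linalg.norm(noise_subspace @ g)    # projection onto noise subspace
        # An emitter at a test point makes its PSF nearly orthogonal to the noise
        # subspace, so the ratio (and hence the indicator) peaks at emitter locations.
        indicator[q] = (d_signal / max(d_noise, 1e-12)) ** alpha
    return indicator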

In this work, we have explored a multimodal waveguide platform to boost the signal fluctuations for the FF-SRM technique MUSICAL. In contrast to previous efforts, the illumination variation is experimentally controlled and maximized by acquiring only one image per illumination point. We show proof-of-concept super-resolution reconstruction fidelity via imaging of the same sample region using objective lenses of different numerical aperture (NA), display the influence of the number of frames on resolution and reconstruction accuracy, and discuss the importance of the scan starting point of the waveguide illumination coupling. Moreover, the successful on-chip cultivation and imaging of a new type of sample, primary salmon keratocytes, is demonstrated.

2. Methods

2.1 Sample preparation and imaging

An outline of the waveguide mode-scanning and sample preparation is displayed in Fig. 1. When one point on the facet of the waveguide is illuminated, one set of guided modes is launched in the waveguide. Such a single coupling point corresponds to one of the inputs in Fig. 1(a). The evanescent field intensity on the top surface of the waveguide is proportional to the intensity pattern of this set of modes, which is the illumination pattern that excites the fluorescent molecules in the sample close to the waveguide surface. Each input point on the facet corresponds to a distinct illumination pattern. Therefore, the changing waveguide illumination patterns were achieved by the sequential excitation of different sets of guided modes, formed by laser coupling at different locations along the waveguide input facet (Fig. 1(a)).
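
As a rough intuition aid for this mechanism, the following toy simulation (Python) mimics how moving the coupling point changes the interference pattern: each coupling position excites the guided modes with different weights, and the surface intensity is the squared magnitude of their coherent sum. The sinusoidal lateral profiles, the coupling-weight rule and all parameters are illustrative assumptions, not a model of the actual tantalum pentoxide chip.

import numpy as np

def toy_mode_pattern(coupling_pos, n_modes=20, width=60.0, npts=600):
    """Illustrative multimode interference pattern across the waveguide width
    for one coupling position (arbitrary units)."""
    x = np.linspace(0.0, width, npts)
    rng = np.random.default_rng(0)                         # fixed relative mode phases
    phases = rng.uniform(0.0, 2.0 * np.pi, n_modes)
    field = np.zeros(npts, dtype=complex)
    for m in range(1, n_modes + 1):
        profile = np.sin(m * np.pi * x / width)            # lateral profile of mode m
        weight = np.sin(m * np.pi * coupling_pos / width)  # toy overlap with the input spot
        field += weight * np.exp(1j * phases[m - 1]) * profile
    return np.abs(field) ** 2                              # evanescent intensity ~ |E|^2

# Stepping the coupling point along the facet yields a stack of distinct patterns.
pattern_stack = np.stack([toy_mode_pattern(p) for p in np.linspace(5.0, 55.0, 50)])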


Fig. 1. Waveguide mode-scanning and sample preparation. (a) The waveguide mode-scanning (or stepping) is achieved by changing the laser coupling point along the waveguide facet, sequentially exciting different sets of modes; (b) photonic chip with a PDMS chamber for sample and liquid confinement; (c) TIRFM images resulting from three different mode illumination patterns (650 nm excitation) and 0.3 NA collection objective; (d) salmon scale and cell harvesting. Approximate scale bars: b: 1 cm, c: 10 µm, d: 1 cm.


To allow for on-chip cell-culture and liquid confinement, the photonic waveguide chips were prepared with a PDMS chamber as displayed in Fig. 1(b).

Cells can be prepared on the chips as on cover glasses for other types of microscopy. However, the chips used for this work are opaque and must be imaged from the top using upright microscopy. To address this limitation, Priyadarshi et al. [15] have developed photonic chips on a transparent substrate, compatible also with inverted microscopy setups. The further development of chip-bottom dishes (analogous to glass-bottom dishes for confocal microscopy) would help the chip imaging technique become popular and widespread.

An example of signal fluctuations achieved through the described mode-scanning approach is shown in Fig. 1(c). Additionally, a video of the mode fluctuations for the same data set is available in Visualization 1.

The cells used were primary keratocytes obtained from Atlantic salmon scales. These cells are of particular research interest in connection with the wound healing of farmed fish. The scale harvesting process is displayed in Fig. 1(d). The scales were placed on the photonic chips to let the keratocyte skin cells migrate from the scales onto the waveguides before chemical fixation and fluorescence labelling of filamentous actin. This approach yielded a sheet of labelled keratocyte cells of variable density attached to the chip and waveguide surfaces. Scales that loosened during cell culture and sample preparation were removed from the chip culture dish, but most of the scales remained attached to the chip surface during imaging. As TIRFM cannot be performed through the scales, imaging was focused on migrated cells in the regions between the scales.

The imaging was performed using a 650 nm excitation laser together with a 670 nm longpass filter for signal collection. Complete sample preparation protocols, system description and an explanation of the waveguide mode formation are available in the detailed description of methods presented in Supplement 1, chapter 1 and Fig. S1.

3. Results and discussion

3.1 Verification of super-resolved image reconstruction

To verify that the MUSICAL super-resolution images accurately represent the underlying biological structures, the acquisition of 0.3 NA image stacks (obtained using waveguide mode-scanning) was accompanied by corresponding image stacks collected using a 1.0 NA objective lens, featuring a 3.33 times higher optical resolution according to the Abbe resolution criterion. Figure 2 shows the same sample area (filamentous actin in keratocytes) imaged using a 0.3 NA objective lens on the left (c-TIRFM mode-averaged), the MUSICAL reconstruction of the same image stack in the middle, and the mode-averaged c-TIRFM image using a 1.0 NA objective lens on the right. The results display good correspondence between the MUSICAL reconstruction and the high-NA ground-truth reference. Of particular interest are the closely spaced actin filaments unresolvable in the 0.3 NA image, but seen as clearly distinct features in the MUSICAL and 1.0 NA images. This is in contrast to the corresponding results provided by bSOFI and SRRF (Supplement 1, Fig. S8), which show the two filaments as connected. The MUSICAL reconstruction of the 1.0 NA data (Fig. 2(d), right panel) shows only minor resolution improvement over the corresponding sum image (left panel). This is likely explained by the frequency support of the waveguide illumination patterns.
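
For reference, the 3.33-fold factor follows directly from the Abbe criterion $d_{\mathrm{Abbe}} = \lambda_{\mathrm{em}}/(2\,\mathrm{NA})$: taking $\lambda_{\mathrm{em}} = 670$ nm (the lower edge of the emission filter passband used for the resolution analysis below), $d_{1.0} = 670/(2 \times 1.0) = 335$ nm and $d_{0.3} = 670/(2 \times 0.3) \approx 1117$ nm, a ratio of $1.0/0.3 \approx 3.33$.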


Fig. 2. Large FOV c-TIRFM and computational super-resolution. All panels are c-TIRFM images of the same 600 µm wide waveguide covered with salmon keratocytes (labelled for F-actin). The extensions from the cells are pseudopods which aid cellular movements. (a) Left: Sample overview displaying the entire waveguide width captured using a 0.3 NA water dipping objective (mode-averaged intensities). Scale bar: 100 µm. Mid panel: magnified view of the indicated region, scale bar: 10 µm. Right: MUSICAL reconstruction using the same data as averaged in the mid panel. Scale bar: 10 µm. The indicated rectangles are displayed magnified in panel c. (b) Same as in a, but using 1.0 NA. The magnified view (mid panel) serves as a ground truth reference for the 0.3 NA MUSICAL image in panel a. The right panel shows the MUSICAL reconstruction for the mode-averaged 1.0 NA data on the left. The indicated area is displayed magnified below in panel d. Scale bar left panel: 20 µm; mid and right panels: 10 µm. (c) 0.3 NA intensity line profiles for mode-averaged (left) and MUSICAL (right); (d) the same for 1.0 NA. The scale bars are 2 µm. The line profiles demonstrate a clear resolution improvement in the MUSICAL images as compared to the conventional (mode-averaged) images.


We bring to the readers’ notice that despite a good qualitative match between the MUSICAL image and the 1.0 NA image, there are evident differences. For example, the dynamic range of the MUSICAL image differs considerably from that of the 1.0 NA image, and the relative sharpness of features varies between the two: the pink circle in Fig. 3(c) indicates a region where the actin strands are more visible in the 1.0 NA image than in the MUSICAL image, while the yellow circle in panel d indicates strands appearing relatively sharper in the 0.3 NA MUSICAL image than in the 1.0 NA reference image.


Fig. 3. Effect of reconstruction stack size on image quality (0.3 NA). (a) Subdivision of an image stack (d1) into $n_1, n_2,\ldots , n_d$ via deinterleaving. Conventional images (mode-average) for (b) 0.3 NA and (c) 1.0 NA water dipping objectives. The pink circle indicates an area where the conventional image has significantly higher contrast than the MUSICAL image. (d) MUSICAL reconstruction of the complete 0.3 NA mode-scanned image stack (d1, 1499 frames). The yellow circle indicates a region where the MUSICAL image has higher contrast than the conventional image. (e-h) MUSICAL results using deinterleaved sub-stacks for deinterleaving parameter d = 3, 10, 20 and 30 (for the first (n1) out of d total stacks). (i-l) Sum of MUSICAL reconstructions of deinterleaved image stacks, of deinterleaving parameter d = 3, 10, 20 and 30. The sums of MUSICAL reconstructions of smaller image stacks have significantly shorter reconstruction times without apparent loss of resolution or image quality. The scale bar is 5 µm.


The region in the pink circle potentially corresponds to a situation where MUSICAL loses contrast due to computationally enhanced axial sectioning at the focal plane of the 0.3 NA collection objective, which may be offset from the focal plane of the collection objective used for the 1.0 NA image. While the optical sectioning property of MUSICAL has been reported earlier [21], future validation of this conjecture will be quite important when using a high-NA image as a "ground truth reference" for MUSICAL and other super-resolution approaches. Such an investigation may further be useful in the context of TIRFM systems for assessing super-sectioning ability as well as the potential loss of details. However, these investigations deserve a dedicated and elaborate treatment and are left for future work.

Now, we consider the situation in the yellow circle. Here, we note that the so-called "ground truth reference", although an image of much higher resolution than the 0.3 NA image, is still a blurred version of the underlying sample, with structural details increasingly attenuated towards higher frequencies until completely invisible. Therefore, it is possible that the structures appearing sharper in the MUSICAL image than in the 1.0 NA image are a more accurate representation of the underlying sample. For comparison, in SIM imaging the structural details in a frequency range corresponding to that of the illumination pattern become modulated with additional contrast (and image intensity) and clarity, which results in a better reconstruction of features that remain unclear or invisible in the conventional image. This is not an issue unless one intends to quantify fluorescence intensities (or quantities derived from them) from the super-resolved images in addition to the qualitative rendering of morphological features of the sample such as shape, co-localization, and orientation. In this sense, the MUSICAL images reconstructed from illumination-modulated raw data are analogous to the SIM approaches as well. We also include a more detailed note on resolution comparison between SIM and MUSICAL approaches in Supplement 1, note 2.F.

For the common case of a fixed objective lens magnification and a limited camera sensor size, the main advantage of using a 0.3 NA objective over a higher NA lens is the possibility of collecting, all at once, an extremely large FOV appropriate for the large waveguide excitation area. For the particular example displayed here, the 10X 0.3 NA objective captures a 36 times larger area than the 60X 1.0 NA objective. As we demonstrate, the loss of resolution can be at least partly compensated for by applying a computational super-resolution algorithm post-acquisition. The MUSICAL reconstruction of the entire 0.3 NA collection area displayed in Fig. 2(a) is provided in Supplement 1, Fig. S7.
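
For a fixed camera sensor, the 36-fold larger area follows from the squared magnification ratio: $(60/10)^2 = 36$.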

For all results presented here and in the following sections, the MUSICAL threshold parameter was chosen automatically (and individually for each sub-stack) according to the soft thresholding scheme presented in [21]. This is a first demonstration of this novel thresholding scheme in the context of waveguide-generated excitation patterns, resolving some major challenges in the reliability and practical usability of MUSICAL. The stack-specific soft thresholding scheme (MUS-S) was found to be greatly superior to other MUSICAL reconstruction schemes for the case of multimodal waveguides. A detailed comparison of the different thresholding schemes is presented in Supplement 1, chapter 5 and Fig. S6.

3.2 Effect of number of multimoded illumination patterns

The number of frames, i.e. the stack size, required for reliable MUSICAL image reconstruction is an important parameter in several respects. To avoid unnecessarily lengthy reconstruction times, it is desirable to use just enough frames to reach the best possible or required image quality. Long time-sequences also cause undesirable photobleaching and loss of image quality over time. From the perspective of live-cell imaging, the image acquisition time and potential phototoxicity are of considerable importance, and they scale linearly with the number of frames. The fewer frames required for each MUSICAL time-point, the better the time resolution and the more time-points can be acquired before photobleaching.

The stack size is equivalent to the number of multimoded illumination patterns, and consequently to the number of probing or sampling points on the input facet of the waveguide. The question arises of how to optimally choose a limited number of mode patterns for fluctuation-based super-resolution microscopy. A detailed mathematical treatment of this design problem and guidelines on how to solve it in practice are given in Supplement 1, chapter 2. Below, we present the experimental verification of the effect of the input facet sampling on the MUSICAL reconstructions of actin in keratocytes. Additional studies of this data in connection with the mathematical aspects discussed in Supplement 1, chapter 2 are presented in Supplement 1, chapter 3 and Figs. S2-S4.

In order to experimentally verify the effect of the input facet sampling on the MUSICAL results, we oversampled the input facet and acquired c-TIRFM images of the sample to generate an image stack with many more frames than considered necessary. For the 600 µm wide tantalum pentoxide waveguide considered here, we acquired images by sampling the input facet every 100 nm. Then, we created smaller stacks out of the original oversampled stack through a deinterleaving approach, i.e. picking frames at a regular interval to create sub-stacks. For convenience, we refer to the over-sampled stack as the original stack, the stacks obtained by deinterleaving the original stack as the deinterleaved sub-stacks, and the interval at which the frames are picked as the deinterleaving parameter $d$. Further, when performing deinterleaving, the starting frame can be chosen from among the first $d$ frames, creating $d$ sub-stacks of the same size. The starting frame is denoted $n$.
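
In terms of array indexing, deinterleaving is simply a strided slice of the original stack; a minimal Python helper (names illustrative) could look as follows:

import numpy as np

def deinterleave(stack, d, n):
    """Deinterleaved sub-stack for deinterleaving parameter d and starting
    frame n (1-indexed, n = 1, ..., d), i.e. roughly T/d frames out of the
    (T, H, W) original stack."""
    return stack[n - 1::d]

# Example: the first of ten sub-stacks of a 1499-frame stack (d = 10, n = 1).
# sub_stack = deinterleave(stack, d=10, n=1)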

The division of the original image stack (deinterleaving parameter $d=1$) is illustrated in Fig. 3(a). Panel b shows the sum image of the undivided stack d1 for the 0.3 NA objective (the data used for the MUSICAL images) and panel c the equivalent for the 1.0 NA objective. The sum of the mode pattern images corresponds to a conventional c-TIRFM image. Additionally, the 1.0 NA conventional image serves as a ground truth reference for the 0.3 NA MUSICAL reconstructions (panels d-l) of Fig. 3. For visibility of minute differences, only a magnified region is displayed.

Figure 3(d) displays the MUSICAL reconstruction resulting from using the entire stack (referred to as M-d1 hereon). Panels e-h show reconstructions from the first sub-stacks ($n=1$) following deinterleaving into 3, 10, 20 and 30 sub-stacks. Panels i-l display the sum of all the MUSICAL reconstructions from the individual sub-stacks. The similarities and differences between these images are the topic of the following sections.

3.2.1 Number of illumination patterns and structural similarity of MUSICAL reconstructions

The overall structural similarity between the images appears high and in accordance with the 1.0 NA ground truth reference. However, in the MUSICAL images, there is a slight appearance of horizontal stripes which are not visible in either of the conventional images. The stripes are more apparent for higher values of the deinterleaving parameter, but less visible in the corresponding sum of MUSICAL images obtained using different values of $n$. We believe these stripes are a result of higher-order waveguide modes interfering in the direction parallel to the propagation direction; these are not visible in the conventional diffraction-limited images but are recognised by the MUSICAL reconstruction algorithm due to its super-resolution ability. The illumination pattern’s Fourier transform (displayed in Supplement 1, Fig. S5) shows a strong directionality along the vertical direction, which emphasizes and better resolves structural features (sample and mode patterns) in that direction.
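
A simple way to check for such spectral directionality (a generic sketch only, not the exact analysis behind Supplement 1, Fig. S5) is to compare the energy of the average mode-image power spectrum along the two frequency axes:

import numpy as np

def directionality_ratio(stack):
    """Ratio of spectral energy along the vertical (ky) versus horizontal (kx)
    frequency axis, averaged over the (T, H, W) mode-image stack; the DC term
    is excluded. Values well above 1 indicate that horizontal stripes in real
    space dominate over vertical ones."""
    spec = np.zeros(stack.shape[1:])
    for frame in stack:
        spec += np.abs(np.fft.fftshift(np.fft.fft2(frame))) ** 2
    cy, cx = spec.shape[0] // 2, spec.shape[1] // 2
    vertical = spec[:, cx].sum() - spec[cy, cx]      # energy along the ky axis
    horizontal = spec[cy, :].sum() - spec[cy, cx]    # energy along the kx axis
    return vertical / horizontal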

The high similarity between the deinterleaved image reconstructions seen in Fig. 3 was quantified using the structural similarity index (SSIM) from Wang et al. [22]. The results of this analysis are summarized in Fig. 4(a).
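
For reference, the SSIM of Wang et al. [22] is available, e.g., in scikit-image; the snippet below is a generic usage sketch (image names and the data-range convention are placeholder choices, not the exact settings used for Fig. 4(a)):

import numpy as np
from skimage.metrics import structural_similarity

def mean_ssim(reference, reconstructions):
    """Mean and standard deviation of SSIM between a reference MUSICAL image
    and a list of reconstructions (all 2D arrays on the same pixel grid)."""
    ref = reference.astype(np.float64)
    scores = []
    for image in reconstructions:
        image = image.astype(np.float64)
        data_range = max(ref.max(), image.max()) - min(ref.min(), image.min())
        scores.append(structural_similarity(ref, image, data_range=data_range))
    return float(np.mean(scores)), float(np.std(scores))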


Fig. 4. Effect of reconstruction stack size on image resolution and structural similarity. (a) Mean structural similarity index (SSIM) between the MUSICAL images obtained from sub-stacks and the M-d1 (full 1499-frame stack reconstruction) reference image (blue crosses). The red circles mark the SSIM between the sum of MUSICAL images for each deinterleaving parameter and the M-d1 reference. The magenta asterisks mark the mean SSIM of the individual (deinterleaved) MUSICAL images relative to the first sub-stack image (n1). The error bars indicate the SSIM standard deviation. (b) Resolution as measured by decorrelation analysis. The upper line provides a reference to the optical resolution of the conventional image (sum of modes), and the lower dashed line to the Abbe limit for the 1.0 NA objective (used as a ground truth reference in Figs. 2 and 3).


First consider the magenta bars with magenta asterisks in Fig. 4(a), which present the SSIM computed between M-d$_\textrm{i}$n$_\textrm{1}$ and the remaining MUSICAL images with the same value of $d$. These indicate the similarity between the individual MUSICAL reconstructions for the same deinterleaving parameter (i.e. sub-stacks of the same size but with different starting points), using the MUSICAL image from the first deinterleaved image stack (n1) as the reference image. The SSIM between the individual MUSICAL reconstructions remains acceptably high until d=5 (SSIM > 0.9), but drops to 0.80 for d=10, 0.72 for d=20, and 0.63 for d=30 (mean values). This means that the sensitivity to the starting point increases as fewer frames are used. We attribute this to two closely related factors. First, the fewer frames used, the smaller the probability of spanning all modes with sufficient intensity. Hence, the differences between the individual reconstructions result from the different sub-sets of illumination mode patterns underlying each reconstruction. Second, and associated with the first reason, the exact high-frequency multimoded illumination pattern picked up by MUSICAL in the background differs with each starting point and the resulting sub-stack. Since microscopy images generally contain much more background area than foreground, we suppose that the second reason is the main factor in the degrading SSIM values.

In order to better understand the waveguide mode behaviour and the effect of the excitation laser step size, we performed correlation analysis of mode image stacks (Supplement 1, chapter 3). The results showed a significant correlation of the mode patterns up to a step size of about 2000 nm. As a step size of 400 nm was used for the data analysed here, deinterleaving parameter d=5 corresponds to the sweet spot where redundant mode information is minimal while the waveguide illumination frequencies are still optimally sampled.
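
One plausible way to perform such a correlation analysis (a sketch only; the actual procedure is described in Supplement 1, chapter 3) is to compute the mean Pearson correlation between mode images as a function of their coupling-point separation:

import numpy as np

def correlation_vs_separation(stack, max_lag, step_nm):
    """Mean Pearson correlation between mode images separated by k coupling
    steps, for k = 1, ..., max_lag; stack is (T, H, W), acquired with a fixed
    facet step of step_nm. Returns (separation_nm, mean_correlation) pairs."""
    flat = stack.reshape(len(stack), -1).astype(np.float64)
    flat -= flat.mean(axis=1, keepdims=True)
    flat /= np.linalg.norm(flat, axis=1, keepdims=True)
    results = []
    for k in range(1, max_lag + 1):
        pair_corr = np.sum(flat[:-k] * flat[k:], axis=1)   # Pearson r per frame pair
        results.append((k * step_nm, pair_corr.mean()))
    return results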

Now, we consider the blue bars in Fig. 4(a). The blue crosses indicate the mean value of the SSIM measurements between the individual MUSICAL images and the M-d1 reference. This SSIM falls off faster than that of the sum of MUSICAL images, but remains above 0.9 until d=10. Compared to the magenta curves, these SSIM measurements are higher and far more consistent (i.e. low standard deviation), and present a more optimistic prospect for using fewer frames per MUSICAL image, with the exception of the presence (on close inspection) of high-frequency horizontal stripes in the background. The better mean value of SSIM and the smaller standard deviation of the blue compared to the magenta curves are attributed to the following. The reference M-d1 image has insignificant horizontal stripe artifacts in the background, and the foreground patterns in both the M-d1 reference and the M-d$_\textrm{i}$n$_\textrm{j}$ images match well in general. Therefore, the structural dissimilarity arises only in the locations where the M-d$_\textrm{i}$n$_\textrm{j}$ images have the stripe patterns.

While the blue and magenta plots in Fig. 4(a) considered the effect of using fewer frames for super-resolution, we also consider the effect of using the same number of frames as the original stack but processed differently. This is done in the hope of reducing the presence of the undesirable horizontal features in the sum of M-d$_\textrm{i}$n$_\textrm{j}$ for a fixed value of $d$, as seen in Fig. 3. The SSIM between the sum of MUSICAL images and the reference M-d1 image is shown using red circles in Fig. 4(a), and it indicates a remarkably high similarity to the M-d1 reconstruction. The SSIM remains above 0.994 until d=10, and is 0.981 for d=20 and 0.955 for d=30. These results are intriguing from the perspective of reconstruction time and computational resources, since the reconstruction of long image sequences is slow or even impossible once the stack exceeds the computer’s available memory. The possibility of splitting the reconstruction into smaller time-windows not only significantly speeds up the reconstruction process, but also allows for MUSICAL reconstruction of larger FOVs (as in Supplement 1, Fig. S7) and of longer time-sequences, possibly enabling better-quality super-resolution images. The discussion on deinterleaved MUSICAL reconstruction and memory requirements is further expanded in Supplement 1, chapter 6.
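
The resulting strategy can be summarized as the short pipeline below, where musical_reconstruct stands in for whichever MUSICAL implementation is used (a placeholder, not an actual API) and deinterleave is the strided-slice helper sketched earlier:

def summed_musical(stack, d, musical_reconstruct):
    """Reconstruct each of the d deinterleaved sub-stacks independently and
    sum the results (the 'sum of MUSICAL images' of Figs. 3(i-l) and 4(a));
    each call sees only ~T/d frames, keeping memory use and run time low."""
    summed = None
    for n in range(1, d + 1):
        recon = musical_reconstruct(deinterleave(stack, d, n))
        summed = recon if summed is None else summed + recon
    return summed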

3.2.2 Resolution measurements

To objectively quantify the global image resolution and the effect of a reduced number of frames, we used the parameter-free image resolution estimation based on decorrelation analysis from Descloux et al. [23]. The resolution measurements (Fig. 4(b)) display a less clear dependency on the number of frames than observed for the SSIM. As a reference, the resolution measurement of the corresponding conventional c-TIRFM image is also provided (upper red line at 1368 nm). The lower dashed line provides a reference to the Abbe diffraction limit for the 1.0 NA ground truth reference image (335 nm), using an emission wavelength of 670 nm (the lower cutoff of the emission filter passband). The coarsest resolution for individual sub-stacks is observed at $d=20$ and is approximately 750 nm, which is 1.82 times better than the reference resolution for the c-TIRFM image. Furthermore, the sum of different sub-stacks with the same value of $d$ consistently results in better resolution than the individual sub-stacks. Interestingly, the sum of M-d30 images is measured with significantly higher resolution than any of the other measurements: 384 nm as compared to 623 nm for the M-d1 image. This is 3.56 times better resolution than the measurement for the corresponding conventional image, and only 50 nm worse than the theoretical maximum for the 1.0 NA objective. Comparing the images of Fig. 3 by eye, we can verify that the resolution of the MUSICAL d30 sum is indeed better than that of the conventional 0.3 NA image and that it is of better quality than the single M-d30 image. Further, we can subjectively judge the resolution of the M-d30 image to be slightly below that of the 1.0 NA conventional image. Nevertheless, the large resolution increase given by decorrelation analysis appears to be a greatly exaggerated estimate. Even with parameter-free methods available, resolution estimates remain challenging and are only estimates; an exact value is normally not available for experimentally obtained data and must be expected to vary even within the same microscopy image.

3.3 Live-cell imaging

The chip surface has repeatedly been found suitable for live-cell imaging and even for the cultivation of primary neurons over the course of several weeks [13]. Also for this work, the salmon keratocytes were kept alive and migrating on the chips for a couple of weeks. Biocompatibility is therefore not an issue. We present results demonstrating live-cell chip-based MUSICAL imaging in Supplement 1, Fig. S9.

Concerning imaging time in the context of live-cell imaging, several aspects of chip-based MUSICAL can be further improved and optimized. The total imaging time is determined by (a) the total number of frames needed for sufficient reconstruction quality, and (b) the acquisition time for each individual frame. Factor (a) can be improved via more sophisticated engineering of the illumination or, potentially, by simply including more mode patterns in each image (i.e. mode-stepping while the camera is collecting signal). For the results presented in this article, only the mode patterns from a single coupling point were included in each frame to maximize the pattern contrast for the MUSICAL reconstruction. The mixing of mode patterns in each frame might be beneficial for both the reconstruction quality and the imaging time, i.e. fewer frames would be needed to sufficiently cover both the sample area and frequency space. This investigation is left for future work.

Factor (b), the acquisition time for each individual frame, is the sum of the camera exposure time, the read-out time, and the system settling time between each laser coupling point in the case of mode scanning. The settling time has in the current implementation been greatly improved over previous set-ups: the input-facet scanning is done rapidly via a galvo-mirror solution (described in Supplement 1, chapter 1) instead of stepping with a piezoelectric actuator as in previous implementations, effectively reducing the settling time from about 20 ms per coupling point to < 1 ms. The camera exposure time (about 100 ms for the results presented in Figs. 2 and 3) can be reduced by finding the minimum signal intensity needed for sufficient reconstruction quality and/or by applying a higher-power excitation laser. The last factor, the camera readout time (30 ms for the full FOV used for the present work), can be improved by reading out only a smaller region of interest or by investing in faster camera technology. For the results presented in Figs. 2 and 3, the camera exposure and readout times were 100 ms and 30 ms, respectively. If we consider d10 (using 149 frames) to be of sufficient quality, the total acquisition time would be about 19 s.
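
As a back-of-the-envelope check of this figure under the stated timings (100 ms exposure, 30 ms readout, < 1 ms settling):

frames = 149                                        # a d = 10 sub-stack
exposure_ms, readout_ms, settling_ms = 100, 30, 1   # settling bounded by ~1 ms
total_s = frames * (exposure_ms + readout_ms + settling_ms) / 1000.0
print(f"{total_s:.1f} s")                           # ~19.5 s, consistent with the ~19 s quoted above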

To achieve resolution doubling while imaging living cells, TIRF SIM is likely the fastest solution, although at the cost of system complexity and with a much smaller FOV than what we have presented. On the other hand, MUSICAL can achieve a higher resolution if the intensity fluctuation images sufficiently span Fourier space, either via intrinsic fluorophore fluctuations, via an engineered illumination pattern, or a combination of the two. The MUSICAL-on-chip technique could undoubtedly benefit from a more sophisticated engineered illumination, as illustrated by Supplement 1, Fig. S5, which shows a strong directionality of the Fourier transform of the acquired image stack. This is left for future work.

4. Summary and conclusion

We have demonstrated the super-resolution technique MUSICAL on fluctuation image data obtained using c-TIRFM with multimodal waveguides. The core source of super-resolution is the fluctuations introduced by the variety of multimoded evanescent field intensity patterns achieved by illuminating different points on the input facet of the waveguide. This is a first explicit demonstration of MUSICAL using pseudorandom illumination patterns instead of intrinsic fluorescence fluctuations for super-resolution. This is also the first time that the systematic excitation of different multimoded illumination patterns was exploited for super-resolution in a c-TIRFM imaging system. The super-resolution reconstructions from 0.3 NA image data were verified by the correlated acquisition of the same sample region using a 1.0 NA collection objective, featuring a 3.33 times higher optical resolution. The main advantage of using a lower NA lens for image acquisition is the significantly larger FOV: 36 times for this particular example.

Further, we have two contributions from an application perspective. First, we have successfully demonstrated the cultivation of a completely new sample type on chip, namely primary keratocytes harvested from Atlantic salmon. These immune cells are of immense importance and research interest from the perspective of fish wound healing, especially in the context of fish farming. Second, we show that chip-microscopy is a very good application for the recently proposed soft thresholding scheme of MUSICAL, which also makes MUSICAL free of thresholding heuristics, thereby resolving a significant bottleneck of MUSICAL. Furthermore, as a development of MUSICAL, we show that creating many sub-stacks of the original stack, performing MUSICAL on each of them, and then computing the summed image may be an interesting way of adapting MUSICAL for faster computation and artifact suppression. For illumination-induced fluctuations, this approach might achieve an even better artifact suppression than the deep-learning based approach [24].

To understand the effects of using fewer frames per super-resolution image, which is especially relevant for fast, live-cell imaging, we performed image stack deinterleaving and compared the results using SSIM and decorrelation analysis for resolution measurements. The SSIM measurements showed good structural similarity (SSIM $\ge 0.9$) both between the individual MUSICAL reconstructions of the same deinterleaving parameter and relative to the full-stack reconstruction ($d=1$) up to $d=5$, i.e., reconstructions using only 1/5 of the full-stack image frames. For the sum of MUSICAL reconstructions, the SSIM remained above 0.9 until $d=30$. Correlation analysis of the different waveguide mode images revealed a considerable correlation of the illumination patterns up to a laser stepping distance of 2000 nm, corresponding to $d=5$ in the analysed keratocyte image data acquired on 600 $\mathrm{\mu}$m wide tantalum pentoxide waveguides. This indicates that for these waveguides, an input facet probing step finer than 2 µm is likely to yield significant redundancies in the illumination patterns and the available frequency support for the super-resolution image reconstruction.

Faint horizontal stripes that are not visible in the conventional images were observed in the MUSICAL reconstructions. The stripes were more pronounced in the reconstructions with fewer sampling points (i.e. larger $d$). We believe that these stripes result from higher order waveguide modes that are not visible at conventional resolution, and/or that they are emphasized by the fluctuation-based super-resolution reconstruction procedure. Fourier spectral analysis presented in Supplement 1 Fig. S5 confirmed a significant vertical directionality of the mode image stacks, which can cause these artifacts and also emphasize structural features along the vertical Fourier axis as compared to the horizontal axis.

The mode patterns are not visible in the 0.3 or 1.0 NA conventional (sum) images because the modes average out over the many frames from different coupling points. However, although the summed intensity is fairly uniform, the waveguide surface is not illuminated by the modes in a uniform manner. In particular, since all the modes are excited along the optical axis, there is likely a dominating set of modes in this very direction. We believe the fluctuation-based technique MUSICAL captures the most prominent of these mode fluctuations in addition to the excited sample at the waveguide surface, although the effect is faint and hardly visible. The ability of the 1.0 NA objective to resolve the mode patterns additionally explains why MUSICAL provides only a very minor resolution improvement for the 1.0 NA data compared to what is achieved for the 0.3 NA data, for which the mode fluctuations are significantly below the resolution limit (an important condition for fluctuation-based super-resolution methods).

The decorrelation analysis gave a resolution estimate of 1368 nm for the conventional c-TIRFM image, 623 nm for the full-stack (1499 frames) MUSICAL image, and down to 384 nm for the sum of deinterleaved reconstructions (for d30), a resolution improvement of 2.2- to 3.6-fold compared to the conventional image. We conjecture that the higher resolution measured for the d30 sum compared to the other similar MUSICAL images arose from the higher visibility of waveguide mode patterns rather than an actual improvement in the resolvability of the sample.

Although significant improvements can be made with respect to the waveguide modes’ frequency content, chip-based MUSICAL appears overall as a promising platform for super-resolution microscopy that will be an interesting avenue to explore both for high-content screening and live-cell imaging applications in the future.

Funding

Universitetet i Tromsø (UiT Publication funding, UiT strategic funding); Norges Forskningsråd (301401); European Research Council (336716); H2020 Marie Skłodowska-Curie Actions (749666, 836355).

Acknowledgments

BSA, KA, and FS acknowledge Horizon 2020 MSCA-IF funding (749666, 836355). ISO, DHH, SA, and JC acknowledge UiT strategic funding. FTD and BSA acknowledge Horizon 2020 European Research Council funds (336716). RAD acknowledges financial support from RCN (Grant no. 301401) and from UiT – The Arctic University of Norway.

Disclosures

BSA has applied for a patent on chip-based optical nanoscopy. BSA is a co-founder of the company Chip NanoImaging AS, which commercializes on-chip super-resolution microscopy systems.

DHH and FS designed the imaging system. DHH built and synchronized the imaging set-up. DHH and ISO acquired the image data. JT did the waveguide design and preparation for imaging. AP did the waveguide chip fabrication. FTD contributed towards waveguide designs and initial experimental testing for the project. ISO designed the imaging experiments, harvested scale samples, cultivated the cells, fixed and labeled the cells and prepared the chips with chamber for cell culture and microscopy. RD and TS provided salmon for cell harvesting and advised on cell culture and cell biology. SA performed MUSICAL image reconstructions for various thresholding schemes and the mode cross-correlation study. KA performed the mathematical derivation of chip-based MUSICAL. ISO analysed the data, prepared the figures and a manuscript draft. All authors commented on and contributed to the writing of the manuscript. KA and BSA supervised the project and obtained the funding.

Data availability

Data underlying the results presented in this paper are available in [25].

Supplemental document

See Supplement 1 for supporting content.

References

1. P. J. Verveer, Advanced Fluorescence Microscopy (Humana, Springer protocols, 2015).

2. L. J. Young, F. Ströhl, and C. F. Kaminski, “A guide to structured illumination tirf microscopy at high speed with multiple colors,” J. Vis. Exp. 111, e53988 (2016). [CrossRef]  

3. T. J. Gould, J. R. Myers, and J. Bewersdorf, “Total internal reflection sted microscopy,” Opt. Express 19(14), 13351–13357 (2011). [CrossRef]  

4. H. Ma, R. Fu, J. Xu, and Y. Liu, “A simple and cost-effective setup for super-resolution localization microscopy,” Sci. Rep. 7, 1542 (2017). [CrossRef]  

5. C. J. Rowlands, F. Ströhl, P. P. V. Ramirez, K. M. Scherer, and C. F. Kaminski, “Flat-field super-resolution localization microscopy with a low-cost refractive beam-shaping element,” Sci. Rep. 8(1), 5630–5638 (2018). [CrossRef]  

6. A. L. Mattheyses, K. Shaw, and D. Axelrod, “Effective elimination of laser interference fringing in fluorescence microscopy by spinning azimuthal incidence angle,” Microsc. Res. Tech. 69(8), 642–647 (2006). [CrossRef]  

7. D. R. Gibbs, A. Kaur, A. Megalathan, K. Sapkota, and S. Dhakal, “Build your own microscope: step-by-step guide for building a prism-based tirf microscope,” Methods Protoc. 1(4), 40 (2018). [CrossRef]  

8. R. Diekmann, Ø. I. Helle, C. I. Øie, P. McCourt, T. R. Huser, M. Schüttpelz, and B. S. Ahluwalia, “Chip-based wide field-of-view nanoscopy,” Nat. Photonics 11(5), 322–328 (2017). [CrossRef]  

9. Ø. I. Helle, D. A. Coucheron, J.-C. Tinguely, C. I. Øie, and B. S. Ahluwalia, “Nanoscopy on-a-chip: super-resolution imaging on the millimeter scale,” Opt. Express 27(5), 6700–6710 (2019). [CrossRef]  

10. J.-C. Tinguely, A. M. Steyer, C. I. Øie, Ø. I. Helle, F. T. Dullo, R. Olsen, P. McCourt, Y. Schwab, and B. S. Ahluwalia, “Photonic-chip assisted correlative light and electron microscopy,” (2019).

11. Ø. I. Helle, F. T. Dullo, M. Lahrberg, J.-C. Tinguely, O. G. Hellesø, and B. S. Ahluwalia, “Structured illumination microscopy using a photonic chip,” Nat. Photonics 14(7), 431–438 (2020). [CrossRef]  

12. J.-C. Tinguely, Ø. I. Helle, and B. S. Ahluwalia, “Silicon nitride waveguide platform for fluorescence microscopy of living cells,” Opt. Express 25(22), 27678–27690 (2017). [CrossRef]  

13. I. S. Opstad, F. Strohl, M. Fantham, C. Hockings, O. Vanderpoorten, F. van Tartwijk, J. Q. Lin, J.-C. Tinguely, F. T. Dullo, G. S. Kaminski-Schierle, B. S. Ahluwalia, and C. F. Kaminski, “A waveguide imaging platform for live-cell tirf imaging of neurons over large fields of view,” J. Biophotonics 13(6), e201960222 (2020). [CrossRef]  

14. I. Yahiatene, S. Hennig, M. Müller, and T. Huser, “Entropy-based super-resolution imaging (esi): From disorder to fine detail,” ACS Photonics 2(8), 1049–1056 (2015). [CrossRef]  

15. A. Priyadarshi, F. T. Dullo, D. L. Wolfson, A. Ahmad, N. Jayakumar, V. Dubey, J.-C. Tinguely, B. S. Ahluwalia, and G. S. Murugan, “A transparent waveguide chip for versatile tirf-based microscopy and nanoscopy,” (2020).

16. N. Gustafsson, S. Culley, G. Ashdown, D. M. Owen, P. M. Pereira, and R. Henriques, “Fast live-cell conventional fluorophore nanoscopy with imagej through super-resolution radial fluctuations,” Nat. Commun. 7(1), 12471 (2016). [CrossRef]  

17. N. Jayakumar, Ø. I. Helle, K. Agarwal, and B. S. Ahluwalia, “On-chip tirf nanoscopy by applying haar wavelet kernel analysis on intensity fluctuations induced by chip illumination,” (2020).

18. K. Agarwal and R. Macháň, “Multiple signal classification algorithm for super-resolution fluorescence microscopy,” Nat. Commun. 7(1), 13752 (2016). [CrossRef]  

19. T. Dertinger, R. Colyer, G. Iyer, S. Weiss, and J. Enderlein, “Fast, background-free, 3d super-resolution optical fluctuation imaging (sofi),” Proc. Natl. Acad. Sci. 106(52), 22287–22292 (2009). [CrossRef]  

20. I. S. Opstad, S. Acuña, L. E. V. Hernandez, J. Cauzzo, N. Škalko-Basnet, B. S. Ahluwalia, and K. Agarwal, “Fluorescence fluctuations-based super-resolution microscopy techniques: an experimental comparative study,” arXiv preprint arXiv:2008.09195 (2020).

21. S. Acuña, I. S. Opstad, F. Godtliebsen, B. S. Ahluwalia, and K. Agarwal, “Soft thresholding schemes for multiple signal classification algorithm,” Opt. Express 28(23), 34434–34449 (2020). [CrossRef]  

22. Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. on Image Process. 13(4), 600–612 (2004). [CrossRef]  

23. A. C. Descloux, K. S. Grussmayer, and A. Radenovic, “Parameter-free image resolution estimation based on decorrelation analysis,” Nat. Methods 16(9), 918–924 (2019). [CrossRef]  

24. S. Jadhav, S. Acuña, I. S. Opstad, B. S. Ahluwalia, K. Agarwal, and D. K. Prasad, “Artefact removal in ground truth deficient fluctuations-based nanoscopy images using deep learning,” Biomed. Opt. Express 12(1), 191–210 (2021). [CrossRef]  

25. I. S. Opstad, “Replication data for: Fluorescence fluctuation-based super-resolution microscopy using multimodal waveguided illumination,” Version 1, DataverseNO, 2021, https://doi.org/10.18710/JEN4SB.

Supplementary Material (2)

Supplement 1: Supporting content.
Visualization 1: Movie of waveguide mode patterns acquired using a 10X 0.3 NA water dipping objective. The sample is actin in primary salmon keratocytes.



