Optica Publishing Group

Fast-tracking of single emitters in large volumes with nanometer precision

Open Access

Abstract

Multifocal plane microscopy allows for capturing images at different focal planes simultaneously. Using a proprietary prism that splits the emitted light into paths of different lengths, images at 8 different focal depths were obtained, covering a volume of 50 × 50 × 4 µm$^3$. The position of single emitters was retrieved using a phasor-based approach across the different imaging planes, with better than 10 nm precision in the axial direction. We validated the accuracy of this approach by tracking fluorescent beads in 3D to calculate water viscosity. The fast acquisition rate (>100 fps) also enabled us to follow the capture of 0.2 µm fluorescent beads into an optical trap.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Fluorescence microscopy is a well-known and widespread technique that finds applications in life sciences [1] as well as in materials science [2,3] and chemistry [4–6]. One of the key aspects of fluorescence microscopy is the ability to image dynamic samples at a relatively high spatial and temporal resolution [7–13]. When studying dynamic systems, one of the most studied parameters is the diffusion of molecules and/or larger objects in various complex matrices. Different microscopy-based methods are available for investigating dynamic processes at a wide range of temporal scales, namely fluorescence recovery after photobleaching (FRAP) [14] and methods based on the correlation of intensity fluctuations (FCS and derivatives thereof) [15–19]. However, these techniques only measure average concentration fluctuations in a small volume as a function of time and therefore deliver no information concerning the direction of the motion. Also, their application in heterogeneous samples is not straightforward. These drawbacks can be addressed by following the motion of single emitters directly on the image, a method called single-particle tracking (SPT). SPT allows the capture of direction, spatial distribution, and thus, heterogeneity [20]. Since its development, SPT has been used for protein studies in live cells [21–24], for following reactions at the single-molecule level [25–28], as well as for probing the dynamics of complex matrices via microrheology [29,30].

Typically, SPT is performed on a conventional wide-field microscope to enable image acquisition at a high frame rate (100 frames/s and beyond). As a consequence, the calculated trajectories are a 2D projection of 3-dimensional motion, i.e. there is no direct information regarding the movement along the axial direction (z-axis). For non-ergodic systems, this results in the erroneous calculation of the dynamic properties. To overcome this limitation in wide-field based SPT, researchers have developed strategies to read out the axial position of an emitter. These methods usually involve either encoding/reading the axial position within a 2D image, or acquiring multiple planes in 3D, or both. Among the 2D methods, extracting the axial position from the diffraction pattern of the point spread function (PSF) [31] and engineering the shape of the PSF [32–35] are the most common. In another method, the detection path is divided into two and an electrically tuneable lens is used to keep particles in focus over a large axial range; the axial position is then read from the induced shift of position between the two images obtained on the camera [36]. Methods that can obtain fast 3D images directly include using a fast acousto-optic liquid lens implemented in a bright-field microscope to multiplex light based on colour into different and selectable focal planes [37], light-sheet microscopy [12], and multiplane microscopy [38–47]. In the following text, PSF engineering and multiplane approaches are described in more detail.

PSF engineering approaches alter the detection path of the microscope to modify the emission PSF in such a way that the axial position of the particle is encoded in a single 2D image (a cross-section of the 3D PSF of the emitter). Typical examples of PSF engineering include the use of a cylindrical lens to induce astigmatism (elongation of the PSF when the emitter is not in focus) [32], and interferometric methods that change the PSF into a double helix [33–35] or more complex shapes such as a Tetrapod [48]. While interferometric approaches allow the axial position of the emitter to be retrieved over a large axial range (up to 20 µm), the complexity of the shape makes the calculation computationally demanding. Furthermore, it limits the density of emitters that can be observed simultaneously, as disentangling overlapped patterns is a very complex operation [49].

Multiplane imaging takes advantage of the fact that the sample plane in focus at the detector depends on the distance between the tube lens and the camera. Indeed, if the camera is positioned exactly at the focal plane of the lens, the obtained image will originate from the focal plane of the objective (assuming correct alignment of the other optical elements). Alternatively, if the camera is placed farther from or closer to the tube lens, an image originating from above or below the focal plane of the objective can be generated. Therefore, two images at different focal positions can be generated by adding a beam splitter after the tube lens and creating two optical paths of different length. The described approach, referred to as 'biplane microscopy', was the first multiplane method reported [38,39]. Another approach to achieve biplane microscopy is the use of two objectives [40,47]. More advanced approaches involve multiple beam splitters [42], or more complex optics (e.g. prisms) to obtain up to 25 planes [41,43,44,46]. Typically, for accurate localization of single particles/emitters in multiplane microscopy, a calculated 3D PSF is fitted to the 3D image of an isolated emitter [43,50]. Alternatively, if enough planes are available, determining the centre of radial symmetry can also be used [51]. When multiplane imaging is combined with PSF engineering approaches, the dependence of the shape of the PSF on the axial position is determined for each plane, and the axial position is later retrieved [45]. Multiplane single-molecule detection has been successfully used for super-resolution imaging [42,52]. However, there are only a few examples where multiplane microscopy has been used for 3D SPT, and in most studies only two planes were used [39,53–55].

Herein, we describe our multiplane wide-field microscope setup which is able to acquire 3D images with acquisition rates >100 fps in a large volume (50 × 50 × 4 µm$^3$) using two sCMOS cameras and hardware synchronization. We demonstrate the implementation of a new method to determine the axial position of single emitters, which only requires one-dimensional fitting and does not require any sample-specific calibration. We validate the proposed method by evaluating the precision of the setup and show that we can extract accurate diffusion coefficients for 0.2, 0.5, and 1 µm diameter fluorescent beads diffusing in pure water, and obtain correct viscosity values. Finally, we show that we can follow the (3D) motion of 0.2 µm fluorescent beads pushed toward the focus of an optical trap. Overall, the presented results demonstrate the high potential of the developed method. We foresee applications in a wide variety of fields including tracking of nanoparticles, trapping dynamics, opto-hydrodynamics, micro-rheology, or cellular biology, where 3D phenomena with fast dynamics cannot be analyzed by 2D SPT.

2. Multiplane setup

Fluorescence imaging was performed using a home-built multiplane wide-field setup equipped with two sCMOS cameras (Orca Flash 4.0, Hamamatsu Photonics Inc.), based on the work of Lasser's group [42,46]. A scheme of the microscope is shown in Fig. 1 and a detailed description is available in Materials and Methods.


Fig. 1. Schematic of the multiplane wide-field microscope. The trapping laser, the cylindrical lens, and the proprietary prism are the three main components that differ from conventional wide-field microscopes. The inset diagram shows the extended, interleaved, and equal camera configurations, which can be obtained by modifying the relative alignment of the two cameras. $\delta z$ is the focal distance between two consecutive planes of one camera, d is the lateral distance between two consecutive images on the camera (determined by the prism geometry), n is the refractive index of the prism, and ML is the in-plane magnification. At the bottom: images of 2 beads in the 8 planes acquired simultaneously using the extended configuration, showing how the multiplane setup allows for simultaneous 3D imaging.


The simultaneous acquisition of multiple planes is enabled by the prism, as shown in Fig. 1. The central axis of the prism acts as a beam splitter which splits the image generated by the tube lens (TL) into 8 different optical paths. The design of the prism is such that each of these paths has a different length from the TL to the camera, leading to image acquisition from different depths in the sample. The prism also ensures that these images are shifted laterally so that 4 images can be separately observed on each of the cameras. An example of raw data is shown in section 1 of Supplement 1 (Fig. S1). Before calculating the accurate localization of single emitters in 3D, the individual images of each plane needed to be extracted from the raw data and aligned relative to each other (image registration), and the distance between the imaged planes needed to be determined. In the next sections, we refer to these pre-processing steps as the 'setup calibration'. The procedure is explained in detail in section 2 of Supplement 1.

For the setup calibration, we used a sample containing 0.2 µm fluorescent beads spin-cast on a glass surface (see 'setup calibration sample' in Materials and Methods for details) and acquired several z-stack movies in which we scanned the sample across the 8 planes along the z-axis with a 50 nm scan step. From the raw images (Fig. S1 in Supplement 1), the lateral position of each image plane on the detector was determined using an intensity-based threshold. Once obtained, we extracted the 8 planes from the images to build an (x,y,z) matrix for each step of the z-stack. From here on, we denote as 'z-slice' a step of the z-stack, i.e. an (x,y,z) matrix including the 8 image planes. The example shown at the bottom of Fig. 1 represents one z-slice, where 2 particles were simultaneously imaged on each of the 8 different planes.

The matching of x,y coordinates across all image planes was achieved by aligning the 8 planes relative to each other. This was performed by pixel-wise image correlation of consecutive planes. To reduce the errors associated with sample drift during the z-scanning, and thus increase the accuracy of the registration, we used image planes that were acquired simultaneously (from the same z-slice). Since two consecutive image planes cannot be in focus at the same time, we chose the z-slice where they are equally close to the focus. The parameters which allowed for image extraction and plane registration in the setup calibration were used to align the experimental data.
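For illustration, the pixel-wise correlation step can be sketched as follows. This is a minimal NumPy example using phase correlation to recover the integer shift between two planes; the function name and implementation details are ours, not taken from the authors' software (sub-pixel refinement and the choice of correlation normalisation are left out).

```python
import numpy as np

def register_shift(ref, img):
    """Estimate the integer (dy, dx) shift of `img` relative to `ref`
    via phase correlation: the inverse FFT of the normalised
    cross-power spectrum peaks at the displacement."""
    f = np.fft.fft2(img) * np.conj(np.fft.fft2(ref))
    corr = np.fft.ifft2(f / (np.abs(f) + 1e-12)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap shifts larger than half the image size to negative values
    return tuple(p - s if p > s // 2 else p for p, s in zip(peak, corr.shape))
```

Applying the recovered shift of each plane relative to its neighbour then brings all 8 planes onto a common lateral grid.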

The final step in the setup calibration procedure is the determination of the axial distance between the image planes. For this, we calculated the in-plane image intensity gradient for every image plane in every z-slice. The image intensity gradient represents the change in intensity across the image, taking into account both the raw intensity and the sharpness of the signal (see Figs. 2(A)-(D)). It is, therefore, more sensitive to the focus than raw intensity values. For each image plane, the plot of the maximum value of the image gradient versus z-position showed a Gaussian distribution centred at the axial position of the focal plane (see Fig. 2(E)). The curve obtained for each image plane was therefore fitted with a Gaussian function to obtain the position of each plane (centre of the distribution). The distance between the image planes was calculated from the separation between the centre values retrieved from the fits. Finally, it can be seen that the different image planes have slightly different fluorescence intensities. Using the amplitude parameter from the fit, we also corrected for this difference in intensity in the data.
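The gradient-based focus metric and the Gaussian fit of its maximum versus stage z can be sketched as below. This is an illustrative Python/SciPy version under our own naming; the actual software may compute the gradient and fit differently.

```python
import numpy as np
from scipy.optimize import curve_fit

def focus_metric(img):
    """Maximum of the in-plane intensity gradient magnitude;
    it peaks when the image plane is in focus."""
    gy, gx = np.gradient(img.astype(float))
    return np.sqrt(gx**2 + gy**2).max()

def gaussian(z, amp, z0, sigma, offset):
    return amp * np.exp(-(z - z0)**2 / (2 * sigma**2)) + offset

def plane_position(z, metric):
    """Fit focus metric vs. stage z with a Gaussian; the fitted
    centre z0 is the axial position of that image plane."""
    p0 = [metric.max() - metric.min(), z[np.argmax(metric)],
          (z[-1] - z[0]) / 4, metric.min()]
    popt, _ = curve_fit(gaussian, z, metric, p0=p0)
    return popt[1]  # centre of the Gaussian
```

Differences between the fitted centres of consecutive planes then give the inter-plane distances, and the fitted amplitudes can normalise the per-plane intensities.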


Fig. 2. Plane calibration: A and B show one of the image planes for two different z-slices, in focus (A) and out of focus (B). C and D show the image intensity gradient corresponding to A and B, respectively. The maximum of the image intensity gradient obtained (C, D) is then plotted in E as a function of z-position. E shows the maximum image intensity gradient as a function of z (dots) for all image planes. The solid lines indicate the Gaussian fits which we used to determine the position of each plane and calculate the distances between them.


Of note, the asymmetry observed in the curves presented in Fig. 2(E) is due to the asymmetry of the PSF in the axial direction. As depicted at the bottom of Fig. 1, when the focus of the plane is located below the particle's position, diffraction rings are observed and the decay in the image gradient intensity is slower than when the focus is above the particle, leading to the observed asymmetry. The analysis software is programmed to fit only a 1.5 µm ($\pm$ 750 nm) range around the maximum, as the data very far from the focus are not relevant for the determination of the position of the image plane. The distance between consecutive image planes detected on one camera was 0.58 µm. This value is fixed and determined by the prism design and the objective used. However, it is possible to tune the inter-plane distance by modifying the relative position of the two cameras. As depicted in Figs. 1(A)-(C), there are three possible camera configurations: (i) extended, where the planes of one camera are placed below the lowest plane of the other camera (0.58 µm between each imaging plane, approximately 4 µm total range in the z-direction), (ii) interleaved, where the planes of one camera are placed in between the planes of the other camera (0.29 µm between imaging planes, approximately 2 µm total range in the z-direction), (iii) equal, where the planes of the two cameras are overlapped (0.58 µm between imaging planes, approximately 2 µm range in the z-direction). The equal configuration offers unique possibilities for multicolour or even multimodal imaging (e.g. tracking two different proteins in a cell).

Another interesting possibility is the extension of the imaging volume. As depicted in Fig. 1, the interplanar distance also depends on the lateral magnification (ML). Consequently, the imaging volume can be modified by using an objective lens of a different magnification, trading spatial resolution for imaging volume. For instance, the use of a 20x objective results in an interplanar distance of approximately 4 µm (total volume of 150 × 150 × 28 µm$^3$, data shown in section 3 of Supplement 1, Fig. S4). This configuration can be used to study the 3D motion of microscale objects.

3. Accurate tracking of single emitters in 3D

After aligning the different planes using the parameters determined during the setup calibration, single emitters were detected using the generalized likelihood ratio test developed by Sergé et al. [56]. The detected emitters were then accurately localized using a phasor-based approach [57]. Briefly, this approach consists of drawing a region of interest (ROI, typically 13x13 pixels) around an emitter and applying a 2D Fourier transform to the obtained ROI. The first Fourier coefficient along each dimension (x,y) contains the coordinate of a phasor, of which the magnitude is related to the width and intensity of the signal detected along that axis (x, y) and the angle (or phase) is related to the shift from the centre of the region of interest. When the same particle was detected in more than one image plane, only the brightest plane was used to determine the x- and y-positions. This plane was also used to calculate the fluorescence intensity of each particle (by subtracting the average background value of the region surrounding the particle and then summing all the pixels containing the PSF). The phasor analysis was initially implemented to retrieve the z-position by calibrating the effect of a cylindrical lens [57]. Here, we extend the method to determine the z-position in the absence of a cylindrical lens or any other PSF engineering approach.
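The core of the phasor localisation described above fits in a few lines. The sketch below is our own minimal NumPy rendering of the idea from Martens et al. [57] (the phase of the first Fourier coefficient along each axis encodes the sub-pixel position); it is not the authors' implementation, and edge effects for emitters near the ROI border are ignored.

```python
import numpy as np

def phasor_xy(roi):
    """Sub-pixel (x, y) position of an emitter inside a square ROI,
    in pixels relative to the ROI centre, from the phase of the
    first Fourier coefficient along each axis."""
    n = roi.shape[0]                 # square ROI, e.g. 13 x 13 pixels
    f = np.fft.fft2(roi)
    # f[0, 1] varies along columns (x); f[1, 0] along rows (y)
    px = (-np.angle(f[0, 1]) * n / (2 * np.pi)) % n
    py = (-np.angle(f[1, 0]) * n / (2 * np.pi)) % n
    c = (n - 1) / 2                  # ROI centre pixel
    return px - c, py - c
```

Because only two Fourier coefficients are needed, this is far cheaper than iterative Gaussian fitting, which is what enables the high localisation throughput reported below.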

The phasor magnitude is quite sensitive to the signal-to-noise ratio, which in turn is related to the distance of the emitter to the focal plane. By fitting the dependence of the phasor magnitude across the different planes for the same emitter with a 1D Gaussian function, the z-position of the emitter can be obtained with high precision. An example of such a fit is displayed in section 4 of Supplement 1, Fig. S5. Of note, by reducing the calculation of the (x,y,z) coordinates to a single one-dimensional fit, we were able to reach a computation speed of 100 localizations in 3D per second on a standard computer. Moreover, the code is not yet optimized for speed and thus has the potential to run significantly faster.
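To make the 1D fit concrete: under the simplifying assumption of a background-free Gaussian peak, the centre can even be obtained in closed form, since the logarithm of a Gaussian is a parabola. The snippet below is such a simplified sketch (our own, not the paper's Gaussian fit, which also handles an offset and noise):

```python
import numpy as np

def z_from_magnitudes(plane_z, mags):
    """Axial position of an emitter from the phasor magnitudes
    measured in the 8 planes. Assuming a background-free Gaussian
    profile, a quadratic fit of log(magnitude) vs. z gives the
    peak centre (the emitter's z) in closed form."""
    c2, c1, _ = np.polyfit(plane_z, np.log(mags), 2)
    return -c1 / (2 * c2)   # vertex of the parabola = Gaussian centre
```

In practice a nonlinear Gaussian fit with an offset term is more robust at low signal-to-noise, at the cost of an iterative optimisation per emitter.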

After calculating the (x,y,z) coordinates of single emitters in the time-lapse image sequence, they were connected to form trajectories. Particle connection was performed by minimizing the sum of the displacements of all particles in a certain radius [58]. This was performed via the Munkres algorithm [59]. The algorithms for the setup calibration, particle localization, and tracking are included in the in-house written software described in detail in the supporting information (section 2 of Supplement 1) and freely available at https://github.com/CamachoDejay/polymer3D.
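The frame-to-frame linking step can be illustrated with SciPy's implementation of the Hungarian (Munkres) algorithm. This is a sketch under our own naming, with a simple post-hoc distance cut-off; the in-house software linked above may handle gating, births, and deaths differently.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def link_frames(pts_a, pts_b, max_disp):
    """Connect localisations of two consecutive frames by minimising
    the total displacement (Munkres/Hungarian assignment); links
    longer than max_disp are discarded afterwards."""
    pts_a = np.asarray(pts_a, float)
    pts_b = np.asarray(pts_b, float)
    # cost[i, j] = Euclidean distance between particle i and candidate j
    cost = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)
    return [(i, j) for i, j in zip(rows, cols) if cost[i, j] <= max_disp]
```

Chaining these pairings over all frames yields the trajectories used in the analysis below.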

To assess the performance of the setup, we used a sample containing fluorescent beads encapsulated in a 3D hydrogel matrix (see '3D calibration sample' in Materials and Methods for details). The sample was moved using the sample stage, following predefined steps, while the camera acquired images at each step. Localization precision and accuracy were determined by comparing the output of the tracking algorithm with the movement induced by the sample stage.

The first set of data used to evaluate the setup was obtained by stepwise scanning along each coordinate independently (x,y and z). 20 images (4 sec) were acquired for each scan step to obtain the step-like motion shown in Fig. 3(A). The localization precision was determined by calculating the standard deviation across the trajectories of different particles (after aligning the steps in the trajectory of each particle by subtracting their mean position). For 0.2 µm fluorescent beads, with an average fluorescence intensity per particle of approximately 10$^5$ counts, we obtained a precision of about 8 nm in all directions (median across 50 particles from 5 different movies for each direction).
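The precision estimate described above (pooling the residuals of each dwell segment after subtracting its mean) can be written compactly; the snippet below is an illustrative sketch with hypothetical function and variable names:

```python
import numpy as np

def step_precision(step_segments):
    """Localisation precision from a stepwise scan: subtract the mean
    position of each dwell segment (one segment per scan step), then
    take the standard deviation of the pooled residuals."""
    residuals = np.concatenate(
        [np.asarray(seg, float) - np.mean(seg) for seg in step_segments])
    return np.std(residuals)
```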


Fig. 3. A) Tracking traces of individual particles for unidirectional motor-induced motion. The shadow on the plot shows $\pm$3 times the mean standard deviation, representing a 99.7% confidence level. The solid bold line represents the real trace from the piezo stage. B) Tracking of helical motion induced by the piezo stage using the extended configuration. The helix has a 400 nm diameter, 1 µm per turn, and a 50 nm step size in z. The home-built scripts used to generate defined motion paths with the stage (linear, elliptical, 1D or 2D motion) were written in Java, adapted for micro-manager, and are freely available at https://github.com/BorisLouis/MMScript.


The accuracy of the localization algorithm was evaluated by comparing the experimental step size with the real step size, i.e., the displacement of the sample induced by the sample stage. The experimental step size was determined using the difference between the mean position of the particles localized during two consecutive steps. We obtained an accuracy of about 10-15 nm in x and y, and 27 nm in z, calculated as the mean of the absolute error (the difference between the experimental and theoretical step size). Section 5 of Supplement 1 (Fig. S6) shows the distribution of errors for x, y and z.

The difference between the accuracies calculated in the lateral (x,y) and axial (z) directions is due to the method used to calculate the position. Indeed, while both x- and y-positions are directly calculated from the fluorescence signal (given as the angle of the phasor) in the brightest image plane, the z-position is determined by fitting the distribution of the phasor magnitude along the different image planes (section 4 of Supplement 1, Fig. S5). The goodness of the fit depends on the position of the emitter relative to the 8 planes, which affects the determination of the z-position.

To evaluate the localization precision of particles moving in 3D, a helical trajectory was created by moving the piezo stage along a predefined path in 3D (Fig. 3(B)). The localization precision was calculated using the standard deviation of the spread of localizations on the helical trajectory after aligning all the traces obtained. We obtained a precision of 20 nm in x and y, and 35 nm in the z-direction. Compared to the stepwise 1D motion, the precision was lower, which can be attributed to the 3D nature of the helical trajectory. Please also note that for this measurement we combined the traces of particles from different movies, which likely introduced some additional localization errors due to the extra alignment procedure needed.

4. Comparison of the multiplane phasor and PSF engineering approaches for the determination of the axial position

We compared the presented approach with the previously reported PSF engineering approach [45]. For this, a cylindrical lens (CL) was added in the infinity space of the setup to induce astigmatism. This CL distorts the PSF by elongating the signal along the x or y direction, depending on the z-position of the emitter with respect to the focal plane (Fig. 4(A)). Therefore, before acquiring experimental data, the z-dependent elongation of the PSF induced by the CL needed to be calibrated. The process described below is referred to as the 'astigmatism calibration' in the remaining text; it allows the z-position in new experimental data to be determined directly from the elongation of the PSF.


Fig. 4. Comparison between the phasor and astigmatism approaches. A) PSF of a single emitter in 3D. B) ellipticity(z) curves for the different imaging planes. C) Axial precision for the two tested approaches as a function of the number of photons detected. For both approaches, the axial precision was calculated using a 1D stepwise motion (see the previous section).


For the astigmatism calibration, we used the previously mentioned 3D calibration sample, containing fluorescent beads immobilized in a hydrogel (details in Materials and Methods). Ten independent z-stacks were acquired, where the sample was moved along the z-axis (50 nm/scan step). Following the work of Martens et al. [57], the ratio of the phasor magnitudes (mag$_x$/mag$_y$) was used to characterize the ellipticity of the PSF. After localization and tracking of several particles in the axial dimension, ellipticity(z-position) curves were obtained for each image plane (Fig. 4(B)). Data from different particles were combined to achieve one distribution for each image plane (for further details on the astigmatism calibration see section 2 of Supplement 1). Finally, the combined plots of each image plane were fitted with a spline function to obtain the calibration curves for each image plane. The parameters of each spline were stored and used to retrieve the z-position from the ellipticity values in new datasets taken under similar experimental conditions.
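A per-plane spline calibration of this kind can be sketched with SciPy's smoothing spline; the names and the simple grid-search inversion below are our own illustrative choices, not the authors' implementation (which stores spline parameters per plane).

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

def astig_calibration(z, ellipticity, smooth=0):
    """Fit ellipticity(z) for one image plane with a spline
    (smooth=0 interpolates; increase for noisy data). Returns the
    spline and a numerical inverse that reads z back from a
    measured ellipticity within a given z range."""
    spline = UnivariateSpline(z, ellipticity, s=smooth)

    def z_from_ellipticity(e, lo, hi, npts=2001):
        zz = np.linspace(lo, hi, npts)
        return zz[np.argmin(np.abs(spline(zz) - e))]

    return spline, z_from_ellipticity
```

The inversion is only well defined on the monotonic part of the curve around the focus, which is why each plane needs its own calibration range.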

A more in-depth analysis of the calibration data reveals that the curves from each imaging plane are slightly different from each other, which justifies the use of an individual calibration for each image plane. Additionally, the calibration curve of the highest image plane (deeper into the sample, right plot in Fig. 4(B)) displays a broader distribution of the ellipticity values. This is a reflection of aberrations affecting the PSF, which in turn influence the measured ellipticity [60,61].

The localization precision along the z-axis is also influenced by the fluorescence intensity of the single emitter (i.e. the signal-to-noise ratio). As depicted in Fig. 4(C), the axial precision determined using the astigmatism approach degrades considerably as the fluorescence intensity decreases (from 8 nm to approximately 125 nm for 100x dimmer particles). Note that the multiplane phasor analysis described in the previous section is more robust to variations in the intensity of the particles than the astigmatism approach. Indeed, while the axial precision values achieved with both approaches are very similar for high numbers of detected photons, the multiplane phasor method is at least 3 times more precise (15-40 nm versus 75-175 nm) for lower photon counts. This difference can be explained by two factors. Firstly, astigmatism is based on the characterization of the shape of the PSF (ellipticity), while the multiplane phasor relies on the calculated signal intensity (x,y phasor magnitude). The shape of the PSF is more sensitive to the signal-to-noise ratio than the calculated intensity of the emitter. Secondly, using PSF engineering, the z-position is calculated using the information of a single plane (where the signal-to-noise ratio is the highest), while the multiplane phasor approach uses the intensity information of all 8 planes to retrieve the z-position, which results in smaller errors.

In addition, a higher density of emitters can be imaged using the multiplane phasor approach, because PSF engineering approaches extend the PSF in x, y, and z to encode the z-position in a 2D image [48]. Furthermore, as the shape of the PSF depends on the refractive index of the solution surrounding the emitter, a calibration of shape versus z-position must be performed for each sample, using an environment that matches the one in the sample (which is often not possible to mimic entirely) [60,61].

5. Fast-tracking of single emitters

To showcase the application of the multiplane phasor approach for accurate tracking of fast-moving particles, the viscosity of water was calculated by tracking the movement of fluorescent beads of different sizes (0.2, 0.5, and 1 µm) suspended in water (see 'water viscosity sample' in Materials and Methods). Such small beads diffuse very fast in water, and 2D imaging techniques would fail to gather reliable data as the particles move out of focus within a few frames.

To determine the 1D and 3D diffusion coefficients (D$_x$, D$_y$, D$_z$, and D$_r$) and the corresponding viscosity coefficients ($\eta _x$, $\eta _y$, $\eta _z$, and $\eta _r$), we first calculated the mean square displacement (MSD) from each trajectory [62]. The MSD determined in n dimensions is linked to the diffusion coefficient by Eq. (1):

$$MSD(\tau) = 2nD_n\tau$$
The Stokes–Einstein equation links the diffusion coefficient and the viscosity (see Eq. (2)) [63]:
$$D = \frac{k_BT}{6\pi\eta r_0}$$
where $k_B$ is the Boltzmann constant, T is the temperature, $\eta$ is the viscosity and $r_0$ is the hydrodynamic radius. From the trajectories obtained for each particle, the diffusion coefficient can easily be obtained using Eq. (1). In addition, if the temperature and the hydrodynamic radius of the particles are known, the viscosity can be calculated from Eq. (2).

Figure 5(A) shows a representative example of a 3D trajectory of a particle diffusing in water. We were able to track the movement of small particles over a relatively long time before they went out of focus (>100 frames = 1 s, >200 frames = 2 s, and >300 frames = 3 s for 0.2, 0.5, and 1 µm diameter particles, respectively). For each extracted trajectory, the MSD($\tau$) was calculated and fitted with a linear equation to obtain the diffusion coefficient from the slope. Following Saxton's work, higher accuracy was obtained by using only the first 4 time-lag points for the fitting [64].
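The pipeline from trajectory to viscosity via Eqs. (1) and (2) can be sketched as follows. This is an illustrative Python version in SI units with our own function names, using the first 4 lag points as in Saxton's recommendation; it is not the authors' analysis code.

```python
import numpy as np

KB = 1.380649e-23  # Boltzmann constant, J/K

def msd(track, max_lag):
    """Time-averaged MSD of one (N, 3) trajectory for lags 1..max_lag."""
    track = np.asarray(track, float)
    return np.array([np.mean(np.sum((track[lag:] - track[:-lag])**2, axis=1))
                     for lag in range(1, max_lag + 1)])

def diffusion_viscosity(track, dt, radius, temp=296.15):
    """D from a linear fit of the first 4 MSD points, then the
    viscosity via Eq. (2), D = kB*T / (6*pi*eta*r0)."""
    lags = dt * np.arange(1, 5)
    slope = np.polyfit(lags, msd(track, 4), 1)[0]  # MSD = 2*n*D*tau, n = 3
    D = slope / 6.0
    eta = KB * temp / (6 * np.pi * D * radius)
    return D, eta
```

Averaging the MSD over many trajectories before fitting, as done in the paper, reduces the statistical error at longer lag times.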


Fig. 5. 3D single-particle tracking. A) A representative example of an experimental 3D diffusion trace for a single 1 µm bead in water. The colour indicates time in seconds. B) 3D MSD vs lag time for 0.2, 0.5, and 1 µm diameter PS fluorescent particles with the corresponding fits for short lag times (dashed lines). C) Dependence of the 3D diffusion coefficient on the bead diameter.


Figure 5(B) shows three MSD($\tau$) curves and their respective fits, one representative particle for each size. For longer time-lags, the mean square displacement deviated significantly from the initial linear trend. This is due to the limited statistics obtained from a single trajectory, combined with the smaller amount of data available for longer $\tau$. When all the trajectories were combined, the error for longer time-lags was reduced and the slope of MSD($\tau$) remained constant over a larger time range (see section 6 of Supplement 1, Fig. S7(D)). From the averaged MSD plots, the mean square displacement curves were extracted separately for the x-, y- and z-directions. For short lag times, the initial slopes of the three MSD($\tau$) plots were similar for all directions (section 6 of Supplement 1, Figs. S7(A)-(C)), which shows the robustness and accuracy of the method developed for tracking particles in 3D.

We obtained diffusion coefficients (reported as mean value $\pm$ standard deviation) of 1.92 $\pm$ 0.35 µm$^2$/s, 0.89 $\pm$ 0.16 µm$^2$/s and 0.42 $\pm$ 0.09 µm$^2$/s for 0.2, 0.5 and 1 µm diameter beads, respectively. The distributions of the diffusion coefficients calculated for every single track are shown in Fig. 5(C). Dynamic light scattering measurements were used to determine the hydrodynamic radius $r_0$ of the particles. For 0.2, 0.5, and 1 µm nominal diameter beads, we obtained an $r_0$ of 0.13, 0.29, and 0.56 µm, respectively. Using Eq. (2), we retrieved viscosity values of 0.91 $\pm$ 0.17 cP, 0.88 $\pm$ 0.17 cP, and 0.98 $\pm$ 0.23 cP, which are in agreement with the reference value of 0.93 cP at 23$^{\circ}$C for water [65]. Taken together, these results show that the multiplane phasor approach can be successfully used to track fast-moving particles and retrieve accurate diffusion coefficients. We note here that this method also works for transmission and darkfield data.
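As a quick consistency check, plugging the reported mean diffusion coefficients and DLS radii directly into Eq. (2) at 23 °C lands in the same sub-centipoise range. Note this is not identical to the paper's per-track averaging (the mean of per-track viscosities differs slightly from the viscosity computed from the mean D), so small deviations from the quoted values are expected.

```python
import numpy as np

KB = 1.380649e-23   # Boltzmann constant, J/K
T = 296.15          # 23 degrees C, in K

# Reported mean D (um^2/s) and DLS hydrodynamic radii (um)
for d_um2s, r0_um in [(1.92, 0.13), (0.89, 0.29), (0.42, 0.56)]:
    eta = KB * T / (6 * np.pi * (d_um2s * 1e-12) * (r0_um * 1e-6))
    print(f"r0 = {r0_um:.2f} um -> eta = {eta * 1e3:.2f} cP")
```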

6. Tracking particles as they come to an optical trap

While the majority of SPT experiments are performed using 2D imaging methods, these methods are not suitable for investigating active motion or diffusion in non-ergodic systems. To demonstrate the potential of our approach to investigate fast heterogeneous motion, particles in suspension were followed as they moved from the bulk solution to an optical trap.

Optical trapping is achieved by tightly focusing a high-intensity laser inside the sample. As a result of radiation pressure, scattering, and gradient forces, particles in suspension are pushed toward the centre of the focus [66,67]. Since its discovery, optical trapping has been used in various fields, from chemistry [68–70] to biology [71] and materials science [72–74]. However, despite all the experiments performed and the extensive theoretical background on optical trapping, the direct observation of particles coming to a trapping site in 3D has, to our knowledge, never been achieved. This is likely due to the technical challenge of acquiring 3D images at a high frame rate with high precision and accuracy.

In this work, we followed the trapping of 0.2 µm fluorescent beads suspended in water, using four different trapping laser powers (36, 60, 120, and 240 mW after the objective). The sample preparation was similar to that of the viscosity study and is described in Material and Methods. The laser was focused slightly above the highest image plane, and the sample was positioned so that the trapping site was located within the solution. For each power, we acquired 45 independent time-lapse movies (each containing at least one trapping event) at a frame rate of 200 Hz. The acquisition was started immediately after turning on the trapping laser.

Figures 6(A) and 6(B) show the traces of a single particle going from the water suspension to the trapping site, at 36 and 240 mW, respectively. We calculated the radial speed using the trapping laser focus as the centre of the coordinate system. The initial movement of the particles was dominated by Brownian motion, which yielded relatively slow radial speeds of 3 and 7 µm/s for 36 and 240 mW of laser power, respectively. However, the particles exhibited highly directional movement when approaching the trapping site, accelerating to average speeds of 32 µm/s and 62 µm/s over the last 10 points for 36 mW and 240 mW, respectively. This demonstrates that the multiplane-phasor approach allows us to follow particles undergoing fast, directed motion.
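The radial-speed analysis amounts to differentiating the particle's distance to the trap focus over time. A minimal sketch, assuming an (N, 3) trajectory in µm and a fixed frame interval (not the authors' implementation):

```python
import numpy as np

def radial_speed(track, dt, trap_center):
    """Per-frame radial speed (um/s) of an (N, 3) trajectory in um.

    Positive values mean the particle is moving toward trap_center.
    """
    r = np.linalg.norm(track - np.asarray(trap_center), axis=1)
    return -np.diff(r) / dt

def final_approach_speed(track, dt, trap_center, n_last=10):
    """Average radial speed over the last n_last frames of a trace."""
    return float(np.mean(radial_speed(track, dt, trap_center)[-n_last:]))
```

At the 200 Hz frame rate used here, dt = 1/200 s.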

Fig. 6. Tracking fluorescent particles as they come to the trapping site. (A-B) Exemplary time-color-coded traces at 36 mW (A) and 240 mW (B). The trapping site was located approximately at position (5,5). The locally averaged radial speed is noted in the inset. In both cases, the speed increases when approaching the trap. (C-D) 3D traces of all the trapping events acquired from 45 independent movies at 36 mW (C) and 240 mW (D). (E) Distribution of radial speed for the different trapping laser powers used (36, 60, 120, and 240 mW). (F) Trapping rate as a function of the trapping laser power after the objective; the red solid line is a linear fit, R$^2$ = 0.98.

Figures 6(C) and 6(D) show an overlay of all the traces obtained from tracking more than 50 fluorescent beads at two trapping powers, 36 mW and 240 mW (Fig. S8 in section 7 of Supplement 1 shows the same plot for every power used). The trapped beads can be seen moving toward the focal spot. Notably, at low trapping power the traces exhibit more Brownian motion than at the higher power (less directed motion, noisier traces). This is because the active motion is slower at lower power, so the relative contribution of Brownian motion to the overall motion is larger. Another notable difference between the low- and high-power regimes is that the trajectories seem to end at lower z positions for high trapping powers. Indeed, as depicted in Fig. 6(C), most of the traces end close to 4 µm, while in Fig. 6(D) they usually end between 3 and 4 µm. One reason for this is that at higher powers particles were trapped very quickly, to the point where some particles were already trapped before the acquisition started. Even with the trap placed out of focus to avoid interference with the detection of the particles, the presence of one (or more) trapped particles can affect the detection of moving particles close to the focus of the trap. Another reason is that at high power the speed and acceleration are so high that a particle would sometimes appear to jump from its position to the centre of the trap, which lies outside our imaging volume, so the particle was not detected. The radial speed was calculated for each trajectory, and its dependence on the trapping power is shown in Fig. 6(E). As the trapping power increases, the distribution shifts towards higher speed values.

In addition to the increase in speed, the trapping power affects the rate at which particles are trapped. We calculated the trapping rate by counting the number of beads trapped in each time-lapse movie and dividing it by the total measurement time (movies where no event was observed were included). As depicted in Fig. 6(F), at 36 mW there are about 0.1 trapping events per second, which means that it takes on average 10 s to observe a trapping event at this particle concentration. We observed a linear trend of the trapping rate as a function of power, reaching a maximum of 0.75 trapping events per second at 240 mW. The linear trend can be explained by considering the experimental parameters that influence the trapping rate: (i) the particle concentration, which determines the probability of finding a particle within the laser field and is constant across all experiments, and (ii) the laser power, which determines how fast a particle is pushed to the trap once it has entered the optical force field. Since the gradient force outside the focus is low, it can be neglected; consequently, the scattering force is responsible for the movement of the particles towards the trap. As the scattering force depends linearly on the laser power, a linear trend is obtained for the trapping rate [66].
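The trapping-rate estimate is simply pooled counting, followed by a linear fit against power. In the sketch below, the 36 mW and 240 mW rates are the values quoted above, while the two intermediate rates are illustrative placeholders (the per-movie counts are not given in the text):

```python
import numpy as np

def trapping_rate(events_per_movie, durations_s):
    """Pooled trapping rate in events/s; movies with zero events still
    contribute their duration to the denominator."""
    return sum(events_per_movie) / sum(durations_s)

# Laser powers after the objective (mW); end-point rates from the text,
# intermediate rates are illustrative, not measured values.
powers = np.array([36.0, 60.0, 120.0, 240.0])
rates = np.array([0.10, 0.19, 0.38, 0.75])       # events/s
slope, intercept = np.polyfit(powers, rates, 1)  # linear trend vs power
```

A high R$^2$ for the degree-1 fit is what supports the stated linear dependence on power.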

7. Conclusions

We demonstrated that the multiplane setup and analysis framework presented here are suitable for fast particle tracking in three dimensions with high precision (reaching sub-10 nm localization precision in the x, y, and z directions). We also showed that our analysis method, which requires only a single one-dimensional fit to obtain the z-coordinate of a particle, yields better results than astigmatism-based localization at low signal-to-noise ratios (SNR), whilst giving similar results at higher SNR. Additionally, as no sample-specific calibration is required, this method is widely applicable. Furthermore, we showed that we could track 0.2, 0.5, and 1.0 µm fluorescent beads in water and determine diffusion coefficients and viscosity with high accuracy. Finally, we demonstrated that we could follow particles in suspension as they moved towards an optical trap. From the analysis of the obtained 3D trajectories, we can retrieve properties of the particle motion such as the speed and the trapping rate. The presented method will pave the way to further 3D studies in live cells (e.g. proteins, mitochondria), characterization of complex fluids (e.g. microrheology), and unravelling the motion of particles inside optical vortices, where conventional microscopy methods typically fail.

8. Material and methods

8.1 Multiplane microscope

A schematic of the setup is shown in Fig. 1. A 488 nm laser line (100 mW, Spectra-Physics) is focused at the back focal plane of a 60x water-immersion objective (Olympus; UPlanSApo60XW) via a wide-field lens (WL, Thorlabs, plano-convex, f = 150 mm). The excitation light is guided into the sample using a dichroic mirror reflecting 488, 561, and 640 nm (Chroma Technology). The emission is collimated by the objective and guided towards the emission path, where the fluorescence signal is separated from the excitation laser light using a bandpass filter (z488/561m, Chroma Technology). The emission light passes through a set of lenses in a telecentric 4f configuration [objective - Lens 1 (L1, Thorlabs, plano-convex, f = 140 mm) - field stop - Lens 2 (L2, Thorlabs, plano-convex, f = 140 mm) - tube lens (TL, Thorlabs, plano-convex, f = 200 mm)]. The first lens (L1) creates an image that is then cropped by the field stop. The tube lens creates the final image acquired by the cameras. Between the tube lens and the cameras, there is a proprietary prism (patent EP3049859A1, specification LOB-EPFL, Lausanne; manufacturing Schott SA, Yverdon, Switzerland) which enables multiplane imaging. The central axis of the prism acts as a beam-splitter which, after multiple passes, yields eight beams. The asymmetric design of the prism ensures that each of these beams has a different path length through the prism, yielding images at different focal depths [46]. The different imaging planes are collected by the two cameras, 4 image planes each (raw images are shown in section 1 of Supplement 1, Fig. S1). To allow fast imaging with proper timing, hardware synchronization is done via a National Instruments board (NI, USB-6343), and home-built LabVIEW software is used to trigger the simultaneous acquisition. To maintain telecentricity, the distance between the sample and the prism cannot be altered. Therefore, focusing is achieved by moving the sample stage using 3 single-axis PI motors (E-871-1A1), controlled with Micromanager [75,76]. For the evaluation of the combination of multiplane imaging and PSF engineering, a cylindrical lens (Melles Griot, RCX-25.0-30.0-5000.O-U) was added between Lens 2 and the tube lens. For the optical trapping experiments, a continuous-wave Nd:YAG 1064 nm laser (Spectra-Physics) was installed. The 1064 nm trapping laser was guided to the sample by reflection on IR mirrors (Thorlabs), then collimated via a beam expander that ensures filling of the back aperture of the objective. The laser is then focused into the sample by the objective, generating the optical trap. The axial position of the optical trap was controlled by adjusting the relative position of the two lenses forming the beam expander.

8.2 Setup calibration sample

The 2D calibration sample consists of 0.2 µm fluorescent spheres (FluoSphere, 505/515, carboxylate-modified, ThermoFisher Scientific) spin-cast onto a glass coverslip. Before preparing the sample, the coverslip was cleaned by heat treatment (>24 h at 450$^{\circ}$C) followed by ozone treatment for 60 min. To reduce aberrations due to refractive index variations, a drop of a polyisocyanopeptide-based hydrogel (PIC-gel) was added onto the sample at a concentration higher than 2 mg/mL [77]. The PIC-gel was then formed by warming the solution above the gelation temperature (typically between 16 and 20$^{\circ}$C).

8.3 3D calibration sample

The 3D sample was prepared by mixing a solution containing 0.2 µm fluorescent beads (0.001% solid - a 2000x dilution from the 2% solid stock) and a PIC solution (4 mg/ml) in a 1:1 ratio, on ice. The mixture was added to a small imaging chamber (9 mm DIA x 0.12 mm depth; Grace Bio-Labs, Bend, Oregon, USA) glued onto a clean coverslip. Directly afterward, the chamber was sealed with an additional coverslip to prevent evaporation. The sample thickness in the chamber is therefore 120 µm. To induce gelation of the PIC-gel, the sample was slowly heated to 40$^{\circ}$C for 5 min.

8.4 Water viscosity sample

A solution of 0.2 µm, 0.5 µm, or 1 µm fluorescent beads (FluoSphere, 505/515, carboxylate-modified, ThermoFisher Scientific) was diluted in Milli-Q water (5000x dilution of stock, 0.0004% solid). Then, 8 µl were added to an imaging spacer (9 mm DIA x 0.12 mm depth; Grace Bio-Labs, Bend, Oregon, USA) stuck onto a clean coverslip. The sample was then covered with another clean coverslip. All experiments were performed in a controlled environment (23$^{\circ}$C).

8.5 Trapping sample

A solution of 0.2 µm diameter fluorescent polystyrene (PS) beads in Milli-Q water (2000x dilution of stock, 0.001% solid) was added to a clean coverslip. The sample was sealed using an imaging spacer (9 mm DIA x 0.12 mm depth; Grace Bio-Labs, Bend, Oregon, USA) and another clean coverslip.

Funding

Ministry of Education; Ministry of Science and Technology, Taiwan (108-2112-M-009-008, 108-2113-M-009-015, 109-2634-F-009-028); KU Leuven (C14/16/053); Fonds Wetenschappelijk Onderzoek (1186220N, 11B1119N, 12J2616N, 12Z8120N, 1529418N, G0A817N).

Acknowledgement

We thank Dr. Olivier Deschaume for measuring the hydrodynamic radius of the different PS beads used and Dr. Eduard Fron for the help provided with the software for camera synchronization.

Disclosures

The authors declare no conflicts of interest.

See Supplement 1 for supporting content.

References

1. B. O. Leung and K. C. Chou, “Review of Super-Resolution Fluorescence Microscopy for Biology,” Appl. Spectrosc. 65(9), 967–980 (2011). [CrossRef]  

2. D. Wöll, E. Braeken, A. Deres, F. C. De Schryver, H. Uji-i, and J. Hofkens, “Polymers and single molecule fluorescence spectroscopy, what can we learn?” Chem. Soc. Rev. 38(2), 313–328 (2009). [CrossRef]  

3. D. Wöll and C. Flors, “Super-resolution Fluorescence Imaging for Materials Science,” Small Methods 1(10), 1700191 (2017). [CrossRef]  

4. T. Cordes and S. A. Blum, “Opportunities and challenges in single-molecule and single-particle fluorescence microscopy for mechanistic studies of chemical reactions,” Nat. Chem. 5(12), 993–999 (2013). [CrossRef]  

5. W. E. Moerner, Y. Shechtman, and Q. Wang, “Single-molecule spectroscopy and imaging over the decades,” Faraday Discuss. 184, 9–36 (2015). [CrossRef]  

6. S. Shashkova and M. Leake, “Single-molecule fluorescence microscopy review: shedding new light on old problems,” Biosci. Rep. 37(4), BSR20170031 (2017). [CrossRef]  

7. B. Huang, M. Bates, and X. Zhuang, “Super-resolution fluorescence microscopy,” Annu. Rev. Biochem. 78(1), 993–1016 (2009). [CrossRef]  

8. J. Tam and D. Merino, “Stochastic optical reconstruction microscopy (STORM) in comparison with stimulated emission depletion (STED) and other imaging methods,” J. Neurochem. 135(4), 643–658 (2015). [CrossRef]  

9. S. Abrahamsson, H. Blom, A. Agostinho, D. C. Jans, A. Jost, M. Müller, L. Nilsson, K. Bernhem, T. J. Lambert, R. Heintzmann, and H. Brismar, “Multifocus structured illumination microscopy for fast volumetric super-resolution imaging,” Biomed. Opt. Express 8(9), 4135 (2017). [CrossRef]  

10. P. R. Nicovich, D. M. Owen, and K. Gaus, “Turning single-molecule localization microscopy into a quantitative bioanalytical tool,” Nat. Protoc. 12(3), 453–460 (2017). [CrossRef]  

11. G. Vicidomini, P. Bianchini, and A. Diaspro, “STED super-resolved microscopy,” Nat. Methods 15(3), 173–182 (2018). [CrossRef]  

12. C.-H. Lu, W.-C. Tang, Y.-T. Liu, S.-W. Chang, F. C. M. Wu, C.-Y. Chen, Y.-C. Tsai, S.-M. Yang, C.-W. Kuo, Y. Okada, Y.-K. Hwu, P. Chen, and B.-C. Chen, “Lightsheet localization microscopy enables fast, large-scale, and three-dimensional super-resolution imaging,” Commun. Biol. 2(1), 177 (2019). [CrossRef]  

13. Y. Chen, S. Zhu, W. Kan, F. Chen, L. Zhang, B. Ding, and Z. Shen, “Rapid determination of fluorescent molecular orientation in leakage radiation microscopy,” Results Phys. 16, 102938 (2020). [CrossRef]  

14. D. Axelrod, D. Koppel, J. Schlessinger, E. Elson, and W. Webb, “Mobility measurement by analysis of fluorescence photobleaching recovery kinetics,” Biophys. J. 16(9), 1055–1069 (1976). [CrossRef]  

15. D. Magde, E. Elson, and W. W. Webb, “Thermodynamic Fluctuations in a Reacting System–Measurement by Fluorescence Correlation Spectroscopy,” Phys. Rev. Lett. 29(11), 705–708 (1972). [CrossRef]  

16. N. Petersen, P. Höddelius, P. Wiseman, O. Seger, and K. Magnusson, “Quantitation of membrane receptor distributions by image correlation spectroscopy: concept and application,” Biophys. J. 65(3), 1135–1146 (1993). [CrossRef]  

17. M. A. Digman, C. M. Brown, P. Sengupta, P. W. Wiseman, A. R. Horwitz, and E. Gratton, “Measuring Fast Dynamics in Solutions and Cells with a Laser Scanning Microscope,” Biophys. J. 89(2), 1317–1327 (2005). [CrossRef]  

18. C. M. Brown, R. B. Dalal, B. Hebert, M. A. Digman, A. R. Horwitz, and E. Gratton, “Raster image correlation spectroscopy (RICS) for measuring fast protein dynamics and concentrations with a commercial laser scanning confocal microscope,” J. Microsc. 229(1), 78–91 (2008). [CrossRef]  

19. R. Cerbino and V. Trappe, “Differential Dynamic Microscopy: Probing Wave Vector Dependent Dynamics with a Microscope,” Phys. Rev. Lett. 100(18), 188102 (2008). [CrossRef]  

20. H. C. Berg, “How to Track Bacteria,” Rev. Sci. Instrum. 42(6), 868–871 (1971). [CrossRef]  

21. S. Ram, D. Kim, R. J. Ober, and E. S. Ward, “3D Single Molecule Tracking with Multifocal Plane Microscopy Reveals Rapid Intercellular Transferrin Transport at Epithelial Cell Barriers,” Biophys. J. 103(7), 1594–1603 (2012). [CrossRef]  

22. K. Notelaers, N. Smisdom, S. Rocha, D. Janssen, J. C. Meier, J.-M. Rigo, J. Hofkens, and M. Ameloot, “Ensemble and single particle fluorimetric techniques in concerted action to study the diffusion and aggregation of the glycine receptor α3 isoforms in the cell plasma membrane,” Biochim. Biophys. Acta, Biomembr. 1818(12), 3131–3140 (2012). [CrossRef]  

23. K. Notelaers, S. Rocha, R. Paesen, N. Smisdom, B. De Clercq, J. C. Meier, J.-M. Rigo, J. Hofkens, and M. Ameloot, “Analysis of α3 GlyR single particle tracking in the cell membrane,” Biochim. Biophys. Acta, Biomembr. 1843(3), 544–553 (2014). [CrossRef]  

24. G. Eelen, C. Dubois, A. R. Cantelmo, J. Goveia, U. Brüning, M. DeRan, G. Jarugumilli, J. van Rijssel, G. Saladino, F. Comitani, A. Zecchin, S. Rocha, R. Chen, H. Huang, S. Vandekeere, J. Kalucka, C. Lange, F. Morales-Rodriguez, B. Cruys, L. Treps, L. Ramer, S. Vinckier, K. Brepoels, S. Wyns, J. Souffreau, L. Schoonjans, W. H. Lamers, Y. Wu, J. Haustraete, J. Hofkens, S. Liekens, R. Cubbon, B. Ghesquiére, M. Dewerchin, F. L. Gervasio, X. Li, J. D. van Buul, X. Wu, and P. Carmeliet, “Role of glutamine synthetase in angiogenesis beyond glutamine synthesis,” Nature 561(7721), 63–69 (2018). [CrossRef]  

25. D. Wöll, H. Uji-i, T. Schnitzler, J.-I. Hotta, P. Dedecker, A. Herrmann, F. C. De Schryver, K. Müllen, and J. Hofkens, “Radical Polymerization Tracked by Single Molecule Spectroscopy,” Angew. Chem. Int. Ed. 47(4), 783–787 (2008). [CrossRef]  

26. S. Rocha, J. A. Hutchison, K. Peneva, A. Herrmann, K. Müllen, M. Skjøt, C. I. Jørgensen, A. Svendsen, F. C. De Schryver, J. Hofkens, and H. Uji-i, “Linking Phospholipase Mobility to Activity by Single-Molecule Wide-Field Microscopy,” ChemPhysChem 10(1), 151–161 (2009). [CrossRef]  

27. Y. Tian, M. V. Kuzimenkova, M. Xie, M. Meyer, P.-O. Larsson, and I. G. Scheblykin, “Watching two conjugated polymer chains breaking each other when colliding in solution,” NPG Asia Mater 6(10), e134 (2014). [CrossRef]  

28. F. C. Hendriks, F. Meirer, A. V. Kubarev, Z. Ristanović, M. B. J. Roeffaers, E. T. C. Vogt, P. C. A. Bruijnincx, and B. M. Weckhuysen, “Single-Molecule Fluorescence Microscopy Reveals Local Diffusion Coefficients in the Pore Network of an Individual Catalyst Particle,” J. Am. Chem. Soc. 139(39), 13632–13635 (2017). [CrossRef]  

29. A. J. Levine and T. C. Lubensky, “One- and Two-Particle Microrheology,” Phys. Rev. Lett. 85(8), 1774–1777 (2000). [CrossRef]  

30. J. C. Crocker, M. T. Valentine, E. R. Weeks, T. Gisler, P. D. Kaplan, A. G. Yodh, and D. A. Weitz, “Two-Point Microrheology of Inhomogeneous Soft Materials,” Phys. Rev. Lett. 85(4), 888–891 (2000). [CrossRef]  

31. L. Gardini, M. Capitanio, and F. S. Pavone, “3D tracking of single nanoparticles and quantum dots in living cells by out-of-focus imaging with diffraction pattern recognition,” Sci. Rep. 5(1), 16088 (2015). [CrossRef]  

32. B. Huang, W. Wang, M. Bates, and X. Zhuang, “Three-dimensional super-resolution imaging by stochastic optical reconstruction microscopy,” Science 319(5864), 810–813 (2008). [CrossRef]  

33. S. R. P. Pavani, M. A. Thompson, J. S. Biteen, S. J. Lord, N. Liu, R. J. Twieg, R. Piestun, and W. E. Moerner, “Three-dimensional, single-molecule fluorescence imaging beyond the diffraction limit by using a double-helix point spread function,” Proc. Natl. Acad. Sci. U. S. A. 106(9), 2995–2999 (2009). [CrossRef]  

34. M. Badieirostami, M. D. Lew, M. A. Thompson, and W. E. Moerner, “Three-dimensional localization precision of the double-helix point spread function versus astigmatism and biplane,” Appl. Phys. Lett. 97(16), 161103 (2010). [CrossRef]  

35. M. A. Thompson, M. D. Lew, M. Badieirostami, and W. E. Moerner, “Localizing and Tracking Single Nanoscale Emitters in Three Dimensions with High Spatiotemporal Resolution Using a Double-Helix Point Spread Function,” Nano Lett. 10(1), 211–218 (2010). [CrossRef]  

36. G. Sancataldo, L. Scipioni, T. Ravasenga, L. Lanzanó, A. Diaspro, A. Barberis, and M. Duocastella, “Three-dimensional multiple-particle tracking with nanometric precision over tunable axial ranges,” Optica 4(3), 367 (2017). [CrossRef]  

37. M. Duocastella, C. Theriault, and C. B. Arnold, “Three-dimensional particle tracking via tunable color-encoded multiplexing,” Opt. Lett. 41(5), 863 (2016). [CrossRef]  

38. M. F. Juette, T. J. Gould, M. D. Lessard, M. J. Mlodzianoski, B. S. Nagpure, B. T. Bennett, S. T. Hess, and J. Bewersdorf, “Three-dimensional sub-100 nm resolution fluorescence microscopy of thick samples,” Nat. Methods 5(6), 527–529 (2008). [CrossRef]  

39. S. Ram, P. Prabhat, J. Chao, E. S. Ward, and R. J. Ober, “High accuracy 3D quantum dot tracking with multifocal plane microscopy for the study of fast intracellular dynamics in live cells,” Biophys. J. 95(12), 6025–6043 (2008). [CrossRef]  

40. S. Ram, P. Prabhat, E. S. Ward, and R. J. Ober, “Improved single particle localization accuracy with dual objective multifocal plane microscopy,” Opt. Express 17(8), 6881 (2009). [CrossRef]  

41. S. Abrahamsson, J. Chen, B. Hajj, S. Stallinga, A. Y. Katsov, J. Wisniewski, G. Mizuguchi, P. Soule, F. Mueller, C. D. Darzacq, X. Darzacq, C. Wu, C. I. Bargmann, D. A. Agard, M. Dahan, and M. G. L. Gustafsson, “Fast multicolor 3D imaging using aberration-corrected multifocus microscopy,” Nat. Methods 10(1), 60–63 (2013). [CrossRef]  

42. S. Geissbuehler, A. Sharipov, A. Godinat, N. L. Bocchio, P. A. Sandoz, A. Huss, N. A. Jensen, S. Jakobs, J. Enderlein, F. Gisou van der Goot, E. A. Dubikovskaya, T. Lasser, and M. Leutenegger, “Live-cell multiplane three-dimensional super-resolution optical fluctuation imaging,” Nat. Commun. 5(1), 5830 (2014). [CrossRef]  

43. B. Hajj, J. Wisniewski, M. El Beheiry, J. Chen, A. Revyakin, C. Wu, and M. Dahan, “Whole-cell, multicolor superresolution imaging using volumetric multifocus microscopy,” Proc. Natl. Acad. Sci. U. S. A. 111(49), 17480–17485 (2014). [CrossRef]  

44. S. Abrahamsson, M. McQuilken, S. B. Mehta, A. Verma, J. Larsch, R. Ilic, R. Heintzmann, C. I. Bargmann, A. S. Gladfelter, and R. Oldenbourg, “MultiFocus Polarization Microscope (MF-PolScope) for 3D polarization imaging of up to 25 focal planes simultaneously,” Opt. Express 23(6), 7734 (2015). [CrossRef]  

45. L. Oudjedi, J.-B. Fiche, S. Abrahamsson, L. Mazenq, A. Lecestre, P.-F. Calmon, A. Cerf, and M. Nöllmann, “Astigmatic multifocus microscopy enables deep 3D super-resolved imaging,” Biomed. Opt. Express 7(6), 2163 (2016). [CrossRef]  

46. A. Descloux, K. S. Grußmayer, E. Bostan, T. Lukes, A. Bouwens, A. Sharipov, S. Geissbuehler, A.-L. Mahul-Mellier, H. A. Lashuel, M. Leutenegger, and T. Lasser, “Combined multi-plane phase retrieval and super-resolution optical fluctuation imaging for 4D cell microscopy,” Nat. Photonics 12(3), 165–172 (2018). [CrossRef]  

47. A. Huang, D. Chen, H. Li, D. Tang, B. Yu, J. Li, and J. Qu, “Three-dimensional tracking of multiple particles in large depth of field using dual-objective bifocal plane imaging,” Chin. Opt. Lett. 18(7), 071701 (2020). [CrossRef]  

48. Y. Shechtman, L. E. Weiss, A. S. Backer, S. J. Sahl, and W. E. Moerner, “Precise Three-Dimensional Scan-Free Multiple-Particle Tracking over Large Axial Ranges with Tetrapod Point Spread Functions,” Nano Lett. 15(6), 4194–4199 (2015). [CrossRef]  

49. E. Nehme, D. Freedman, R. Gordon, B. Ferdman, L. E. Weiss, O. Alalouf, T. Naor, R. Orange, T. Michaeli, and Y. Shechtman, “DeepSTORM3D: dense 3D localization microscopy and PSF design by deep learning,” Nat. Methods 17(7), 734–740 (2020). [CrossRef]  

50. H. Kirshner, F. Aguet, D. Sage, and M. Unser, “3-D PSF fitting for fluorescence microscopy: implementation and localization application,” J. Microsc. 249(1), 13–25 (2013). [CrossRef]  

51. S.-L. Liu, J. Li, Z.-L. Zhang, Z.-G. Wang, Z.-Q. Tian, G.-P. Wang, and D.-W. Pang, “Fast and High-Accuracy Localization for Three-Dimensional Single-Particle Tracking,” Sci. Rep. 3(1), 2462 (2013). [CrossRef]  

52. W. Liu, K. C. Toussaint, C. Okoro, D. Zhu, Y. Chen, C. Kuang, and X. Liu, “Breaking the Axial Diffraction Limit: A Guide to Axial Super-Resolution Fluorescence Microscopy,” Laser Photonics Rev. 12(8), 1700333 (2018). [CrossRef]  

53. P. Prabhat, Z. Gan, J. Chao, S. Ram, C. Vaccaro, S. Gibbons, R. J. Ober, and E. S. Ward, “Elucidation of intracellular recycling pathways leading to exocytosis of the Fc receptor, FcRn, by using multifocal plane microscopy,” Proc. Natl. Acad. Sci. 104(14), 5889–5894 (2007). [CrossRef]  

54. E. Toprak, H. Balci, B. H. Blehm, and P. R. Selvin, “Three-Dimensional Particle Tracking via Bifocal Imaging,” Nano Lett. 7(7), 2043–2045 (2007). [CrossRef]  

55. X. Wang, H. Yi, I. Gdor, M. Hereld, and N. F. Scherer, “Nanoscale Resolution 3D Snapshot Particle Tracking by Multifocal Microscopy,” Nano Lett. 19(10), 6781–6787 (2019). [CrossRef]  

56. A. Sergé, N. Bertaux, H. Rigneault, and D. Marguet, “Dynamic multiple-target tracing to probe spatiotemporal cartography of cell membranes,” Nat. Methods 5(8), 687–694 (2008). [CrossRef]  

57. K. J. A. Martens, A. N. Bader, S. Baas, B. Rieger, and J. Hohlbein, “Phasor based single-molecule localization microscopy in 3D (pSMLM-3D): An algorithm for MHz localization rates using standard CPUs,” The J. Chem. Phys. 148(12), 123311 (2018). [CrossRef]  

58. J. C. Crocker and D. G. Grier, “Methods of Digital Video Microscopy for Colloidal Studies,” J. Colloid Interface Sci. 179(1), 298–310 (1996). [CrossRef]  

59. J. Munkres, “Algorithms for the Assignment and Transportation Problems,” J. Soc. Ind. Appl. Math. 5(1), 32–38 (1957). [CrossRef]  

60. R. McGorty, J. Schnitzbauer, W. Zhang, and B. Huang, “Correction of depth-dependent aberrations in 3D single-molecule localization and super-resolution microscopy,” Opt. Lett. 39(2), 275 (2014). [CrossRef]  

61. Y. Li, Y.-L. Wu, P. Hoess, M. Mund, and J. Ries, “Depth-dependent PSF calibration and aberration correction for 3D single-molecule localization,” Biomed. Opt. Express 10(6), 2708 (2019). [CrossRef]  

62. X. Michalet, “Mean square displacement analysis of single-particle trajectories with localization error: Brownian motion in an isotropic medium,” Phys. Rev. E 82(4), 041914 (2010). [CrossRef]  

63. A. Einstein, “über die von der molekularkinetischen Theorie der Wärme geforderte Bewegung von in ruhenden Flüssigkeiten suspendierten Teilchen,” Ann. Phys. 322(8), 549–560 (1905). [CrossRef]  

64. M. Saxton, “Single-particle tracking: the distribution of diffusion coefficients,” Biophys. J. 72(4), 1744–1753 (1997). [CrossRef]  

65. L. Korson, W. Drost-Hansen, and F. J. Millero, “Viscosity of water at various temperatures,” J. Phys. Chem. 73(1), 34–39 (1969). [CrossRef]  

66. A. Ashkin, J. M. Dziedzic, J. E. Bjorkholm, and S. Chu, “Observation of a single-beam gradient force optical trap for dielectric particles,” Opt. Lett. 11(5), 288 (1986). [CrossRef]  

67. A. Ashkin, “Forces of a single-beam gradient laser trap on a dielectric sphere in the ray optics regime,” Biophys. J. 61(2), 569–582 (1992). [CrossRef]  

68. H. Masuhara and F. C. De Schryver, Microchemistry: spectroscopy and chemistry in small domains, North-Holland delta series (North-Holland, Amsterdam, New York, 1994).

69. T. Sugiyama, K.-I. Yuyama, and H. Masuhara, “Laser trapping chemistry: from polymer assembly to amino acid crystallization,” Acc. Chem. Res. 45(11), 1946–1954 (2012). [CrossRef]  

70. K.-I. Yuyama, T. Sugiyama, and H. Masuhara, “Laser Trapping and Crystallization Dynamics of l-Phenylalanine at Solution Surface,” J. Phys. Chem. Lett. 4(15), 2436–2440 (2013). [CrossRef]  

71. A. Ashkin, J. M. Dziedzic, and T. Yamane, “Optical trapping and manipulation of single cells using infrared laser beams,” Nature 330(6150), 769–771 (1987). [CrossRef]  

72. A. Ashkin, “Optical trapping and manipulation of neutral particles using lasers,” Proc. Natl. Acad. Sci. 94(10), 4853–4860 (1997). [CrossRef]  

73. T. Kudo, S.-F. Wang, K.-I. Yuyama, and H. Masuhara, “Optical Trapping-Formed Colloidal Assembly with Horns Extended to the Outside of a Focus through Light Propagation,” Nano Lett. 16(5), 3058–3062 (2016). [CrossRef]  

74. T. Kudo, S.-J. Yang, and H. Masuhara, “A Single Large Assembly with Dynamically Fluctuating Swarms of Gold Nanoparticles Formed by Trapping Laser,” Nano Lett. 18(9), 5846–5853 (2018). [CrossRef]  

75. A. Edelstein, N. Amodaj, K. Hoover, R. Vale, and N. Stuurman, “Computer Control of Microscopes Using µManager,” Curr. Protoc. Mol. Biol. (2010).

76. A. D. Edelstein, M. A. Tsuchida, N. Amodaj, H. Pinkard, R. D. Vale, and N. Stuurman, “Advanced methods of microscope control using µManager software,” J. Biol. Methods 1(2), 10 (2014). [CrossRef]  

77. P. H. J. Kouwer, M. Koepf, V. A. A. Le Sage, M. Jaspers, A. M. van Buul, Z. H. Eksteen-Akeroyd, T. Woltinge, E. Schwartz, H. J. Kitto, R. Hoogenboom, S. J. Picken, R. J. M. Nolte, E. Mendes, and A. E. Rowan, “Responsive biomimetic networks from polyisocyanopeptide hydrogels,” Nature 493(7434), 651–655 (2013). [CrossRef]  

Supplementary Material (1)

Supplement 1: supplemental document



Figures (6)

Fig. 1.
Fig. 1. Schematic of the multiplane wide-field microscope. The trapping laser, the cylindrical lens, and the proprietary prism are the three main components that differ from conventional wide-field microscopes. The inset diagram shows the extended, interleaved, and equal camera configurations, which can be obtained by modifying the relative alignment of the two cameras. $\delta z$ is the focal distance between two consecutive planes of one camera, d is the lateral distance between two consecutive images on the camera (determined by the prism geometry), n is the refractive index of the prism, and ML is the in-plane magnification. Bottom: images of 2 beads in the 8 planes acquired simultaneously using the extended configuration, showing how the multiplane setup allows for simultaneous 3D imaging.
Fig. 2.
Fig. 2. Plane calibration: A and B show one of the image planes for two different z-slices, in focus (A) and out of focus (B). C and D show the image intensity gradient corresponding to A and B, respectively. The maximum of the image intensity gradient (C, D) is then plotted in E as a function of z position. E shows the maximum image intensity gradient as a function of z (dots) for all image planes. The solid lines indicate the Gaussian fits used to determine the position of each plane and calculate the distance between them.
Fig. 3.
Fig. 3. A) Tracking traces of individual particles for unidirectional motor-induced motion. The shaded region on the plot shows $\pm$3 times the mean standard deviation, representing a 99.7% confidence level. The solid bold line represents the real trace from the piezo stage. B) Tracking of helical motion induced by the piezo stage using the extended configuration. The helix is 400 nm in diameter, with 1 µm per turn and a 50 nm step size in z. The home-built codes used to generate defined motion paths with the stage (linear, elliptical, 1D or 2D motion) were written in Java, adapted for micro-manager, and are freely available at https://github.com/BorisLouis/MMScript.
Fig. 4.
Fig. 4. Comparison between the phasor and astigmatism approaches. A) PSF of a single emitter in 3D. B) Ellipticity(z) curves for the different imaging planes. C) Axial precision of the two tested approaches as a function of the number of photons detected. For both approaches, the axial precision was calculated using a 1D stepwise motion (see the previous section).
Fig. 5.
Fig. 5. 3D single-particle tracking. A) A representative example of an experimental 3D diffusion trace for a 1 µm bead in water. The color indicates time in seconds. B) 3D MSD vs lag time for 0.2, 0.5, and 1 µm diameter PS fluorescent particles, with the corresponding fits at short lag times (dashed lines). C) Dependence of the 3D diffusion coefficient on the bead diameter.

Equations (2)

$\mathrm{MSD}(\tau) = 2\,n\,D_n\,\tau$    (1)

$D = \frac{k_B T}{6 \pi \eta r_0}$    (2)