Optica Publishing Group

Monte Carlo simulation and maximum-likelihood analysis of single-molecule recycling in a nanochannel

Open Access

Abstract

Prolonged observation of a single molecule in solution using a confocal microscope is possible by flowing solution through a nanochannel and reversing the flow a fixed delay after each passage so that the molecule passes back and forth through the laser focus. In this paper, Monte Carlo simulations are used to provide insight into the capabilities and limitations of the single-molecule recycling procedure. Various computational methods for using photon detection times to estimate the times of passage of the molecule through the laser focus, based on matched digital filters and maximum-likelihood (ML) analysis, are compared using simulations. A new ML-based methodology is developed for estimating the single-molecule diffusivity, and the uncertainty in the estimate, from the variation in the intervals between times of passage. Simulations show that with ∼200 recycles, it should be possible to resolve molecules with diffusivities that differ by a factor of ∼1.3, which is smaller than that resolvable in ligand-binding measurements by fluorescence correlation spectroscopy. Also, it is found that the mean number of times a molecule is recycled can be extended by adjusting the delay between flow reversals to accommodate the diffusional motion of statistical outliers.

© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Fluorescence detection of single molecules provides unique capabilities for observation of heterogeneities and dynamical processes. [1,2] Confocal microscopy usually offers the best signal-to-noise ratio [3] and the fastest time resolution, as fluorescence from only a small volume is focused onto a point detector, such as a single-photon avalanche diode (SPAD), which measures the timing of each photon down to sub-nanosecond precision. However, Brownian diffusion limits the time the molecule remains in the confocal volume to a millisecond or less. Extending the observation time is important for improving the precision of spectroscopic measurements and for witnessing dynamical changes, such as protein folding and molecular interactions. Tethering or optically trapping the molecule may alter its behavior. [4,5] Feedback-driven tracking and trapping, in which the displacement of the molecule from the center of the confocal volume is measured and used for real-time control of the position of the sample or laser focus, offers extended observation time with less perturbation and heating than optical trapping. [6] However, the feedback for such methods, with either piezo [7,8] or electrokinetic [9,10] repositioning in one, [11] two, [12,13] or three [14,15] dimensions, requires a continual fluorescence signal and thus sustained excitation, and hence the observation duration is limited by photobleaching and/or photoblinking.

Single-molecule recycling (SMR) in a nanochannel, in which the molecule quickly passes through the laser focus and the flow is reversed after a set delay following each passage, offers new possibilities for prolonged confocal single-molecule measurements (particularly for single-molecule fluorescence resonance energy transfer studies). [16] As the molecule is in the dark for most of the time between passages, there is opportunity for recovery from photogenerated reversible dark states, and the time before irreversible photobleaching is greatly extended. Even if the laser power is lowered so that some passages are not detected, the flow may be reversed following each anticipated passage, and thus the overall observation duration can be extended to more than 10 s. [16]

In this paper, Monte Carlo (MC) simulations are developed to ascertain limitations and projected capabilities for SMR using parameters from previously published works. [16,17] The approach is similar to that in early reports on single-molecule detection in solution, [18] where simulations were used to validate experimental results and determine feasibility limits for future experiments, and where a digital filter was employed to help discriminate molecular photon bursts from background fluctuations. While this and other early MC simulations used coarse-grained physical models to generate the number of photons in sequential time bins, [19,20] a simulation algorithm that follows photophysical processes with continuous time evolution to generate the time of each photon detection was subsequently developed and applied to studies of fluorescence correlation spectroscopy (FCS) [21] and feedback-driven trapping in a nanochannel. [16] In this work, similar procedures are used to generate the "time stamp" of each detected photon, as is available in modern photon-counting instrumentation. New methods for using the photon time stamps based on a matched digital filter [22] and maximum-likelihood analysis are developed to estimate the exact time of passage of the molecule through the center of the laser focus. Simulations are used to compare these with the photon binning and thresholding method used in previous SMR experiments. The time of passage is used to schedule the time for the next flow reversal, which effects SMR. Simulations are used to test various algorithms for setting flow reversals, with provisions for reversing the flow at either a fixed or increasing delay after an anticipated passage is not detected, and for bringing in a new molecule following repeated missed detections after photobleaching, or a mistimed detection due to an "invader" molecule.
It is found that more sophisticated control algorithms can increase the number of times a single molecule is recycled by accounting for statistical outliers in the diffusional trajectory of the molecule, as well as enabling the laser power to be reduced to delay photobleaching.

In addition to prolonged observation times, SMR provides a means for experimentally investigating molecular motion, interactions, and transport in nano-confined spaces, topics of increasing interest. [23] To this end, this paper presents a new maximum-likelihood (ML) based methodology for measuring single-molecule diffusivity from fluctuations in the intervals between detected passages. The technique is valid even for a small number of recycles and also returns the statistical confidence or error of the estimate. The method for determining the error relies on specifying a maximum possible diffusivity so that the likelihood function may be normalized. A similar approach may be applicable in other single-molecule measurements (e.g., fluorescence lifetime).

In fluidic systems, the diffusivity of a molecule depends on its size and shape and the properties of the suspending medium, making it an important nanoscale characteristic. SMR offers a method for measuring diffusivity that is applicable within the confines of a nanochannel, which is valuable since diffusivity in a nanochannel sometimes differs from that in bulk solution. [24,25] In high-throughput screening and pharmaceutical drug discovery research, a speed-up in the diffusivity of a fluorescent ligand when it becomes unbound from a target biomolecule indicates the competitive binding of one of the molecules from the library being screened. [21] For such studies, the diffusivity may be measured by FCS [26,27] or two-focus FCS [28] by fitting the autocorrelation function. Mixtures of molecular species with different diffusivities, such as bound and unbound fluorescent ligands, can be resolved by FCS, but species with diffusivities differing by less than a factor of about 1.6 are not resolvable, [29] so ligand unbinding from small target proteins is difficult to assay. While FCS is an ensemble measurement, the diffusivity of a single molecule can be estimated in feedback-driven tracking and trapping experiments by fitting the mean-square displacement of the molecule's trajectory, while accounting for corrective motions of the feedback system. [30,31] In tracking experiments, the precision of the measured diffusion coefficient depends much more on the duration of the trajectory or number of frames than on the information content of each frame. [32] A similar situation is found here in that the precision of the diffusivity estimate depends primarily on the number of times the molecule is recycled and not so much on precisely determining the intervals between measured times of passage.
An important finding from the MC simulations is that, by collecting a sufficient number of photon bursts (∼200), it should be possible to resolve single molecules with diffusivities that differ by a factor of ∼1.3, i.e., a smaller amount than is practically resolvable by FCS, and hence SMR could be useful for ligand binding studies in pharmaceutical drug discovery research.

2. Methods

2.1 Monte Carlo simulation

The simulation generates the sequence of photon time stamps resulting when a control algorithm switches the flow direction of solution in the nanochannel in response to real-time analysis of the photons. The models of molecular photophysics and of the noise of the SPAD are based on previous work in Ref. 17. The methods used for simulating molecular motion due to flow and diffusion along the nanochannel, photophysics, SPAD photon detector behavior, and timing errors in recording photon time stamps are explained in section 1 of the supplement. The simulation is written in C with use of the Intel Math Kernel Library.

2.2 Detecting photon bursts

To detect each passage of a molecule through the laser focus, photon bursts must be above the shot noise fluctuations of background photons. In previous SMR experiments, photons were counted into successive 1 ms bins using a PCI-6602 counter card and LabVIEW (National Instruments), and two different thresholds were used to distinguish signal from background noise. A molecule passage was detected if successive bins each had 7 or more photons and if there were at least 40 photons in total (supplement of Ref. 17). However, the 1 ms binning limits the precision of the estimated time of passage. In addition, the authors noted that the time resolution of their feedback was limited by a 2 ms LabVIEW loop time, and that faster time resolution could be possible by analyzing photon arrival times with use of a field programmable gate array (FPGA) or LabVIEW RealTime system.

Therefore, in order to improve the timing of photon bursts as well as their discrimination from background, simulations are used to investigate processing the stream of photon time stamps by a weighted sliding sum (WSS), in which the weights are proportional to the expected temporal profile of the fluorescence signal as a molecule passes through the laser focus. Thus the WSS ideally corresponds to a matched digital filter. For a molecule passing at constant velocity $v$ through a Gaussian laser focus of waist $\omega _0$, the weights are taken to be $w(t) = A\exp (-(t-3\sigma _t)^2/2\sigma _t^2),0 \le t \le 6\sigma _t$, where $A = 128$ provides an amplitude with suitable dynamic range for 16-bit computations and the width of the weight function is ideally $\sigma _t = \omega _0/(2v)$. Section 2 of the supplement explains how the ratio $\omega _0/(2v)$ can be experimentally measured by fitting the normalized autocorrelation function calculated from the photon time stamps. However, the WSS works well even if the width is non-ideal, as may occur if the value of $\omega _0/(2v)$ is not accurately measured, if the experiment is using different types of molecules with differing electrophoretic velocities $v$, or if diffusion causes the effective transit velocity through the laser focus to fluctuate. Section 3 of the supplement explains how the WSS is efficiently computed from the stream of photon time stamps, as implemented in the simulation, and also in an experimental system using a LabVIEW Realtime program with NI-PCI-7833R FPGA data acquisition card. [33]
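To make the WSS concrete, the following is a minimal Python sketch, not the C/LabVIEW implementation described in this paper. The function name and the approach of binning photons at the update interval and convolving with the Gaussian weight kernel are illustrative assumptions; the parameter values ($A = 128$, $\sigma_t = 1.5$ ms, 10 µs update interval) are taken from the text and figure captions.

```python
import numpy as np

def wss(photon_times, sigma_t=1.5e-3, dt_w=10e-6, A=128.0):
    """Weighted sliding sum (matched filter) over photon time stamps.

    Weights follow the expected Gaussian burst profile,
    w(t) = A*exp(-(t - 3*sigma_t)^2 / (2*sigma_t^2)), 0 <= t <= 6*sigma_t.
    Photons are binned at the update interval dt_w and the kernel is
    applied by discrete convolution; element k of the result is the
    WSS at time k*dt_w (delayed by 3*sigma_t from the burst center).
    """
    t_grid = np.arange(0.0, 6.0 * sigma_t + dt_w, dt_w)
    weights = A * np.exp(-(t_grid - 3.0 * sigma_t) ** 2 / (2.0 * sigma_t ** 2))
    n_bins = int(np.ceil(photon_times.max() / dt_w)) + 1
    counts = np.bincount((photon_times / dt_w).astype(int), minlength=n_bins)
    return np.convolve(counts, weights)
```

A burst centered at time $t_0$ produces a WSS maximum near $t_0 + 3\sigma_t$, which is why the passage-time methods described below shift the peak time by $-3\sigma_t$.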

As the WSS is updated, peaks and valleys are found, while ignoring small fluctuations. To distinguish photon bursts due to the passage of a molecule from those due to background, peaks must be above a threshold, as shown in Fig. 1 (solid red line). The threshold is chosen by recording the statistics of background-induced fluctuations in the WSS while no molecules pass, as presented in the insets of Fig. 1. A very low rate of background peaks is preferable to avoid incorrectly timed flow reversals. For a background photon rate of 600 $\textrm {s}^{-1}$, a threshold of 1750 gives about 1 false peak per $10^5$ s (as seen by the red dashed line in the lower inset of Fig. 1), yet is low enough that most molecule passages are detected.
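The threshold-selection procedure just described can be sketched by simulating background-only photons at a given rate, running the WSS, and examining the peak-height statistics. This Python sketch is illustrative only; the function name and the simple local-maximum peak finder are assumptions, not the paper's implementation.

```python
import numpy as np

def max_background_peak(rate, duration, sigma_t=1.5e-3, dt_w=10e-6,
                        A=128.0, seed=1):
    """Simulate Poisson background photons at the given rate, run the
    Gaussian-weighted sliding sum, and return the largest WSS peak
    observed. A detection threshold would be set above the level
    reached at the acceptable false-peak rate (longer simulated
    durations probe lower false-peak rates)."""
    rng = np.random.default_rng(seed)
    photons = np.sort(rng.uniform(0.0, duration, rng.poisson(rate * duration)))
    t_grid = np.arange(0.0, 6.0 * sigma_t + dt_w, dt_w)
    w = A * np.exp(-(t_grid - 3.0 * sigma_t) ** 2 / (2.0 * sigma_t ** 2))
    counts = np.bincount((photons / dt_w).astype(int),
                         minlength=int(duration / dt_w) + 1)
    s = np.convolve(counts, w)
    # heights of local maxima of the background-only WSS
    peaks = s[1:-1][(s[1:-1] > s[:-2]) & (s[1:-1] >= s[2:])]
    return peaks.max()
```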


Fig. 1. The histogram of peak values of the WSS during a $10^4$ s simulated experiment of SMR for parameters of Table 1 in the results section. The solid vertical line (red) is a threshold at an amplitude of 1750, also indicated by the cross (also red) in the upper inset and found by the point of intersection of the dashed line (also red) with the $x$-axis in the lower inset ($10^{-1}$ peaks per $10^4$ s). Peak values above the threshold are almost all due to passage of molecules. For no molecules present, the upper inset shows a plot of the thresholds above which there is a WSS peak rate of 1 per $10^4$ s versus the background count rate for WSS weights of widths $\sigma _t$ = 1.0, 1.5, and 2.0 ms; the lower inset shows the number of peaks below threshold versus threshold during $10^4$ s simulations with $\sigma _t$ = 1.5 ms and background rates of 400, 600, and 800 $\textrm {s}^{-1}$. Similar curves are used to generate the data in the upper inset.



Table 1. Parameters used in MC simulations

2.3 Estimating the time of passage

Once a burst of photons from the passage of a molecule is detected, the passage time $t_p$ (the exact moment at which the molecule passes through the center of the focus) may be estimated in several ways. MC simulations are used to compare four different estimation methods, recycling one molecule with no diffusion and with a fixed time $2T$ between flow reversals, so that the exact time of passage is known. (All other parameters are the same as those in Table 1, results section, except photobleaching is set to 0 and there are no invader molecules.) The timing errors for the four methods are summarized in Fig. 2. Method 1 is to collect photons in 1 ms bins and record the time at which there are successive bins with $\ge 7$ photons and $\ge 40$ photons in total, as implemented in Ref. 17. As expected, the precision in this case is limited by the 1 ms photon binning time. Method 2, which gives significant improvement, is to take the time of the first point at which the WSS attains its maximum value, with a shift of $-3\sigma _t$ to account for the delay between the photons and the evaluated WSS. Method 3 is to take the midpoint of the times at the leading and trailing edges at which the WSS crosses a level halfway between the threshold (1750) and the peak (again with the $-3\sigma _t$ shift). This gives a further small improvement of about $4\%$, probably because photon shot noise can alter the position of the maximum of the peak more than the edges. Method 4, which takes longer to compute, is to average the times of all photons that occur within a chosen interval around the center of the burst, which is taken to be the estimated time of passage first found by method 2. In effect, this gives the ML estimate of the passage time, as the timing of each photon from fluorescence gives an equally likely measure of the molecule passage time. As shown in the inset of Fig. 2, method 4 has the best precision for an interval of $\pm 3\sigma _t$, in which case the timing error is about $84\%$ of that obtained by method 2. Except for method 1, the timing errors are normally distributed with standard deviations $\delta _t$ that vary with the laser power, amplitude of the WSS peak, and number of photons in the burst. For the ML estimate of method 4, if $n$ photons are detected, the expected temporal profile is a Gaussian with standard deviation $\sigma _t$, and background is negligible, then the estimated passage time is theoretically expected to have a shot-noise limited error of $\sigma _t/\sqrt {n}$, and hence the interval between two passages would have an error of $\delta _t = \sqrt {2}\sigma _t/\sqrt {n}$, which is shown by the red circles in Fig. 2. Clearly, bursts with more photons may be more precisely timed, but on the other hand it is favorable to detect each passage with as few photons as possible in order to extend the number of times a molecule may be recycled before it is photobleached. Also, as seen in the inset of Fig. 2, if the interval about the center of the photon burst is increased beyond $\pm {3\sigma _t}$ in an attempt to collect more photons for the ML estimate, the precision decreases as background photons outnumber signal photons near the edges of a burst.
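Method 4 reduces to a windowed mean of photon time stamps. The following is a hedged Python sketch; the function name and interface are illustrative, not the authors' code.

```python
import numpy as np

def ml_passage_time(photon_times, t0, sigma_t=1.5e-3, f=3.0):
    """Method-4-style estimate of the passage time: the mean of the
    photon time stamps falling within +/- f*sigma_t of a coarse
    burst-center estimate t0 (e.g. the WSS peak time shifted by
    -3*sigma_t). With negligible background this is the ML estimate
    for a Gaussian burst profile."""
    photon_times = np.asarray(photon_times, dtype=float)
    sel = np.abs(photon_times - t0) <= f * sigma_t
    return photon_times[sel].mean()
```

Consistent with the inset of Fig. 2, widening the window $f$ beyond ∼3 would admit proportionally more background photons and degrade precision.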


Fig. 2. Timing errors found by recycling a molecule with no diffusion and a fixed time between flow reversals. The main plot shows the standard deviations of the intervals between successive molecule passages as a function of laser power (with background held at 600 $\textrm {s}^{-1}$) for the four methods for estimating the times of passage (solid lines, from highest to lowest standard deviations). The red circles plot the shot-noise limit for the timing error if background is negligible. The inset shows the standard deviation of the ML estimates for a laser power of $98~\mu \textrm {W}$ for intervals of $\pm {f\sigma _t}$, where $f$ varies from 1 to 4.


The technique of WSS filtering followed by ML analysis for estimating the time of passage from the photon burst could be extended to two dimensions for estimating the position of a molecule from its image, for use in single-molecule tracking or sub-diffraction microscopy. [34]

2.4 Setting the times for flow reversals

As soon as the passage time $t_p$ is estimated using one of the above four methods, it may be used to adjust the scheduled time for the next flow reversal. Figure 3 shows several algorithms for doing this, with provisions for automatic recycling at fixed intervals for up to $M$ missed detections, loading a new molecule following the end of recycling or detection of an invader molecule, and adjustment of the recycling timing following detection of a photon burst from the passage of a molecule. An explanation of the algorithm choices for when a photon burst is detected at a time when not expected (as may occur from an invader molecule) is given in section 4 of the supplement. Further refinements of the algorithms are discussed in the results section.


Fig. 3. Algorithms for single-molecule recycling


2.5 ML estimation of diffusivity

The diffusivity of a single molecule in the nanochannel may be determined by analyzing the intervals between passage times using ML methods. To this end, the following result is derived in section 5 of the supplement: if a molecule passes through the center of the focus $x = 0$ at time $T$, and the flow is reversed at time $2T$, the probability that it crosses back through the origin within $dt$ of time $t$ is $P(t)dt$, where

$$P(t)dt = \left(\frac{1}{2} + \frac{T}{t}\right)\frac{v}{\sqrt{4{\pi}Dt}}\exp\left[\frac{-v^2(t - 2T)^2}{4Dt}\right]dt, t \in (0, \infty).$$
As shown in Fig. 4, this function is asymmetric with a peak at $t < 2T$, but for longer recycle times $T \gg D/v^2$ it may be approximated by a Gaussian centered at $t = 2T$ with a width of $\sigma _D = \sqrt {4DT} / v$, i.e., $p(t)dt = N(t - 2T, \sigma _D)dt$, where $N(t, \sigma )$ is a normal distribution with standard deviation $\sigma$. For typical parameters (such as those in Table 1, results section), $D = 10^{-10}~\textrm {m}^{2}\,\textrm{s}^{-1}$, $v = 3.4 \times 10^{-4}~\textrm {m}\,\textrm{s}^{-1}$, and $T = 3 \times 10^{-2}~\textrm {s}$, we have $T \approx 35D/v^2$ and $\sqrt {4DT}/v \approx 10~\textrm {ms}$. In principle, the diffusivity may be estimated from just a single measurement $t_i$ of the interval between any two molecule passages. To simplify the discussion, consider the case where the method used for estimating the passage time from the burst of photons is exact, so the interval between two estimated passage times $t^e_i$ exactly measures the actual interval between the two passages (i.e., $t^e_i = t_i$), where there are no missed detections, and where $0 < t_i < \infty$. According to the ML method, [34] when the measurement result $t_i$ is inserted into Eq. (1), we get a function that expresses the likelihood of the parameter $D$ for the given measurement,
$$L(D;t_i) = \left(\frac{1}{2} + \frac{T}{t_i}\right) \frac{v}{\sqrt{4{\pi}Dt_i}}\exp\left[\frac{-v^2(t_i - 2T)^2}{4Dt_i}\right], D \in [0, \infty).$$


Fig. 4. Probability density function of interval between molecule passages, as given by Eq. (1).


The ML estimate $\hat {D}$ is the value of $D$ for which Eq. (2) is a maximum, which is found by solving $\partial L(D;t_i)/\partial {D} = 0$. As shown in section 7 of the supplement, this gives the ML estimate

$$\hat{D_i} = v^2(t_i - 2T)^2 / (2t_i).$$
When $R$ independent measurements $t_i$, $i = 1,\ldots ,R$, are made of the intervals between detections, if each estimate is of equal reliability, the ML estimate from the set is simply the mean of the individual estimates,
$$\hat{D} = \frac{v^2}{2R}\sum_{i = 1}^{R}\frac{(t_i - 2T)^2}{t_i}.$$
If the Gaussian approximation is valid, we have $L(D;t_i) = \frac {v}{\sqrt {8\pi DT}}\exp [\frac {-v^2(t_i - 2T)^2}{8DT}]$, which gives
$$\hat{D_i} = v^2(t_i - 2T)^2 / (4T),$$
and for $R$ independent measurements, Eq. (4) becomes $\hat {D} = \frac {v^2}{4RT}\sum _{i = 1}^{R} (t_i - 2T)^2$, which is $v^2/(4T)$ multiplied by the variance of the intervals (as implemented in Ref. 16).
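Equations (3)-(5) are straightforward to apply. The following Python sketch (illustrative, not the authors' code) implements the exact-PDF estimator of Eq. (4) and the Gaussian-approximation variance estimator:

```python
import numpy as np

def d_hat_exact(intervals, v, T):
    """ML diffusivity estimate from R measured intervals, Eq. (4):
    D_hat = (v^2 / 2R) * sum((t_i - 2T)^2 / t_i)."""
    t = np.asarray(intervals, dtype=float)
    return v**2 * np.mean((t - 2.0 * T) ** 2 / t) / 2.0

def d_hat_gauss(intervals, v, T):
    """Gaussian-approximation estimate: v^2/(4T) times the mean
    squared deviation of the intervals from 2T (cf. Eq. (5))."""
    t = np.asarray(intervals, dtype=float)
    return v**2 * np.mean((t - 2.0 * T) ** 2) / (4.0 * T)
```

Note that the two estimators agree when the interval fluctuations are small compared with $2T$, i.e., when $T \gg D/v^2$.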

2.6 Confidence limits of the ML estimate

The likelihood function in Eq. (2) and its Gaussian approximation are not normalizable (i.e., $\int _{0}^{\infty }L(D;t_i)dD = \infty$). However, if we restrict measurement results for $D$ to those possible when the interval between passages is in the range $T < t_i < 3T$, then the estimate $\hat {D}$ is always within a finite range $0 \le \hat {D} \le D_{max}$, where $D_{max} = v^2T/4$ if the number of missed detections $m = 0$ and $T \gg D/v^2$. If we restrict $D$ to this same finite domain (or any smaller finite domain based on other a priori knowledge), we can normalize the likelihood function, as the integral over the domain is now finite, e.g.,

$$\int_{0}^{v^2T / 4}L(D;t_i)dD = \frac{v^2}{\sqrt{8\pi}}\exp\left[\frac{-(t_i - 2T)^2}{2T^2}\right] + \frac{v^2|t_i - 2T|}{4T}\left[erf\left(\frac{|t_i - 2T|}{\sqrt{2}T}\right) - 1\right] = K(t_i),$$
where $erf(x)$ is the Gaussian error function. Hence the PDF for $D$ confined to a finite domain, which allows us to determine the confidence limits of an estimate, is
$$p(D;t_i)dD = \frac{L(D;t_i)dD}{\int_{0}^{D_{max}}L(D;t_i)dD} = \frac{1}{K(t_i)}\frac{v}{\sqrt{8\pi DT}}\exp\left[\frac{-v^2(t_i - 2T)^2}{8DT}\right]dD, D \in [0, D_{max}].$$
By setting $x = D / D_{max}$ and $d = |t_i - 2T|/T$, Eq. (7) may be put into the non-dimensionalized form $p(x;d)dx = x^{-1/2}\exp (-d^2/2x)dx / K(d), x \in [0, 1]$, where $K(d) = \int _{0}^{1}x^{-1/2}\exp (-d^2/2x)dx$. Figure 5 shows non-dimensionalized plots of Eq. (7) for several values of $d = |t_i - 2T|/T$, while the inset of Fig. 5 gives an example of how to determine confidence limits for $D$ from the PDF for the case $d = 0.3$, corresponding to a measured interval of $t_i = 2.3T$ or $t_i = 1.7T$. The ML estimate of $D/D_{max}$ is the value at which the PDF is a maximum, which for the example case is $D/D_{max} = d^2 = 0.09$, consistent with Eq. (5). Also, with the prior knowledge that $D$ is in the range $0 \le D \le D_{max}$ and with a single measurement of $t_i = 2T \pm 0.3T$, one can say with $98\%$ confidence that $D > 0.043D_{max}$ as $\int _{0}^{0.043}p(x)dx = 0.02$ and, similarly, one has $90\%$ confidence that $D < 0.923D_{max}$ as $\int _{0.923}^{1}p(x)dx = 0.1$.
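The confidence-limit procedure can be sketched numerically: evaluate the non-dimensionalized likelihood $x^{-1/2}\exp(-d^2/2x)$ (obtained by substituting $x = D/D_{max}$ and $d = |t_i - 2T|/T$ into Eq. (7)) on a grid over $x \in (0, 1]$, normalize, and read off the ML estimate and, if desired, tail integrals for confidence limits. The grid size and function name in this Python sketch are illustrative assumptions.

```python
import numpy as np

def diffusivity_pdf_1meas(d, n_grid=100_000):
    """Normalized PDF of x = D/D_max given one measured interval with
    d = |t_i - 2T| / T, from the non-dimensionalized likelihood
    x^(-1/2) * exp(-d^2 / (2x)). Returns (grid, pdf, ML estimate)."""
    x = np.linspace(1.0 / n_grid, 1.0, n_grid)
    p = x ** -0.5 * np.exp(-d * d / (2.0 * x))
    p /= p.sum() * (x[1] - x[0])          # normalize numerically
    return x, p, x[np.argmax(p)]
```

For $d = 0.3$ the PDF peaks at $x = d^2 = 0.09$, matching the example in the text, and one-sided tail integrals of $p$ give the confidence limits.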


Fig. 5. Plot of Eq. (7) in non-dimensionalized units for five values of $d = |t_i - 2T|/T$ (solid lines in order from top to bottom at the left of the graph), where $D_{max} = v^2T / 4$. The inset shows that for $d =0.3$, the ML estimate is $\hat {D} = 0.09D_{max}$ (indicated by the vertical line), that with $98\%$ confidence $D/D_{max} > 0.035$ (excluding shaded area at left) and that with $90\%$ confidence $D/D_{max} < 0.892$ (excluding shaded area at right).


2.7 Procedure for numerically evaluating the PDF of the diffusivity during SMR

For a dataset of multiple independent measurements $t^e_i, i = 1,2,\ldots ,R$, the PDF for $D$ may be found by normalizing the product of the likelihood functions for each of the measurements:

$$p(D;t^e_1,t^e_2,\ldots,t^e_R)dD = \left.\left[\prod_{i =1}^{R} L(D;t^e_i)\right]dD \middle/ \int_{0}^{\frac{v^2T}{4}}\left[\prod_{i=1}^{R}L(D;t^e_i)\right]\right.dD.$$
Eq. (8) can be evaluated numerically for a given set of measurements $t^e_i, i = 1,2,\ldots ,R$, and then used to find the ML estimate $\hat {D}$ and its confidence limits. To facilitate the computation during the SMR process, each likelihood factor $L(D;t^e_i)$ is divided by $A(D)$ and multiplied by $2\sqrt {2\pi }T$ to get the dimensionless quantity
$$L(D;t^e_i)2\sqrt{2\pi}T = \left[\frac{D}{D_{max}} + \left(\frac{\delta_t}{T}\right)^2\right]^{-1/2} \left.\exp{\frac{-\left[(t^e_i - 2T)/T\right]^2}{2\left[D/D_{max} + (\delta_t/T)^2\right]}}\middle/A(D)\right.,$$
where $A(D) = \int _{T}^{3T}N(t - 2T, \sqrt {4DT/v^2 + \delta _t^2})dt = erf\left(1/\sqrt {2(D/D_{max} + (\delta _t/T)^2)}\right)$, as derived in section 7 of the supplement. For $T<t^e_i<3T$ and $0\le D \le D_{max}$ with $D_{max} = v^2T/4$, the logarithm of Eq. (9) is evaluated for a two-dimensional array of possible values of $D$ (i.e., for $D_j = jD_{max}/j_{max}, j = 0,1,\ldots ,j_{max}$, with $j_{max} \approx 10{,}000$) and possible values of $|t^e_i - 2T|/T$ (i.e., for $|t^e_i - 2T|/T = d_k = k\Delta t_w/T$, with $k = 0,1,\ldots ,k_{max}$, where $k_{max} = T/\Delta t_w$ and $\Delta t_w = 10~\mu\textrm {s}$ is the update interval of the WSS, defined in section 3 of the supplement); and these values are stored in a look-up table:
$$s(j, k) = -0.5\left[\ln e_j + d_k^2/e_j + 2\ln erf\left(1/\sqrt{2e_j}\right)\right],$$
where $e_j = D_j/D_{max} + (\delta _t/T)^2 = j/j_{max} + (\delta _t/T)^2$. The last term in Eq. (10) corrects for bias due to restricting measured intervals to be within the finite range $T < t_i^e < 3T$ and can be neglected if $D \ll D_{max}$.

As each passage time and interval $t_i^e$ is estimated during SMR, we determine $k = |t_i^e -2T|/\Delta t_w$ and add the corresponding values from the look-up table $s(j,k)$ to an array $S(j)$, for $j = 0,1,\ldots ,j_{max}$. At the end of the recycling, we subtract the maximum of $S(j)$ from each element of the array (scaling to avoid numerical underflow), take the antilogarithm, and normalize (divide by the sum) to find the PDF of Eq. (8).
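The look-up-table procedure of Eqs. (9)-(10) can be sketched in Python as follows. This is illustrative only; a real-time implementation would precompute the table once and use much larger $j_{max}$ and $k_{max}$ than this test-sized sketch, and the function names are assumptions.

```python
import numpy as np
from math import erf, log, sqrt

def build_table(j_max, k_max, dt_w, T, delta_t):
    """Look-up table s(j,k) of Eq. (10): log-likelihood of diffusivity
    grid point D_j = j*D_max/j_max given a measured interval with
    |t_e - 2T| = k*dt_w, including the timing error delta_t and the
    erf bias term for the restricted interval range (T, 3T)."""
    s = np.empty((j_max + 1, k_max + 1))
    for j in range(j_max + 1):
        e = j / j_max + (delta_t / T) ** 2            # e_j
        bias = 2.0 * log(erf(1.0 / sqrt(2.0 * e)))
        for k in range(k_max + 1):
            d = k * dt_w / T                          # d_k
            s[j, k] = -0.5 * (log(e) + d * d / e + bias)
    return s

def diffusivity_pdf(intervals, table, T, dt_w):
    """Accumulate table columns for each measured interval and convert
    the summed log-likelihood S(j) into a normalized PDF over D_j."""
    S = np.zeros(table.shape[0])
    for t_e in intervals:
        k = min(int(round(abs(t_e - 2.0 * T) / dt_w)), table.shape[1] - 1)
        S += table[:, k]
    p = np.exp(S - S.max())       # subtract max to avoid underflow
    return p / p.sum()
```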

Fig. 6 plots an example of the evolution of the PDF obtained by numerically evaluating Eq. (8), by summing subarrays taken from Eq. (10) during an SMR simulation. The simulation parameters are as in Table 1 of the results section, except there is only one molecule, the concentration in both reservoirs is zero (no invader molecules), the photobleaching efficiency is 0, and the diffusion coefficient is $D = 4.9 \times 10^{-11}~\textrm {m}^{2}\,\textrm{s}^{-1}$. The times of passage of the molecule are found by method 4, i.e., by averaging the times of photons that are within $\pm 3\sigma _t$ of the position of the peak of the WSS, and the timing error is taken to be $\delta _t = 1.43 \times 10^{-4}~\textrm {s}$ (from the minimum point in the inset of Fig. 2). In this example, after 200 recycles, the ML estimate found from the peak of the PDF is $\hat {D} = 4.66 \times 10^{-11}~\textrm {m}^{2}\,\textrm{s}^{-1}$. Confidence limits for the ML estimate may be determined from the PDF. For example, after 200 recycles, the PDF is approximately Gaussian but slightly skewed, and with $95\%$ confidence, $D$ is within a range of $(-17.4, +22.7)\%$ of the ML estimate. The actual value is $1.05\hat {D}$, which is indeed within the $95\%$ confidence interval.


Fig. 6. Evolution of the probability density function of diffusivity $D$ as a single molecule is recycled up to 200 times. The known diffusivity ($4.9 \times 10^{-11}~\textrm {m}^{2}\,\textrm{s}^{-1}$) is indicated by the vertical line (red). The inset shows a histogram of the intervals between passages (dots), which have a variance of $5.16 \times 10^{-5}~\textrm {s}^{2}$, and the solid line (red) is a Gaussian with the same variance and area.


To illustrate an alternative way of finding $\hat {D}$, albeit without obtaining the PDF and confidence limits and valid only if the number of recycles is large, the inset of Fig. 6 shows a histogram of the intervals between passages, which have a standard deviation of $7.18 \times 10^{-3}~\textrm {s}$, together with a Gaussian of the same standard deviation; by the variance relation following Eq. (5), this gives $\hat {D} = 4.98 \times 10^{-11}~\textrm {m}^{2}\,\textrm{s}^{-1}$. The timing error $\delta _t$ could be accounted for by subtracting it in quadrature from the standard deviation, but in this case it gives only a $2\%$ bias. Similarly, the bias from restricting the range of intervals to $t_i^e \in (T, 3T)$ is small, since $D \ll D_{max} = 8.67 \times 10^{-10}~\textrm {m}^{2}\,\textrm{s}^{-1}$.

The example above illustrates that the diffusivity of a single molecule can be measured to a precision of about $\pm 10 \%$ when it is recycled 200 times or more. In order to resolve two molecules with different diffusivities, the PDFs must be resolvable. As the PDFs are approximately Gaussian, this will be the case for diffusivity measurements of $D_1 \pm 10\%$ and $D_2 \pm 10\%$ if $D_2 \ge 1.27D_1$. Hence it should be possible to resolve single molecules with diffusivities that differ by a factor of 1.3, i.e., a smaller amount than is practically resolvable by FCS (∼1.6), [29] if the molecules can be recycled 200 times or more.

3. Results and discussion

3.1 Simulations using parameters from prior experiments

We have used the MC simulation, SMR algorithms, and analysis methods discussed above to study the choice of parameters for an experiment to measure the diffusivities of single molecules, beginning with the parameters listed in Table 1, which are set to model previously published experiments for such measurements. [16] The simulated experiment times are 1,000 s, while the execution times are about 200 s on a 2.6 GHz 2$\times$2-core Intel Xeon CPU desktop PC with 20 GB of RAM.

In simulations using the parameters of Table 1 (or similar parameters), we find that recycling of the same molecule usually continues for only a small number of cycles. For example, when using method 4 of section 2.3 and algorithm (a) of Fig. 3 (reload if an unexpected molecule passage is detected), with a maximum number of missed detections of $M = 4$, a molecule is on average recycled only 14 times before the SMR terminates and a new molecule is loaded, and only very occasionally is a molecule recycled hundreds of times. We find that the SMR terminates after a small number of recycles even if photobleaching and triplet crossing are set to zero and there is no possibility of an invader molecule. On inspection, as shown in the next section, we find that the terminations are due to statistical outliers in the diffusion trajectories leading to poorly timed flow switching, and that they can be reduced, so as to increase the number of recycles, with a better choice of parameters and an improved recycling algorithm. On the other hand, if the concentration $C$ is suitably increased, invader molecules can replace those lost due to mistimed flow switching and recycling can appear to be continual.

3.2 Choice of parameters for diffusivity measurements

According to Table 1, the diffusional residence time, the average time for a molecule beginning at the origin to diffuse beyond the beam waist, is $\tau _D = \omega _0^2/(4D) = 2.35\ \textrm{ms}$, whereas the flow residence time, the time for a molecule beginning at the origin to flow beyond the beam waist, is $\tau _F = \omega _0/v = 2.82\ \textrm{ms}$. Since $\tau _F > \tau _D$, the flow is slower than the effective diffusional motion, so much of the time the molecule diffuses in and out of the laser focus during each transit. As a consequence, the WSS often exhibits multiple peaks, as seen in the example in Fig. 7, which shows the trajectory of a molecule, the times of the photons, and the corresponding WSS, which in this case has two peaks.

Fig. 7. Example of the trajectory of a molecule as it transits the laser focus (solid line in top graph), the times of detected photons (short vertical lines in bottom graph), and the corresponding weighted sliding sum (solid curve in bottom graph). The dashed horizontal lines in the top graph represent the beam waist of the laser focus.
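The two residence times also fix the beam waist and flow speed they imply. A minimal back-calculation; the diffusivity is the value quoted in the caption of Fig. 6, and since Table 1 is not reproduced here, $\omega_0$ and $v$ below are implied, illustrative values:

```python
import math

D = 4.9e-11      # diffusivity, m^2/s (value quoted in Fig. 6)
tau_D = 2.35e-3  # diffusional residence time tau_D = w0^2 / (4 D), s
tau_F = 2.82e-3  # flow residence time tau_F = w0 / v, s

w0 = math.sqrt(4 * D * tau_D)  # implied beam waist, ~0.68 micrometers
v = w0 / tau_F                 # implied flow speed, ~2.4e-4 m/s
```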

Even with algorithms (b) and (c) near the end of the flow diagram in Fig. 3, the SMR is often disrupted when a molecule by chance diffuses against the flow so that it would arrive at the laser focus late, after an automatic recycle event is scheduled, as shown in the example trajectories of Fig. 8.

Fig. 8. Trajectories of a molecule undergoing SMR, with parameters from Table 1. The thickest lines (red) are for simulations of SMR without automatic recycling (maximum number of missed detections of $M = 0$), the medium thickness line (blue) in the first plot is a simulation in which automatic recycling is used with $M = 3$, and the thin line (black) is for automatic recycling with $M = 4$.

Further discussion of Fig. 8, with calculations of the probability of termination of SMR due to statistical outliers, is given in section 8 of the supplement. However, the essential insight provided by the simulations is that better diffusivity measurements with longer recycling are obtained if the beam waist and flow speed are increased to reduce the probabilities of multiple crossings and of late arrivals after the flow is automatically reversed. Termination of SMR is then mostly due to photobleaching, so we also reduce the laser power to reduce the probability of photobleaching.

3.3 Single-molecule diffusivity measurements

As explained in the methods, during the course of SMR, the PDF of the diffusivity $D$ for the molecule may be acquired using Eq. (10). However, Eq. (10) must be modified to account for the increasing time delay when automatic recycling follows one or more missed detections. With an increasing delay given by $4m'T$, the mean interval between passages $t_i$ is $2m'^2T$, which increases quadratically with $m'$; a description of the algorithm is given in section 9 of the supplement. If the timing error is small compared to the diffusional spread (i.e., if $\delta _t \ll \sqrt {4DT} / v$), the measured interval between passages $t^e_i$ also increases quadratically with $m'$, so the standard deviation of the distribution of intervals increases linearly with $m'$. Whenever a passage is detected, the interval $t_i^e$ since the previous detected passage is measured in the usual way, but the measurement is discarded unless it is within the range $2m'^2T - m'T < t_i^e < 2m'^2T + 2m'T$, where $m'$ is 1 plus the number of missed detections occurring before the detected passage. As the allowed range has been extended to allow for late-comers, the look-up table of Eq. (10) is replaced by

$$s(j, k) = -0.5\left[\ln{e_j} + d_k^2/e_j + \ln{\mathrm{erf}\left(1/\sqrt{2e_j}\right)} + \ln{\mathrm{erf}\left(2/\sqrt{2e_j}\right)}\right],$$
where, as before, $e_j = j/j_{max} + (\delta _t/T)^2$, $j = 0,1,\ldots ,j_{max}$, $j_{max} \approx 10,000$, $k = 0,1,\ldots ,k_{max}$, but now $k_{max} = 2T/\Delta t_w$. Whenever a passage is detected, the interval $t_i^e$ and previously recorded missed detections $m'$ are used to find
$$k = |t_i^e - 2m'^2T|/(m'\Delta t_w).$$
We then proceed in the same way as before: we add the corresponding values from the look-up table $s(j, k)$ to an array $S(j)$, for $j = 0,1,\ldots ,j_{max}$, and at the end of the recycling, we take the antilog and normalize to find the PDF of Eq. (8).
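The update just described can be sketched as follows. This is a minimal illustration, not the authors' code: the parameter values ($T$, $\Delta t_w$, $\delta_t$) and the non-dimensionalization $d_k = k\Delta t_w/T$ are assumptions chosen to be consistent with Eqs. (11) and (12).

```python
import math
import numpy as np

T, dt_w, delta_t = 5e-3, 5e-5, 2e-4   # assumed half-period, bin width, timing error (s)
j_max = 10_000
k_max = int(2 * T / dt_w)             # extended range: k_max = 2T / dt_w

# Look-up table s(j, k) of Eq. (11), with e_j = j/j_max + (delta_t/T)^2 and d_k = k*dt_w/T.
e = np.arange(j_max + 1) / j_max + (delta_t / T) ** 2
d = np.arange(k_max + 1) * dt_w / T
log_erf1 = np.log([math.erf(1.0 / math.sqrt(2.0 * ej)) for ej in e])
log_erf2 = np.log([math.erf(2.0 / math.sqrt(2.0 * ej)) for ej in e])
s = -0.5 * (np.log(e)[:, None] + d[None, :] ** 2 / e[:, None]
            + log_erf1[:, None] + log_erf2[:, None])

S = np.zeros(j_max + 1)               # accumulated log-likelihood over recycles

def add_passage(t_e, m):
    """Fold one measured interval t_e (m = 1 + missed detections) into S via Eq. (12)."""
    center = 2 * m * m * T
    if not (center - m * T < t_e < center + 2 * m * T):
        return                        # discard measurements outside the allowed range
    k = int(abs(t_e - center) / (m * dt_w))
    S[:] += s[:, min(k, k_max)]

def pdf():
    """Normalized PDF of Eq. (8) from the accumulated log-likelihoods."""
    p = np.exp(S - S.max())
    return p / p.sum()
```

Accumulating $s(j,k)$ rather than multiplying likelihoods keeps the update numerically stable, and subtracting the maximum before exponentiating avoids underflow when hundreds of recycles are folded in.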

In Fig. 9, we plot the ML estimates of the diffusivity and their error bars versus the number of recycles for the 199 single molecules obtained over the course of the 10,000 s simulated experiment reported in Table S2, column Tab 2*, in section 9 of the supplement. To obtain the results in Fig. 9, the PDF of the diffusivity $D$ for each molecule is acquired using Eqs. (11) and (12). The ML estimate $\hat {D}$ is then the value of $D$ at the peak of the PDF, and the one-standard-deviation error bars are found from the intervals on either side of the peak for which the area of the PDF exceeds 0.341 (i.e., $0.5\,\mathrm{erf}(1/\sqrt {2})$). There is a small bias or systematic error in the mean of the results, which we attribute to the neglect of multiple crossings. However, most estimates are reasonably precise ($< \pm 10 \%$ error) if the number of recycles is sufficiently large (> 200).

Fig. 9. ML estimated diffusivities versus numbers of recycles for 199 molecules acquired during a 10,000 s simulation. The error bars represent the one-standard deviation confidence intervals. The dashed vertical line (red) is the weighted average of all measurements, where weights are determined from error bars.
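The peak-and-area rule for the error bars can be sketched as below; the uniform grid and the Gaussian test density are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def ml_estimate(D, pdf):
    """ML estimate (peak of the PDF) and one-standard-deviation bounds, found by
    integrating outward from the peak until the enclosed area on each side
    exceeds 0.341 = 0.5*erf(1/sqrt(2)). Assumes a uniform grid D."""
    dx = D[1] - D[0]
    p = pdf / (pdf.sum() * dx)        # normalize on the grid
    i0 = int(np.argmax(p))

    def bound(step):
        area, i = 0.0, i0
        while area < 0.341 and 0 <= i + step < len(D):
            area += 0.5 * (p[i] + p[i + step]) * dx  # trapezoid rule
            i += step
        return D[i]

    return D[i0], bound(-1), bound(+1)
```

Because the bounds come from the PDF itself, they remain meaningful even after only a handful of recycles, when the PDF is visibly asymmetric and a Gaussian error model would not apply.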

3.4 Resolving molecules with different diffusivities

In some applications, such as distinguishing fluorescent ligands bound and unbound to a larger target molecule, it is of interest to resolve different individual fluorescently labelled molecules on the basis of their diffusivities. To examine this possibility, in Fig. 10 we plot results from a simulation that uses diffusivity measurements to resolve dsDNA 15mers with diffusivity $D_1 = 0.98 \times 10^{-10}\ \textrm{m}^{2}\,\textrm{s}^{-1}$ from dsDNA 13mers with $D_2 = 1.3 \times 10^{-10}\ \textrm{m}^{2}\,\textrm{s}^{-1}$, in accordance with an experiment reported in Ref. 16. The figure plots results only for molecules that have undergone SMR at least 100 times and for which an invader was not identified (150 of 218 molecules in total). The results are for a simulation of an equal mix with a total upstream concentration of 1 pM and electrokinetic flow velocities set as $v_1 = 5.11 \times 10^{-4}\ \textrm{m}\,\textrm{s}^{-1}$ and $v_2 = 4.75 \times 10^{-4}\ \textrm{m}\,\textrm{s}^{-1}$, which would be achieved in Ref. 16 with a potential of 150 V. Overall, the precision of the estimates is sufficient to correctly separate most of the molecules into two populations of different diffusivities.

Fig. 10. Graph displaying a sequence of diffusivity measurements for 150 molecules recycled over a simulated 10,000 s experiment. Cross (green) and solid round (red) points represent molecules that are known to be DNA 15mers and DNA 13mers respectively. For each molecule, the error bars are obtained from the estimated probability density functions.

4. Conclusions

Previous experiments [16] have shown that single-molecule recycling (SMR) in a nanochannel is an effective technique for prolonged observation of a single fluorescently labeled molecule in solution using a confocal microscope, and that the diffusivity of molecules in a nanochannel may be determined from fluctuations in the intervals between successive detections. In this work, a Monte Carlo simulation program, which generates individual photon detection times, is used to gain physical insight into the limitations, capabilities, selection of experimental parameters, and choice of control algorithms for the single-molecule recycling process. A computational methodology for real-time processing of photon time-stamp data with a Gaussian weighted sliding sum is found to give more sensitive photon burst detection than a previously used photon binning method. It enables improved timing of photon bursts and the use of lower laser power, so that molecules may be recycled more times before photobleaching.

This paper also presents a new computational methodology based on a statistical model of diffusion, which enables numerically updating the probability density function of the possible diffusivity of the single molecule from the estimated intervals between passage times during the course of a SMR experiment. This enables a maximum-likelihood estimate of the single-molecule diffusivity and also determination of the confidence intervals of the estimate, which are valid even for a small number of recycles. It is found that the diffusivity of a single molecule can be determined to within about $\pm 10\%$ if the molecule can be recycled for 200 times or more. This precision is sufficient to resolve molecules with diffusivities that differ by a factor as small as 1.3.

The control and analysis algorithms are applied to a case in which individual molecules from a $50\%$ mixture with diffusivities of $D_1 = 0.98 \times 10^{-10}\ \textrm{m}^{2}\,\textrm{s}^{-1}$ and $D_2 = 1.3 \times 10^{-10}\ \textrm{m}^{2}\,\textrm{s}^{-1}$ are recycled one by one until they are photobleached or otherwise lost, with the diffusivity and error determined for each molecule. Most of those recycled 100 times or more can be correctly identified on the basis of diffusivity measurement and hence separately counted. The results indicate that single-molecule recycling in a nanochannel could be useful in identifying molecular interactions that result in only a modest change in the diffusivity, which may have application to pharmaceutical drug discovery research.

Funding

Center for Laser Applications and National Science Foundation (100408).

Acknowledgments

This work was supported by the Center for Laser Applications and National Science Foundation grant 100408. We thank Dr. Brian Canfield for help in setting up an experimental system for single-molecule recycling in a nanochannel.

Disclosures

The authors declare that there are no conflicts of interest related to this article.

See Supplement 1 for supporting content.

References

1. E. Barkai, Y. Jung, and R. Silbey, “Theory of single-molecule spectroscopy: Beyond the ensemble average,” Annu. Rev. Phys. Chem. 55(1), 457–507 (2004). PMID: 15117260.

2. W. E. Moerner, “Nobel lecture: Single-molecule spectroscopy, imaging, and photocontrol: Foundations for super-resolution microscopy,” Rev. Mod. Phys. 87(4), 1183–1212 (2015).

3. W. E. Moerner and D. P. Fromm, “Methods of single-molecule fluorescence spectroscopy and microscopy,” Rev. Sci. Instrum. 74(8), 3597–3619 (2003).

4. R. H. Goldsmith and W. Moerner, “Watching conformational- and photodynamics of single fluorescent proteins in solution,” Nat. Chem. 2(3), 179–186 (2010).

5. Y. Pang and R. Gordon, “Optical trapping of a single protein,” Nano Lett. 12(1), 402–406 (2012). PMID: 22171921.

6. W. E. Moerner, “New directions in single-molecule imaging and analysis,” Proc. Natl. Acad. Sci. 104(31), 12596–12602 (2007).

7. H. Cang, C. S. Xu, D. Montiel, and H. Yang, “Guiding a confocal microscope by single fluorescent nanoparticles,” Opt. Lett. 32(18), 2729–2731 (2007).

8. N. P. Wells, G. A. Lessard, and J. H. Werner, “Confocal, three-dimensional tracking of individual quantum dots in high-background environments,” Anal. Chem. 80(24), 9830–9834 (2008). PMID: 19072277.

9. J. A. Germann and L. M. Davis, “Three-dimensional tracking of a single fluorescent nanoparticle using four-focus excitation in a confocal microscope,” Opt. Express 22(5), 5641–5650 (2014).

10. K. D. Dissanayaka, B. K. Canfield, and L. M. Davis, “Three-dimensional feedback-driven trapping of a single nanoparticle or molecule in aqueous solution with a confocal fluorescence microscope,” Opt. Express 27(21), 29759–29769 (2019).

11. A. E. Cohen and W. E. Moerner, “Method for trapping and manipulating nanoscale objects in solution,” Appl. Phys. Lett. 86(9), 093109 (2005).

12. L. M. Davis, B. K. Canfield, X. Li, W. H. Hofmeister, G. Shen, I. P. Lescano-Mendoza, B. W. Bomar, J. P. Wikswo, D. A. Markov, P. C. Samson, C. Daniel, Z. Sikorski, and W. N. Robinson, “Electrokinetic delivery of single fluorescent biomolecules in fluidic nanochannels,” in Biosensing, vol. 7035, M. Razeghi and H. Mohseni, eds. (SPIE, 2008), pp. 56–67.

13. A. E. Cohen, “Control of nanoparticles with arbitrary two-dimensional force fields,” Phys. Rev. Lett. 94(11), 118102 (2005).

14. A. E. Cohen and W. E. Moerner, “Controlling Brownian motion of single protein molecules and single fluorophores in aqueous buffer,” Opt. Express 16(10), 6941–6956 (2008).

15. J. K. King, B. K. Canfield, and L. M. Davis, “Three-dimensional anti-Brownian electrokinetic trapping of a single nanoparticle in solution,” Appl. Phys. Lett. 103(4), 043102 (2013).

16. J. F. Lesoine, P. A. Venkataraman, P. C. Maloney, M. E. Dumont, and L. Novotny, “Nanochannel-based single molecule recycling,” Nano Lett. 12(6), 3273–3278 (2012). PMID: 22662745.

17. W. N. Robinson and L. M. Davis, “Simulation of single-molecule trapping in a nanochannel,” J. Biomed. Opt. 15(4), 045006 (2010).

18. E. B. Shera, N. K. Seitzinger, L. M. Davis, R. A. Keller, and S. A. Soper, “Detection of single fluorescent molecules,” Chem. Phys. Lett. 174(6), 553–557 (1990).

19. D. H. Bunfield and L. M. Davis, “Monte Carlo simulation of a single-molecule detection experiment,” Appl. Opt. 37(12), 2315–2326 (1998).

20. S. Nie, D. T. Chiu, and R. N. Zare, “Real-time detection of single molecules in solution by confocal fluorescence microscopy,” Anal. Chem. 67(17), 2849–2857 (1995).

21. L. Davis, P. Williams, D. Ball, K. Swift, and E. Matayoshi, “Data reduction methods for application of fluorescence correlation spectroscopy to pharmaceutical drug discovery,” Curr. Pharm. Biotechnol. 4(6), 451–462 (2003).

22. G. Turin, “An introduction to matched filters,” IEEE Trans. Inf. Theory 6(3), 311–329 (1960).

23. K. M. Weerakoon-Ratnayake, F. I. Uba, N. J. Oliver-Calixte, and S. A. Soper, “Electrophoretic separation of single particles using nanoscale thermoplastic columns,” Anal. Chem. 88(7), 3569–3577 (2016). PMID: 26963496.

24. W. A. Lyon and S. Nie, “Confinement and detection of single molecules in submicrometer channels,” Anal. Chem. 69(16), 3400–3405 (1997).

25. K. Pappaert, J. Biesemans, D. Clicq, S. Vankrunkelsven, and G. Desmet, “Measurements of diffusion coefficients in 1-D micro- and nanochannels using shear-driven flows,” Lab Chip 5(10), 1104–1110 (2005).

26. D. Magde, E. Elson, and W. W. Webb, “Thermodynamic fluctuations in a reacting system—measurement by fluorescence correlation spectroscopy,” Phys. Rev. Lett. 29(11), 705–708 (1972).

27. O. Krichevsky and G. Bonnet, “Fluorescence correlation spectroscopy: the technique and its applications,” Rep. Prog. Phys. 65(2), 251–297 (2002).

28. T. Dertinger, V. Pacheco, I. von der Hocht, R. Hartmann, I. Gregor, and J. Enderlein, “Two-focus fluorescence correlation spectroscopy: A new tool for accurate and absolute diffusion measurements,” ChemPhysChem 8(3), 433–443 (2007).

29. U. Meseth, T. Wohland, R. Rigler, and H. Vogel, “Resolution of fluorescence correlation measurements,” Biophys. J. 76(3), 1619–1631 (1999).

30. A. E. Cohen and W. E. Moerner, “Suppressing Brownian motion of individual biomolecules in solution,” Proc. Natl. Acad. Sci. 103(12), 4362–4365 (2006).

31. X. Michalet and A. J. Berglund, “Optimal diffusion coefficient estimation in single-particle tracking,” Phys. Rev. E 85(6), 061916 (2012).

32. C. L. Vestergaard, “Optimizing experimental parameters for tracking of diffusing particles,” Phys. Rev. E 94(2), 022401 (2016).

33. B. Wang and L. M. Davis, “Improved timing and diffusivity measurement in single-molecule recycling in a nanochannel,” in Single Molecule Spectroscopy and Superresolution Imaging X, vol. 10071, J. Enderlein, I. Gregor, Z. K. Gryczynski, R. Erdmann, and F. Koberling, eds. (SPIE, 2017), pp. 20–29.

34. A. Papoulis and S. U. Pillai, Probability, Random Variables, and Stochastic Processes (McGraw-Hill, 1991).

35. S. A. Soper, H. L. Nutter, R. A. Keller, L. M. Davis, and E. B. Shera, “The photophysical constants of several fluorescent dyes pertaining to ultrasensitive fluorescence spectroscopy,” Photochem. Photobiol. 57(S1), 972–977 (1993).

Figures

Fig. 1. The histogram of peak values of the WSS during a $10^4$ s simulated experiment of SMR for parameters of Table 1 in the results section. The solid vertical line (red) is a threshold at an amplitude of 1750, also indicated by the cross (also red) in the upper inset and found by the point of intersection of the dashed line (also red) with the $x$-axis in the lower inset ($10^{-1}$ peaks per $10^4$ s). Peak values above the threshold are almost all due to passage of molecules. For no molecules present, the upper inset shows a plot of the thresholds above which there is a WSS peak rate of 1 per $10^4$ s versus the background count rate for WSS weights of widths $\sigma _t$ = 1.0, 1.5, and 2.0 ms; the lower inset shows the number of peaks below threshold versus threshold during $10^4$ s simulations with $\sigma _t$ = 1.5 ms and background rates of 400, 600, and 800 $\textrm {s}^{-1}$. Similar curves are used to generate the data in the upper inset.

Fig. 2. Timing errors found by recycling a molecule with a fixed time between flow reversals. The main plot shows the standard deviations of the intervals between successive molecule passages as a function of laser power (with background held at 600 $\textrm {s}^{-1}$) for four methods for estimating the times of passage (solid lines, from highest to lowest standard deviations). The red circles plot the shot-noise limit for timing error if background is negligible. The inset shows the standard deviation of the ML estimates for a laser power of $98\mu \textrm {W}$ for intervals of $\pm {f\sigma _t}$ where $f$ varies from 1 to 4.

Fig. 3. Algorithms for single-molecule recycling.

Fig. 4. Probability density function of the interval between molecule passages, as given by Eq. (1).

Fig. 5. Plot of Eq. (7) in non-dimensionalized units for five values of $d = |t_i - 2T|/T$ (solid lines in order from top to bottom at the left of the graph), where $D_{max} = v^2T / 4$. The inset shows that for $d =0.3$, the ML estimate is $\hat {D} = 0.09D_{max}$ (indicated by the vertical line), that with $98\%$ confidence $D/D_{max} > 0.035$ (excluding shaded area at left) and that with $90\%$ confidence $D/D_{max} < 0.892$ (excluding shaded area at right).

Fig. 6. Evolution of the probability density function of diffusivity $D$ as a single molecule is recycled up to 200 times. The known diffusivity ($4.9 \times 10^{-11}\ \textrm{m}^{2}\,\textrm{s}^{-1}$) is indicated by the vertical line (red). The inset shows a histogram of the intervals between passages (dots), which have a variance of $5.16 \times 10^{-5}\ \textrm{s}^{2}$, and the solid line (red) is a Gaussian with the same variance and area.

Tables

Table 1. Parameters used in MC simulations

Equations (12)

$$P(t)\,dt = \left(\frac{1}{2} + \frac{T}{t}\right)\frac{v}{\sqrt{4\pi D t}}\,\exp\left[-\frac{v^2(t-2T)^2}{4Dt}\right]dt,\qquad t\in(0,\infty).\tag{1}$$

$$L(D;t_i) = \left(\frac{1}{2} + \frac{T}{t_i}\right)\frac{v}{\sqrt{4\pi D t_i}}\,\exp\left[-\frac{v^2(t_i-2T)^2}{4Dt_i}\right],\qquad D\in[0,\infty).\tag{2}$$

$$\hat{D_i} = v^2(t_i-2T)^2/(2t_i).\tag{3}$$

$$\hat{D} = \frac{v^2}{2R}\sum_{i=1}^{R}\frac{(t_i-2T)^2}{t_i}.\tag{4}$$

$$\hat{D_i} = v^2(t_i-2T)^2/(4T),\tag{5}$$

$$\int_0^{v^2T/4} L(D;t_i)\,dD = \frac{v^2}{\sqrt{8\pi}}\exp\left[-\frac{(t_i-2T)^2}{2T^2}\right] + \frac{v^2|t_i-2T|}{4T}\left[\mathrm{erf}\!\left(\frac{|t_i-2T|}{\sqrt{2}\,T}\right)-1\right] = K(t_i),\tag{6}$$

$$p(D;t_i)\,dD = \frac{L(D;t_i)\,dD}{\int_0^{D_{max}} L(D;t_i)\,dD} = \frac{1}{K(t_i)}\,\frac{v}{\sqrt{8\pi D T}}\,\exp\left[-\frac{v^2(t_i-2T)^2}{8DT}\right]dD,\qquad D\in[0,D_{max}].\tag{7}$$

$$p(D;t_1^e,t_2^e,\ldots,t_R^e)\,dD = \left[\prod_{i=1}^{R} L(D;t_i^e)\right]dD\Bigg/\int_0^{v^2T/4}\left[\prod_{i=1}^{R} L(D;t_i^e)\right]dD.\tag{8}$$

$$L(D;t_i^e)\,2\sqrt{2\pi}\,T = \left[\frac{D}{D_{max}}+\left(\frac{\delta_t}{T}\right)^2\right]^{-1/2}\exp\left\{-\frac{\left[(t_i^e-2T)/T\right]^2}{2\left[D/D_{max}+(\delta_t/T)^2\right]}\right\}\Bigg/ A(D),\tag{9}$$

$$s(j,k) = -0.5\left[\ln e_j + d_k^2/e_j + 2\ln\mathrm{erf}\left(1/\sqrt{2e_j}\right)\right],\tag{10}$$

$$s(j,k) = -0.5\left[\ln e_j + d_k^2/e_j + \ln\mathrm{erf}\left(1/\sqrt{2e_j}\right) + \ln\mathrm{erf}\left(2/\sqrt{2e_j}\right)\right],\tag{11}$$

$$k = |t_i^e - 2m'^2T|/(m'\Delta t_w).\tag{12}$$