## Abstract

We introduce a fast and robust technique for single-particle tracking with nanometer accuracy. We extract the center-of-mass of the image of a single particle with a simple, iterative algorithm that efficiently suppresses background-induced bias in a simplistic centroid estimator. Unlike many commonly used algorithms, our position estimator requires no prior information about the shape or size of the tracked particle image and uses only simple arithmetic operations, making it appropriate for future hardware implementation and real-time feedback applications. We demonstrate it both numerically and experimentally, using an inexpensive CCD camera to localize 190 nm fluorescent microspheres to better than 5 nm.

© 2008 Optical Society of America

## 1. Introduction

In single-particle tracking experiments using optical microscopy, fluorescent molecules, quantum dots, metal nanoparticles, and polymer microspheres are routinely localized on a 10 nm length scale, and in exceptional cases even on a 1 nm length scale, despite the much longer optical wavelength encoding that information. This ability to track the nanometer-scale motion of small objects with optical microscopy has greatly improved experimental access to nanoscale biological behavior [1, 2, 3, 4] and microscale-to-nanoscale colloid interactions [5, 6, 7].

In most cases, particle tracking is accomplished by analyzing a series of digital images obtained, for example, from the output of a CCD camera. However, the analysis of these images presents a challenging data processing task, and a correspondingly large body of research has been devoted to developing and testing particle-tracking software routines. When selecting an algorithm for extracting the underlying position of a particle from a noisy image, researchers have been forced to accept tradeoffs among accuracy, robustness, and speed. For example, it is generally accepted that the most accurate results can be achieved by fitting an image to a constrained (usually Gaussian) instrument resolution function [8, 9, 10]. Unfortunately, the computational complexity of such an algorithm renders it quite slow and therefore unsuitable for real-time or time-critical applications. Other numerical search algorithms offer much shorter execution times by incorporating prior knowledge of the image shape through masking [10] or spatial filtering [11]. Furthermore, a recently introduced algebraic solution for locating a two- or three-dimensional Gaussian from sparsely sampled brightness data provides an efficient particle localization algorithm with no numerical searching when the shape of the Gaussian function and the image background level are known [12, 13]. Various commercial tracking software packages have also been quantitatively evaluated, with emphasis on the care required to incorporate prior knowledge without introducing bias and errors in these automated packages [14]. However, all of these methods require foreknowledge of the particle shape and the image background and are not robust to variations in the size or shape of the tracked object that may arise when tracking an asymmetric object such as a nanorod or when a single particle diffuses into and out of focus.

Moreover, recently developed experimental techniques for manipulating nanoparticles using feedback control [15, 16, 17, 18, 19, 20] place stringent requirements on the speed and accuracy of particle-tracking routines. Because image-processing routines are simply too slow, many particle-tracking feedback controllers use fast photodiodes rather than CCD cameras, sacrificing widefield spatial information for real-time control. However, real-time control of multiple, spatially-separated particles [21] demands an imaging method and will benefit directly from the development of simple, fast, and accurate algorithms for localizing individual particles within subsections of a larger image. Future experiments that combine nanometer-resolution particle localization with real-time feedback control will enable new techniques for monitoring and engineering structures at the nanoscale.

In this paper, we progress toward this goal of robust, real-time particle tracking by introducing a simple, fast estimator for extracting the center position of an object from an image corrupted by noise, pixelation, and a constant (unknown) background. Our algorithm is a refinement of the simple centroid or center-of-mass (CM) algorithm, which is computationally efficient but can only provide an unbiased estimate when a particle lies exactly at the center of the image. In the estimator algorithm developed here, we enforce this otherwise serendipitous arrangement by iteratively testing whether an object lies at the image center and subsequently refining the image window in order to better center the object. This procedure efficiently suppresses the estimator bias (exponentially in the number of iterations), resulting in a very fast, unbiased localization algorithm with sub-pixel accuracy. It requires few input parameters and makes *no assumptions* about the object shape, making it robust and effective for localizing objects with varying sizes and complex shapes in the presence of uncharacterized background noise. Furthermore, the algorithm is computationally simple and can be executed with a few lines of code. Its performance is comparable to or exceeds full nonlinear least-squares minimization in many cases, while its execution time is orders of magnitude shorter. For all of these reasons, our algorithm is a promising candidate for implementation in signal-processing hardware for real-time applications.

The paper is organized as follows. In section 2, we examine the error in the CM estimator in detail and show that this error can be estimated in real-time. In section 3, we define an iterative “VirtualWindow” CM (VWCM) algorithm that exploits this fact to eliminate the average error in the CM estimator. In section 4, we present the results of extensive Monte Carlo simulations of the new estimator, and in section 5 we apply the VWCM in our own particle-tracking experiment. Finally, section 6 summarizes our results and provides an outlook for future applications and refinements.

## 2. Bias in the center-of-mass estimator

As mentioned in section 1, the center-of-mass (CM) estimator of a particle’s position is simple, very fast to execute, and provides an estimate with no assumptions about the shape or size of the underlying object. Unfortunately, it is strongly biased by any background signal and exhibits poor noise rejection outside the region of interest. In this section, we will derive the mean error, or bias, in the CM estimator and show that this error can itself be estimated in real-time.

With the output of a CCD camera in mind, we define an image to be a two-dimensional matrix *S* with elements *S_{jk}* representing the total number of counts in pixel *P_{jk}* with width Δ centered at location (*x_{jk}*, *y_{jk}*). [The pixel size Δ and the coordinates (*x_{jk}*, *y_{jk}*) are always considered in the object plane of the optical system, so that, for example, Δ is the actual CCD pixel size divided by the system magnification *M*.]

*S* represents the experimental data from a single shot of the experiment, *e.g.* a single CCD frame. In order to analyze, and later design, a position estimator algorithm, we must assume some underlying statistical model for such an image. For this purpose, we assume the signal from the particle is drawn from a Poisson distribution with spatially-dependent mean value *N_S*(*x*, *y*)/Δ^{2}, which represents the point-spread function of the optical system convolved with the shape of the fluorescent object. This mean value is explicitly normalized by the pixel area Δ^{2} so that *N_S*(*x*, *y*) represents a dimensionless number of counts. In order to calculate the mean number of counts in a particular pixel, averaged over an ensemble of individual images *S*, we must integrate *N_S*(*x*, *y*)/Δ^{2} over the area of that pixel, as in Eq. (1a) below.

For a general scenario, we can also consider a background signal drawn from a distribution with mean *N_B*(*x*, *y*)/Δ^{2} and (spatially-uncorrelated) variance *σ_B*^{2}(*x*, *y*)/Δ^{2}. The noise term may represent scattered light or technical noise in the camera, with statistical properties (e.g. Poisson, Gaussian or other distribution) included in the definitions of *N_B* and *σ_B*^{2}. Mathematically, we can now write the mean and covariance of an ensemble of frames *S*, each arising from the image of a particle at position (*x*_{0}, *y*_{0}):

$$\langle S_{jk}\rangle={\iint}_{P_{jk}}\frac{dx\,dy}{\Delta^{2}}\left[N_{S}(x-x_{0},y-y_{0})+N_{B}(x,y)\right],\tag{1a}$$

$$\langle S_{jk}S_{lm}\rangle-\langle S_{jk}\rangle\langle S_{lm}\rangle=\delta_{jl}\,\delta_{km}{\iint}_{P_{jk}}\frac{dx\,dy}{\Delta^{2}}\left[N_{S}(x-x_{0},y-y_{0})+\sigma_{B}^{2}(x,y)\right].\tag{1b}$$

The angle brackets 〈〉 denote the expectation value over an ensemble of images, while *∫∫_{P_{jk}}* denotes integration over the area of pixel *P_{jk}* so that, for example, the mean count rate 〈*S_{jk}*〉 is just the integral of the spatially-varying average count rate over the pixel area. Without the background contribution, Eqs. (1a)–(1b) represent familiar Poisson counting statistics, where the mean, 〈*S_{jk}*〉, and variance, 〈*S_{jk}*^{2}〉-〈*S_{jk}*〉^{2}, of the counts in pixel *P_{jk}* are equal [22, 23]. Equations (1a)–(1b) give a prescription for calculating the statistics of the image *S* that includes the effects of pixelation, image truncation, background noise, and counting statistics.
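To make this statistical model concrete, the sketch below (Python/NumPy; the array size, spot width, and count rates are illustrative choices, not values from this paper) builds the pixel-integrated mean of Eq. (1a) for a Gaussian-spot *N_S* plus a flat background, then draws one Poisson-distributed frame *S_{jk}*:

```python
import numpy as np
from math import erf, sqrt

Delta, n_pix, w = 1.0, 15, 1.5                       # pixel width, array size, spot width
edges = (np.arange(n_pix + 1) - n_pix / 2) * Delta   # pixel boundaries

def gauss_pixel_integrals(e, mu, s):
    """Exact integral of a unit-area 1D Gaussian over each pixel [e_i, e_{i+1}]."""
    cdf = np.array([0.5 * (1 + erf((v - mu) / (s * sqrt(2)))) for v in e])
    return np.diff(cdf)

def mean_counts(x0, y0, N_S=2000.0, N_B_pix=5.0):
    """Mean counts per pixel, Eq. (1a): separable pixel integrals of a
    Gaussian spot of total weight N_S, plus a flat background N_B_pix."""
    px = gauss_pixel_integrals(edges, x0, w)
    py = gauss_pixel_integrals(edges, y0, w)
    return N_S * np.outer(py, px) + N_B_pix

# One simulated CCD frame: Poisson draws about the mean, per Eq. (1b).
rng = np.random.default_rng(0)
S = rng.poisson(mean_counts(0.3, -0.2))
print(S.shape, S.sum())
```

With these (assumed) parameters the frame carries roughly 2000 signal counts on roughly 1125 background counts; shot noise enters only through the Poisson draw.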

Now consider the CM estimator of the object’s *x*-position *x*_{0}:

$${\hat{x}}_{\mathrm{CM}}=\frac{\sum_{jk}x_{jk}S_{jk}}{\sum_{jk}S_{jk}}.\tag{2}$$

While it is simple to calculate from the data, *x̂_{CM}* is nevertheless a nonlinear function of the image *S*, making statistical calculations difficult. However, the statistics of a linearized approximation are accurate to order 𝓝^{-3/2}, where 𝓝=∑_{jk}*S_{jk}* is the *total* number of counts in the image. The corresponding linearized approximation to Eq. (2) is

$${\hat{x}}_{\mathrm{CM}}\approx\langle{\hat{x}}_{\mathrm{CM}}\rangle+\frac{1}{\langle\mathcal{N}\rangle}\sum_{jk}\left(x_{jk}-\langle{\hat{x}}_{\mathrm{CM}}\rangle\right)\left(S_{jk}-\langle S_{jk}\rangle\right).\tag{3}$$

We can now calculate the mean and variance of the centroid estimator for an image *S_{jk}* satisfying Eqs. (1a)–(1b):

$$\langle{\hat{x}}_{\mathrm{CM}}\rangle=\frac{\sum_{jk}x_{jk}\langle S_{jk}\rangle}{\sum_{jk}\langle S_{jk}\rangle},\tag{4}$$

$$\langle{\hat{x}}_{\mathrm{CM}}^{2}\rangle-\langle{\hat{x}}_{\mathrm{CM}}\rangle^{2}=\frac{1}{\langle\mathcal{N}\rangle^{2}}\sum_{jk}\left(x_{jk}-\langle{\hat{x}}_{\mathrm{CM}}\rangle\right)^{2}\left(\langle S_{jk}^{2}\rangle-\langle S_{jk}\rangle^{2}\right).\tag{5}$$

Equations (4)–(5) allow us to calculate the statistics of the centroid estimator for any underlying image function *N_S*(*x*, *y*) superimposed on a spatially-varying, Poisson- or Gaussian-distributed background. The mean and mean-square errors (bias and variance) resulting from pixelation, truncation, shot noise, and background noise can each be derived from these expressions. For more complicated systems, such as electron-multiplying CCD cameras, the mathematical form of the noise term [Eq. (1b)] must be modified to accommodate signal-dependent noise introduced by on-chip gain. For the remainder of this paper, we will be concerned only with the mean value 〈*x̂_{CM}*〉, which determines the estimator bias; the noise (variance) terms are included for reference.

Let us now calculate the bias arising from a spatially-invariant background signal with *N_B*(*x*, *y*)=*N_B*. We leave the *noise* *σ_B*^{2}(*x*, *y*) on this background level unspecified, as it does not enter a calculation of the bias. Introducing the shorthand notation,

${N}_{S}^{\Delta}({x}_{jk}-{x}_{0},{y}_{jk}-{y}_{0})={\iint}_{{P}_{jk}}\frac{dx\,dy}{{\Delta}^{2}}\,{N}_{S}(x-{x}_{0},y-{y}_{0})$

and rearranging Eq. (4), we find that the average CM estimator error, the bias *B_{CM}*, can be written as

$$B_{\mathrm{CM}}\equiv\langle{\hat{x}}_{\mathrm{CM}}\rangle-x_{0}=\frac{1}{\sum_{jk}{N}_{S}^{\Delta}({x}_{jk}-{x}_{0},{y}_{jk}-{y}_{0})}\left[\sum_{jk}({x}_{jk}-{x}_{0}){N}_{S}^{\Delta}({x}_{jk}-{x}_{0},{y}_{jk}-{y}_{0})+{N}_{B}\sum_{jk}({x}_{jk}-\langle{\hat{x}}_{\mathrm{CM}}\rangle)\right].\tag{6}$$

It is important to note that, for *N_B*(*x*, *y*)=*N_B*, Eq. (4) and Eq. (6) are mathematically identical, and no approximations have yet been made. Let us now consider the different contributions to the bias in Eq. (6). The normalization factor ∑_{jk}*N_S^{Δ}*(*x_{jk}*-*x*_{0}, *y_{jk}*-*y*_{0}) represents the average number of detected counts from the particle itself, and is only a weak function of *x*_{0} and *y*_{0} as long as the particle image is not strongly truncated at the edges of the array. We may then denote this term by 〈𝓝_{S}〉 and neglect its functional dependence on (*x*_{0}, *y*_{0}). The first term in brackets on the right-hand side of Eq. (6) is a discretized analog of the continuous center-of-mass

$$\sum_{jk}(x_{jk}-x_{0})\,{N}_{S}^{\Delta}(x_{jk}-x_{0},y_{jk}-y_{0})\approx\iint\frac{dx\,dy}{\Delta^{2}}\,(x-x_{0})\,N_{S}(x-x_{0},y-y_{0})=\iint\frac{dx\,dy}{\Delta^{2}}\,x\,N_{S}(x,y).\tag{7}$$

The last integral is proportional to the *x*-coordinate of the center-of-mass of the image function *N_S*(*x*, *y*), which we assume to be 0 [this condition can always be enforced through the definition of *N_S*(*x*, *y*)]. We assume that the particle is near the center of the image and is not significantly truncated at the edges of the array. Violations of the approximate equality in Eq. (7) now correspond to estimator bias arising from pixelation and truncation of the underlying image. Assuming these are negligible for now, we are left with the following expression for the estimator bias

$$B_{\mathrm{CM}}\approx\frac{\langle\mathcal{N}_{B}\rangle}{\langle\mathcal{N}_{S}\rangle}\left(\bar{x}-\langle{\hat{x}}_{\mathrm{CM}}\rangle\right),\tag{8}$$

where 〈𝓝_{S}〉 and 〈𝓝_{B}〉 are the average number of signal and background counts in the entire image, respectively, and *x̄* is the geometric center of the pixel array, *i.e.* the unweighted average *x*-coordinate. Equation (8) reveals that *the estimator bias is proportional to the difference between the center of the pixel array x̄ and the average estimate* 〈*x̂_{CM}*〉. The constant of proportionality is the ratio of background to signal counts in the image. Recall that the estimator bias is unknown to the experimenter, since the underlying particle position *x*_{0} is unknown. However, the difference between the estimate *x̂_{CM}* and the center of the pixel array can be calculated in each shot of the experiment. In our algorithm described below, we exploit this fact in order to form an online estimate of the bias and then correct it by truncating the pixel window to center the particle in the image array.
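The proportionality of Eq. (8) can be checked numerically. The sketch below (Python/NumPy; the spot position, width, and count levels are illustrative assumptions) evaluates the noiseless mean image of a Gaussian spot on a flat background, computes the expectation of the centroid per Eq. (4), and compares the resulting bias with the ratio-of-counts prediction:

```python
import numpy as np

# Pixel-center grid; all parameters below are illustrative, not the paper's.
Delta, n_pix = 1.0, 15
coords = (np.arange(n_pix) - (n_pix - 1) / 2) * Delta
x_jk, y_jk = np.meshgrid(coords, coords)

x0, y0 = 1.7, -0.6                 # true particle position (well inside array)
w, N_S, N_B_pix = 1.2, 1000.0, 4.0 # spot width, total signal, background/pixel
mean_img = (N_S * Delta**2 / (2 * np.pi * w**2)
            * np.exp(-((x_jk - x0)**2 + (y_jk - y0)**2) / (2 * w**2))
            + N_B_pix)             # midpoint approximation to <S_jk>

x_cm = (x_jk * mean_img).sum() / mean_img.sum()   # <x_hat_CM>, per Eq. (4)
bias = x_cm - x0                                  # B_CM

x_bar = x_jk.mean()                               # geometric center (0 here)
N_B_tot = N_B_pix * n_pix**2
N_S_tot = mean_img.sum() - N_B_tot
predicted = (N_B_tot / N_S_tot) * (x_bar - x_cm)  # right-hand side of Eq. (8)

print(f"bias = {bias:+.4f}, Eq. (8) prediction = {predicted:+.4f}")
```

Because the spot sits well inside the array, the pixelation and truncation residues of Eq. (7) are negligible here and the two numbers agree to several decimal places; the flat background pulls the centroid strongly toward the array center.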

## 3. Virtual window center-of-mass algorithm

In section 2, we derived the bias in the CM estimator *x*̂* _{CM}* arising from a statistically constant background. In a realistic experimental scenario, this bias greatly limits the practical utility of the CM algorithm. When the background signal is sufficiently well characterized, its detrimental effects can be suppressed or eliminated through background subtraction, image thresholding, or both. However, these procedures are critically dependent on the choice of background levels and thresholds [9] and are unavailable in cases where the background is unknown, uncharacterized, or time-varying. In this section, we will derive a more robust CM algorithm that requires no specific foreknowledge of the image to be analyzed but efficiently suppresses the bias arising from background levels.

Earlier, we found that the bias in the CM estimator *x̂_{CM}* can itself be estimated in real time through Eq. (8). The central concept of our Virtual Window Center-of-Mass (VWCM) estimator is to use this information to modify the image window in order to center it on the object and eliminate the estimator bias. The procedure is iterative, such that at the *n*th iteration, we calculate the center-of-mass *x̂*^{(n)}_{CM} and then modify the image array by eliminating a portion of the image near one edge, effectively shifting the geometric center *x̄*^{(n)} of the window. The update rule that defines our iterative algorithm (explained below) is to truncate the window at the *n*th iteration such that *x̄*^{(n+1)}=*x̂*^{(n)}_{CM}.

To see how eliminating a portion of the image shifts the array center, consider eliminating a single row of pixels (width Δ) from the edge of an array. The center of the resulting (rectangular) array is shifted by Δ/2 as a result. If our image resolution were infinitely fine, we could translate the array center *x̄* by any desired amount by discarding arbitrarily small portions of the image at one edge. Since we do not have infinitely fine resolution in practice, we may still approximate a sub-pixel truncation of the array by weighting the pixels and pixel coordinates along one edge of the image. For example, if we wish to translate the array center by a small amount *δ*/2<Δ/2, we simply multiply the pixel intensities *S_{jk}* along one edge by 1-*δ*/Δ and redefine the coordinates along that edge by *x_{jk}*→*x_{jk}*+*δ*. This procedure approximates the truncation of a region of width *δ* from the negative edge of the image, defining a virtual window shifted by *δ*/2 as desired: *x̄*→*x̄*±*δ*/2. To complete the algorithm, the user sets two termination conditions, *ε* and *n_{max}*, such that the algorithm terminates when |*x̄*^{(n+1)}-*x̄*^{(n)}|/Δ<*ε* or the number of iterations exceeds *n_{max}*. With these concepts, we can now precisely describe the VWCM algorithm, displayed graphically in Fig. 1:

*Virtual window center-of-mass (VWCM) algorithm*

1. Let the image *S*^{(1)}=*S* and coordinates (*x*^{(1)}_{jk}, *y*^{(1)}_{jk})=(*x_{jk}*, *y_{jk}*) correspond to the raw data.

_{jk}2. Beginning with *n*=1, Calculate the center-of-mass *x*̂^{(n)}
* _{CM}* from the image

*S*

^{(n)}and coordinates

*x*

^{(n)}

*:*

_{jk}${\hat{x}}_{\mathrm{CM}}^{\left(n\right)}=\frac{\sum _{\mathrm{jk}}{x}_{\mathrm{jk}}^{\left(n\right)}{S}_{\mathrm{jk}}^{\left(n\right)}}{\sum _{\mathrm{jk}}{S}_{\mathrm{jk}}^{\left(n\right)}}.$.

Repeat for *y*̂^{(n)}
* _{CM}*.

3. Define a new image *S*^{(n+1)} and a new coordinate system (*x*^{(n+1)}_{jk}, *y*^{(n+1)}_{jk}) by truncating the previous image such that

*x̄*^{(n+1)}=*x̂*^{(n)}_{CM}, *ȳ*^{(n+1)}=*ŷ*^{(n)}_{CM}.

For subpixel shifts, use the virtual window procedure described above.

4. Iterate until |*x̄*^{(n+1)}-*x̄*^{(n)}|/Δ<*ε* or *n*=*n_{max}*.
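The steps above fit in a few lines of code. The sketch below is a one-dimensional Python/NumPy rendering (the 2D algorithm applies the same update independently to *x* and *y*); the specific edge-weighting convention used here, weighting each pixel by its fractional overlap with the current window and assigning it the midpoint of that overlap, is one plausible realization of the virtual-window construction, not a transcription of the authors' code:

```python
import numpy as np

def vwcm_1d(S, x, delta, eps=1e-3, n_max=200):
    """Virtual-window center-of-mass along one axis (illustrative sketch).

    S: nonnegative counts; x: pixel-center coordinates with uniform
    spacing delta.  The window [a, b] is truncated from one edge each
    iteration so its geometric center tracks the current CM estimate;
    sub-pixel truncations are realized by fractionally weighting the
    edge pixel rather than resampling the image.
    """
    a, b = x[0] - delta / 2, x[-1] + delta / 2      # current window edges
    x_cm = 0.5 * (a + b)
    for _ in range(n_max):
        lo = np.maximum(x - delta / 2, a)           # kept strip of each pixel
        hi = np.minimum(x + delta / 2, b)
        w = np.clip(hi - lo, 0.0, None) / delta     # edge pixels get 0 < w < 1
        xc = np.where(w > 0, 0.5 * (lo + hi), 0.0)  # coordinate of kept strip
        x_cm = np.sum(xc * w * S) / np.sum(w * S)   # step 2: weighted CM
        shift = x_cm - 0.5 * (a + b)                # x_cm minus window center
        if shift >= 0:
            a += 2 * shift       # cut from the negative edge: center -> x_cm
        else:
            b += 2 * shift       # cut from the positive edge: center -> x_cm
        if abs(shift) / delta < eps:                # step 4 termination test
            break
    return x_cm

# Illustrative use: noiseless Gaussian spot plus flat background.
delta = 1.0
x = (np.arange(31) - 15) * delta
x0, w0 = 1.3, 1.5
S = (1000 * delta / np.sqrt(2 * np.pi * w0**2)
     * np.exp(-(x - x0)**2 / (2 * w0**2)) + 3.0)
print("raw CM:", (x * S).sum() / S.sum())   # pulled toward the array center
print("VWCM:  ", vwcm_1d(S, x, delta))      # converges near x0
```

One feature of this overlap-weighting choice is that a flat background's centroid is exactly the window center *x̄*, which is the property the bias relation of Eq. (8) relies on; on the assumed profile above, the raw CM falls visibly short of the true position while the VWCM recovers it in a handful of iterations.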

The update rule *x̄*^{(n+1)}=*x̂*^{(n)}_{CM} corresponds to a shift of the image that centers the array on the current estimate of the particle position. Denoting the bias in the *n*th iteration by *B*^{(n)}_{CM}=〈*x̂*^{(n)}_{CM}〉-*x*_{0}, we find from Eq. (8) and the update rule *x̄*^{(n+1)}=*x̂*^{(n)}_{CM}

$$B_{\mathrm{CM}}^{(n+1)}=\left(\frac{\langle\mathcal{N}_{B}\rangle}{\langle\mathcal{N}_{S}\rangle+\langle\mathcal{N}_{B}\rangle}\right)B_{\mathrm{CM}}^{(n)},\tag{9}$$

where 〈𝓝_{S}〉 and 〈𝓝_{B}〉 are the average number of counts arising from the signal and background, respectively. Equation (9) shows that *the bias in the VWCM algorithm tends exponentially to zero with the number of iterations n*. In fact, the signal-to-background ratio 〈𝓝_{S}〉/〈𝓝_{B}〉 changes at each iteration, as the background and signal are truncated differently during the image-shifting procedure. However, the VWCM is designed to truncate more background than signal, so that 〈𝓝_{S}〉/〈𝓝_{B}〉 *increases* at each iteration. Equation (9) is therefore a conservative estimate, providing a lower bound on the convergence rate. Regardless, the correction is small, and we have found Eq. (9) to give an accurate prediction of the bias-suppression rate for a wide range of parameters. The convergence rate depends on the signal-to-background ratio 〈𝓝_{S}〉/〈𝓝_{B}〉, but the algorithm requires no knowledge of this quantity. In fact, the algorithm requires no input beyond the image *S*, the pixel coordinates (*x_{jk}*, *y_{jk}*), and the termination conditions *ε* and *n_{max}*. Finally, note that the algorithm requires only simple arithmetic operations on the image and is therefore very fast, particularly when the signal-to-background ratio gives a satisfactory convergence rate.

## 4. Numerical simulations

We performed extensive numerical simulations in order to confirm the predicted features of the VWCM algorithm and compare its performance to other algorithms. For this test, we implemented four algorithms in standard numerical analysis software: the Gaussian fit, the Gaussian mask [10], the CM, and the VWCM. We included the Gaussian fit because it is generally assumed to provide superior accuracy, and the Gaussian mask and CM because these are among the fastest-executing algorithms and therefore the most promising for real-time implementation.

For the Gaussian fit, we used a function of the form

$$f(x,y)=A\exp\left[-\frac{(x-x_{0})^{2}}{2\sigma_{x}^{2}}-\frac{(y-y_{0})^{2}}{2\sigma_{y}^{2}}\right]+B,$$

where *x*_{0}, *y*_{0}, *σ_x*, *σ_y*, *A*, and *B* were the fit parameters. For both the Gaussian mask (see Ref. [10] for details) and the VWCM, we used termination conditions *ε*=10^{-3} and *n_{max}*=200. Finally, we preprocessed every image array *S* passed to the algorithms by subtracting the minimum value from the entire array. This simple procedure ensures that the counts in each pixel are nonnegative (a prerequisite for the VWCM algorithm) and removes spuriously large offsets, which severely degrade the CM algorithm, make the Gaussian fit more sensitive to its initial fit conditions, and require more iterations to reach convergence in the VWCM.

A graphical summary of our simulations is shown in Fig. 2. In each case, we generated underlying count rate functions *N_S*(*x*, *y*) corresponding to fluorescent objects with varying brightness, shape, and position, as described in the figure caption. We then integrated *N_S*(*x*, *y*) over the pixel coordinates to generate the pixelated rate *N_S^{Δ}*(*x_{jk}*, *y_{jk}*). For each image realization, an array of Poisson-distributed random numbers was generated based on this rate. Finally, we added constant Poisson-distributed background noise, with a rate determined by the user-defined signal-to-background ratio 〈𝓝_{S}〉/〈𝓝_{B}〉. The pixel size was taken to be Δ=123 nm, corresponding to our experiment (see section 5). For each type of object, we varied the position across a square grid corresponding to one pixel, generating a set of 1000 images at each position. The underlying object position was estimated for each image using the four algorithms, and each resulting distribution is plotted in Fig. 2 as a circle centered at the mean estimated position with a radius of one standard deviation (1σ).

Note that the Gaussian fit requires an initial parameter set that strongly affects its convergence and accuracy, while the Gaussian mask requires a mask size and an initial position guess to execute. For each particular image, the Gaussian mask’s performance can be optimized by adjusting the mask size. Because robustness and automation are key performance objectives for our algorithm development, in Fig. 2 we optimized the Gaussian mask for the tightly focused image and then used that mask shape for the remaining images without further “by-hand” adjustment. In contrast to the Gaussian fit and mask algorithms, the CM and VWCM accept no input other than the image matrix *S*.
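The Monte Carlo protocol described above can be sketched compactly. The fragment below (Python/NumPy) follows the recipe for the raw CM estimator only, using Δ=123 nm as stated in the text but an illustrative object position, width, and count budget of our own choosing; each frame receives the minimum-subtraction preprocessing before estimation:

```python
import numpy as np

rng = np.random.default_rng(1)

# Pixel-center grid in nm; 123 nm pixels as in the text, other values assumed.
Delta, n_pix = 123.0, 15
c = (np.arange(n_pix) - (n_pix - 1) / 2) * Delta
x_jk, y_jk = np.meshgrid(c, c)

x0, y0, w = 150.0, -45.0, 200.0          # object position and width (nm)
N_S, sbr = 5000.0, 10.0                  # total signal counts, <N_S>/<N_B>
rate = (N_S * Delta**2 / (2 * np.pi * w**2)
        * np.exp(-((x_jk - x0)**2 + (y_jk - y0)**2) / (2 * w**2)))
bg_rate = N_S / sbr / n_pix**2           # flat background rate per pixel

est = []
for _ in range(1000):                    # 1000 realizations, as in Fig. 2
    S = rng.poisson(rate) + rng.poisson(bg_rate, rate.shape)
    S = S - S.min()                      # preprocessing applied to every frame
    est.append((x_jk * S).sum() / S.sum())   # raw CM estimate of x
est = np.array(est)
print(f"CM: mean {est.mean():.1f} nm, std {est.std():.1f} nm (true x0 = {x0} nm)")
```

Summarizing each 1000-sample distribution by its mean and standard deviation reproduces the circles of Fig. 2 for one algorithm and one position; with these assumed parameters the CM's background-induced bias toward the array center is several times larger than its shot-noise scatter.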

Overall, the simulations show that the VWCM algorithm is fast, accurate, and robust. None of the other algorithms we investigated shared this combination of properties: the Gaussian fit algorithm has excellent accuracy and precision but is very slow and fails to converge in some cases; the Gaussian mask algorithm is both fast and accurate for a known, nearly-Gaussian image shape, but its performance is significantly degraded when the mask is ill-suited to the object; the CM algorithm is the fastest but exhibits a severe bias towards the geometric center of the image array, as shown in Fig. 2 and predicted in Eq. (8). The VWCM completely eliminates this bias in the CM algorithm and performs robustly, with little variation in accuracy for a variety of underlying object shapes and sizes. It is less accurate than the Gaussian mask and Gaussian fit for tightly focused images within a large field but exhibits comparable, and in some cases dramatically improved, accuracy for images that deviate from Gaussian shape. Its execution time is typically a few times longer than that of the CM (corresponding to the number of iterations *n* required to reach convergence) but 2 to 10 times shorter than the Gaussian mask and 100 to 1000 times shorter than the Gaussian fit, even with initial fit parameters chosen favorably in order to minimize “runaway” unstable cases that do not converge at all.

## 5. Experimental results

In order to test the VWCM algorithm in a realistic experimental setting, we tracked the motion of fluorescent microparticles using digital video microscopy. Our samples consisted of 190 nm diameter fluorescent dye-labeled polystyrene microspheres dispersed and dried (immobilized) on a glass cover slip. The sample was illuminated by a 488 nm solid-state laser, and fluorescence was collected through a water-immersion objective (magnification 60x, *NA* 1.2) and separated from the excitation by a dichroic filter. Images were obtained with a CCD camera operating at 30 frames per second. Using a three-axis piezoelectric stage, we displaced particles with nanometer precision, and for each image we located the object using the four algorithms discussed in section 4.

In Fig. 3, we show the distribution of estimated particle positions when a single in-focus particle was scanned in 100 nm increments over a 5×5 grid. At each point, approximately 30 images were captured and the resulting estimated position distributions are plotted as circles with 2σ radius (for clarity). This type of data, consisting of in-focus images within a large field of view, represents a “best-case” scenario for the Gaussian fit and mask algorithms (in both accuracy and execution time), since the point-spread function for this case is well-approximated by a Gaussian with constant shape. The data in the figure show that the VWCM eliminates the bias in the CM and gives position estimates commensurate with the Gaussian fit and mask. The latter algorithms are roughly two times more accurate than the VWCM for this in-focus case, but the VWCM executes faster and with no input parameters.

To see how the VWCM performs with images of a more complex nature, we captured images of an asymmetric aggregate of (most likely) two microspheres as it was moved outside the focal plane of the microscope. This asymmetric, extended object was moved in 100 nm increments over five steps while simultaneously being defocused by 1 *µ*m at each step. The resulting images and position estimates are shown in Fig. 4. Despite the complicated, asymmetric particle shape, the VWCM algorithm tracks the particle motion with high fidelity. For data of this type, the initial parameters for the Gaussian fit and the mask size for the Gaussian mask algorithm would need to be tailored for each image in order to achieve satisfactory tracking. In contrast, the VWCM requires no adjustment for these (or any other) images.

## 6. Conclusions

In this paper, we described a new algorithm for finding the center of mass of a compact image while suppressing the bias due to a flat background. This VWCM algorithm iteratively centers the particle in a “virtual window” where there is no bias. We demonstrated the VWCM both numerically and experimentally, confirming its predicted simplicity and robustness. It is well-suited for real-time applications in which particles of various sizes and shapes will be tracked simply, robustly and accurately. In the future, we hope to extend our virtual-window method to treat images with a sloped (non-constant) background.

## Acknowledgements

The authors gratefully acknowledge Peter Carmichael for stimulating discussions. A. B. is supported by the National Research Council.

## References and links

**1. **M. J. Saxton and K. Jacobson, “Single-particle tracking: applications to membrane dynamics,” Annu. Rev. Biophys. Biomol. Struct. **26**, 373–399 (1997). [CrossRef] [PubMed]

**2. **A. Yildiz, J. N. Forkey, S. A. McKinney, T. Ha, Y. E. Goodman, and P. R. Selvin, “Myosin V walks hand-over-hand: Single fluorophore imaging with 1.5-nm localization,” Science **300**, 2061–2065 (2003). [CrossRef] [PubMed]

**3. **X. Michalet, F. F. Pinaud, L. A. Bentolila, J. M. Tsay, S. Doose, J. J. Li, G. Sundaresan, A. M. Wu, S. S. Gambhir, and S. Weiss, “Quantum Dots for Live Cells, in Vivo Imaging, and Diagnostics,” Science **307**, 538–544 (2005). [CrossRef] [PubMed]

**4. **D. Weihs, T. G. Mason, and M. A. Teitell, “Bio-Microrheology: A Frontier in Microrheology,” Biophys. J. **91**, 4296–4305 (2006). [CrossRef] [PubMed]

**5. **P. Bahukudumbi and M. A. Bevan, “Imaging energy landscapes with concentrated diffusing colloidal probes,” J. Chem. Phys. **126**, 244702 (2007). [CrossRef] [PubMed]

**6. **H.-J. Wu, W. Everett, S. Anekal, and M. Bevan, “Mapping Patterned Potential Energy Landscapes with Diffusing Colloidal Probes,” Langmuir **22**, 6826–6836 (2006). [CrossRef] [PubMed]

**7. **S. K. Sainis, V. Germain, and E. R. Dufresne, “Statistics of Particle Trajectories at Short Time Intervals Reveal fN-Scale Colloidal Forces,” Phys. Rev. Lett. **99**, 018303 (2007). [CrossRef] [PubMed]

**8. **N. Bobroff, “Position measurement with a resolution and noise-limited instrument,” Rev. Sci. Instrum. **57**, 1152 (1986). [CrossRef]

**9. **M. Cheezum, W. Walker, and W. Guilford, “Quantitative Comparison of Algorithms for Tracking Single Fluorescent Particles,” Biophys. J. **81**, 2378–2388 (2001). [CrossRef] [PubMed]

**10. **R. Thompson, D. Larson, and W. Webb, “Precise Nanometer Localization Analysis for Individual Fluorescent Probes,” Biophys. J. **82**, 2775–2783 (2002). [CrossRef] [PubMed]

**11. **J. Crocker and D. Grier, “Methods of Digital Video Microscopy for Colloidal Studies,” J. Colloid Interface Sci. **179**, 298–310 (1996). [CrossRef]

**12. **S. B. Andersson, “Position estimation of fluorescent probes in a confocal microscope,” in *Proceedings of IEEE Conference on Decision and Control *(IEEE, 2007) pp. 2445–2450.

**13. **T. Sun and S. Andersson, “Precise 3-D localization of fluorescent probes without numerical fitting,” in *Proceedings of IEEE Annual International Conference of the Engineering in Medicine and Biology Society* (IEEE, 2007) pp. 4181–4184.

**14. **B. Carter, G. Shubeita, and S. Gross, “Tracking single particles: a user-friendly quantitative evaluation,” Phys. Biol. **2**, 60–72 (2005). [CrossRef] [PubMed]

**15. **V. Levi, Q. Ruan, and E. Gratton, “3-D particle tracking in a two-photon microscope. Application to the study of molecular dynamics in cells,” Biophys. J. **88**, 2919–2928 (2005). [CrossRef] [PubMed]

**16. **A. J. Berglund and H. Mabuchi, “Tracking-FCS: Fluorescence Correlation Spectroscopy of Individual Particles,” Opt. Express **13**, 8069–8082 (2005). [CrossRef] [PubMed]

**17. **M. Armani, S. Chaudhary, R. Probst, and B. Shapiro, “Using feedback control and micro-fluidics to steer individual particles,” in *Proceedings of IEEE Conference on Micro Electro Mechanical Systems* (IEEE, 2005), pp. 855–858.

**18. **A. E. Cohen and W. E. Moerner, “Method for Trapping and Manipulating Nanoscale Objects in Solution,” Appl. Phys. Lett. **86**, 093109 (2005). [CrossRef]

**19. **H. Cang, C. M. Wong, C. S. Xu, A. H. Rizvi, and H. Yang, “Confocal three dimensional tracking of a single nanoparticle with concurrent spectroscopic readout,” Appl. Phys. Lett. **88**, 223901 (2006). [CrossRef]

**20. **K. McHale, A. J. Berglund, and H. Mabuchi, “Quantum dot photon statistics measured by three-dimensional particle tracking,” Nano Lett. **7**, 3535–3539 (2007). [CrossRef] [PubMed]

**21. **M. Armani, S. Chaudhary, R. Probst, and B. Shapiro, “Using feedback control of microflows to independently steer multiple particles,” IEEE J. Microelectromech. Syst. **15**, 945–956 (2006). [CrossRef]

**22. **C. W. Gardiner, *Handbook of Stochastic Methods for Physics, Chemistry and the Natural Sciences*, 2nd ed. (Springer-Verlag, 1985).

**23. **N. G. van Kampen, *Stochastic Processes in Physics and Chemistry* (Elsevier Science Pub. Co., 2001).

**24. **L. Novotny and B. Hecht, *Principles of Nano-Optics* (Cambridge University Press, 2006).