Abstract

Single particle tracking is essential in many branches of science and technology, from the measurement of biomolecular forces to the study of colloidal crystals. Standard methods rely on algorithmic approaches; by fine-tuning several user-defined parameters, these methods can be highly successful at tracking a well-defined kind of particle under low-noise conditions with constant and homogeneous illumination. Here, we introduce an alternative data-driven approach based on a convolutional neural network, which we name DeepTrack. We show that DeepTrack outperforms algorithmic approaches, especially in the presence of noise and under poor illumination conditions. We use DeepTrack to track an optically trapped particle under very noisy and unsteady illumination conditions, where standard algorithmic approaches fail. We then demonstrate how DeepTrack can also be used to track multiple particles and non-spherical objects such as bacteria, also at very low signal-to-noise ratios. In order to make DeepTrack readily available to other users, we provide a Python software package, which can be easily personalized and optimized for specific applications.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. INTRODUCTION

In many experiments, the motion of a microscopic particle is used as a local probe of its surrounding microenvironment. For example, it is used to calibrate optical tweezers [1], to measure biomolecular forces [2], to explore the rheology of complex fluids [3], to monitor the growth of colloidal crystals [4], and to determine the microscopic mechanical properties of tissues [5]. The first step in any statistical analysis of the trajectory of a microscopic particle is to track the particle position. This is often done by acquiring a video of the particle and then employing computer algorithms to determine the particle position frame by frame. This technique was introduced about 20 years ago and is generally referred to as digital video microscopy [6,7]. With the increasingly improved quality of digital image acquisition devices and the steady growth in available computational power, the experimental bottleneck has now become the determination of the best data-analysis algorithm and its associated parameters for each specific experimental situation.

Currently, single particle tracking is dominated by algorithmic approaches. In fact, a large number of algorithms have been developed, especially to track fluorescent particles and molecules [7–13]. Some of the most commonly employed algorithms are the calculation of the centroid of the particle after thresholding the image to convert it to black and white [7] and the calculation of the radial symmetry center of the particle [12]. When their parameters and the image acquisition process are optimized by the user, these methods routinely achieve subpixel resolution. However, because of their algorithmic nature, they perform best when their underlying assumptions are satisfied; in general, these assumptions are that the particle is spherically symmetric, that the illumination is homogeneous and constant, and that the particle remains in the same focal plane for the whole duration of the experiment. Their performance degrades severely at low signal-to-noise ratios (SNRs) or under unsteady or inhomogeneous illumination, often requiring significant intervention by the user to reach an acceptable performance, which in turn introduces user bias. In practice, in these conditions, scientists need to manually search the space of available algorithms and parameters, a process that is often laborious, time consuming, and user dependent. This has severely limited the widespread uptake of these methods outside specialized research labs, while leading to a flourishing of research activity devoted to comparing their performance in challenging conditions [14–17].

Alternative data-driven approaches have been largely overlooked, despite their potential for better, more autonomous performance. In fact, data-driven deep-learning algorithms based on convolutional neural networks [18] have been extremely successful in image recognition and classification for a wealth of applications, from face recognition [19] to microscopy [20], and some pioneering work has already shown some of their capabilities for particle tracking [21,22].

Here, we introduce a fully automated deep-learning network design that achieves subpixel resolution for a broad range of particle kinds, also in the presence of noise and under poor, unsteady illumination conditions. We demonstrate this approach by tracking an optically trapped particle under very noisy and unsteady illumination conditions, where standard algorithmic approaches fail. Then, we show how it can be used to track multiple particles and non-spherical objects such as bacteria. In order to make this approach readily available to other users, we provide a Python software package, called DeepTrack, which can be readily personalized and optimized for the needs of specific users and applications.

2. RESULTS

A. DeepTrack Neural-Network Architecture and Performance

While standard algorithmic approaches require the user to explicitly provide the rules (i.e., the algorithm) to process the input data in order to obtain the sought-after result, machine-learning systems are trained on large sets of input data and corresponding results, from which they autonomously determine the rules for performing their assigned task. Neural networks are one of the most successful tools for machine learning [23]; they consist of a series of layers that, when appropriately trained, output increasingly meaningful representations of the input data leading to the sought-after result. These layers can be of various kinds (for example, convolutional layers, max-pooling layers, and dense layers), and their number is the network depth (hence the term deep learning). In particular, convolutional neural networks have been shown to perform well in image classification [24–26] and regression tasks [27]; their architecture consists of a series of convolutional layers (the convolutional base) followed by a series of dense layers (the dense top). In each convolutional layer, a series of 2D filters is convolved with the input image, producing as output a series of feature maps. The size of the filters with respect to the input image determines the features that can be detected in each layer; to gradually detect larger features, the feature maps are downsampled by adding a max-pooling layer after each convolutional layer. The max-pooling layers retain the maximum values of the feature maps over a certain area of the input image. The downsampled feature maps are then fed as input to the next network layer. After the last convolutional layer there is a dense top, which consists of fully connected dense layers. These layers integrate the information contained in the output feature maps of the last max-pooling layer to determine the sought-after result. Initially, the weights of the convolutional filters and of the dense layers are random, but they are iteratively optimized using a back-propagation algorithm [28].
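
As a concrete illustration of these two operations, the following minimal Python sketch (using NumPy and SciPy; the toy image and filter values are arbitrary, not DeepTrack's) convolves an image with a single 2D filter to produce a feature map and then max-pools it:

import numpy as np
from scipy.signal import convolve2d

image = np.zeros((8, 8))
image[:, 4:] = 1.0                       # a toy image with a vertical edge
edge_filter = np.array([[1, 0, -1],
                        [1, 0, -1],
                        [1, 0, -1]])     # a filter responding to vertical edges
feature_map = convolve2d(image, edge_filter, mode='valid')   # 6 x 6 feature map
# Max-pooling: keep the maximum over non-overlapping 2 x 2 blocks
h, w = feature_map.shape
pooled = feature_map[:h // 2 * 2, :w // 2 * 2].reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))
print(feature_map.shape, pooled.shape)   # (6, 6) -> (3, 3)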

A schematic of the neural-network architecture we employed in this work is shown in Fig. 1(a), and its details are in Supplement 1. We chose this specific architecture with three convolutional layers and two dense layers because it is deep enough to learn and generalize the training data set but not so deep that the learning rate is excessively slowed down by the vanishing gradient problem. Given an input image, this neural network returns the x, y, and r coordinates of the particle, where the x and y coordinates are the Cartesian coordinates and the r coordinate is the radial distance of the particle from the center of the image. Even though the r coordinate might seem redundant, it is useful to identify images with no particles, for which the neural network is automatically trained to return a very large r-coordinate value, as these images resemble images with a particle that is far outside the frame. We have implemented this neural network using the Python-based Keras library [29] with a TensorFlow backend [30] because of their broad adoption in research and industry; nevertheless, we remark that the approach we propose is independent of the deep-learning framework used for its implementation. Furthermore, once trained, the neural networks can be exported and readily integrated with other widely employed computational platforms such as MatLab and LabVIEW.
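
A minimal Keras/TensorFlow sketch consistent with this description follows: three convolutional layers, each followed by max-pooling, a two-layer dense top, and a three-unit linear output for the x, y, and r coordinates. The input size and the filter and unit counts are illustrative assumptions; the values actually used are given in Supplement 1.

import tensorflow as tf
from tensorflow.keras import layers

model = tf.keras.Sequential([
    tf.keras.Input(shape=(51, 51, 1)),             # grayscale input image
    layers.Conv2D(16, (3, 3), activation='relu'),  # convolutional base...
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(32, (3, 3), activation='relu'),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation='relu'),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(32, activation='relu'),           # ...followed by the dense top
    layers.Dense(32, activation='relu'),
    layers.Dense(3),                               # linear output: x, y, r
])
model.compile(optimizer='adam', loss='mae')        # MAE is the metric of Fig. 1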

Fig. 1. DeepTrack neural-network architecture and performance. (a) DeepTrack architecture consists of a convolutional base (three convolutional neural network layers depicted in orange, each followed by a max-pooling layer depicted in gray) followed by a dense top (two fully connected dense layers and a dense output layer). In the convolutional base, the image is iteratively filtered to extract an increasing number of feature maps and downsampled. In the dense top, the feature maps are used to predict the values of the x, y, and r coordinates of the particle. (b) Mean absolute error (MAE) of the position detection as a function of signal-to-noise ratio (SNR) for DeepTrack (orange circles) and the centroid (gray circles) and radial symmetry (gray squares) algorithms. The bordeaux asterisks represent the performance achieved by averaging the coordinates obtained with 100 independently trained neural networks. (c) Same as (b) as a function of the gradient intensity at SNR = 50. For each SNR or gradient intensity value, we used 1000 simulated images; the error bars are contained within the symbols. See also Example 1a of Code 1, Ref. [42], which demonstrates the performance of a pre-trained neural network, and Example 1b of Code 1, Ref. [42], which illustrates the training and operation of the neural network.

Once the network architecture is defined, we need to train it on a set of particle images for which we know the ground-truth values of the x, y, and r coordinates of the particle. In each training step, the neural network is tasked with predicting the coordinates corresponding to a set of images; the neural-network predictions are then compared to the ground-truth values of the coordinates, and the prediction errors are finally used to adjust the trainable parameters of the neural network using the back-propagation training algorithm [28]. The training of a neural network is notoriously data intensive, requiring in our case several million particle images, as described in detail in Supplement 1. Therefore, in order to have enough images and to accurately know the ground-truth values of the corresponding coordinates, we simulate the particle images. The particle generation routine is described in detail in Supplement 1; briefly, we generate the images of the particles using Bessel functions because they approximate very well the appearance of colloidal particles in digital video microscopy [31]. By setting the parameters of the image generation function, we can generate images where the particles are represented by dark or bright spots or rings of different intensities on a bright or dark background with varying SNR and illumination gradients; some examples of these images can be seen in the insets of Figs. 1(b) and 1(c). We train the neural network using a total of about 1.5 million images, which we present to the network in gradually increasing batches (see Supplement 1). In this way, the neural-network optimization process can easily explore a large parameter space at the beginning of the training and is later annealed towards an optimal parameter set [32]. We simulate a new batch of images before each training step; since each image is shown to the network only once and then discarded, this makes very efficient use of the computer memory and, more importantly, prevents overtraining and avoids the need for real-time validation of the network performance (a schematic of this training loop is sketched below). Overall, this training process is very efficient, taking about 3 h on a standard laptop computer, a time that can be reduced by up to 2 orders of magnitude on a GPU-enhanced computer. Furthermore, we remark that once the neural network is trained, its use is very computationally efficient, and its execution time is comparable to that of standard algorithms. For example, DeepTrack tracks 700 frames per second for single-particle images (120×120 px, see Fig. 2) on a standard desktop computer (processor Intel Core i5 6500 @ 3.2 GHz).
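
The following schematic training loop captures this strategy, assuming a hypothetical simulate_particle_images function standing in for DeepTrack's image-generation routine (Supplement 1): every batch is freshly simulated, used for a single gradient step, and discarded, with the batch size growing over training (the schedule here is scaled down for illustration):

import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

# Small stand-in for the DeepTrack network sketched above
model = tf.keras.Sequential([
    tf.keras.Input(shape=(51, 51, 1)),
    layers.Flatten(),
    layers.Dense(3),
])
model.compile(optimizer='adam', loss='mae')

def simulate_particle_images(n):
    """Hypothetical stand-in: return n simulated images and their (x, y, r)."""
    images = np.random.rand(n, 51, 51, 1)   # placeholder images
    targets = np.random.rand(n, 3)          # placeholder ground-truth coordinates
    return images, targets

batch_sizes = [8] * 100 + [32] * 100 + [128] * 100   # gradually increasing batches
for batch_size in batch_sizes:
    images, targets = simulate_particle_images(batch_size)
    model.train_on_batch(images, targets)   # each image is seen only once, then discarded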

Fig. 2. Experimental tracking of an optically trapped particle: (a)–(d) DeepTrack and standard algorithms lead to the same results when tracking and analyzing the trajectory of an optically trapped particle (silica microsphere, diameter 1.98 μm) under optimal illumination conditions (see also Visualization 1). (a) Image of the optically trapped particle and its position obtained by DeepTrack (orange circle) and by the radial symmetry algorithm (gray cross); (b) part of the trajectory tracked by DeepTrack (orange line) and by the radial symmetry algorithm (gray line); (c) probability distributions and (d) autocorrelation functions of the particle position obtained from the trajectory tracked by DeepTrack (orange lines) and by the radial symmetry algorithm (gray lines) and corresponding fits to theory (black lines). (e)–(h) DeepTrack outperforms standard algorithms when the illumination is unsteady (here obtained by illuminating the sample with a standard lamp flickering at 100 Hz) and noisy (setting the camera gain to its highest value): (e) image of the same particle in the same optical trap with noisy illumination (see also Visualization 2); (f) the trajectory reconstructed by DeepTrack appears qualitatively more similar to those shown in (b); (g) the probability distribution and (h) the autocorrelation function obtained by DeepTrack (orange lines) agree with those obtained in (c) [the black lines are the same as in (c) and (d)], while the probability distribution from the radial symmetry algorithm [gray symbols in (g)] is widened by the noise, and the autocorrelation function [gray line in (h)] features a large peak at 0, which is the signature of white noise, and some small oscillations at 100 Hz, which are due to the flickering of the illumination. The scale bars in (a) and (e) represent 20 pixels corresponding to 1.4 μm. See also Example 2 of Code 1, Ref. [42], which uses a pre-trained neural network to track these particles.

In Fig. 1(b), we test the performance of DeepTrack on simulated images with a range of SNR values and homogeneous illumination (i.e., without any illumination gradient). Examples of these images are shown in the insets of Fig. 1(b) with increasing SNR from left to right. As shown by the orange circles in Fig. 1(b), DeepTrack achieves subpixel accuracy over the whole range of SNRs, from SNR = 3.2 (noisiest images on the left) to SNR = 80 (almost perfect images on the right). We then benchmark DeepTrack against the centroid (gray circles) and radial symmetry (gray squares) algorithms by testing them on the same set of images. While the radial symmetry algorithm is better for almost perfect images, DeepTrack outperforms both algorithms in high-noise conditions up to SNR = 40 and outperforms the centroid algorithm over the whole range of SNRs. Furthermore, the performance of DeepTrack can be significantly improved by averaging the particle coordinates obtained from several independently trained neural networks; for example, the bordeaux asterisks in Fig. 1(b) represent the results obtained by averaging 100 neural networks and show that the neural-network approach outperforms both algorithmic approaches over the whole range of SNRs.
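
This ensemble averaging is straightforward to express in code; a sketch, where networks is a list of independently trained Keras models and images and ground_truth are the test images and their true x, y coordinates (all assumed to exist):

import numpy as np

def ensemble_mae(networks, images, ground_truth):
    """MAE (in pixels) of the ensemble-averaged (x, y) predictions."""
    predictions = np.mean(
        [net.predict(images)[:, :2] for net in networks],  # keep x and y only
        axis=0,                                            # average over the networks
    )
    return np.mean(np.abs(predictions - ground_truth))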

In Fig. 1(c), we explore the influence of illumination gradients, which are known to introduce artifacts in digital video microscopy. In fact, illumination gradients are often present in experiments as a consequence of illumination inhomogeneities, for example, due to the presence of out-of-focus microfluidic structures or biological tissues. Although it is sometimes possible to correct for such gradients by image flattening, often the direction and intensity of the gradient vary as a function of position and time, so that they cannot be straightforwardly corrected [9]. We test DeepTrack on 1000 SNR = 50 images affected by a range of illumination gradients with random direction. Examples of these images are shown in the insets of Fig. 1(c) with increasing gradient intensity from left to right. As shown by the orange circles in Fig. 1(c), DeepTrack predicts the particle coordinates with an accuracy better than 0.1 pixels over the whole range of gradient intensities, in contrast to the performance of the centroid (gray circles) and radial symmetry (gray squares) algorithms, which rapidly deteriorates as soon as any intensity gradient is present. As shown by the bordeaux asterisks, also in this case the performance of DeepTrack can be significantly improved by averaging the coordinates obtained from multiple independently trained neural networks.
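
One plausible way to impose an illumination gradient of random direction on a simulated image, as in this test, is a linear brightness ramp; the ramp form is an assumption here, and DeepTrack's exact gradient model is described in Supplement 1:

import numpy as np

def add_illumination_gradient(image, intensity, rng=None):
    rng = rng or np.random.default_rng()
    h, w = image.shape
    theta = rng.uniform(0, 2 * np.pi)        # random gradient direction
    y, x = np.mgrid[0:h, 0:w]
    ramp = (np.cos(theta) * x + np.sin(theta) * y) / max(h, w)
    return image + intensity * ramp          # linear brightness ramp

shaded = add_illumination_gradient(np.ones((51, 51)), intensity=0.5)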

B. Experimental Tracking of an Optically Trapped Particle

Until now, we have trained and tested our neural network on simulated images. In order to see how DeepTrack works in real life, we now test its performance on experimental images, while still training it on simulated images as above. As a first experiment, we track the trajectory of a particle held in an optical tweezers (see Supplement 1). An optical tweezers is a focused laser beam that can trap a microscopic particle near its high-intensity focal spot [1]. From the motion of the particle, it is possible to calibrate the optical tweezers and then use it for quantitative force measurement. The critical step is to track the position of the particle, which is often done using digital video microscopy. We record the video of an optically trapped microsphere (silica, diameter 1.98 μm) in an optical tweezers (wavelength 532 nm, power 2.5 mW at the sample). First, we record the particle video illuminating the sample with a high-power LED, which offers ideal illumination conditions and therefore provides us with a very clear image of the particle [Fig. 2(a) and Visualization 1]. In these conditions, standard methods (we show results for the radial symmetry algorithm because it performs best amongst the standard methods we tested) can track the particle extremely well [gray cross in Fig. 2(a) and Visualization 1; gray line in Fig. 2(b)] and serve as a benchmark for our neural network [orange circle in Fig. 2(a) and Visualization 1; orange line in Fig. 2(b)]: the two trajectories agree to within 0.089 pixels (mean absolute difference). In order to obtain a more quantitative measure of the agreement between the two tracking methods, we calculated the probability distribution [Fig. 2(c)] and the autocorrelation function [Fig. 2(d)] of the particle position, which are standard methods to calibrate optical tweezers [1]. In both cases, the DeepTrack results (orange lines) agree well with the standard algorithm results (gray lines); furthermore, these results agree with the fits to the corresponding theoretical functions (black lines), which are, respectively, a Gaussian function (see Supplement 1) and a decaying exponential (see Supplement 1). Here, we trained DeepTrack with about 1.5 million simulated images similar to the experimental images, where the particle is represented by the sum of a Bessel function of the first order with positive intensity (bright spot) and a Bessel function of the second order with negative intensity (dark ring); the background level, SNR, and illumination gradient are randomized for each image (see Example 2 of Code 1, Ref. [42], and the sketch below).
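
A sketch of such a particle profile, built from SciPy's Bessel functions of the first kind: the radial scaling, relative amplitudes, and normalization below are illustrative assumptions, and the routine actually used is given in Supplement 1 and Example 2 of Code 1, Ref. [42]:

import numpy as np
from scipy.special import jv

def particle_image(size=51, x0=25.0, y0=25.0, scale=4.0):
    """Bright spot (first-order Bessel, positive) plus dark ring (second-order, negative)."""
    y, x = np.mgrid[0:size, 0:size]
    r = np.sqrt((x - x0) ** 2 + (y - y0) ** 2) / scale + 1e-9  # avoid division by zero at r = 0
    profile = jv(1, r) / r - 0.5 * jv(2, r) / r
    return profile / profile.max()           # normalize the peak to 1

img = particle_image()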

We now make the experiment more challenging by substituting the LED illumination with a very poor illumination device: a low-power incandescent light bulb connected to an AC plug and placed 10 cm above the sample without any condenser. The 50 Hz AC current makes the illumination light flicker at 100 Hz, and the low power requires us to increase the camera gain to its maximum, leading to a high level of electronic noise [Fig. 2(e) and Visualization 2]. Even in these very challenging conditions, DeepTrack manages to track the particle position accurately [orange circle in Fig. 2(e) and Visualization 2; orange line in Fig. 2(f)], while the standard tracking algorithm loses its accuracy [gray cross in Fig. 2(e) and Visualization 2; gray line in Fig. 2(f)]. We can quantify these observations by calculating the probability distribution [Fig. 2(g)] and the autocorrelation function [Fig. 2(h)] of the particle position. The probability distribution calculated from the trajectory obtained by the standard method [gray line in Fig. 2(g)] is significantly widened by the illumination noise, while that from the DeepTrack trajectory (orange line) agrees well with the theoretical fit [black line, same as in Fig. 2(c)]. Even more strikingly, the autocorrelation from the standard algorithm trajectory [gray line in Fig. 2(h)] does not retain any of the properties of the motion of an optically trapped particle. It features a large peak at τ = 0, which is the signature of white noise, and some small oscillations at 100 Hz, which are due to the flickering of the illumination. Instead, the autocorrelation from the DeepTrack trajectory (orange line) agrees very well with the theoretical fit [black line, same as in Fig. 2(d)], demonstrating that the neural network removes essentially all the overwhelming noise introduced by the poor illumination, including the heavy flickering of the light source, as shown by the absence of 100 Hz oscillations.
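
This diagnostic is easy to reproduce; the sketch below computes the position autocorrelation of a tracked trajectory and shows, on a toy trapped-particle trajectory, how white tracking noise adds a sharp peak at lag 0 only (the dynamics and noise amplitudes are illustrative, not fitted to the experiment):

import numpy as np

def position_autocorrelation(x, max_lag):
    x = x - np.mean(x)
    return np.array([np.mean(x[:len(x) - lag] * x[lag:]) for lag in range(max_lag)])

rng = np.random.default_rng(1)
x = np.zeros(20000)
for t in range(1, len(x)):
    x[t] = 0.99 * x[t - 1] + rng.normal(scale=0.1)   # toy trapped-particle motion
acf_clean = position_autocorrelation(x, 50)
acf_noisy = position_autocorrelation(x + rng.normal(scale=0.5, size=x.size), 50)
print(acf_noisy[0] - acf_clean[0])   # white noise adds variance only at lag 0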

C. Tracking of Multiple Particles

In many experiments in, e.g., biology [33], statistical physics [34], and colloidal science [35], it is necessary to track multiple particles simultaneously. To demonstrate how DeepTrack works in a multi-particle case, we record a video with multiple microspheres (silica, diameter 1.98 μm) under the same ideal illumination conditions as in Figs. 2(a)–2(d). A sample frame is shown in Fig. 3(a) (see also Visualization 3). DeepTrack is used as above with only three modifications (see also Example 3 of Code 1, Ref. [42], and the sketch below). First, during the training, the neural network is exposed to simulated images containing multiple particles as well as images of single particles, and it is in all cases trained to determine the coordinates of the most central particle in each image (here, we employed 2 million images containing one to four particles similar to those employed in Fig. 2). Second, during the detection, the frame is divided into overlapping square boxes [for example, the blue, red, and green boxes shown in Fig. 3(a)] whose sides are approximately twice the particle diameter (in Fig. 3, we use 51×51 px boxes separated by 5 pixels), and for each box the neural network determines the coordinates of the most central particle [e.g., blue dots in Fig. 3(b)]. Importantly, in order to easily detect empty boxes, the neural network is trained to return a large radial distance if no particle is present; therefore, only particle positions for which the radial distance is smaller than a certain threshold (typically, a value between the particle radius and diameter) are retained for further analysis. Third, depending on the size of the boxes and their separation, the same particle can be detected multiple times; therefore, all detections [blue dots in Fig. 3(c)] whose mutual distance is smaller than a certain threshold (typically, a value between the particle radius and diameter) are assigned to the same particle, and the particle coordinates are then calculated as the centroid of these positions [orange circle in Fig. 3(c)]. Following this procedure, DeepTrack can accurately track particles even when they are close to contact, which is relevant in many experiments. We have also checked that DeepTrack does not present pixel bias in the presence of a nearby second particle [36] (see Supplement 1). There is a trade-off between detection accuracy and computational efficiency, as a smaller separation between the boxes increases the number of detections available to estimate the particle coordinates but requires more computational resources. It can also be noticed in Visualization 3 that particles closer to the image border than roughly half a box are not detected; if necessary, this effect can easily be eliminated by padding the image.
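
The detection procedure just described can be summarized in a few lines of Python; in this sketch, model is assumed to be a trained network returning (x, y, r) relative to the center of each box, and the thresholds are illustrative values in the "between radius and diameter" range mentioned above:

import numpy as np

def track_frame(frame, model, box=51, step=5, r_max=20.0, merge_radius=14.0):
    # Scan overlapping boxes across the frame
    detections = []
    for top in range(0, frame.shape[0] - box + 1, step):
        for left in range(0, frame.shape[1] - box + 1, step):
            crop = frame[top:top + box, left:left + box]
            x, y, r = model.predict(crop[np.newaxis, ..., np.newaxis])[0]
            if r < r_max:                    # discard empty boxes
                detections.append((left + box / 2 + x, top + box / 2 + y))
    # Merge detections closer than merge_radius and return the cluster centroids
    clusters = []
    for d in detections:
        for cluster in clusters:
            cx, cy = np.mean(cluster, axis=0)
            if np.hypot(d[0] - cx, d[1] - cy) < merge_radius:
                cluster.append(d)
                break
        else:
            clusters.append([d])
    return [tuple(np.mean(cluster, axis=0)) for cluster in clusters]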

Fig. 3. Tracking of multiple particles. (a) DeepTrack tracks the position (orange circles) of several microspheres (silica, diameter 1.98 μm) diffusing above a coverslip (see also Visualization 3 and Example 3 of Code 1, Ref. [42]). The tracking is performed as follows. First, the frame is divided into overlapping square boxes (for example, the blue, red, and green boxes) whose sides are approximately twice the particle diameter (here we use 51×51 pixel boxes separated by 5 pixels). (b) Then, each box is tracked and the particle x, y, and r coordinates are determined [the blue, red, and green boxes correspond to those shown in (a)], so that each particle is detected multiple times (blue dots in each square). If no particle is present within a box, the network is trained to return a value of the radial distance much larger than the particle radius; importantly, only the particle coordinates for which the radial coordinate is smaller than the particle diameter are retained. (c) Finally, the multiple particle detections are clustered into sets of points whose mutual distance is smaller than the particle radius (blue dots), and the corresponding particle x and y coordinates are then obtained by calculating the centroid of these points (orange circle). All scale bars indicate 20 pixels corresponding to 1.4 μm.

We can now test DeepTrack on more challenging multi-particle videos, as shown in Fig. 4 (see also Visualization 4 and Visualization 5 and Examples 4 and 5 of Code 1, Ref. [42]). For example, we consider the more challenging case where we want to track only one of two kinds of objects in very noisy conditions. We record a video with microspheres (silica, diameter 4.28 μm) and fluorescent B. subtilis bacteria (see Supplement 1), featuring electronic noise due to the high camera gain at low illumination as well as frame-to-frame changes in the intensity of the background, particles, and bacteria caused by variations in the frame contrast. We extensively tried to track this video in a reliable way using other standard algorithms, without success. When using DeepTrack, we train two different neural networks: one to detect the Brownian particles while ignoring the fluorescent bacteria, and the other to detect the fluorescent bacteria while ignoring the Brownian particles. We train both networks with simulated images of multiple particles similar to the two kinds present in the solution (a second-order Bessel function with negative intensity for the Brownian particles and a first-order Bessel function with positive intensity for the bacteria) with a low background level and SNR; the ground-truth values given to the neural network are the coordinates of the most central particle in the image. For the network detecting Brownian particles, the most central particle is always set to represent a microsphere (thus training the network to ignore the surrounding bright spots), and for the network detecting fluorescent bacteria, the most central particle is always set to represent a bacterium. Training with about 1.3 million images in each case allows us to accurately and selectively track only the Brownian particles or only the bacteria, respectively, as shown in Fig. 4(a) (see also Visualization 4 and Example 4 of Code 1, Ref. [42]).

Fig. 4. Tracking in poor illumination conditions and in different axial planes. (a) DeepTrack can be trained to selectively track either Brownian particles (orange dots) while ignoring fluorescent B. subtilis bacteria or fluorescent bacteria (orange circles) while ignoring the Brownian particles at very low SNR (see also Visualization 4 and Example 4 of Code 1, Ref. [42]). (b) DeepTrack can also track Brownian particles in different focal planes (see also Visualization 5 and Example 5 of Code 1, Ref. [42]). Both scale bars indicate 20 pixels corresponding to (a) 5.2 μm and (b) 1.4 μm.

Another experimental situation that is often encountered is particles diffusing in and out of focus. For this case, we record a video with microspheres (polystyrene, diameter 0.5 μm) diffusing above the surface of a coverslip so that in each frame there are particles in different planes. To track the particles, we train the neural network with simulated images of multiple particles similar to the images of the particles in the video (combinations of Bessel functions of orders 1–4 with negative intensities) with high background intensities and SNRs; the ground-truth values given to the neural network are the coordinates for the most central particle in the image. Training with about 1.3 million images allows us to accurately track particles in different focal planes as shown in Fig. 4(b) (see also Visualization 5 and Example 5 of Code 1, Ref. [42]).

In order to further test the performance of DeepTrack, we benchmark it against a widely known comparison of particle-tracking methods based on the results of an open competition [17]. We generate the image data with the open bioimage informatics platform Icy [37] in the same way as in the competition [17]. We generate videos (250×250 pixels) representing fluorescent biological vesicles for three particle densities, low (25 particles), medium (125 particles), and high (250 particles), and four SNR levels (SNR = 1, 2, 4, and 7). Some examples of the resulting frames are shown in Figs. 5(a)–5(c). In order to compare the localization accuracy of DeepTrack to that of the other methods, we calculate the root-mean-square error (RMSE) for matching points in each frame as in Ref. [17] (a sketch of the matching procedure is given below): a predicted particle position and a ground-truth position are considered to match if their distance is less than 3 pixels, while a larger distance is considered a false negative detection. A prediction that does not match any ground-truth position is considered a false positive detection and is eliminated in the same manner as it would be if particle trajectories were calculated (the incorrect detections of DeepTrack are less than 1% in all cases with SNR ≥ 4). We train the neural network with simulated images of multiple particles similar to the particles to be tracked (a first-order Bessel function with positive intensity) with a low background level and SNR; the ground-truth values given to the neural network in this case are the coordinates of the most central particle in the image. Training with about 1.3 million images allows us to outperform all the other methods for all given particle densities and SNRs, as shown in Figs. 5(d)–5(f) (see also Visualization 6, Visualization 7, and Visualization 8 and Example 6 of Code 1, Ref. [42]), where the performance of DeepTrack is represented by the orange symbols and those of the other methods by the gray symbols.
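
The matching criterion can be sketched as follows: predictions and ground truths are paired greedily by distance, pairs farther apart than 3 pixels are rejected, and only matched pairs enter the RMSE (the greedy pairing is an assumption here; Ref. [17] specifies the exact matching procedure used in the competition):

import numpy as np

def rmse_with_matching(predicted, truth, tol=3.0):
    predicted = [tuple(p) for p in predicted]
    truth = [tuple(t) for t in truth]
    pairs = sorted(
        ((np.hypot(p[0] - t[0], p[1] - t[1]), p, t) for p in predicted for t in truth),
        key=lambda pair: pair[0],
    )
    matched, used_p, used_t = [], set(), set()
    for d, p, t in pairs:                          # greedy nearest-first matching
        if d <= tol and p not in used_p and t not in used_t:
            matched.append(d)
            used_p.add(p)
            used_t.add(t)
    false_neg = len(truth) - len(matched)          # unmatched ground truths
    false_pos = len(predicted) - len(matched)      # unmatched predictions
    rmse = np.sqrt(np.mean(np.square(matched))) if matched else np.nan
    return rmse, false_pos, false_neg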

Fig. 5. Comparison of DeepTrack to other particle-tracking methods. The comparison is made on localization accuracy for image data generated by the open bioimage informatics platform Icy [37]. The images represent fluorescent biological vesicles at three particle densities (low, medium, high) and four SNR levels (1, 2, 4, and 7). Examples of tracked frames are shown for (a) low density and SNR = 2, (b) medium density and SNR = 4, and (c) high density and SNR = 7. (d)–(f) DeepTrack (orange lines) outperforms the other methods (gray lines, see details in Ref. [17]) in all cases (see also Visualization 6, Visualization 7, Visualization 8 and Example 6 of Code 1, Ref. [42]).

3. DISCUSSION

We have introduced a data-driven neural-network approach that enhances digital video microscopy beyond the state of the art of standard algorithmic approaches to particle tracking. We provide a free Python software package called DeepTrack, which can be readily personalized and optimized for the needs of specific users and applications. We have shown that DeepTrack can be trained for a variety of experimentally relevant images and outperforms traditional algorithms, especially when image conditions become non-ideal because of low or non-uniform illumination. DeepTrack therefore improves particle tracking and makes it possible to track particles in videos acquired in conditions where alternative methods fail. To facilitate the adoption of this approach, we provide example codes for the training and testing of DeepTrack for each of the cases we discuss. The neural networks trained using DeepTrack can be saved and used also on other computing platforms (e.g., MatLab and LabVIEW). Even though DeepTrack is already sufficiently computationally efficient to be trained on a standard laptop computer in a few hours, its speed can be significantly enhanced by taking advantage of parallel and GPU computing, for which neural networks are particularly suited [38,39], especially since the underlying neural-network engine provided by Keras and TensorFlow is already GPU compatible [29,30]. Furthermore, this approach can potentially be implemented in the future in hardware [40] or using alternative neural-network computing paradigms such as reservoir computing [41].
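
For example, a trained Keras network can be saved to a single portable file and reloaded later or on another machine; a minimal sketch (the model here is an untrained stand-in and the file name is illustrative):

import tensorflow as tf
from tensorflow.keras import layers

model = tf.keras.Sequential([tf.keras.Input(shape=(51, 51, 1)),
                             layers.Flatten(),
                             layers.Dense(3)])          # stand-in for a trained network
model.save('deeptrack_trained.h5')                      # architecture + weights in one file
restored = tf.keras.models.load_model('deeptrack_trained.h5')   # reload elsewhere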

Funding

H2020 European Research Council (ERC) Starting Grant ComplexSwimmers (677511).

Acknowledgment

We thank Falko Schmidt for help with the figures, Giorgio Volpe for critical reading of the manuscript, and Santosh Pandit and Erçağ Pinçe for help and guidance with the bacterial cultures.

See Supplement 1 for supporting content.

REFERENCES

1. P. H. Jones, O. M. Maragò, and G. Volpe, Optical Tweezers: Principles and Applications (Cambridge University Press, 2015).

2. K. C. Neuman and A. Nagy, “Single-molecule force spectroscopy: optical tweezers, magnetic tweezers and atomic force microscopy,” Nat. Methods 5, 491–505 (2008).

3. T. A. Waigh, “Advances in the microrheology of complex fluids,” Rep. Prog. Phys. 79, 074601 (2016).

4. B. Li, D. Zhou, and Y. Han, “Assembly and phase transitions of colloidal crystals,” Nat. Rev. Mater. 1, 15011 (2016).

5. S. K. Sahoo, R. Misra, and S. Parveen, “Nanoparticles: a boon to drug delivery, therapeutics, diagnostics and imaging,” in Nanomedicine Cancer (Pan Stanford, 2017), pp. 73–124.

6. S. Inoué and K. Spring, Video Microscopy (Springer, 1997).

7. J. C. Crocker and D. G. Grier, “Methods of digital video microscopy for colloidal studies,” J. Colloid Interface Sci. 179, 298–310 (1996).

8. I. F. Sbalzarini and P. Koumoutsakos, “Feature point tracking and trajectory analysis for video imaging in cell biology,” J. Struct. Biol. 151, 182–195 (2005).

9. S. S. Rogers, T. A. Waigh, X. Zhao, and J. R. Lu, “Precise particle tracking against a complicated background: polynomial fitting with Gaussian weight,” Phys. Biol. 4, 220–227 (2007).

10. S. B. Andersson, “Localization of a fluorescent source without numerical fitting,” Opt. Express 16, 18714–18724 (2008).

11. S. Manley, J. M. Gillette, and J. Lippincott-Schwartz, “Single-particle tracking photoactivated localization microscopy for mapping single-molecule dynamics,” in Methods in Enzymology (Elsevier, 2010), Vol. 475, pp. 109–120.

12. R. Parthasarathy, “Rapid, accurate particle tracking by calculation of radial symmetry centers,” Nat. Methods 9, 724–726 (2012).

13. R. E. Thompson, D. R. Larson, and W. W. Webb, “Precise nanometer localization analysis for individual fluorescent probes,” Biophys. J. 82, 2775–2783 (2002).

14. M. K. Cheezum, W. F. Walker, and W. H. Guilford, “Quantitative comparison of algorithms for tracking single fluorescent particles,” Biophys. J. 81, 2378–2388 (2001).

15. R. J. Ober, S. Ram, and E. S. Ward, “Localization accuracy in single-molecule microscopy,” Biophys. J. 86, 1185–1200 (2004).

16. A. V. Abraham, S. Ram, J. Chao, E. S. Ward, and R. J. Ober, “Quantitative study of single molecule location estimation techniques,” Opt. Express 17, 23352–23373 (2009).

17. N. Chenouard, I. Smal, F. De Chaumont, M. Maška, I. F. Sbalzarini, Y. Gong, J. Cardinale, C. Carthel, S. Coraluppi, M. Winter, A. R. Cohen, W. J. Godinez, K. Rohr, Y. Kalaidzidis, L. Liang, J. Duncan, H. Shen, Y. Xu, K. E. G. Magnusson, J. Jaldén, H. M. Blau, P. Paul-Gilloteaux, P. Roudot, C. Kervrann, F. Waharte, J.-Y. Tinevez, S. L. Shorte, J. Willemse, K. Celler, G. P. van Wezel, H.-W. Dan, Y.-S. Tsai, C. Ortiz de Solórzano, J.-C. Olivo-Marin, and E. Meijering, “Objective comparison of particle tracking methods,” Nat. Methods 11, 281–289 (2014).

18. Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” Nature 521, 436–444 (2015).

19. O. M. Parkhi, A. Vedaldi, and A. Zisserman, “Deep face recognition,” in Proceedings of the British Machine Vision Conference (BMVC) (BMVA Press, 2015), Vol. 1, pp. 41.1–41.12.

20. G. Litjens, T. Kooi, B. E. Bejnordi, A. A. A. Setio, F. Ciompi, M. Ghafoorian, J. A. Van Der Laak, B. Van Ginneken, and C. I. Sánchez, “A survey on deep learning in medical image analysis,” Med. Image Anal. 42, 60–88 (2017).

21. J. M. Newby, A. M. Schaefer, P. T. Lee, M. G. Forest, and S. K. Lai, “Convolutional neural networks automate detection for tracking of submicron-scale particles in 2D and 3D,” Proc. Natl. Acad. Sci. USA 115, 9026–9031 (2018).

22. M. D. Hannel, A. Abdulali, M. O’Brien, and D. G. Grier, “Machine-learning techniques for fast and accurate feature localization in holograms of colloidal particles,” Opt. Express 26, 15221–15231 (2018).

23. F. Chollet, Deep Learning with Python (Manning, 2017).

24. A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet classification with deep convolutional neural networks,” Adv. Neural Inf. Process. Syst. 25, 1097–1105 (2012).

25. A. Karpathy, G. Toderici, S. Shetty, T. Leung, R. Sukthankar, and L. Fei-Fei, “Large-scale video classification with convolutional neural networks,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2014), pp. 1725–1732.

26. K. He, X. Zhang, S. Ren, and J. Sun, “Delving deep into rectifiers: surpassing human-level performance on ImageNet classification,” in Proceedings of the IEEE International Conference on Computer Vision (2015), pp. 1026–1034.

27. D. Silver, A. Huang, C. J. Maddison, A. Guez, L. Sifre, G. Van Den Driessche, J. Schrittwieser, I. Antonoglou, V. Panneershelvam, M. Lanctot, S. Dieleman, D. Grewe, J. Nham, N. Kalchbrenner, I. Sutskever, T. Lillicrap, M. Leach, K. Kavukcuoglu, T. Graepel, and D. Hassabis, “Mastering the game of Go with deep neural networks and tree search,” Nature 529, 484–489 (2016).

28. D. E. Rumelhart and J. L. McClelland, Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Vol. 1: Foundations (MIT Press, 1986).

29. F. Chollet, “Keras,” 2015, https://keras.io.

30. M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean, M. Devin, S. Ghemawat, I. Goodfellow, A. Harp, G. Irving, M. Isard, Y. Jia, R. Jozefowicz, L. Kaiser, M. Kudlur, J. Levenberg, D. Mané, R. Monga, S. Moore, D. Murray, C. Olah, M. Schuster, J. Shlens, B. Steiner, I. Sutskever, K. Talwar, P. Tucker, V. Vanhoucke, V. Vasudevan, F. Viégas, O. Vinyals, P. Warden, M. Wattenberg, M. Wicke, Y. Yu, and X. Zheng, “TensorFlow: large-scale machine learning on heterogeneous systems,” 2015, https://www.tensorflow.org/.

31. C. F. Bohren and D. R. Huffman, Absorption and Scattering of Light by Small Particles (Wiley, 2008).

32. S. L. Smith, P. Kindermans, C. Ying, and Q. V. Le, “Don’t decay the learning rate, increase the batch size,” arXiv:1711.00489 (2017).

33. J. C. Crocker and B. D. Hoffman, “Multiple-particle tracking and two-point microrheology in cells,” Meth. Cell Biol. 83, 141–178 (2007).

34. A. Bérut, A. Petrosyan, and S. Ciliberto, “Energy flow between two hydrodynamically coupled particles kept at different effective temperatures,” Europhys. Lett. 107, 60004 (2014).

35. J. Baumgartl and C. Bechinger, “On the limits of digital video microscopy,” Europhys. Lett. 71, 487–493 (2005).

36. Y. Yifat, N. Sule, Y. Lin, and N. F. Scherer, “Analysis and correction of errors in nanoscale particle tracking using the single-pixel interior filling function (SPIFF) algorithm,” Sci. Rep. 7, 16553 (2017).

37. F. De Chaumont, S. Dallongeville, N. Chenouard, N. Hervé, S. Pop, T. Provoost, V. Meas-Yedid, P. Pankajakshan, T. Lecomte, Y. Le Montagner, T. Lagache, A. Dufour, and J.-C. Olivo-Marin, “Icy: an open bioimage informatics platform for extended reproducible research,” Nat. Methods 9, 690–696 (2012).

38. J. Schmidhuber, “Deep learning in neural networks: an overview,” Neural Netw. 61, 85–117 (2015).

39. S. Chetlur, C. Woolley, P. Vandermersch, J. Cohen, J. Tran, B. Catanzaro, and E. Shelhamer, “cuDNN: efficient primitives for deep learning,” arXiv:1410.0759 (2014).

40. C. Farabet, Y. LeCun, K. Kavukcuoglu, E. Culurciello, B. Martini, P. Akselrod, and S. Talay, “Large-scale FPGA-based convolutional networks,” in Scaling up Machine Learning: Parallel and Distributed Approaches (Cambridge University Press, 2011), pp. 399–419.

41. M. Abadi, P. Barham, J. Chen, Z. Chen, A. Davis, J. Dean, M. Devin, S. Ghemawat, G. Irving, M. Isard, M. Kudlur, J. Levenberg, R. Monga, S. Moore, D. G. Murray, B. Steiner, P. Tucker, V. Vasudevan, P. Warden, M. Wicke, Y. Yu, and X. Zheng, “TensorFlow: a system for large-scale machine learning,” in Proceedings of the 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI ’16) (USENIX, 2016), Vol. 16, pp. 265–283.

42. DeepTrack 1.0, https://doi.org/10.6084/m9.figshare.7828208.

[Crossref]

Chao, J.

Cheezum, M. K.

M. K. Cheezum, W. F. Walker, and W. H. Guilford, “Quantitative comparison of algorithms for tracking single fluorescent particles,” Biophys. J. 81, 2378–2388 (2001).
[Crossref]

Chen, J.

M. Abadi, P. Barham, J. Chen, Z. Chen, A. Davis, J. Dean, M. Devin, S. Ghemawat, G. Irving, M. Isard, M. Kudlur, J. Levenberg, R. Monga, S. Moore, D. G. Murray, B. Steiner, P. Tucker, V. Vasudevan, P. Warden, M. Wicke, Y. Yu, and X. Zheng, “Tensorflow: a system for large-scale machine learning,” in Proceedings of the 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI ‘16) (USENIX, 2016), vol. 16, pp. 265–283.

Chen, Z.

M. Abadi, P. Barham, J. Chen, Z. Chen, A. Davis, J. Dean, M. Devin, S. Ghemawat, G. Irving, M. Isard, M. Kudlur, J. Levenberg, R. Monga, S. Moore, D. G. Murray, B. Steiner, P. Tucker, V. Vasudevan, P. Warden, M. Wicke, Y. Yu, and X. Zheng, “Tensorflow: a system for large-scale machine learning,” in Proceedings of the 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI ‘16) (USENIX, 2016), vol. 16, pp. 265–283.

Chenouard, N.

N. Chenouard, I. Smal, F. De Chaumont, M. Maška, I. F. Sbalzarini, Y. Gong, J. Cardinale, C. Carthel, S. Coraluppi, M. Winter, A. R. Cohen, W. J. Godinez, K. Rohr, Y. Kalaidzidis, L. Liang, J. Duncan, H. Shen, Y. Xu, K. E. G. Magnusson, J. Jaldén, H. M. Blau, P. Paul-Gilloteaux, P. Roudot, C. Kervrann, F. Waharte, J.-Y. Tinevez, S. L. Shorte, J. Willemse, K. Celler, G. P. van Wezel, H.-W. Dan, Y.-S. Tsai, C. Ortiz de Solórzano, J.-C. Olivo-Marin, and E. Meijering, “Objective comparison of particle tracking methods,” Nat. Methods 11, 281–289 (2014).
[Crossref]

F. De Chaumont, S. Dallongeville, N. Chenouard, N. Hervé, S. Pop, T. Provoost, V. Meas-Yedid, P. Pankajakshan, T. Lecomte, Y. Le Montagner, T. Lagache, A. Dufour, and J.-C. Olivo-Marin, “Icy: an open bioimage informatics platform for extended reproducible research,” Nat. Methods 9, 690–696 (2012).
[Crossref]

Chetlur, S.

S. Chetlur, C. Woolley, P. Vandermersch, J. Cohen, J. Tran, B. Catanzaro, and E. Shelhamer, “cudnn: Efficient primitives for deep learning,” arXiv:1410.0759 (2014).

Chollet, F.

F. Chollet, Deep Learning with Python (Manning, 2017).

Ciliberto, S.

A. Bérut, A. Petrosyan, and S. Ciliberto, “Energy flow between two hydrodynamically coupled particles kept at different effective temperatures,” Europhys. Lett. 107, 60004 (2014).
[Crossref]

Ciompi, F.

G. Litjens, T. Kooi, B. E. Bejnordi, A. A. A. Setio, F. Ciompi, M. Ghafoorian, J. A. Van Der Laak, B. Van Ginneken, and C. I. Sánchez, “A survey on deep learning in medical image analysis,” Med. Image Anal. 42, 60–88 (2017).
[Crossref]

Cohen, A. R.

N. Chenouard, I. Smal, F. De Chaumont, M. Maška, I. F. Sbalzarini, Y. Gong, J. Cardinale, C. Carthel, S. Coraluppi, M. Winter, A. R. Cohen, W. J. Godinez, K. Rohr, Y. Kalaidzidis, L. Liang, J. Duncan, H. Shen, Y. Xu, K. E. G. Magnusson, J. Jaldén, H. M. Blau, P. Paul-Gilloteaux, P. Roudot, C. Kervrann, F. Waharte, J.-Y. Tinevez, S. L. Shorte, J. Willemse, K. Celler, G. P. van Wezel, H.-W. Dan, Y.-S. Tsai, C. Ortiz de Solórzano, J.-C. Olivo-Marin, and E. Meijering, “Objective comparison of particle tracking methods,” Nat. Methods 11, 281–289 (2014).
[Crossref]

Cohen, J.

S. Chetlur, C. Woolley, P. Vandermersch, J. Cohen, J. Tran, B. Catanzaro, and E. Shelhamer, “cudnn: Efficient primitives for deep learning,” arXiv:1410.0759 (2014).

Coraluppi, S.

N. Chenouard, I. Smal, F. De Chaumont, M. Maška, I. F. Sbalzarini, Y. Gong, J. Cardinale, C. Carthel, S. Coraluppi, M. Winter, A. R. Cohen, W. J. Godinez, K. Rohr, Y. Kalaidzidis, L. Liang, J. Duncan, H. Shen, Y. Xu, K. E. G. Magnusson, J. Jaldén, H. M. Blau, P. Paul-Gilloteaux, P. Roudot, C. Kervrann, F. Waharte, J.-Y. Tinevez, S. L. Shorte, J. Willemse, K. Celler, G. P. van Wezel, H.-W. Dan, Y.-S. Tsai, C. Ortiz de Solórzano, J.-C. Olivo-Marin, and E. Meijering, “Objective comparison of particle tracking methods,” Nat. Methods 11, 281–289 (2014).
[Crossref]

Crocker, J. C.

J. C. Crocker and B. D. Hoffman, “Multiple-particle tracking and two-point microrheology in cells,” Meth. Cell Biol. 83, 141–178 (2007).
[Crossref]

J. C. Crocker and D. G. Grier, “Methods of digital video microscopy for colloidal studies,” J. Colloid Interface Sci. 179, 298–310 (1996).
[Crossref]

Culurciello, E.

C. Farabet, Y. LeCun, K. Kavukcuoglu, E. Culurciello, B. Martini, P. Akselrod, and S. Talay, “Large-scale FPGA-based convolutional networks,” in Scaling up Machine Learning: Parallel and Distributed Approaches (Cambridge Univ. Press, 2011), pp. 399–419.

Dallongeville, S.

F. De Chaumont, S. Dallongeville, N. Chenouard, N. Hervé, S. Pop, T. Provoost, V. Meas-Yedid, P. Pankajakshan, T. Lecomte, Y. Le Montagner, T. Lagache, A. Dufour, and J.-C. Olivo-Marin, “Icy: an open bioimage informatics platform for extended reproducible research,” Nat. Methods 9, 690–696 (2012).
[Crossref]

Dan, H.-W.

N. Chenouard, I. Smal, F. De Chaumont, M. Maška, I. F. Sbalzarini, Y. Gong, J. Cardinale, C. Carthel, S. Coraluppi, M. Winter, A. R. Cohen, W. J. Godinez, K. Rohr, Y. Kalaidzidis, L. Liang, J. Duncan, H. Shen, Y. Xu, K. E. G. Magnusson, J. Jaldén, H. M. Blau, P. Paul-Gilloteaux, P. Roudot, C. Kervrann, F. Waharte, J.-Y. Tinevez, S. L. Shorte, J. Willemse, K. Celler, G. P. van Wezel, H.-W. Dan, Y.-S. Tsai, C. Ortiz de Solórzano, J.-C. Olivo-Marin, and E. Meijering, “Objective comparison of particle tracking methods,” Nat. Methods 11, 281–289 (2014).
[Crossref]

Davis, A.

M. Abadi, P. Barham, J. Chen, Z. Chen, A. Davis, J. Dean, M. Devin, S. Ghemawat, G. Irving, M. Isard, M. Kudlur, J. Levenberg, R. Monga, S. Moore, D. G. Murray, B. Steiner, P. Tucker, V. Vasudevan, P. Warden, M. Wicke, Y. Yu, and X. Zheng, “Tensorflow: a system for large-scale machine learning,” in Proceedings of the 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI ‘16) (USENIX, 2016), vol. 16, pp. 265–283.

De Chaumont, F.

N. Chenouard, I. Smal, F. De Chaumont, M. Maška, I. F. Sbalzarini, Y. Gong, J. Cardinale, C. Carthel, S. Coraluppi, M. Winter, A. R. Cohen, W. J. Godinez, K. Rohr, Y. Kalaidzidis, L. Liang, J. Duncan, H. Shen, Y. Xu, K. E. G. Magnusson, J. Jaldén, H. M. Blau, P. Paul-Gilloteaux, P. Roudot, C. Kervrann, F. Waharte, J.-Y. Tinevez, S. L. Shorte, J. Willemse, K. Celler, G. P. van Wezel, H.-W. Dan, Y.-S. Tsai, C. Ortiz de Solórzano, J.-C. Olivo-Marin, and E. Meijering, “Objective comparison of particle tracking methods,” Nat. Methods 11, 281–289 (2014).
[Crossref]

F. De Chaumont, S. Dallongeville, N. Chenouard, N. Hervé, S. Pop, T. Provoost, V. Meas-Yedid, P. Pankajakshan, T. Lecomte, Y. Le Montagner, T. Lagache, A. Dufour, and J.-C. Olivo-Marin, “Icy: an open bioimage informatics platform for extended reproducible research,” Nat. Methods 9, 690–696 (2012).
[Crossref]

Dean, J.

M. Abadi, P. Barham, J. Chen, Z. Chen, A. Davis, J. Dean, M. Devin, S. Ghemawat, G. Irving, M. Isard, M. Kudlur, J. Levenberg, R. Monga, S. Moore, D. G. Murray, B. Steiner, P. Tucker, V. Vasudevan, P. Warden, M. Wicke, Y. Yu, and X. Zheng, “Tensorflow: a system for large-scale machine learning,” in Proceedings of the 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI ‘16) (USENIX, 2016), vol. 16, pp. 265–283.

Devin, M.

M. Abadi, P. Barham, J. Chen, Z. Chen, A. Davis, J. Dean, M. Devin, S. Ghemawat, G. Irving, M. Isard, M. Kudlur, J. Levenberg, R. Monga, S. Moore, D. G. Murray, B. Steiner, P. Tucker, V. Vasudevan, P. Warden, M. Wicke, Y. Yu, and X. Zheng, “Tensorflow: a system for large-scale machine learning,” in Proceedings of the 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI ‘16) (USENIX, 2016), vol. 16, pp. 265–283.

Dieleman, S.

D. Silver, A. Huang, C. J. Maddison, A. Guez, L. Sifre, G. Van Den Driessche, J. Schrittwieser, I. Antonoglou, V. Panneershelvam, M. Lanctot, S. Dieleman, D. Grewe, J. Nham, N. Kalchbrenner, I. Sutskever, T. Lillicrap, M. Leach, K. Kavukcuoglu, T. Graepel, and D. Hassabis, “Mastering the game of go with deep neural networks and tree search,” Nature 529, 484–489 (2016).
[Crossref]

Dufour, A.

F. De Chaumont, S. Dallongeville, N. Chenouard, N. Hervé, S. Pop, T. Provoost, V. Meas-Yedid, P. Pankajakshan, T. Lecomte, Y. Le Montagner, T. Lagache, A. Dufour, and J.-C. Olivo-Marin, “Icy: an open bioimage informatics platform for extended reproducible research,” Nat. Methods 9, 690–696 (2012).
[Crossref]

Duncan, J.

N. Chenouard, I. Smal, F. De Chaumont, M. Maška, I. F. Sbalzarini, Y. Gong, J. Cardinale, C. Carthel, S. Coraluppi, M. Winter, A. R. Cohen, W. J. Godinez, K. Rohr, Y. Kalaidzidis, L. Liang, J. Duncan, H. Shen, Y. Xu, K. E. G. Magnusson, J. Jaldén, H. M. Blau, P. Paul-Gilloteaux, P. Roudot, C. Kervrann, F. Waharte, J.-Y. Tinevez, S. L. Shorte, J. Willemse, K. Celler, G. P. van Wezel, H.-W. Dan, Y.-S. Tsai, C. Ortiz de Solórzano, J.-C. Olivo-Marin, and E. Meijering, “Objective comparison of particle tracking methods,” Nat. Methods 11, 281–289 (2014).
[Crossref]

Farabet, C.

C. Farabet, Y. LeCun, K. Kavukcuoglu, E. Culurciello, B. Martini, P. Akselrod, and S. Talay, “Large-scale FPGA-based convolutional networks,” in Scaling up Machine Learning: Parallel and Distributed Approaches (Cambridge Univ. Press, 2011), pp. 399–419.

Fei-Fei, L.

A. Karpathy, G. Toderici, S. Shetty, T. Leung, R. Sukthankar, and L. Fei-Fei, “Large-scale video classification with convolutional neural networks,” in Proceedings of the IEEE conference on Computer Vision and Pattern Recognition (2014), pp. 1725–1732.

Forest, M. G.

J. M. Newby, A. M. Schaefer, P. T. Lee, M. G. Forest, and S. K. Lai, “Convolutional neural networks automate detection for tracking of submicron-scale particles in 2D and 3D,” Proc. Natl. Acad. Sci. USA 115, 9026–9031 (2018).
[Crossref]

Ghafoorian, M.

G. Litjens, T. Kooi, B. E. Bejnordi, A. A. A. Setio, F. Ciompi, M. Ghafoorian, J. A. Van Der Laak, B. Van Ginneken, and C. I. Sánchez, “A survey on deep learning in medical image analysis,” Med. Image Anal. 42, 60–88 (2017).
[Crossref]

Ghemawat, S.

M. Abadi, P. Barham, J. Chen, Z. Chen, A. Davis, J. Dean, M. Devin, S. Ghemawat, G. Irving, M. Isard, M. Kudlur, J. Levenberg, R. Monga, S. Moore, D. G. Murray, B. Steiner, P. Tucker, V. Vasudevan, P. Warden, M. Wicke, Y. Yu, and X. Zheng, “Tensorflow: a system for large-scale machine learning,” in Proceedings of the 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI ‘16) (USENIX, 2016), vol. 16, pp. 265–283.

Gillette, J. M.

S. Manley, J. M. Gillette, and J. Lippincott-Schwartz, “Single-particle tracking photoactivated localization microscopy for mapping single-molecule dynamics,” in Methods in Enzymology (Elsevier, 2010), vol. 475, pp. 109–120.

Godinez, W. J.

N. Chenouard, I. Smal, F. De Chaumont, M. Maška, I. F. Sbalzarini, Y. Gong, J. Cardinale, C. Carthel, S. Coraluppi, M. Winter, A. R. Cohen, W. J. Godinez, K. Rohr, Y. Kalaidzidis, L. Liang, J. Duncan, H. Shen, Y. Xu, K. E. G. Magnusson, J. Jaldén, H. M. Blau, P. Paul-Gilloteaux, P. Roudot, C. Kervrann, F. Waharte, J.-Y. Tinevez, S. L. Shorte, J. Willemse, K. Celler, G. P. van Wezel, H.-W. Dan, Y.-S. Tsai, C. Ortiz de Solórzano, J.-C. Olivo-Marin, and E. Meijering, “Objective comparison of particle tracking methods,” Nat. Methods 11, 281–289 (2014).
[Crossref]

Gong, Y.

N. Chenouard, I. Smal, F. De Chaumont, M. Maška, I. F. Sbalzarini, Y. Gong, J. Cardinale, C. Carthel, S. Coraluppi, M. Winter, A. R. Cohen, W. J. Godinez, K. Rohr, Y. Kalaidzidis, L. Liang, J. Duncan, H. Shen, Y. Xu, K. E. G. Magnusson, J. Jaldén, H. M. Blau, P. Paul-Gilloteaux, P. Roudot, C. Kervrann, F. Waharte, J.-Y. Tinevez, S. L. Shorte, J. Willemse, K. Celler, G. P. van Wezel, H.-W. Dan, Y.-S. Tsai, C. Ortiz de Solórzano, J.-C. Olivo-Marin, and E. Meijering, “Objective comparison of particle tracking methods,” Nat. Methods 11, 281–289 (2014).
[Crossref]

Graepel, T.

D. Silver, A. Huang, C. J. Maddison, A. Guez, L. Sifre, G. Van Den Driessche, J. Schrittwieser, I. Antonoglou, V. Panneershelvam, M. Lanctot, S. Dieleman, D. Grewe, J. Nham, N. Kalchbrenner, I. Sutskever, T. Lillicrap, M. Leach, K. Kavukcuoglu, T. Graepel, and D. Hassabis, “Mastering the game of go with deep neural networks and tree search,” Nature 529, 484–489 (2016).
[Crossref]

Grewe, D.

D. Silver, A. Huang, C. J. Maddison, A. Guez, L. Sifre, G. Van Den Driessche, J. Schrittwieser, I. Antonoglou, V. Panneershelvam, M. Lanctot, S. Dieleman, D. Grewe, J. Nham, N. Kalchbrenner, I. Sutskever, T. Lillicrap, M. Leach, K. Kavukcuoglu, T. Graepel, and D. Hassabis, “Mastering the game of go with deep neural networks and tree search,” Nature 529, 484–489 (2016).
[Crossref]

Grier, D. G.

Guez, A.

D. Silver, A. Huang, C. J. Maddison, A. Guez, L. Sifre, G. Van Den Driessche, J. Schrittwieser, I. Antonoglou, V. Panneershelvam, M. Lanctot, S. Dieleman, D. Grewe, J. Nham, N. Kalchbrenner, I. Sutskever, T. Lillicrap, M. Leach, K. Kavukcuoglu, T. Graepel, and D. Hassabis, “Mastering the game of go with deep neural networks and tree search,” Nature 529, 484–489 (2016).
[Crossref]

Guilford, W. H.

M. K. Cheezum, W. F. Walker, and W. H. Guilford, “Quantitative comparison of algorithms for tracking single fluorescent particles,” Biophys. J. 81, 2378–2388 (2001).
[Crossref]

Han, Y.

B. Li, D. Zhou, and Y. Han, “Assembly and phase transitions of colloidal crystals,” Nat. Rev. Mater. 1, 15011 (2016).
[Crossref]

Hannel, M. D.

Hassabis, D.

D. Silver, A. Huang, C. J. Maddison, A. Guez, L. Sifre, G. Van Den Driessche, J. Schrittwieser, I. Antonoglou, V. Panneershelvam, M. Lanctot, S. Dieleman, D. Grewe, J. Nham, N. Kalchbrenner, I. Sutskever, T. Lillicrap, M. Leach, K. Kavukcuoglu, T. Graepel, and D. Hassabis, “Mastering the game of go with deep neural networks and tree search,” Nature 529, 484–489 (2016).
[Crossref]

He, K.

K. He, X. Zhang, S. Ren, and J. Sun, “Delving deep into rectifiers: surpassing human-level performance on imagenet classification,” in Proceedings of the IEEE International Conference on Computer Vision (2015), pp. 1026–1034.

Hervé, N.

F. De Chaumont, S. Dallongeville, N. Chenouard, N. Hervé, S. Pop, T. Provoost, V. Meas-Yedid, P. Pankajakshan, T. Lecomte, Y. Le Montagner, T. Lagache, A. Dufour, and J.-C. Olivo-Marin, “Icy: an open bioimage informatics platform for extended reproducible research,” Nat. Methods 9, 690–696 (2012).
[Crossref]

Hinton, G.

Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” Nature 521, 436–444 (2015).
[Crossref]

Hinton, G. E.

A. Krizhevsky, I. Sutskever, and G. E. Hinton, “Imagenet classification with deep convolutional neural networks,” Adv. Neural Inf. Process. Syst. 25, 1097–1105 (2012).
[Crossref]

Hoffman, B. D.

J. C. Crocker and B. D. Hoffman, “Multiple-particle tracking and two-point microrheology in cells,” Meth. Cell Biol. 83, 141–178 (2007).
[Crossref]

Huang, A.

D. Silver, A. Huang, C. J. Maddison, A. Guez, L. Sifre, G. Van Den Driessche, J. Schrittwieser, I. Antonoglou, V. Panneershelvam, M. Lanctot, S. Dieleman, D. Grewe, J. Nham, N. Kalchbrenner, I. Sutskever, T. Lillicrap, M. Leach, K. Kavukcuoglu, T. Graepel, and D. Hassabis, “Mastering the game of go with deep neural networks and tree search,” Nature 529, 484–489 (2016).
[Crossref]

Huffman, D. R.

C. F. Bohren and D. R. Huffman, Absorption and Scattering of Light by Small Particles (Wiley, 2008).

Inoué, S.

S. Inoué and K. Spring, Video Microscopy (Springer, 1997).

Irving, G.

M. Abadi, P. Barham, J. Chen, Z. Chen, A. Davis, J. Dean, M. Devin, S. Ghemawat, G. Irving, M. Isard, M. Kudlur, J. Levenberg, R. Monga, S. Moore, D. G. Murray, B. Steiner, P. Tucker, V. Vasudevan, P. Warden, M. Wicke, Y. Yu, and X. Zheng, “Tensorflow: a system for large-scale machine learning,” in Proceedings of the 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI ‘16) (USENIX, 2016), vol. 16, pp. 265–283.

Isard, M.

M. Abadi, P. Barham, J. Chen, Z. Chen, A. Davis, J. Dean, M. Devin, S. Ghemawat, G. Irving, M. Isard, M. Kudlur, J. Levenberg, R. Monga, S. Moore, D. G. Murray, B. Steiner, P. Tucker, V. Vasudevan, P. Warden, M. Wicke, Y. Yu, and X. Zheng, “Tensorflow: a system for large-scale machine learning,” in Proceedings of the 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI ‘16) (USENIX, 2016), vol. 16, pp. 265–283.

Jaldén, J.

N. Chenouard, I. Smal, F. De Chaumont, M. Maška, I. F. Sbalzarini, Y. Gong, J. Cardinale, C. Carthel, S. Coraluppi, M. Winter, A. R. Cohen, W. J. Godinez, K. Rohr, Y. Kalaidzidis, L. Liang, J. Duncan, H. Shen, Y. Xu, K. E. G. Magnusson, J. Jaldén, H. M. Blau, P. Paul-Gilloteaux, P. Roudot, C. Kervrann, F. Waharte, J.-Y. Tinevez, S. L. Shorte, J. Willemse, K. Celler, G. P. van Wezel, H.-W. Dan, Y.-S. Tsai, C. Ortiz de Solórzano, J.-C. Olivo-Marin, and E. Meijering, “Objective comparison of particle tracking methods,” Nat. Methods 11, 281–289 (2014).
[Crossref]

Jones, P. H.

P. H. Jones, O. M. Maragò, and G. Volpe, Optical Tweezers: Principles and Applications (Cambridge University, 2015).

Kalaidzidis, Y.

N. Chenouard, I. Smal, F. De Chaumont, M. Maška, I. F. Sbalzarini, Y. Gong, J. Cardinale, C. Carthel, S. Coraluppi, M. Winter, A. R. Cohen, W. J. Godinez, K. Rohr, Y. Kalaidzidis, L. Liang, J. Duncan, H. Shen, Y. Xu, K. E. G. Magnusson, J. Jaldén, H. M. Blau, P. Paul-Gilloteaux, P. Roudot, C. Kervrann, F. Waharte, J.-Y. Tinevez, S. L. Shorte, J. Willemse, K. Celler, G. P. van Wezel, H.-W. Dan, Y.-S. Tsai, C. Ortiz de Solórzano, J.-C. Olivo-Marin, and E. Meijering, “Objective comparison of particle tracking methods,” Nat. Methods 11, 281–289 (2014).
[Crossref]

Kalchbrenner, N.

D. Silver, A. Huang, C. J. Maddison, A. Guez, L. Sifre, G. Van Den Driessche, J. Schrittwieser, I. Antonoglou, V. Panneershelvam, M. Lanctot, S. Dieleman, D. Grewe, J. Nham, N. Kalchbrenner, I. Sutskever, T. Lillicrap, M. Leach, K. Kavukcuoglu, T. Graepel, and D. Hassabis, “Mastering the game of go with deep neural networks and tree search,” Nature 529, 484–489 (2016).
[Crossref]

Karpathy, A.

A. Karpathy, G. Toderici, S. Shetty, T. Leung, R. Sukthankar, and L. Fei-Fei, “Large-scale video classification with convolutional neural networks,” in Proceedings of the IEEE conference on Computer Vision and Pattern Recognition (2014), pp. 1725–1732.

Kavukcuoglu, K.

D. Silver, A. Huang, C. J. Maddison, A. Guez, L. Sifre, G. Van Den Driessche, J. Schrittwieser, I. Antonoglou, V. Panneershelvam, M. Lanctot, S. Dieleman, D. Grewe, J. Nham, N. Kalchbrenner, I. Sutskever, T. Lillicrap, M. Leach, K. Kavukcuoglu, T. Graepel, and D. Hassabis, “Mastering the game of go with deep neural networks and tree search,” Nature 529, 484–489 (2016).
[Crossref]

C. Farabet, Y. LeCun, K. Kavukcuoglu, E. Culurciello, B. Martini, P. Akselrod, and S. Talay, “Large-scale FPGA-based convolutional networks,” in Scaling up Machine Learning: Parallel and Distributed Approaches (Cambridge Univ. Press, 2011), pp. 399–419.

Kervrann, C.

N. Chenouard, I. Smal, F. De Chaumont, M. Maška, I. F. Sbalzarini, Y. Gong, J. Cardinale, C. Carthel, S. Coraluppi, M. Winter, A. R. Cohen, W. J. Godinez, K. Rohr, Y. Kalaidzidis, L. Liang, J. Duncan, H. Shen, Y. Xu, K. E. G. Magnusson, J. Jaldén, H. M. Blau, P. Paul-Gilloteaux, P. Roudot, C. Kervrann, F. Waharte, J.-Y. Tinevez, S. L. Shorte, J. Willemse, K. Celler, G. P. van Wezel, H.-W. Dan, Y.-S. Tsai, C. Ortiz de Solórzano, J.-C. Olivo-Marin, and E. Meijering, “Objective comparison of particle tracking methods,” Nat. Methods 11, 281–289 (2014).
[Crossref]

Kindermans, P.

S. L. Smith, P. Kindermans, C. Ying, and Q. V. Le, “Don’t decay the learning rate, increase the batch size,” arXiv:1711.00489 (2017).

Kooi, T.

G. Litjens, T. Kooi, B. E. Bejnordi, A. A. A. Setio, F. Ciompi, M. Ghafoorian, J. A. Van Der Laak, B. Van Ginneken, and C. I. Sánchez, “A survey on deep learning in medical image analysis,” Med. Image Anal. 42, 60–88 (2017).
[Crossref]

Koumoutsakos, P.

I. F. Sbalzarini and P. Koumoutsakos, “Feature point tracking and trajectory analysis for video imaging in cell biology,” J. Struct. Biol. 151, 182–195 (2005).
[Crossref]

Krizhevsky, A.

A. Krizhevsky, I. Sutskever, and G. E. Hinton, “Imagenet classification with deep convolutional neural networks,” Adv. Neural Inf. Process. Syst. 25, 1097–1105 (2012).
[Crossref]

Kudlur, M.

M. Abadi, P. Barham, J. Chen, Z. Chen, A. Davis, J. Dean, M. Devin, S. Ghemawat, G. Irving, M. Isard, M. Kudlur, J. Levenberg, R. Monga, S. Moore, D. G. Murray, B. Steiner, P. Tucker, V. Vasudevan, P. Warden, M. Wicke, Y. Yu, and X. Zheng, “Tensorflow: a system for large-scale machine learning,” in Proceedings of the 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI ‘16) (USENIX, 2016), vol. 16, pp. 265–283.

Lagache, T.

F. De Chaumont, S. Dallongeville, N. Chenouard, N. Hervé, S. Pop, T. Provoost, V. Meas-Yedid, P. Pankajakshan, T. Lecomte, Y. Le Montagner, T. Lagache, A. Dufour, and J.-C. Olivo-Marin, “Icy: an open bioimage informatics platform for extended reproducible research,” Nat. Methods 9, 690–696 (2012).
[Crossref]

Lai, S. K.

J. M. Newby, A. M. Schaefer, P. T. Lee, M. G. Forest, and S. K. Lai, “Convolutional neural networks automate detection for tracking of submicron-scale particles in 2D and 3D,” Proc. Natl. Acad. Sci. USA 115, 9026–9031 (2018).
[Crossref]

Lanctot, M.

D. Silver, A. Huang, C. J. Maddison, A. Guez, L. Sifre, G. Van Den Driessche, J. Schrittwieser, I. Antonoglou, V. Panneershelvam, M. Lanctot, S. Dieleman, D. Grewe, J. Nham, N. Kalchbrenner, I. Sutskever, T. Lillicrap, M. Leach, K. Kavukcuoglu, T. Graepel, and D. Hassabis, “Mastering the game of go with deep neural networks and tree search,” Nature 529, 484–489 (2016).
[Crossref]

Larson, D. R.

R. E. Thompson, D. R. Larson, and W. W. Webb, “Precise nanometer localization analysis for individual fluorescent probes,” Biophys. J. 82, 2775–2783 (2002).
[Crossref]

Le, Q. V.

S. L. Smith, P. Kindermans, C. Ying, and Q. V. Le, “Don’t decay the learning rate, increase the batch size,” arXiv:1711.00489 (2017).

Le Montagner, Y.

F. De Chaumont, S. Dallongeville, N. Chenouard, N. Hervé, S. Pop, T. Provoost, V. Meas-Yedid, P. Pankajakshan, T. Lecomte, Y. Le Montagner, T. Lagache, A. Dufour, and J.-C. Olivo-Marin, “Icy: an open bioimage informatics platform for extended reproducible research,” Nat. Methods 9, 690–696 (2012).
[Crossref]

Leach, M.

D. Silver, A. Huang, C. J. Maddison, A. Guez, L. Sifre, G. Van Den Driessche, J. Schrittwieser, I. Antonoglou, V. Panneershelvam, M. Lanctot, S. Dieleman, D. Grewe, J. Nham, N. Kalchbrenner, I. Sutskever, T. Lillicrap, M. Leach, K. Kavukcuoglu, T. Graepel, and D. Hassabis, “Mastering the game of go with deep neural networks and tree search,” Nature 529, 484–489 (2016).
[Crossref]

Lecomte, T.

F. De Chaumont, S. Dallongeville, N. Chenouard, N. Hervé, S. Pop, T. Provoost, V. Meas-Yedid, P. Pankajakshan, T. Lecomte, Y. Le Montagner, T. Lagache, A. Dufour, and J.-C. Olivo-Marin, “Icy: an open bioimage informatics platform for extended reproducible research,” Nat. Methods 9, 690–696 (2012).
[Crossref]

LeCun, Y.

Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” Nature 521, 436–444 (2015).
[Crossref]

C. Farabet, Y. LeCun, K. Kavukcuoglu, E. Culurciello, B. Martini, P. Akselrod, and S. Talay, “Large-scale FPGA-based convolutional networks,” in Scaling up Machine Learning: Parallel and Distributed Approaches (Cambridge Univ. Press, 2011), pp. 399–419.

Lee, P. T.

J. M. Newby, A. M. Schaefer, P. T. Lee, M. G. Forest, and S. K. Lai, “Convolutional neural networks automate detection for tracking of submicron-scale particles in 2D and 3D,” Proc. Natl. Acad. Sci. USA 115, 9026–9031 (2018).
[Crossref]

Leung, T.

A. Karpathy, G. Toderici, S. Shetty, T. Leung, R. Sukthankar, and L. Fei-Fei, “Large-scale video classification with convolutional neural networks,” in Proceedings of the IEEE conference on Computer Vision and Pattern Recognition (2014), pp. 1725–1732.

Levenberg, J.

M. Abadi, P. Barham, J. Chen, Z. Chen, A. Davis, J. Dean, M. Devin, S. Ghemawat, G. Irving, M. Isard, M. Kudlur, J. Levenberg, R. Monga, S. Moore, D. G. Murray, B. Steiner, P. Tucker, V. Vasudevan, P. Warden, M. Wicke, Y. Yu, and X. Zheng, “Tensorflow: a system for large-scale machine learning,” in Proceedings of the 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI ‘16) (USENIX, 2016), vol. 16, pp. 265–283.

Li, B.

B. Li, D. Zhou, and Y. Han, “Assembly and phase transitions of colloidal crystals,” Nat. Rev. Mater. 1, 15011 (2016).
[Crossref]

Liang, L.

N. Chenouard, I. Smal, F. De Chaumont, M. Maška, I. F. Sbalzarini, Y. Gong, J. Cardinale, C. Carthel, S. Coraluppi, M. Winter, A. R. Cohen, W. J. Godinez, K. Rohr, Y. Kalaidzidis, L. Liang, J. Duncan, H. Shen, Y. Xu, K. E. G. Magnusson, J. Jaldén, H. M. Blau, P. Paul-Gilloteaux, P. Roudot, C. Kervrann, F. Waharte, J.-Y. Tinevez, S. L. Shorte, J. Willemse, K. Celler, G. P. van Wezel, H.-W. Dan, Y.-S. Tsai, C. Ortiz de Solórzano, J.-C. Olivo-Marin, and E. Meijering, “Objective comparison of particle tracking methods,” Nat. Methods 11, 281–289 (2014).
[Crossref]

Lillicrap, T.

D. Silver, A. Huang, C. J. Maddison, A. Guez, L. Sifre, G. Van Den Driessche, J. Schrittwieser, I. Antonoglou, V. Panneershelvam, M. Lanctot, S. Dieleman, D. Grewe, J. Nham, N. Kalchbrenner, I. Sutskever, T. Lillicrap, M. Leach, K. Kavukcuoglu, T. Graepel, and D. Hassabis, “Mastering the game of go with deep neural networks and tree search,” Nature 529, 484–489 (2016).
[Crossref]

Lin, Y.

Y. Yifat, N. Sule, Y. Lin, and N. F. Scherer, “Analysis and correction of errors in nanoscale particle tracking using the single-pixel interior filling function (spiff) algorithm,” Sci. Rep. 7, 16553 (2017).
[Crossref]

Lippincott-Schwartz, J.

S. Manley, J. M. Gillette, and J. Lippincott-Schwartz, “Single-particle tracking photoactivated localization microscopy for mapping single-molecule dynamics,” in Methods in Enzymology (Elsevier, 2010), vol. 475, pp. 109–120.

Litjens, G.

G. Litjens, T. Kooi, B. E. Bejnordi, A. A. A. Setio, F. Ciompi, M. Ghafoorian, J. A. Van Der Laak, B. Van Ginneken, and C. I. Sánchez, “A survey on deep learning in medical image analysis,” Med. Image Anal. 42, 60–88 (2017).
[Crossref]

Lu, J. R.

S. S. Rogers, T. A. Waigh, X. Zhao, and J. R. Lu, “Precise particle tracking against a complicated background: polynomial fitting with Gaussian weight,” Phys. Biol. 4, 220–227 (2007).
[Crossref]

Maddison, C. J.

D. Silver, A. Huang, C. J. Maddison, A. Guez, L. Sifre, G. Van Den Driessche, J. Schrittwieser, I. Antonoglou, V. Panneershelvam, M. Lanctot, S. Dieleman, D. Grewe, J. Nham, N. Kalchbrenner, I. Sutskever, T. Lillicrap, M. Leach, K. Kavukcuoglu, T. Graepel, and D. Hassabis, “Mastering the game of go with deep neural networks and tree search,” Nature 529, 484–489 (2016).
[Crossref]

Magnusson, K. E. G.

N. Chenouard, I. Smal, F. De Chaumont, M. Maška, I. F. Sbalzarini, Y. Gong, J. Cardinale, C. Carthel, S. Coraluppi, M. Winter, A. R. Cohen, W. J. Godinez, K. Rohr, Y. Kalaidzidis, L. Liang, J. Duncan, H. Shen, Y. Xu, K. E. G. Magnusson, J. Jaldén, H. M. Blau, P. Paul-Gilloteaux, P. Roudot, C. Kervrann, F. Waharte, J.-Y. Tinevez, S. L. Shorte, J. Willemse, K. Celler, G. P. van Wezel, H.-W. Dan, Y.-S. Tsai, C. Ortiz de Solórzano, J.-C. Olivo-Marin, and E. Meijering, “Objective comparison of particle tracking methods,” Nat. Methods 11, 281–289 (2014).
[Crossref]

Manley, S.

S. Manley, J. M. Gillette, and J. Lippincott-Schwartz, “Single-particle tracking photoactivated localization microscopy for mapping single-molecule dynamics,” in Methods in Enzymology (Elsevier, 2010), vol. 475, pp. 109–120.

Maragò, O. M.

P. H. Jones, O. M. Maragò, and G. Volpe, Optical Tweezers: Principles and Applications (Cambridge University, 2015).

Martini, B.

C. Farabet, Y. LeCun, K. Kavukcuoglu, E. Culurciello, B. Martini, P. Akselrod, and S. Talay, “Large-scale FPGA-based convolutional networks,” in Scaling up Machine Learning: Parallel and Distributed Approaches (Cambridge Univ. Press, 2011), pp. 399–419.

Maška, M.

N. Chenouard, I. Smal, F. De Chaumont, M. Maška, I. F. Sbalzarini, Y. Gong, J. Cardinale, C. Carthel, S. Coraluppi, M. Winter, A. R. Cohen, W. J. Godinez, K. Rohr, Y. Kalaidzidis, L. Liang, J. Duncan, H. Shen, Y. Xu, K. E. G. Magnusson, J. Jaldén, H. M. Blau, P. Paul-Gilloteaux, P. Roudot, C. Kervrann, F. Waharte, J.-Y. Tinevez, S. L. Shorte, J. Willemse, K. Celler, G. P. van Wezel, H.-W. Dan, Y.-S. Tsai, C. Ortiz de Solórzano, J.-C. Olivo-Marin, and E. Meijering, “Objective comparison of particle tracking methods,” Nat. Methods 11, 281–289 (2014).
[Crossref]

McClelland, J. L.

D. E. Rumelhart and J. L. McClelland, Parallel Distributed Processing: Explorations in the Microstructure of Cognition (MIT Press, 1986), Vol. 1 Foundations.

Meas-Yedid, V.

F. De Chaumont, S. Dallongeville, N. Chenouard, N. Hervé, S. Pop, T. Provoost, V. Meas-Yedid, P. Pankajakshan, T. Lecomte, Y. Le Montagner, T. Lagache, A. Dufour, and J.-C. Olivo-Marin, “Icy: an open bioimage informatics platform for extended reproducible research,” Nat. Methods 9, 690–696 (2012).
[Crossref]

Meijering, E.

N. Chenouard, I. Smal, F. De Chaumont, M. Maška, I. F. Sbalzarini, Y. Gong, J. Cardinale, C. Carthel, S. Coraluppi, M. Winter, A. R. Cohen, W. J. Godinez, K. Rohr, Y. Kalaidzidis, L. Liang, J. Duncan, H. Shen, Y. Xu, K. E. G. Magnusson, J. Jaldén, H. M. Blau, P. Paul-Gilloteaux, P. Roudot, C. Kervrann, F. Waharte, J.-Y. Tinevez, S. L. Shorte, J. Willemse, K. Celler, G. P. van Wezel, H.-W. Dan, Y.-S. Tsai, C. Ortiz de Solórzano, J.-C. Olivo-Marin, and E. Meijering, “Objective comparison of particle tracking methods,” Nat. Methods 11, 281–289 (2014).
[Crossref]

Misra, R.

S. K. Sahoo, R. Misra, and S. Parveen, “Nanoparticles: a boon to drug delivery, therapeutics, diagnostics and imaging,” in Nanomedicine Cancer (Pan Stanford, 2017), pp. 73–124.

Monga, R.

M. Abadi, P. Barham, J. Chen, Z. Chen, A. Davis, J. Dean, M. Devin, S. Ghemawat, G. Irving, M. Isard, M. Kudlur, J. Levenberg, R. Monga, S. Moore, D. G. Murray, B. Steiner, P. Tucker, V. Vasudevan, P. Warden, M. Wicke, Y. Yu, and X. Zheng, “Tensorflow: a system for large-scale machine learning,” in Proceedings of the 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI ‘16) (USENIX, 2016), vol. 16, pp. 265–283.

Moore, S.

M. Abadi, P. Barham, J. Chen, Z. Chen, A. Davis, J. Dean, M. Devin, S. Ghemawat, G. Irving, M. Isard, M. Kudlur, J. Levenberg, R. Monga, S. Moore, D. G. Murray, B. Steiner, P. Tucker, V. Vasudevan, P. Warden, M. Wicke, Y. Yu, and X. Zheng, “Tensorflow: a system for large-scale machine learning,” in Proceedings of the 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI ‘16) (USENIX, 2016), vol. 16, pp. 265–283.

Murray, D. G.

M. Abadi, P. Barham, J. Chen, Z. Chen, A. Davis, J. Dean, M. Devin, S. Ghemawat, G. Irving, M. Isard, M. Kudlur, J. Levenberg, R. Monga, S. Moore, D. G. Murray, B. Steiner, P. Tucker, V. Vasudevan, P. Warden, M. Wicke, Y. Yu, and X. Zheng, “Tensorflow: a system for large-scale machine learning,” in Proceedings of the 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI ‘16) (USENIX, 2016), vol. 16, pp. 265–283.

Nagy, A.

K. C. Neuman and A. Nagy, “Single-molecule force spectroscopy: optical tweezers, magnetic tweezers and atomic force microscopy,” Nat. Methods 5, 491–505 (2008).
[Crossref]

Neuman, K. C.

K. C. Neuman and A. Nagy, “Single-molecule force spectroscopy: optical tweezers, magnetic tweezers and atomic force microscopy,” Nat. Methods 5, 491–505 (2008).
[Crossref]

Newby, J. M.

J. M. Newby, A. M. Schaefer, P. T. Lee, M. G. Forest, and S. K. Lai, “Convolutional neural networks automate detection for tracking of submicron-scale particles in 2D and 3D,” Proc. Natl. Acad. Sci. USA 115, 9026–9031 (2018).
[Crossref]

Nham, J.

D. Silver, A. Huang, C. J. Maddison, A. Guez, L. Sifre, G. Van Den Driessche, J. Schrittwieser, I. Antonoglou, V. Panneershelvam, M. Lanctot, S. Dieleman, D. Grewe, J. Nham, N. Kalchbrenner, I. Sutskever, T. Lillicrap, M. Leach, K. Kavukcuoglu, T. Graepel, and D. Hassabis, “Mastering the game of go with deep neural networks and tree search,” Nature 529, 484–489 (2016).
[Crossref]

O’Brien, M.

Ober, R. J.

A. V. Abraham, S. Ram, J. Chao, E. S. Ward, and R. J. Ober, “Quantitative study of single molecule location estimation techniques,” Opt. Express 17, 23352–23373 (2009).
[Crossref]

R. J. Ober, S. Ram, and E. S. Ward, “Localization accuracy in single-molecule microscopy,” Biophys. J. 86, 1185–1200 (2004).
[Crossref]

Olivo-Marin, J.-C.

N. Chenouard, I. Smal, F. De Chaumont, M. Maška, I. F. Sbalzarini, Y. Gong, J. Cardinale, C. Carthel, S. Coraluppi, M. Winter, A. R. Cohen, W. J. Godinez, K. Rohr, Y. Kalaidzidis, L. Liang, J. Duncan, H. Shen, Y. Xu, K. E. G. Magnusson, J. Jaldén, H. M. Blau, P. Paul-Gilloteaux, P. Roudot, C. Kervrann, F. Waharte, J.-Y. Tinevez, S. L. Shorte, J. Willemse, K. Celler, G. P. van Wezel, H.-W. Dan, Y.-S. Tsai, C. Ortiz de Solórzano, J.-C. Olivo-Marin, and E. Meijering, “Objective comparison of particle tracking methods,” Nat. Methods 11, 281–289 (2014).
[Crossref]

F. De Chaumont, S. Dallongeville, N. Chenouard, N. Hervé, S. Pop, T. Provoost, V. Meas-Yedid, P. Pankajakshan, T. Lecomte, Y. Le Montagner, T. Lagache, A. Dufour, and J.-C. Olivo-Marin, “Icy: an open bioimage informatics platform for extended reproducible research,” Nat. Methods 9, 690–696 (2012).
[Crossref]

Ortiz de Solórzano, C.

N. Chenouard, I. Smal, F. De Chaumont, M. Maška, I. F. Sbalzarini, Y. Gong, J. Cardinale, C. Carthel, S. Coraluppi, M. Winter, A. R. Cohen, W. J. Godinez, K. Rohr, Y. Kalaidzidis, L. Liang, J. Duncan, H. Shen, Y. Xu, K. E. G. Magnusson, J. Jaldén, H. M. Blau, P. Paul-Gilloteaux, P. Roudot, C. Kervrann, F. Waharte, J.-Y. Tinevez, S. L. Shorte, J. Willemse, K. Celler, G. P. van Wezel, H.-W. Dan, Y.-S. Tsai, C. Ortiz de Solórzano, J.-C. Olivo-Marin, and E. Meijering, “Objective comparison of particle tracking methods,” Nat. Methods 11, 281–289 (2014).
[Crossref]

Pankajakshan, P.

F. De Chaumont, S. Dallongeville, N. Chenouard, N. Hervé, S. Pop, T. Provoost, V. Meas-Yedid, P. Pankajakshan, T. Lecomte, Y. Le Montagner, T. Lagache, A. Dufour, and J.-C. Olivo-Marin, “Icy: an open bioimage informatics platform for extended reproducible research,” Nat. Methods 9, 690–696 (2012).
[Crossref]

Panneershelvam, V.

D. Silver, A. Huang, C. J. Maddison, A. Guez, L. Sifre, G. Van Den Driessche, J. Schrittwieser, I. Antonoglou, V. Panneershelvam, M. Lanctot, S. Dieleman, D. Grewe, J. Nham, N. Kalchbrenner, I. Sutskever, T. Lillicrap, M. Leach, K. Kavukcuoglu, T. Graepel, and D. Hassabis, “Mastering the game of go with deep neural networks and tree search,” Nature 529, 484–489 (2016).
[Crossref]

Parkhi, O. M.

O. M. Parkhi, A. Vedaldi, and A. Zisserman, “Deep face recognition,” in Proceedings of the British Machine Vision Conference (BMVC) (BMVA Press, 2015), vol. 1, pp. 41.1–41.12.

Parthasarathy, R.

R. Parthasarathy, “Rapid, accurate particle tracking by calculation of radial symmetry centers,” Nat. Methods 9, 724–726 (2012).
[Crossref]

Parveen, S.

S. K. Sahoo, R. Misra, and S. Parveen, “Nanoparticles: a boon to drug delivery, therapeutics, diagnostics and imaging,” in Nanomedicine Cancer (Pan Stanford, 2017), pp. 73–124.

Paul-Gilloteaux, P.

N. Chenouard, I. Smal, F. De Chaumont, M. Maška, I. F. Sbalzarini, Y. Gong, J. Cardinale, C. Carthel, S. Coraluppi, M. Winter, A. R. Cohen, W. J. Godinez, K. Rohr, Y. Kalaidzidis, L. Liang, J. Duncan, H. Shen, Y. Xu, K. E. G. Magnusson, J. Jaldén, H. M. Blau, P. Paul-Gilloteaux, P. Roudot, C. Kervrann, F. Waharte, J.-Y. Tinevez, S. L. Shorte, J. Willemse, K. Celler, G. P. van Wezel, H.-W. Dan, Y.-S. Tsai, C. Ortiz de Solórzano, J.-C. Olivo-Marin, and E. Meijering, “Objective comparison of particle tracking methods,” Nat. Methods 11, 281–289 (2014).
[Crossref]

Petrosyan, A.

A. Bérut, A. Petrosyan, and S. Ciliberto, “Energy flow between two hydrodynamically coupled particles kept at different effective temperatures,” Europhys. Lett. 107, 60004 (2014).
[Crossref]

Pop, S.

F. De Chaumont, S. Dallongeville, N. Chenouard, N. Hervé, S. Pop, T. Provoost, V. Meas-Yedid, P. Pankajakshan, T. Lecomte, Y. Le Montagner, T. Lagache, A. Dufour, and J.-C. Olivo-Marin, “Icy: an open bioimage informatics platform for extended reproducible research,” Nat. Methods 9, 690–696 (2012).
[Crossref]

Provoost, T.

F. De Chaumont, S. Dallongeville, N. Chenouard, N. Hervé, S. Pop, T. Provoost, V. Meas-Yedid, P. Pankajakshan, T. Lecomte, Y. Le Montagner, T. Lagache, A. Dufour, and J.-C. Olivo-Marin, “Icy: an open bioimage informatics platform for extended reproducible research,” Nat. Methods 9, 690–696 (2012).
[Crossref]

Ram, S.

A. V. Abraham, S. Ram, J. Chao, E. S. Ward, and R. J. Ober, “Quantitative study of single molecule location estimation techniques,” Opt. Express 17, 23352–23373 (2009).
[Crossref]

R. J. Ober, S. Ram, and E. S. Ward, “Localization accuracy in single-molecule microscopy,” Biophys. J. 86, 1185–1200 (2004).
[Crossref]

Ren, S.

K. He, X. Zhang, S. Ren, and J. Sun, “Delving deep into rectifiers: surpassing human-level performance on imagenet classification,” in Proceedings of the IEEE International Conference on Computer Vision (2015), pp. 1026–1034.

Rogers, S. S.

S. S. Rogers, T. A. Waigh, X. Zhao, and J. R. Lu, “Precise particle tracking against a complicated background: polynomial fitting with Gaussian weight,” Phys. Biol. 4, 220–227 (2007).
[Crossref]

Rohr, K.

N. Chenouard, I. Smal, F. De Chaumont, M. Maška, I. F. Sbalzarini, Y. Gong, J. Cardinale, C. Carthel, S. Coraluppi, M. Winter, A. R. Cohen, W. J. Godinez, K. Rohr, Y. Kalaidzidis, L. Liang, J. Duncan, H. Shen, Y. Xu, K. E. G. Magnusson, J. Jaldén, H. M. Blau, P. Paul-Gilloteaux, P. Roudot, C. Kervrann, F. Waharte, J.-Y. Tinevez, S. L. Shorte, J. Willemse, K. Celler, G. P. van Wezel, H.-W. Dan, Y.-S. Tsai, C. Ortiz de Solórzano, J.-C. Olivo-Marin, and E. Meijering, “Objective comparison of particle tracking methods,” Nat. Methods 11, 281–289 (2014).
[Crossref]

Roudot, P.

N. Chenouard, I. Smal, F. De Chaumont, M. Maška, I. F. Sbalzarini, Y. Gong, J. Cardinale, C. Carthel, S. Coraluppi, M. Winter, A. R. Cohen, W. J. Godinez, K. Rohr, Y. Kalaidzidis, L. Liang, J. Duncan, H. Shen, Y. Xu, K. E. G. Magnusson, J. Jaldén, H. M. Blau, P. Paul-Gilloteaux, P. Roudot, C. Kervrann, F. Waharte, J.-Y. Tinevez, S. L. Shorte, J. Willemse, K. Celler, G. P. van Wezel, H.-W. Dan, Y.-S. Tsai, C. Ortiz de Solórzano, J.-C. Olivo-Marin, and E. Meijering, “Objective comparison of particle tracking methods,” Nat. Methods 11, 281–289 (2014).
[Crossref]

Rumelhart, D. E.

D. E. Rumelhart and J. L. McClelland, Parallel Distributed Processing: Explorations in the Microstructure of Cognition (MIT Press, 1986), Vol. 1 Foundations.

Sahoo, S. K.

S. K. Sahoo, R. Misra, and S. Parveen, “Nanoparticles: a boon to drug delivery, therapeutics, diagnostics and imaging,” in Nanomedicine Cancer (Pan Stanford, 2017), pp. 73–124.

Sánchez, C. I.

G. Litjens, T. Kooi, B. E. Bejnordi, A. A. A. Setio, F. Ciompi, M. Ghafoorian, J. A. Van Der Laak, B. Van Ginneken, and C. I. Sánchez, “A survey on deep learning in medical image analysis,” Med. Image Anal. 42, 60–88 (2017).
[Crossref]

Sbalzarini, I. F.

N. Chenouard, I. Smal, F. De Chaumont, M. Maška, I. F. Sbalzarini, Y. Gong, J. Cardinale, C. Carthel, S. Coraluppi, M. Winter, A. R. Cohen, W. J. Godinez, K. Rohr, Y. Kalaidzidis, L. Liang, J. Duncan, H. Shen, Y. Xu, K. E. G. Magnusson, J. Jaldén, H. M. Blau, P. Paul-Gilloteaux, P. Roudot, C. Kervrann, F. Waharte, J.-Y. Tinevez, S. L. Shorte, J. Willemse, K. Celler, G. P. van Wezel, H.-W. Dan, Y.-S. Tsai, C. Ortiz de Solórzano, J.-C. Olivo-Marin, and E. Meijering, “Objective comparison of particle tracking methods,” Nat. Methods 11, 281–289 (2014).
[Crossref]

I. F. Sbalzarini and P. Koumoutsakos, “Feature point tracking and trajectory analysis for video imaging in cell biology,” J. Struct. Biol. 151, 182–195 (2005).
[Crossref]

Schaefer, A. M.

J. M. Newby, A. M. Schaefer, P. T. Lee, M. G. Forest, and S. K. Lai, “Convolutional neural networks automate detection for tracking of submicron-scale particles in 2D and 3D,” Proc. Natl. Acad. Sci. USA 115, 9026–9031 (2018).
[Crossref]

Scherer, N. F.

Y. Yifat, N. Sule, Y. Lin, and N. F. Scherer, “Analysis and correction of errors in nanoscale particle tracking using the single-pixel interior filling function (spiff) algorithm,” Sci. Rep. 7, 16553 (2017).
[Crossref]

Schmidhuber, J.

J. Schmidhuber, “Deep learning in neural networks: an overview,” Neural Netw. 61, 85–117 (2015).
[Crossref]

Schrittwieser, J.

D. Silver, A. Huang, C. J. Maddison, A. Guez, L. Sifre, G. Van Den Driessche, J. Schrittwieser, I. Antonoglou, V. Panneershelvam, M. Lanctot, S. Dieleman, D. Grewe, J. Nham, N. Kalchbrenner, I. Sutskever, T. Lillicrap, M. Leach, K. Kavukcuoglu, T. Graepel, and D. Hassabis, “Mastering the game of go with deep neural networks and tree search,” Nature 529, 484–489 (2016).
[Crossref]

Setio, A. A. A.

G. Litjens, T. Kooi, B. E. Bejnordi, A. A. A. Setio, F. Ciompi, M. Ghafoorian, J. A. Van Der Laak, B. Van Ginneken, and C. I. Sánchez, “A survey on deep learning in medical image analysis,” Med. Image Anal. 42, 60–88 (2017).
[Crossref]

Shelhamer, E.

S. Chetlur, C. Woolley, P. Vandermersch, J. Cohen, J. Tran, B. Catanzaro, and E. Shelhamer, “cudnn: Efficient primitives for deep learning,” arXiv:1410.0759 (2014).

Shen, H.

N. Chenouard, I. Smal, F. De Chaumont, M. Maška, I. F. Sbalzarini, Y. Gong, J. Cardinale, C. Carthel, S. Coraluppi, M. Winter, A. R. Cohen, W. J. Godinez, K. Rohr, Y. Kalaidzidis, L. Liang, J. Duncan, H. Shen, Y. Xu, K. E. G. Magnusson, J. Jaldén, H. M. Blau, P. Paul-Gilloteaux, P. Roudot, C. Kervrann, F. Waharte, J.-Y. Tinevez, S. L. Shorte, J. Willemse, K. Celler, G. P. van Wezel, H.-W. Dan, Y.-S. Tsai, C. Ortiz de Solórzano, J.-C. Olivo-Marin, and E. Meijering, “Objective comparison of particle tracking methods,” Nat. Methods 11, 281–289 (2014).
[Crossref]

Shetty, S.

A. Karpathy, G. Toderici, S. Shetty, T. Leung, R. Sukthankar, and L. Fei-Fei, “Large-scale video classification with convolutional neural networks,” in Proceedings of the IEEE conference on Computer Vision and Pattern Recognition (2014), pp. 1725–1732.

Shorte, S. L.

N. Chenouard, I. Smal, F. De Chaumont, M. Maška, I. F. Sbalzarini, Y. Gong, J. Cardinale, C. Carthel, S. Coraluppi, M. Winter, A. R. Cohen, W. J. Godinez, K. Rohr, Y. Kalaidzidis, L. Liang, J. Duncan, H. Shen, Y. Xu, K. E. G. Magnusson, J. Jaldén, H. M. Blau, P. Paul-Gilloteaux, P. Roudot, C. Kervrann, F. Waharte, J.-Y. Tinevez, S. L. Shorte, J. Willemse, K. Celler, G. P. van Wezel, H.-W. Dan, Y.-S. Tsai, C. Ortiz de Solórzano, J.-C. Olivo-Marin, and E. Meijering, “Objective comparison of particle tracking methods,” Nat. Methods 11, 281–289 (2014).
[Crossref]

Sifre, L.

D. Silver, A. Huang, C. J. Maddison, A. Guez, L. Sifre, G. Van Den Driessche, J. Schrittwieser, I. Antonoglou, V. Panneershelvam, M. Lanctot, S. Dieleman, D. Grewe, J. Nham, N. Kalchbrenner, I. Sutskever, T. Lillicrap, M. Leach, K. Kavukcuoglu, T. Graepel, and D. Hassabis, “Mastering the game of go with deep neural networks and tree search,” Nature 529, 484–489 (2016).
[Crossref]

Silver, D.

D. Silver, A. Huang, C. J. Maddison, A. Guez, L. Sifre, G. Van Den Driessche, J. Schrittwieser, I. Antonoglou, V. Panneershelvam, M. Lanctot, S. Dieleman, D. Grewe, J. Nham, N. Kalchbrenner, I. Sutskever, T. Lillicrap, M. Leach, K. Kavukcuoglu, T. Graepel, and D. Hassabis, “Mastering the game of go with deep neural networks and tree search,” Nature 529, 484–489 (2016).
[Crossref]

Smal, I.

N. Chenouard, I. Smal, F. De Chaumont, M. Maška, I. F. Sbalzarini, Y. Gong, J. Cardinale, C. Carthel, S. Coraluppi, M. Winter, A. R. Cohen, W. J. Godinez, K. Rohr, Y. Kalaidzidis, L. Liang, J. Duncan, H. Shen, Y. Xu, K. E. G. Magnusson, J. Jaldén, H. M. Blau, P. Paul-Gilloteaux, P. Roudot, C. Kervrann, F. Waharte, J.-Y. Tinevez, S. L. Shorte, J. Willemse, K. Celler, G. P. van Wezel, H.-W. Dan, Y.-S. Tsai, C. Ortiz de Solórzano, J.-C. Olivo-Marin, and E. Meijering, “Objective comparison of particle tracking methods,” Nat. Methods 11, 281–289 (2014).
[Crossref]

Smith, S. L.

S. L. Smith, P. Kindermans, C. Ying, and Q. V. Le, “Don’t decay the learning rate, increase the batch size,” arXiv:1711.00489 (2017).

Spring, K.

S. Inoué and K. Spring, Video Microscopy (Springer, 1997).

Steiner, B.

M. Abadi, P. Barham, J. Chen, Z. Chen, A. Davis, J. Dean, M. Devin, S. Ghemawat, G. Irving, M. Isard, M. Kudlur, J. Levenberg, R. Monga, S. Moore, D. G. Murray, B. Steiner, P. Tucker, V. Vasudevan, P. Warden, M. Wicke, Y. Yu, and X. Zheng, “Tensorflow: a system for large-scale machine learning,” in Proceedings of the 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI ‘16) (USENIX, 2016), vol. 16, pp. 265–283.

Sukthankar, R.

A. Karpathy, G. Toderici, S. Shetty, T. Leung, R. Sukthankar, and L. Fei-Fei, “Large-scale video classification with convolutional neural networks,” in Proceedings of the IEEE conference on Computer Vision and Pattern Recognition (2014), pp. 1725–1732.

Sule, N.

Y. Yifat, N. Sule, Y. Lin, and N. F. Scherer, “Analysis and correction of errors in nanoscale particle tracking using the single-pixel interior filling function (spiff) algorithm,” Sci. Rep. 7, 16553 (2017).
[Crossref]

Sun, J.

K. He, X. Zhang, S. Ren, and J. Sun, “Delving deep into rectifiers: surpassing human-level performance on imagenet classification,” in Proceedings of the IEEE International Conference on Computer Vision (2015), pp. 1026–1034.

Sutskever, I.

D. Silver, A. Huang, C. J. Maddison, A. Guez, L. Sifre, G. Van Den Driessche, J. Schrittwieser, I. Antonoglou, V. Panneershelvam, M. Lanctot, S. Dieleman, D. Grewe, J. Nham, N. Kalchbrenner, I. Sutskever, T. Lillicrap, M. Leach, K. Kavukcuoglu, T. Graepel, and D. Hassabis, “Mastering the game of go with deep neural networks and tree search,” Nature 529, 484–489 (2016).
[Crossref]

A. Krizhevsky, I. Sutskever, and G. E. Hinton, “Imagenet classification with deep convolutional neural networks,” Adv. Neural Inf. Process. Syst. 25, 1097–1105 (2012).
[Crossref]

Talay, S.

C. Farabet, Y. LeCun, K. Kavukcuoglu, E. Culurciello, B. Martini, P. Akselrod, and S. Talay, “Large-scale FPGA-based convolutional networks,” in Scaling up Machine Learning: Parallel and Distributed Approaches (Cambridge Univ. Press, 2011), pp. 399–419.

Thompson, R. E.

R. E. Thompson, D. R. Larson, and W. W. Webb, “Precise nanometer localization analysis for individual fluorescent probes,” Biophys. J. 82, 2775–2783 (2002).
[Crossref]

Tinevez, J.-Y.

N. Chenouard, I. Smal, F. De Chaumont, M. Maška, I. F. Sbalzarini, Y. Gong, J. Cardinale, C. Carthel, S. Coraluppi, M. Winter, A. R. Cohen, W. J. Godinez, K. Rohr, Y. Kalaidzidis, L. Liang, J. Duncan, H. Shen, Y. Xu, K. E. G. Magnusson, J. Jaldén, H. M. Blau, P. Paul-Gilloteaux, P. Roudot, C. Kervrann, F. Waharte, J.-Y. Tinevez, S. L. Shorte, J. Willemse, K. Celler, G. P. van Wezel, H.-W. Dan, Y.-S. Tsai, C. Ortiz de Solórzano, J.-C. Olivo-Marin, and E. Meijering, “Objective comparison of particle tracking methods,” Nat. Methods 11, 281–289 (2014).
[Crossref]

Toderici, G.

A. Karpathy, G. Toderici, S. Shetty, T. Leung, R. Sukthankar, and L. Fei-Fei, “Large-scale video classification with convolutional neural networks,” in Proceedings of the IEEE conference on Computer Vision and Pattern Recognition (2014), pp. 1725–1732.

Tran, J.

S. Chetlur, C. Woolley, P. Vandermersch, J. Cohen, J. Tran, B. Catanzaro, and E. Shelhamer, “cudnn: Efficient primitives for deep learning,” arXiv:1410.0759 (2014).

Tsai, Y.-S.

N. Chenouard, I. Smal, F. De Chaumont, M. Maška, I. F. Sbalzarini, Y. Gong, J. Cardinale, C. Carthel, S. Coraluppi, M. Winter, A. R. Cohen, W. J. Godinez, K. Rohr, Y. Kalaidzidis, L. Liang, J. Duncan, H. Shen, Y. Xu, K. E. G. Magnusson, J. Jaldén, H. M. Blau, P. Paul-Gilloteaux, P. Roudot, C. Kervrann, F. Waharte, J.-Y. Tinevez, S. L. Shorte, J. Willemse, K. Celler, G. P. van Wezel, H.-W. Dan, Y.-S. Tsai, C. Ortiz de Solórzano, J.-C. Olivo-Marin, and E. Meijering, “Objective comparison of particle tracking methods,” Nat. Methods 11, 281–289 (2014).
[Crossref]

Tucker, P.

M. Abadi, P. Barham, J. Chen, Z. Chen, A. Davis, J. Dean, M. Devin, S. Ghemawat, G. Irving, M. Isard, M. Kudlur, J. Levenberg, R. Monga, S. Moore, D. G. Murray, B. Steiner, P. Tucker, V. Vasudevan, P. Warden, M. Wicke, Y. Yu, and X. Zheng, “Tensorflow: a system for large-scale machine learning,” in Proceedings of the 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI ‘16) (USENIX, 2016), vol. 16, pp. 265–283.

Van Den Driessche, G.

D. Silver, A. Huang, C. J. Maddison, A. Guez, L. Sifre, G. Van Den Driessche, J. Schrittwieser, I. Antonoglou, V. Panneershelvam, M. Lanctot, S. Dieleman, D. Grewe, J. Nham, N. Kalchbrenner, I. Sutskever, T. Lillicrap, M. Leach, K. Kavukcuoglu, T. Graepel, and D. Hassabis, “Mastering the game of go with deep neural networks and tree search,” Nature 529, 484–489 (2016).
[Crossref]

Van Der Laak, J. A.

G. Litjens, T. Kooi, B. E. Bejnordi, A. A. A. Setio, F. Ciompi, M. Ghafoorian, J. A. Van Der Laak, B. Van Ginneken, and C. I. Sánchez, “A survey on deep learning in medical image analysis,” Med. Image Anal. 42, 60–88 (2017).
[Crossref]

Van Ginneken, B.


Supplementary Material (10)

Code 1: DeepTrack 1.0.
Supplement 1: Supplementary Document.
Visualization 1: Optically trapped particle and its position obtained by DeepTrack (orange circle) and by the radial symmetry algorithm (gray cross). DeepTrack and standard algorithms lead to the same results when tracking and analyzing the trajectory under ideal illumination conditions.
Visualization 2: Optically trapped particle and its position obtained by DeepTrack (orange circle) and by the radial symmetry algorithm (gray cross) under noisy illumination. DeepTrack outperforms standard algorithms when the illumination is unsteady.
Visualization 3: DeepTrack tracks the positions (orange circles) of several microspheres (silica, diameter 1.98 µm) diffusing above a coverslip. See also Fig. 3.
Visualization 4: DeepTrack can be trained to selectively track either Brownian particles (orange dots) while ignoring fluorescent B. subtilis bacteria, or fluorescent bacteria (orange circles) while ignoring the Brownian particles, at very low SNR. See also Fig. 4.
Visualization 5: DeepTrack can also track Brownian particles in different focal planes. See also Fig. 4.
Visualization 6: Tracked video at low particle density. See also Fig. 5.
Visualization 7: Tracked video at medium particle density. See also Fig. 5.
Visualization 8: Tracked video at high particle density. See also Fig. 5.

Figures (5)

Fig. 1. DeepTrack neural-network architecture and performance. (a) The DeepTrack architecture consists of a convolutional base (three convolutional layers, depicted in orange, each followed by a max-pooling layer, depicted in gray) followed by a dense top (two fully connected dense layers and a dense output layer). In the convolutional base, the image is iteratively filtered to extract an increasing number of feature maps and downsampled. In the dense top, the feature maps are used to predict the values of the x, y, and r coordinates of the particle. (b) Mean absolute error (MAE) of the position detection as a function of signal-to-noise ratio (SNR) for DeepTrack (orange circles) and for the centroid (gray circles) and radial symmetry (gray squares) algorithms. The bordeaux asterisks represent the performance achieved by averaging the coordinates obtained with 100 independently trained neural networks. (c) Same as (b), but as a function of the gradient intensity at SNR = 50. For each SNR or gradient-intensity value, we used 1000 simulated images; the error bars are contained within the symbols. See also Example 1a of Code 1, Ref. [42], which demonstrates the performance of a pre-trained neural network, and Example 1b of Code 1, Ref. [42], which illustrates the training and operation of the neural network.
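The caption above specifies the architecture only qualitatively. As a concrete illustration, the following is a minimal Keras sketch of such a network; the input size, filter counts, kernel sizes, dense-layer widths, and training loss are our own illustrative assumptions, not the values used by DeepTrack (those are defined in Code 1, Ref. [42]).

# A minimal sketch of a Fig. 1(a)-style network: three convolutional
# layers, each followed by max pooling, then a dense top predicting the
# particle's x, y, and r coordinates. All hyperparameters are assumptions.
from tensorflow.keras import layers, models

def build_tracker(input_shape=(51, 51, 1)):
    model = models.Sequential([
        layers.Input(shape=input_shape),
        # Convolutional base: extract an increasing number of feature
        # maps while downsampling the image.
        layers.Conv2D(16, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(32, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(64, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        # Dense top: two fully connected layers and a three-unit output
        # layer for the x, y, and r coordinates.
        layers.Flatten(),
        layers.Dense(32, activation="relu"),
        layers.Dense(32, activation="relu"),
        layers.Dense(3, name="x_y_r"),
    ])
    # MAE is the metric reported in Fig. 1(b); using it as the training
    # loss here is an assumption of this sketch.
    model.compile(optimizer="adam", loss="mae")
    return model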
Fig. 2. Experimental tracking of an optically trapped particle. (a)–(d) DeepTrack and standard algorithms lead to the same results when tracking and analyzing the trajectory of an optically trapped particle (silica microsphere, diameter 1.98 μm) under optimal illumination conditions (see also Visualization 1). (a) Image of the optically trapped particle and its position obtained by DeepTrack (orange circle) and by the radial symmetry algorithm (gray cross); (b) part of the trajectory tracked by DeepTrack (orange line) and by the radial symmetry algorithm (gray line); (c) probability distributions and (d) autocorrelation functions of the particle position obtained from the trajectory tracked by DeepTrack (orange lines) and by the radial symmetry algorithm (gray lines), with corresponding fits to theory (black lines). (e)–(h) DeepTrack outperforms standard algorithms when the illumination is unsteady (here obtained by illuminating the sample with a standard lamp flickering at 100 Hz) and noisy (with the camera gain set to its highest value): (e) image of the same particle in the same optical trap under noisy illumination (see also Visualization 2); (f) the trajectory reconstructed by DeepTrack appears qualitatively similar to that shown in (b); (g) the probability distribution and (h) the autocorrelation function obtained by DeepTrack (orange lines) agree with those obtained in (c) and (d) [the black lines are the same as in (c) and (d)], while the probability distribution from the radial symmetry algorithm [gray symbols in (g)] is widened by the noise, and the autocorrelation function [gray line in (h)] features a large peak at 0, which is the signature of white noise, and small oscillations at 100 Hz, which are due to the flickering of the illumination. The scale bars in (a) and (e) represent 20 pixels, corresponding to 1.4 μm. See also Example 2 of Code 1, Ref. [42], which uses a pre-trained neural network to track these particles.
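Panels (c), (d), (g), and (h) rest on two standard trajectory diagnostics: the probability distribution and the autocorrelation function of the tracked position. The sketch below shows one plausible way to compute both from a one-dimensional position trace; the function name, arguments, and normalization conventions are assumptions of this sketch, not the analysis code of the paper.

# A sketch of the diagnostics shown in Figs. 2(c)-(d) and 2(g)-(h):
# histogram and autocorrelation of a tracked position trace.
import numpy as np

def position_statistics(x, fs, nbins=50, max_lag=500):
    """x: tracked positions (e.g., from DeepTrack); fs: frame rate in Hz."""
    dx = x - x.mean()
    # Probability distribution of the particle position.
    hist, edges = np.histogram(dx, bins=nbins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    # Autocorrelation C(tau) = <dx(t) dx(t + tau)>. Tracking noise adds a
    # sharp peak at tau = 0, the white-noise signature noted in Fig. 2(h).
    lags = np.arange(max_lag)
    acf = np.array([np.mean(dx[:len(dx) - k] * dx[k:]) for k in lags])
    return centers, hist, lags / fs, acf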
Fig. 3. Tracking of multiple particles. (a) DeepTrack tracks the positions (orange circles) of several microspheres (silica, diameter 1.98 μm) diffusing above a coverslip (see also Visualization 3 and Example 3 of Code 1, Ref. [42]). The tracking is performed as follows. First, the frame is divided into overlapping square boxes (for example, the blue, red, and green boxes) whose sides are approximately twice the particle diameter (here, we use 51 × 51 pixel boxes separated by 5 pixels). (b) Then, each box is tracked and the particle x, y, and r coordinates are determined [the blue, red, and green boxes correspond to those shown in (a)], so that each particle is detected multiple times (blue dots in each square). If no particle is present within a box, the network is trained to return a radial distance much larger than the particle radius; importantly, only detections whose radial coordinate is smaller than the particle diameter are retained. (c) Finally, the multiple detections are clustered into sets of points whose pairwise distance is smaller than the particle radius (blue dots), and the corresponding particle x and y coordinates are obtained by calculating the centroid of these points (orange circle). All scale bars indicate 20 pixels, corresponding to 1.4 μm.
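The three-step procedure in this caption maps naturally onto code. The following is a minimal sketch under stated assumptions: net stands for a trained network returning an (x, y, r) prediction for a box, with x and y measured from the box center; the box size, stride, and radius threshold echo the values quoted above; and the greedy single-pass clustering is a simplification of step (c).

# A sketch of the Fig. 3 multi-particle procedure: scan overlapping
# boxes, filter detections by radial distance, cluster, take centroids.
import numpy as np

def track_frame(frame, net, box=51, stride=5, radius=14):
    detections = []
    # (a)-(b) Slide a box across the frame; keep a detection only if its
    # radial coordinate is smaller than the particle diameter.
    for i in range(0, frame.shape[0] - box + 1, stride):
        for j in range(0, frame.shape[1] - box + 1, stride):
            x, y, r = net(frame[i:i + box, j:j + box])  # hypothetical API
            if r < 2 * radius:
                detections.append((j + box // 2 + x, i + box // 2 + y))
    # (c) Greedily cluster detections whose pairwise distance is smaller
    # than the particle radius; report each cluster's centroid.
    points = np.array(detections)
    positions, used = [], np.zeros(len(points), dtype=bool)
    for k in range(len(points)):
        if used[k]:
            continue
        members = np.linalg.norm(points - points[k], axis=1) < radius
        used |= members
        positions.append(points[members].mean(axis=0))
    return positions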
Fig. 4. Tracking in poor illumination conditions and in different axial planes. (a) DeepTrack can be trained to selectively track either Brownian particles (orange dots) while ignoring fluorescent B. subtilis bacteria, or fluorescent bacteria (orange circles) while ignoring the Brownian particles, at very low SNR (see also Visualization 4 and Example 4 of Code 1, Ref. [42]). (b) DeepTrack can also track Brownian particles in different focal planes (see also Visualization 5 and Example 5 of Code 1, Ref. [42]). Both scale bars indicate 20 pixels, corresponding to (a) 5.2 μm and (b) 1.4 μm.
Fig. 5. Comparison of DeepTrack to other particle-tracking methods. The comparison is made on localization accuracy for image data generated by the open bioimage informatics platform Icy [37]. The images represent fluorescent biological vesicles at three particle densities (low, medium, and high) and four SNR levels (1, 2, 4, and 7). Examples of tracked frames are shown for (a) low density and SNR = 2, (b) medium density and SNR = 4, and (c) high density and SNR = 7. (d)–(f) DeepTrack (orange lines) outperforms the other methods (gray lines; see details in Ref. [17]) in all cases (see also Visualization 6, Visualization 7, and Visualization 8, as well as Example 6 of Code 1, Ref. [42]).
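As a rough indication of how localization-accuracy curves like those in (d)–(f) can be scored, the sketch below computes a mean localization error by matching every predicted position to its nearest ground-truth position. This matching rule is our simplification; the evaluation protocol actually used for the comparison is described in Ref. [17].

# A sketch of a localization-error score: mean distance from each
# predicted position to its nearest ground-truth position.
import numpy as np

def localization_mae(predicted, ground_truth):
    pred = np.asarray(predicted, dtype=float)   # shape (N, 2)
    true = np.asarray(ground_truth, dtype=float)  # shape (M, 2)
    # Pairwise distances between predictions and ground-truth positions.
    d = np.linalg.norm(pred[:, None, :] - true[None, :, :], axis=-1)
    return d.min(axis=1).mean()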
