This article introduces a method to extract the speed and density of microparticles in real time at several kHz using an asynchronous event-based camera mounted on a full-field optical coherence tomography (FF-OCT) setup. These cameras detect significant amplitude changes, allowing scene-driven acquisitions. They are composed of an array of autonomously operating pixels. An event is triggered whenever the illuminance change at a pixel is significant, with a timing precision of 1μs. The event-driven FF-OCT algorithm relies on a time-based optical flow computation to operate directly on incoming events and updates the estimation of velocity, direction and density while reducing both computation and data load. We show that for fast-moving microparticles in a range of 0.4–6.5mm/s, the method runs in real time, faster and more efficiently than existing techniques. The target application of this work is to evaluate erythrocyte dynamics at the microvascular level in vivo with high temporal resolution.
© 2017 Optical Society of America
1. Introduction

This paper introduces a method to compute the motion and density of particles observed with a full-field optical coherence tomography (FF-OCT) setup at the high temporal precision of 1μs provided by an asynchronous event-based camera. Event-based cameras challenge the inherent belief that acquiring series of images at a fixed rate is a good way to capture visual motion. Each pixel adapts its own sampling rate to the visual input it receives and defines the timing of its own sampling points by reacting to changes in the amount of light. The sampling process is no longer governed by a fixed timing source but by the signal to be sampled, and more precisely by its variations in amplitude. This event-based acquisition paradigm allows us to go beyond the current conventional methods used in biomedical imaging thanks to its high temporal resolution.
Current macroscopic methods such as laser Doppler imaging, laser speckle imaging or diffuse optical imaging have a large penetration depth, but at the cost of a poor spatial resolution. Their resolution of hundreds of μm to mm is not sufficient to image individual cells. Microscopic techniques such as two-photon microscopy or confocal microscopy reach micron-scale resolution, but with a penetration depth of several tens of μm, a smaller field of view (200μm × 200μm) and offline image processing. We coupled the asynchronous event-based camera with an FF-OCT setup, which performs en-face acquisitions and offers a good compromise between wide field of view, spatial resolution and penetration depth [1,7].
Previous work on motion estimation using OCT techniques includes Doppler OCT, which records shifts in the frequency of laser radiation scattered by moving particles. OCT angiography aims to suppress static scattering from the tissue sample in order to keep only the dynamic scattering components. Another technique is dynamic light scattering optical coherence tomography. More recently developed, optical coherence tomography velocimetry techniques based on Mie scattering aim to study red blood cell speed and flux. However, these dynamic approaches are limited by the frame rate of the camera used, which cannot exceed several hundred frames per second (250–500 Hz). The large amount of acquired data generally prevents the estimation of velocity and direction of motion in real time for single particles [8–12]. No technique today can estimate the dynamics of red blood cells at the microscopic level.
Biomimetic event-based cameras are a novel type of vision sensor that is event driven. Event-based vision has previously been used in microscopy to compute optical flow at several kHz and to track microparticles using an event-based Hough transform. However, existing methods have been developed for transmission microscopy, operating on a single depth plane and only on ex-vivo, controlled samples. In vivo characterisation of the density and velocity of particles such as erythrocytes would allow physicians to adapt treatments and follow patients in cardiogenic or septic shock. The microcirculation is not always linked to the macrocirculation, and both parameters influence the local oxygenation of tissues. In addition, this would simplify the daily follow-up of patients with chronic diseases that impair the microcirculation, such as diabetes.
2. Experimental methods
2.1. Event-based sensors
An ATIS camera detects relative changes in log pixel luminance over time. As soon as a change is detected, the process of communicating this change event off-chip is initiated. Off-chip communication executes with low latency (on the order of microseconds), ensuring that the time at which a change event is read out from the ATIS inherently represents the time at which the change was detected. This asynchronous low-latency readout scheme provides high temporal resolution change-detection data. Let e(p, t) be an event occurring at time t at the spatial location p = (x, y)ᵀ. A positive change of contrast results in an “ON” (polarity = +1) event and a negative change of contrast in an “OFF” (polarity = −1) event. The threshold τ beyond which a change of luminance is high enough to create an event is tuned according to the scene. Smaller intensity fluctuations are not recorded and do not generate any event. Unlike with frame-based cameras, redundant information is not recorded. For low contrasts, the value of the parameter τ is decreased in order to detect smaller intensity variations. For our experiments we set τ to 1% ± 0.5% of the maximum illuminance, depending on the sensitivity required. Fig. 2 shows a spatio-temporal representation of events generated by moving particles.
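As an illustration of this change-detection principle, the rule can be sketched as a synchronous emulation (real ATIS pixels operate asynchronously and timestamp each event with microsecond precision; the function name and array interface below are ours, not part of the sensor API):

```python
import numpy as np

def detect_events(log_i_prev, log_i_new, tau):
    """Emit ON/OFF events where the log-illuminance change exceeds tau.

    log_i_prev, log_i_new: 2-D arrays of log pixel illuminance sampled
    at two instants. Returns a list of (x, y, polarity) tuples; polarity
    is +1 for an "ON" event (positive contrast change) and -1 for "OFF".
    """
    diff = log_i_new - log_i_prev
    events = []
    # only pixels whose change crosses the threshold tau fire an event
    for y, x in zip(*np.nonzero(np.abs(diff) >= tau)):
        events.append((int(x), int(y), 1 if diff[y, x] > 0 else -1))
    return events
```

Lowering τ makes the pixels fire on smaller illuminance variations, which mirrors how the threshold is tuned for low-contrast scenes.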
2.2. Principle of FF-OCT with an asynchronous camera
Our system uses a conventional FF-OCT setup which consists of a Linnik interferometer. The incoming signal on the camera is the sum of the intensity of the static background signal, i.e. the light that does not interfere, and the interferometric signal. The background signal is time independent, therefore only variations in the interferometric signal at a given depth are recorded in the form of events. A scan can then be performed along the z-axis to image different planes inside a 3D volume.
In standard FF-OCT imaging, a piezoelectric chip is used to move the reference mirror and create a phase modulation to reconstruct images from a mathematical combination of several frames. When studying dynamic movements, such a reconstruction would induce motion blur as objects are not at the same location on two consecutive frames. Our setup tracks only the movements of the pattern of interference with an event-based camera. We tune the value of τ so that the changes of intensity coming from objects outside the coherence volume are below this threshold and will not generate events. The value of τ strongly depends on the imaging depth. This issue will be further discussed in section 4.4.
2.3. Asynchronous optical flow
Optical flow represents the distribution of apparent velocities of objects in an image. We use an event-based visual flow algorithm in which a surface of active events Σe maps each spatial location p to the time t of the most recent event at that location:

Σe : p ↦ Σe(p) = t.
The surface of active events Σe provides an estimate of the orientation and amplitude of the motion. If we consider a small displacement Δp, we can write the first-order expansion:

Σe(p + Δp) = Σe(p) + ∇Σeᵀ Δp + o(‖Δp‖).
We obtain the inverse pixel velocity, providing both the rate and the direction of motion for every incoming event, in s/pixel:

∇Σe = (∂Σe/∂x, ∂Σe/∂y)ᵀ = (1/vx, 1/vy)ᵀ.
This information allows us to represent the optical flow of the event at location p + Δp by a vector (vx, vy). The current limitation of this approach is that it detects only the 2D velocity, although it may be computed in multiple planes. The method nevertheless shows great potential for a 3D absolute velocity reconstruction, provided that the vz component of the speed is given. A solution could be the coupling of our setup with Doppler OCT in future work.
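The gradient estimation above amounts to fitting a local plane to the surface of active events. This can be sketched as a simplified batch version in Python (the neighborhood size, time window and function name are our illustrative choices, not the authors' implementation):

```python
import numpy as np

def local_flow(events, x0, y0, t0, radius=3, dt=0.1):
    """Estimate the inverse pixel velocity (s/pixel) around an event.

    events: iterable of (x, y, t) tuples. A plane t = a*x + b*y + c is
    fitted to the surface of active events in a spatiotemporal
    neighborhood of (x0, y0, t0); the slopes (a, b) approximate the
    gradient of Sigma_e, i.e. (1/vx, 1/vy).
    """
    e = np.asarray(list(events), dtype=float)
    near = e[(np.abs(e[:, 0] - x0) <= radius) &
             (np.abs(e[:, 1] - y0) <= radius) &
             (np.abs(e[:, 2] - t0) <= dt)]
    if len(near) < 3:
        return None  # not enough events to fit a plane
    A = np.column_stack([near[:, 0], near[:, 1], np.ones(len(near))])
    (a, b, _), *_ = np.linalg.lstsq(A, near[:, 2], rcond=None)
    return a, b  # inverse velocities; e.g. vx = 1/a when a != 0
```

For instance, events generated by a particle moving at 100 pixels/s along x lie on a plane of slope a = 0.01 s/pixel.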
2.4. Particle tracking
A moving object generates a cloud of events that can be represented by a bivariate normal distribution β(μ), where μ is the current location of the tracker. Each time an event occurs at location p, the probability that it belongs to an existing tracker i, with mean μi and covariance Σi, is given by the bivariate normal density:

pi(p) = 1/(2π√|Σi|) exp(−½ (p − μi)ᵀ Σi⁻¹ (p − μi)).

Depending on its activity, each tracker is in one of three states:
- inactive : its activity is low and the tracker is not visible
- active : its activity is high, the Gaussian blob is following an object and is visible
- paused : its activity is not high enough for the tracker to be visible; however, it keeps updating its position with incoming events
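A minimal sketch of such a blob tracker is given below (in Python; the isotropic covariance, update rate, decay factor and activity thresholds are illustrative assumptions, not the values used in our setup):

```python
import numpy as np

class BlobTracker:
    """Minimal Gaussian blob tracker sketch (not the authors' code).

    Each tracker is a bivariate normal blob; an incoming event moves the
    mean toward the event location and raises an activity score that
    decays over time, driving the inactive/paused/active states.
    """
    def __init__(self, mu, sigma=10.0, alpha=0.1):
        self.mu = np.asarray(mu, dtype=float)  # blob center, pixels
        self.sigma = sigma    # isotropic std dev, pixels (assumed)
        self.alpha = alpha    # mean update rate (assumed)
        self.activity = 0.0

    def density(self, p):
        """Bivariate normal density at event location p (isotropic)."""
        d2 = np.sum((np.asarray(p, dtype=float) - self.mu) ** 2)
        return np.exp(-0.5 * d2 / self.sigma ** 2) / (2 * np.pi * self.sigma ** 2)

    def update(self, p, decay=0.99):
        """Assign an event to this tracker: bump activity, move the mean."""
        self.activity = self.activity * decay + 1.0
        self.mu += self.alpha * (np.asarray(p, dtype=float) - self.mu)

    def state(self, low=1.0, high=5.0):
        # inactive / paused / active, as in section 2.4
        if self.activity >= high:
            return "active"
        return "paused" if self.activity >= low else "inactive"
```

In practice an event would be assigned to the tracker whose density at the event location is highest, and trackers whose activity decays to zero are recycled.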
3. Experimental setup

We adapted a commercially available FF-OCT scanner (LLTech SAS) as shown in Fig. 3. The microscope lenses used are ×10 water-immersion lenses with a 0.3 N.A. The resolution of the system is 1.5μm lateral and 1μm axial.
The event-based camera has a resolution of 304 × 240 pixels, which represents a field of view of 0.91 × 0.72mm. A PMMA microfluidic chip (provided by MicroFluidic ChipShop) with a channel of section S = 2500μm × 150μm is set under the microscope objective to observe red polyethylene microspheres. Their diameter ranges from 30μm to 45μm and they flow in a 0.1% Tween solution. A plastic syringe filled with a solution of water and calibrated microparticles is connected to a controlled syringe pump that provides a groundtruth flow with a maximum error of 0.1ml/h. From this groundtruth we deduce the theoretical average speed in the field of view as vavg = Q/S, where Q is the flow rate delivered by the pump and S the section of the channel.
At any depth inside the channel the expected velocity of particles is given by v(r) = vm(1 − (r/R)²), with vm = 2vavg the maximum velocity at the center of the channel, r the radial coordinate and R the radius of the channel (150μm). The setup also includes a CMOS camera acquiring conventional images at 150Hz, which only allows offline estimation of flows below 1.5ml/h because of low contrast and motion blur. For both experiments we choose to record the optical flow and density of particles at a depth of 50, 70 or 100μm, which corresponds to the typical depth of blood capillaries in the human body. The influence of the imaging depth on the results will be further discussed in section 4.4.
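The groundtruth speed derived from the pump settings can be reproduced as follows (a sketch assuming the parabolic profile above; the function name and unit conventions are ours):

```python
def expected_speed(flow_ml_per_h, depth_um,
                   width_um=2500.0, height_um=150.0, radius_um=150.0):
    """Expected particle speed (mm/s) at a given imaging depth.

    v_avg = Q / S from the pump flow rate Q and channel section S,
    v_max = 2 * v_avg at the center of the channel, and the parabolic
    profile v(r) = v_max * (1 - (r / R)**2) with R = 150 um.
    """
    q = flow_ml_per_h * 1e-6 / 3600.0            # ml/h -> m^3/s
    s = (width_um * 1e-6) * (height_um * 1e-6)   # channel section, m^2
    v_max = 2.0 * q / s                          # maximum speed, m/s
    r = abs(depth_um - height_um / 2.0)          # radial distance, um
    return 1e3 * v_max * (1.0 - (r / radius_um) ** 2)  # mm/s
```

At 2ml/h and 70μm deep this gives about 3mm/s, consistent with the value quoted in section 4.3.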
4. Results and discussion
4.1. Speed measurement of microparticles
Figure 4 shows data from an event-based acquisition of microparticles moving from right to left, in the same direction and at the same speed. The arrows represented on the events are the optical flow vectors (vx, vy) previously described in section 2.3. The flow is laminar, with a maximum Reynolds number of 4.1, and ranges from 0.1ml/h to 9ml/h. We estimate the optical flow on a set of 20 recordings lasting 1 minute each for various values of the flow at depths of 50μm and 100μm. The expected speed at these depths is identical and ranges from 0.4mm/s to 6.7mm/s.
Figure 5 shows the theoretical value of the particle speed and the estimated optical flow in the horizontal direction. We compute the measurement error from the groundtruth, provided by the syringe pump, and estimate it using the averaged squared deviation of the optical flow. At 50μm deep, the estimated speed for flows below 6ml/h is close to the groundtruth, with a maximum difference of 6%. At 100μm deep, the estimated speed for flows below 4.5ml/h is close to the groundtruth. The influence of depth on the accuracy will be further discussed in section 4.4. In general, the measured speeds are smaller than the groundtruth, as friction forces inside the fluid are not accounted for in the groundtruth. At any time, as particles are free to move in any direction inside the channel, they may enter or leave the coherence volume. Particles moving in the axial direction have a smaller horizontal velocity component than particles staying within the coherence volume. However, these particles cross the coherence volume at high speed, thus appearing only on a short time scale in the data, and they do not affect our computation since the number of occurrences detected is lower than 4%. The process reaches its limit at 6.5ml/h, where velocities become too high for even the event-based camera to register a significant change of contrast.
4.2. Direction estimation
Figure 6 shows a map of the visual optical flow accumulated over two minutes for particles flowing around an air bubble (grey circle) created inside the channel of the microfluidic chip at 50μm in depth. Each arrow represents the optical flow of a single particle at a different position during the two-minute recording. Their color depends on the angle of the optical flow vector, which varies between 90 degrees (red) and −90 degrees (blue).
Figure 7 shows the values of the local angle and norm of the optical flow for each square of Fig. 6. The error is computed using the averaged squared deviation. The standard deviation for the norm of the optical flow in square 4 has a higher value due to the low number of vectors inside this region.
4.3. Particle density
The Gaussian blob tracking algorithm allows the estimation of the particle density and therefore the counting of the number of particles in a solution (Fig. 8).
We used solutions with concentrations varying from 9 000 particles/ml to 28 000 particles/ml. The flow rate and the number of particles per ml are known, and we assume that, given the laminar flow, the particles are evenly distributed inside the channel of the microfluidic chip. We can therefore compute the theoretical concentration of microparticles within our coherence volume, which serves as our groundtruth. Figure 9 shows results for five different concentrations. For each experiment the flow is set to 2ml/h, representing a movement of 3mm/s at a depth of 70μm.
Since we know that particles move in the same direction, we can count them using the number of active trackers, previously described in section 2.4, that cross virtual vertical lines in the field of view. This operation is performed over ten lines across the image, giving a robust estimate of the number of particles. For high concentrations of microspheres, i.e. more than 30 000 particles/ml, we tend to miss particles when they overlap with one another, as only one tracker becomes active for the pair. In that case we observe a difference of 10% or more from the groundtruth.
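The line-crossing count can be sketched as follows (a simplified Python version; the representation of a track as its successive x positions is our assumption):

```python
def count_crossings(tracks, line_xs):
    """Count particles as active trackers crossing virtual vertical lines.

    tracks: dict tracker_id -> list of successive x positions while the
    tracker is active. line_xs: x coordinates of the virtual lines.
    Returns per-line crossing counts; averaging them over several lines
    gives a robust estimate of the number of particles, as in section 4.3.
    """
    counts = {x: 0 for x in line_xs}
    for xs in tracks.values():
        # a crossing occurs when a segment between two successive
        # positions straddles a virtual line
        for x_prev, x_next in zip(xs, xs[1:]):
            lo, hi = sorted((x_prev, x_next))
            for line in line_xs:
                if lo < line <= hi:
                    counts[line] += 1
    return counts
```

Using several lines mitigates the effect of a tracker pausing or being recycled near any single line.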
It is important to emphasize that the chosen speed of 3mm/s at a depth of around 70μm corresponds to the natural velocity of blood cells inside capillaries. This is the target application of this work, as conventional cameras cannot estimate velocities beyond 1.8mm/s for single particles at a microscopic level, even with offline processing.
4.4. Influence of the imaging depth on the optical flow and density
The maximum penetration depth of the FF-OCT setup varies between 200 and 1000μm, depending on the sample (specifications provided by LLTech SAS). The resolution degrades with depth when imaging in scattering media because of the coherent detection of multiply scattered light. This is a common phenomenon in all OCT imaging systems [1,7]. The interferometric signal becomes weaker as the imaging depth increases. The value of τ, previously introduced in section 2.2, defines how sensitive the pixels are to contrast differences. When the signal becomes weaker, the value of τ has to be decreased in order to generate events from smaller light variations. As a consequence, more noise-induced events are recorded and the results diverge from the groundtruth. Fig. 10 shows the mean error of our measurements with respect to the theoretical value for the flow (blue line) and the density of particles (red line), for a set of 10 recordings at depths of 0, 50, 100 and 150μm in a scattering medium (water with a drop of milk). We observe an error above 10% when imaging deeper than 120μm in our experiments. However, our aim is to image blood capillaries, which can be found within the first 100μm of human tissue.
5. Computational cost
We set a time interval Δtb containing a certain number of events. Let Δtc be the computational time required to process our algorithms on the content of this time interval. We define an efficiency ratio r = Δtc/Δtb. If r < 1, the computation can be performed in real time. We cut all the data from the experiments at 4ml/h into time bins of Δtb = 100ms. Fig. 11 shows in blue the number of events per bin and in red the corresponding efficiency ratio.
On average, the number of events per bin of 100ms is 7440 and the efficiency ratio is equal to 0.231, meaning that we can process data on average 4.3 times faster than real time. From these data we estimate the mean processing time for a single event: 3.1μs. This gives a theoretical upper limit for online processing of 32 000 events per bin of 100ms, beyond which the information can no longer be processed in real time. However, this number of events per bin is well above the typical values obtained in our recordings.
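The budget arithmetic above can be reproduced directly (a sketch; the function name and interface are ours):

```python
def realtime_stats(mean_events_per_bin, ratio, bin_s=0.1):
    """Reproduce the real-time budget arithmetic of section 5.

    ratio is the efficiency ratio r = dt_c / dt_b. The per-event
    processing time is ratio * bin_s / mean_events_per_bin, and the
    real-time ceiling is the number of events that fit in one bin at
    that per-event cost.
    """
    per_event_s = ratio * bin_s / mean_events_per_bin
    max_events_per_bin = bin_s / per_event_s
    return per_event_s, max_events_per_bin
```

With 7440 events per 100ms bin and r = 0.231, this yields about 3.1μs per event and a ceiling of roughly 32 000 events per bin.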
6. Conclusion

This paper combines for the first time the high spatial resolution provided by FF-OCT and the high temporal precision provided by event-based cameras. We have shown that this combination allows the capture of fast-moving objects with high accuracy at depths up to 120μm. The low data rate allows tracking and motion estimation in real time at the native resolution of the sensor, at low computational cost, for flows as high as 6.5ml/h and velocities of 4.5mm/s, going beyond the existing limitations of FF-OCT and other medical imaging techniques for this application. This work opens new perspectives to image biological samples and estimate erythrocyte and leucocyte dynamics in-vivo inside capillaries. The evaluation of the microcirculation is a promising tool to understand the pathophysiology of many diseases and to personalise treatments, particularly among critically ill patients. Currently, the camera’s sensitivity to low contrasts limits the observation of smaller structures of interest such as red blood cells in-vivo. Moreover, the ATIS used has a QVGA resolution, which limits our field of view rather than our precision. However, this technology is improving rapidly, and a new version of the ATIS camera with a VGA sensor, designed specifically for low-light conditions, will be available in the near future.
European Research Council (ERC) Synergy Grant Helmholtz (610110).
References and links
2. C. Posch, D. Matolin, and R. Wohlgenannt, “High-DR frame-free PWM imaging with asynchronous AER intensity encoding and focal-plane temporal redundancy suppression,” in Proceedings of 2010 IEEE International Symposium on Circuits and Systems (ISCAS, 2010), pp. 2430–2433. [CrossRef]
3. R. Haindl, A. Wartak, W. Trasischker, S. Holzer, B. Baumann, M. Pircher, C. Vass, and C. K. Hitzenberger, “Total retinal blood flow in healthy and glaucomatous human eyes measured with three-beam Doppler optical coherence tomography,” in OSA Technical Digest (online) (Optical Society of America, 2016), paper TTh1B.2.
5. T. Durduran, G. Yu, M. G. Burnett, J. A. Detre, J. H. Greenberg, J. Wang, C. Zhou, and A. G. Yodh, “Diffuse optical measurement of blood flow, blood oxygenation, and metabolism in a human brain during sensorimotor cortex activation,” Opt. Lett. 29, 1766–1768 (2004). [CrossRef] [PubMed]
8. R.A. Leitgeb, L. Schmetterer, W. Drexler, A.F. Fercher, R.J. Zawadzki, and T. Bajraszewski, “Real-time assessment of retinal blood flow with ultrafast acquisition by color Doppler Fourier domain optical coherence tomography,” Opt. Express 11, 3116–3121 (2003). [PubMed]
9. Y. Jia, J. C. Morrison, J. Tokayer, O. Tan, L. Lombardi, B. Baumann, C. D. Lu, W. Choi, J. G. Fujimoto, and D. Huang, “Quantitative OCT angiography of optic nerve head blood flow,” Biomed. Opt. Express 3, 3127–3137 (2012). [CrossRef] [PubMed]
11. J. Lee, W. Wu, F. Lesage, and D. A. Boas, “Multiple-capillary measurement of RBC speed, flux, and density with optical coherence tomography,” J. Cereb. Blood Flow Metab. 33, 1707–1710 (2013). [CrossRef] [PubMed]
12. V. J. Srinivasan, H. Radhakrishnan, E. H. Lo, E. T. Mandeville, J. Y. Jiang, S. Barry, and A. E. Cable, “OCT methods for capillary velocimetry,” Biomed. Opt. Express 3, 612–629 (2012). [CrossRef] [PubMed]
14. R. Benosman, S.-H. Ieng, C. Clercq, C. Bartolozzi, and M. Srinivasan, “Asynchronous frameless event-based optical flow,” Neural Netw. 27, 32–37 (2012). [CrossRef]
15. Z. Ni, C. Pacoret, R. Benosman, S. Ieng, and S. Regnier, “Asynchronous event-based high speed vision for microparticle tracking,” J. Microsc. 245, 236–244 (2012). [CrossRef]
16. R. Benosman, C. Clercq, X. Lagorce, S. H. Ieng, and C. Bartolozzi, “Event-based visual flow,” IEEE T. Neural Netw. 25, 407–417 (2014).
18. X. Lagorce, C. Meyer, S. H. Ieng, D. Filliat, and R. Benosman, “Asynchronous event-based multikernel algorithm for high-speed visual features tracking,” IEEE T. Neural Netw. 26, 1710–1720 (2015).
19. T. Zhu, R. Cheng, and L. Mao, “Focusing microparticles in a microfluidic channel with ferrofluids,” Microfluid. Nanofluid. 11, 695–701 (2011). [CrossRef]
20. S.-L. Chen, Z. Xie, P. L. Carson, X. Wang, and L. J. Guo, “In vivo flow speed measurement of capillaries by photoacoustic correlation spectroscopy,” Opt. Lett. 36, 4017–4019 (2011). [CrossRef] [PubMed]