Abstract

This article introduces a method to extract the speed and density of microparticles in real time at several kHz using an asynchronous event-based camera mounted on a full-field optical coherence tomography (FF-OCT) setup. These cameras are composed of an array of autonomously operating pixels and detect significant amplitude changes, allowing scene-driven acquisitions. Events are triggered, with a timing precision of 1μs, when the illuminance change at the pixel level is significant. The event-driven FF-OCT algorithm relies on a time-based optical flow computation to operate directly on incoming events and updates the estimation of velocity, direction and density while reducing both computation and data load. We show that for fast moving microparticles in a range of 0.4 – 6.5mm/s, the method operates in real time, faster and more efficiently than existing techniques. The target application of this work is to evaluate erythrocyte dynamics at the microvascular level in vivo with a high temporal resolution.

© 2017 Optical Society of America

1. Introduction

This paper introduces a method to compute the motion and density of particles observed with a Full-Field Optical Coherence Tomography (FF-OCT) setup [1] with the high temporal precision of 1μs provided by an asynchronous event-based camera [2]. Event-based cameras challenge the common assumption that acquiring series of images at a fixed rate is a good way to capture visual motion. Each pixel adapts its own sampling rate to the visual input it receives, defining its own sampling points by reacting to changes in the amount of incident light. The sampling process is no longer governed by a fixed timing source but by the signal to be sampled, and more precisely by its variations in amplitude. The event-based acquisition paradigm allows us to go beyond the current conventional methods used in biomedical imaging thanks to its high temporal resolution.

Current macroscopic methods such as laser Doppler imaging [3], laser speckle imaging [4] or diffuse optical imaging [5] have a large penetration depth but at the cost of a poor spatial resolution. Their resolution of hundreds of μm to mm is not sufficient to image individual cells. Microscopic techniques such as two-photon microscopy [6] or confocal microscopy reach micron-scale resolution with a penetration depth of several tens of μm, but with a smaller field of view (200μm × 200μm) and offline image processing. We coupled the asynchronous event-based camera with an FF-OCT setup which performs en-face acquisitions and offers a good compromise between wide field of view, good spatial resolution and penetration depth [1,7].

Previous work on motion estimation using OCT techniques includes Doppler-OCT, which records frequency shifts of laser radiation scattered by moving particles [8]. OCT angiography aims to suppress static scattering from the tissue sample in order to keep only the dynamic scattering components [9]. Another technique is dynamic light scattering optical coherence tomography [10]. More recently developed OCT velocimetry techniques based on Mie scattering aim to study red blood cell speed and flux [11]. However, these dynamic approaches are limited by the frame rate of the camera used, which cannot exceed several hundred frames per second (250–500 Hz). The large amount of acquired data generally prevents real-time estimation of the velocity and direction of motion of single particles [8–12]. No technique today can estimate the dynamics of red blood cells at the microscopic level [13].

Biomimetic event-based cameras are a novel type of scene-driven vision sensor. Event-based vision has previously been used in microscopy to compute optical flow at several kHz [14] and to track microparticles using an event-based Hough transform [15]. However, existing methods have been developed for transmission microscopy, operating on a single depth plane and only on ex-vivo, controlled samples. In vivo characterisation of the density and velocity of particles such as erythrocytes would allow physicians to adapt treatments and follow patients in cardiogenic or septic shock. The microcirculation is not always linked to the macrocirculation, and both parameters influence the local oxygenation of tissues. In addition, this would simplify the daily follow-up of patients with chronic diseases that impair the microcirculation, such as diabetes.

2. Experimental methods

2.1. Event-based sensors

The operating principle of an ATIS (Asynchronous Time-based Image Sensor [2]) is shown in Fig. 1.

 

Fig. 1 Left: event-based encoding of visual information. A change of the logarithmic intensity generates ON and OFF events when the absolute change in log(I) exceeds τ. Right: the ATIS camera.

An ATIS camera detects relative changes in log pixel luminance over time. As soon as a change is detected, the process of communicating this change event off-chip is initiated. Off-chip communication executes with low latency (on the order of microseconds), so the time at which a change event is read out from the ATIS inherently represents the time at which the change was detected. This asynchronous low-latency readout scheme provides change detection data with high temporal resolution. Let e(p, t) be an event occurring at time t at the spatial location p = (x, y)T. A positive change of contrast results in an “ON” (polarity=+1) event and a negative change of contrast in an “OFF” (polarity=−1) event. The threshold τ beyond which a change of luminance is large enough to create an event is tuned according to the scene. Smaller intensity fluctuations are not recorded and do not generate any event; unlike frame-based cameras, redundant information is not recorded. For low contrasts, the value of parameter τ is decreased in order to detect smaller intensity variations. For our experiments we set parameter τ to 1% ± 0.5% of the maximum illuminance, depending on the sensitivity required. Fig. 2 shows a spatio-temporal representation of events generated by moving particles.
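
To make this encoding concrete, the following Python sketch models a single pixel under the assumptions above. It is a simplified software model, not the actual ATIS circuitry: it emits ON/OFF events whenever the log intensity drifts by more than τ from its level at the last event.

```python
import numpy as np

def generate_events(intensity, timestamps, tau=0.01):
    """Illustrative per-pixel model of event generation: emit ON/OFF
    events whenever log(I) drifts by more than tau from the level at
    the last event. tau mirrors the contrast threshold described in
    the text (~1% of the maximum illuminance)."""
    events = []
    ref = np.log(intensity[0])          # reference level at the last event
    for t, i in zip(timestamps[1:], intensity[1:]):
        delta = np.log(i) - ref
        while abs(delta) > tau:         # a large step yields several events
            polarity = 1 if delta > 0 else -1
            events.append((t, polarity))
            ref += polarity * tau       # move the reference by one threshold
            delta = np.log(i) - ref
    return events
```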

 

Fig. 2 3D visualisation of events generated over time by the movement of small particles (grey circles moving from right to left). The vertical axis is time; the other two span the xy plane. The blue dots are both positive and negative events.

2.2. Principle of FF-OCT with an asynchronous camera

Our system uses a conventional FF-OCT setup which consists of a Linnik interferometer. The incoming signal on the camera is the sum of the intensity of the static background signal, i.e. the light that does not interfere, and the interferometric signal. The background signal is time independent, therefore only variations of the interferometric signal at a given depth are recorded in the form of events. A scan can then be performed along the z-axis to image different planes inside a 3D volume.
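
Schematically, the detected intensity can be decomposed as follows (the notation below is ours, using the standard two-beam interference form; it is not an equation from the original setup description):

$$I(\mathbf{p}, t) = \underbrace{I_{\mathrm{inc}}(\mathbf{p})}_{\text{non-interfering background}} + \underbrace{2\sqrt{I_R\, I_S(\mathbf{p})}\,\cos\phi(\mathbf{p}, t)}_{\text{interferometric signal}}$$

with $I_R$ and $I_S$ the intensities returned by the reference arm and by the sample slice selected by the coherence volume. Only the time-dependent interferometric term can exceed the threshold τ and generate events.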

In standard FF-OCT imaging, a piezoelectric chip is used to move the reference mirror and create a phase modulation to reconstruct images from a mathematical combination of several frames. When studying dynamic movements, such a reconstruction would induce motion blur as objects are not at the same location on two consecutive frames. Our setup instead tracks only the movements of the interference pattern with an event-based camera. We tune the value of τ so that intensity changes coming from objects outside the coherence volume fall below this threshold and do not generate events. The value of τ strongly depends on the imaging depth; this issue is further discussed in section 4.4.

2.3. Asynchronous optical flow

Optical flow represents the distribution of apparent velocities of objects in an image. We use an event-based visual flow algorithm [16], which defines a function Σe mapping each event's spatial location p to its time t:

$$\Sigma_e : \mathbb{R}^2 \to \mathbb{R}, \qquad \mathbf{p} \mapsto \Sigma_e(\mathbf{p}) = t$$

The surface of active events Σe provides an estimate of the orientation and amplitude of the motion. If we consider a small displacement Δp we can write:

$$\Sigma_e(\mathbf{p}+\Delta\mathbf{p}) = \Sigma_e(\mathbf{p}) + \nabla\Sigma_e^{T}\,\Delta\mathbf{p} + o(\|\Delta\mathbf{p}\|)$$
with $\nabla\Sigma_e = \left(\frac{\partial\Sigma_e}{\partial x}, \frac{\partial\Sigma_e}{\partial y}\right)^{T}$. The event-based optical flow algorithm classically uses a continuous formulation of the time-surface envelope of events. The camera provides information at discrete spatial locations; however, for clarity and generality we use partial derivatives. Each partial derivative is a function of a single variable and, as time is a strictly increasing function, the derivatives of the surface Σe are never equal to zero at any point. The inverse function theorem can then be applied around a location p = (x, y)T:
$$\frac{\partial\Sigma_e}{\partial x}(x, y_0) = \frac{d\,\Sigma_e|_{y=y_0}}{dx}(x) = \frac{1}{v_x(x, y_0)}, \qquad \frac{\partial\Sigma_e}{\partial y}(x_0, y) = \frac{d\,\Sigma_e|_{x=x_0}}{dy}(y) = \frac{1}{v_y(x_0, y)}$$

The gradient of the time surface thus gives the inverse pixel velocity, providing both the rate and the direction of motion for every incoming event, in s/pixel:

$$\nabla\Sigma_e = \left(\frac{1}{v_x}, \frac{1}{v_y}\right)^{T}.$$

This information allows us to represent the optical flow of the event at location p + Δp by a vector (vx, vy). The current limitation of this approach is that it only measures 2D velocity, although the flow may be computed in multiple planes. The method nevertheless shows great potential for a 3D absolute velocity reconstruction, provided that the vz component of the speed is given. A solution could be the coupling of our setup with Doppler OCT [17] in future work.
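
As an illustration of how the gradient of the time surface yields the flow, here is a minimal Python sketch. It assumes events have been accumulated into a dense array holding the last event time per pixel (a common implementation choice, not prescribed by [16]) and fits a local plane to that surface; the plane's slopes approximate (1/vx, 1/vy).

```python
import numpy as np

def flow_from_time_surface(times, x, y, half_win=3):
    """Sketch of a plane-fitting flow estimate on a time surface.
    'times' is a 2D array of last-event timestamps (NaN where no event
    occurred). Fits t = a*x + b*y + c around (x, y); (a, b) approximate
    (1/vx, 1/vy), so the velocity is the reciprocal of the gradient.
    Assumes the window stays inside the array bounds."""
    ys, xs = np.mgrid[y - half_win:y + half_win + 1,
                      x - half_win:x + half_win + 1]
    ts = times[ys, xs].ravel()
    valid = ~np.isnan(ts)
    A = np.column_stack([xs.ravel(), ys.ravel(), np.ones(ts.size)])
    (a, b, _), *_ = np.linalg.lstsq(A[valid], ts[valid], rcond=None)
    vx = 1.0 / a if a != 0 else np.inf   # pixels per second
    vy = 1.0 / b if b != 0 else np.inf
    return vx, vy
```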

2.4. Particle tracking

A moving object generates a cloud of events that can be represented by a bivariate normal distribution with mean μi, the current location of tracker i, and covariance βi [18]. Each time an event occurs, the probability that it belongs to an existing tracker i is given by:

$$P_i(\mathbf{p}) = \frac{1}{2\pi\,|\beta_i|^{\frac{1}{2}}}\, e^{-\frac{1}{2}(\mathbf{p}-\mu_i)^{T}\beta_i^{-1}(\mathbf{p}-\mu_i)}$$
with p = (x, y)T the spatial position of the event. When the probability is above a predefined threshold, the tracker with the highest probability updates its location μ as follows:
$$\mu = \alpha\,\mu + (1-\alpha)\,\mathbf{p}$$
where α is an update factor set experimentally according to the mean number of events in the scene considered. We define the activity A of a tracker as:
$$A_i(t) = \begin{cases} A_i(t-\Delta t)\,e^{-\frac{\Delta t}{\delta}} + P_i(\mathbf{p}), & \text{if } e(\mathbf{p}, t) \text{ is assigned to tracker } i \\ A_i(t-\Delta t)\,e^{-\frac{\Delta t}{\delta}}, & \text{otherwise.} \end{cases}$$
with Δt the time difference between the current and previous events and δ the temporal decay constant of the activity. A tracker can be in one of three states (a code sketch of these update rules follows the list):
  • inactive : its activity is low and the tracker is not visible
  • active : its activity is high, the Gaussian blob is following an object and is visible
  • paused : its activity is not high enough to appear visible, however the tracker updates its position with incoming events
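
A minimal Python sketch of the tracker update rules is given below; the values of α, δ and the covariance β are illustrative, not those used in our experiments, and the assignment of events to trackers is assumed to be handled elsewhere.

```python
import numpy as np

class GaussianBlobTracker:
    """Sketch of one Gaussian blob tracker (after [18]). alpha, delta
    and beta are illustrative values."""
    def __init__(self, mu, beta=np.eye(2) * 25.0, alpha=0.9, delta=10e-3):
        self.mu = np.asarray(mu, dtype=float)  # current location (pixels)
        self.beta = beta                       # spatial covariance
        self.alpha = alpha                     # position update factor
        self.delta = delta                     # activity decay constant (s)
        self.activity = 0.0
        self.t_last = 0.0

    def probability(self, p):
        """Bivariate normal probability of event position p."""
        d = p - self.mu
        norm = 2.0 * np.pi * np.sqrt(np.linalg.det(self.beta))
        return np.exp(-0.5 * d @ np.linalg.inv(self.beta) @ d) / norm

    def update(self, p, t, matched):
        """Decay the activity; if the event is assigned to this tracker,
        move the blob toward p and reinforce the activity."""
        self.activity *= np.exp(-(t - self.t_last) / self.delta)
        if matched:
            self.activity += self.probability(p)
            self.mu = self.alpha * self.mu + (1.0 - self.alpha) * p
        self.t_last = t
```

The activity then drives the state machine above: a tracker is shown as active above a high activity threshold, paused in an intermediate band, and inactive below.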

3. Experiments

We adapted a commercially available FF-OCT scanner (LLTech SAS) as shown in Fig. 3. The microscope lenses used are ×10 water immersion lenses with a 0.3 N.A. The resolution of the system is 1.5μm lateral and 1μm axial.

 

Fig. 3 Schematic representation of the FF-OCT microscope with a microfluidic chip and syringe pump. This setup will be used for all our experiments.

The event-based camera has a resolution of 304 × 240 pixels, which represents a field of view of 0.72 × 0.91mm. A PMMA microfluidic chip (provided by MicroFluidic ChipShop) with a channel of section S = 2500μm × 150μm is placed under the microscope objective to observe red polyethylene microspheres. Their diameter ranges from 30μm to 45μm and they flow in a 0.1% Tween solution. A plastic syringe filled with a solution of water and calibrated microparticles is connected to a controlled syringe pump that provides a groundtruth flow with a maximum error of 0.1ml/h. From this groundtruth we deduce the theoretical average speed in the field of view:

$$v_{\mathrm{avg}} = \frac{\mathrm{Flow}}{\mathrm{Channel\ Section}}$$

At any depth inside the channel the expected velocity of particles is given by $v = v_m\left(1 - \frac{r^2}{R^2}\right)$, with $v_m = 2\,v_{\mathrm{avg}}$ the maximum velocity at the center of the channel, r the radial coordinate and R the radius of the channel (150μm). The setup also includes a CMOS camera acquiring conventional images at 150Hz, which only allows offline estimation of flows below 1.5ml/h because of low contrast and motion blur. For both experiments we chose to record the optical flow and density of particles at a depth of 50, 70 or 100μm, which corresponds to the typical depth of blood capillaries in the human body. The influence of the imaging depth on the results is further discussed in section 4.4.
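
As a worked example using the values above (illustrative only), the following snippet computes the average and maximum speeds for a 2 ml/h flow; the resulting v_m of roughly 3 mm/s matches the speed quoted in section 4.3 for that flow near the channel center.

```python
# Worked example of the groundtruth speed, from the values in the text.
flow = 2.0                                  # pump flow (ml/h), as in section 4.3
S = 2.5 * 0.15                              # channel section (mm^2)
v_avg = flow * 1000.0 / 3600.0 / S          # mm/s (1 ml/h = 1000/3600 mm^3/s)
v_m = 2.0 * v_avg                           # maximum speed at the channel center
R = 0.150                                   # channel radius (mm), as quoted
v = lambda r: v_m * (1.0 - r**2 / R**2)     # expected speed at radial coordinate r
print(f"v_avg = {v_avg:.2f} mm/s, v_m = {v_m:.2f} mm/s")  # ~1.48 and ~2.96
```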

4. Results and discussion

4.1. Speed measurement of microparticles

Figure 4 shows data from an event-based acquisition of microparticles moving from right to left at the same speed. The arrows drawn on the events are the optical flow vectors (vx, vy) described in section 2.3. The flow is laminar with a maximum Reynolds number of 4.1 and ranges from 0.1ml/h to 9ml/h. We estimate the optical flow on a set of 20 recordings lasting 1 minute each for various values of the flow at depths of 50μm and 100μm. The expected speed at these depths is identical and ranges from 0.4mm/s to 6.7mm/s.

 

Fig. 4 Optical flow for particles moving from right to left over a time window of 15ms. Both axes on the left image are the x and y plane. The black and white dots represent negative and positive events respectively. On the right are two zooms on particles with their corresponding optical flows.

Figure 5 shows the theoretical particle speed and the estimated optical flow in the horizontal direction. We compute the measurement error from the groundtruth provided by the syringe pump and estimate it using the averaged squared deviation of the optical flow. At 50μm deep, the estimated speed for flows below 6ml/h is close to the groundtruth, with a maximum difference of 6%. At 100μm deep, the estimated speed for flows below 4.5ml/h is close to the groundtruth. The influence of depth on the accuracy is further discussed in section 4.4. In general, the measured speeds are smaller than the groundtruth, as friction forces inside the fluid are not accounted for in the groundtruth. At any time, as particles are free to move in any direction inside the channel, they may enter or leave the coherence volume. Particles moving in the axial direction have a smaller horizontal velocity component than particles staying within the coherence volume. However, these particles cross the coherence volume quickly, appearing only over a very short time scale in the data, and do not affect our computation since they account for fewer than 4% of the detected occurrences. The process reaches its limits at 6.5ml/h, where velocities become too high for the particles to trigger a significant change of contrast even on the event-based camera.

 

Fig. 5 Speed of microparticles as a function of the flow at 50μm (left) and 100μm (right). In blue, the theoretical speed according to the rate of the syringe pump; in red, the estimated speed. The shaded areas on both curves correspond to the measurement error. Results diverge from the groundtruth above 6.5ml/h at 50μm and above 4.5ml/h at 100μm.

4.2. Direction estimation

Figure 6 shows a map of the visual optical flow accumulated over two minutes for particles flowing around an air bubble (grey circle) created inside the channel of the microfluidic chip at a depth of 50μm. Each arrow represents the optical flow of a single particle at a given position during the two-minute recording. The color of each arrow depends on the angle of the optical flow vector, which varies between 90 degrees (red) and −90 degrees (blue).

 

Fig. 6 Map of the optical flow around an air bubble (grey circle). The two axes are the x and y plane. Each arrow represents the mean velocity of a particle at a given time accumulated over a time window of two minutes. The color bar represents the angle of the optical flow vector from 90 degrees to −90 degrees.

Figure 7 shows the values of the local angle and norm of the optical flow for each square of Fig. 6. The error is computed using the averaged squared deviation. The standard deviation for the norm of the optical flow in square 4 has a higher value due to the low number of vectors inside this region.

 

Fig. 7 On the left, the mean angle value of the optical flow inside each square. On the right, the mean norm of the optical flow inside each square.

4.3. Particle density

The Gaussian blob tracking algorithm allows estimation of the particle density and therefore counting of the number of particles in a solution (Fig. 8).

 

Fig. 8 On the left, Gaussian blob tracking for microparticles over a time window of 20ms. The axes are the x and y plane. The black and white dots represent negative and positive events respectively. On the right are zooms on two particles. The blue circles represent the active blobs tracking the microspheres.

We used solutions with different concentrations varying from 9 000 particles/ml to 28 000 particles/ml. The flow rate and the number of particles per ml are known, and we assume that, given the laminar flow, the particles are evenly distributed inside the channel of the microfluidic chip [19]. We can therefore compute the theoretical concentration of microparticles within our coherence volume, which serves as our groundtruth. Figure 9 shows results for five different concentrations. For each experiment the flow is set to 2ml/h, representing a movement of 3mm/s at a depth of 70μm.

 

Fig. 9 Density of microparticles measured for different concentrations. In blue is the measured density of particles per ml. The red lines represent the error relative to the true value.

Since we know that the particles move in the same direction, we can count them using the number of active trackers, described in section 2.4, that cross virtual vertical lines in the field of view. This operation is performed over ten lines across the image, giving a robust estimate of the number of particles. For high concentrations of microspheres, i.e. more than 30 000 particles/ml, we tend to miss particles when they overlap with one another, as only one tracker becomes active for the pair; the difference with the groundtruth then reaches 10% or more.
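
A sketch of this counting scheme, under an assumed data layout in which each active tracker contributes its x-trajectory as an array:

```python
import numpy as np

def count_crossings(track_positions, line_xs):
    """Count trackers crossing virtual vertical lines, then average
    over the lines for robustness. 'track_positions' is a list of 1D
    arrays (one x-trajectory per active tracker); 'line_xs' holds the
    x-coordinates of the virtual lines. Data layout is illustrative."""
    counts = []
    for line_x in line_xs:
        n = 0
        for xs in track_positions:
            # a crossing: two consecutive samples on opposite sides
            n += np.any((xs[:-1] - line_x) * (xs[1:] - line_x) < 0)
        counts.append(n)
    return float(np.mean(counts))
```

Dividing this count by the volume of fluid that traversed the coherence volume during the recording then yields the measured concentration.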

It is important to emphasize that the chosen speed of 3mm/s at a depth of around 70μm corresponds to the natural velocity of blood cells inside capillaries [20]. This is the target application of this work, as conventional cameras cannot estimate velocities beyond 1.8mm/s for single particles at a microscopic level, even with offline processing [11].

4.4. Influence of the imaging depth on the optical flow and density

The maximum penetration depth of the FF-OCT setup varies between 200 and 1000μm, depending on the sample (specifications provided by LLTech SAS). The resolution degrades with depth for imaging in scattering media because of the coherent detection of multiply scattered light, a phenomenon common to all OCT imaging systems [1,7]. The interferometric signal becomes weaker as the imaging depth increases. The value of τ, introduced in section 2.2, defines how sensitive the pixels are to contrast differences. When the signal becomes weaker, the value of τ has to be decreased in order to generate events for smaller light variations. As a consequence, more noise events are generated and results diverge from the groundtruth. Fig. 10 shows the mean error of our measurements with respect to the theoretical value for the flow (blue line) and the density of particles (red line), for a set of 10 recordings at depths of 0, 50, 100 and 150μm in a scattering medium (water with a drop of milk). We observe an error above 10% when imaging deeper than 120μm. However, our aim is to image blood capillaries, which can be found within the first 100μm of human tissue.

 

Fig. 10 Error with respect to the reference value for the optical flow (blue line) and the particle density (red line) as a function of imaging depth. For depths greater than 120μm this error exceeds 10% of the reference value.

5. Computational cost

We set a time interval Δtb containing a certain number of events. Let Δtc be the computational time required to process these events with our algorithms. We define an efficiency ratio $r = \frac{\Delta t_c}{\Delta t_b}$. If r < 1 the computation can be performed in real time. We cut all the data from the experiments at 4ml/h into time bins of Δtb = 100ms. Fig. 11 shows in blue the number of events per bin and in red the corresponding efficiency ratio.
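
A minimal sketch of this real-time test; the 'process' callable stands in for the flow and tracking pipeline, which is not reproduced here.

```python
import time

def efficiency_ratio(events, process, dt_b=0.1):
    """Process the events falling into one bin of length dt_b (seconds),
    time the computation, and return r = dt_c / dt_b."""
    t0 = time.perf_counter()
    process(events)                  # run the event-based algorithms on the bin
    dt_c = time.perf_counter() - t0
    return dt_c / dt_b               # r < 1 means real-time capable
```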

 

Fig. 11 Top: in blue, the number of events per bin for the experimental data at 4ml/h. Bottom: in red, the computational ratio r. The maximum value is 0.45 and the mean value is 0.23, indicating real-time computation.

On average, the mean number of events per bin of 100ms is 7440 and the efficiency ratio is 0.231, meaning that we can process data on average 4.3 times faster than real time. From this data we estimate the mean processing time for a single event: 3.1μs. This gives a theoretical upper limit for online processing of 32000 events per bin of 100ms, beyond which the information can no longer be processed in real time. However, this number of events per bin is well above the typical values obtained in our recordings.

6. Conclusion

This paper combines for the first time the high spatial resolution provided by FF-OCT and the high temporal precision provided by event-based cameras. We have shown that this combination allows the capture of fast moving objects with high accuracy at depths up to 120μm. The low data rate allows tracking and motion estimation in real time at the native resolution of the sensor, at low computational cost, for flows as high as 6.5ml/h and velocities of 4.5mm/s, going beyond existing limitations of FF-OCT and other medical imaging techniques for this application. This work opens new perspectives to image biological samples and estimate erythrocyte and leucocyte dynamics in vivo inside capillaries. The evaluation of microcirculation is a promising tool to understand the pathophysiology of many diseases and to personalise treatments, particularly among critically ill patients. Currently the camera’s sensitivity to low contrasts limits the observation of smaller structures of interest such as red blood cells in vivo. Moreover, the ATIS used has a QVGA resolution, which limits not our precision but our field of view. However, this technology is improving drastically, and a new version of the ATIS camera with a VGA sensor, designed specifically for low-light conditions, will be available in the near future.

Funding

European Research Council (ERC) Synergy grant Helmholtz (610 110).

References and links

1. A. Dubois, K. Grieve, G. Moneron, R. Lecaque, L. Vabre, and C. Boccara, “Ultrahigh-resolution full-field optical coherence tomography,” Appl. Opt. 43, 2874–2883 (2004).

2. C. Posch, D. Matolin, and R. Wohlgenannt, “High-DR frame-free PWM imaging with asynchronous AER intensity encoding and focal-plane temporal redundancy suppression,” in Proceedings of 2010 IEEE International Symposium on Circuits and Systems (ISCAS, 2010), pp. 2430–2433.

3. R. Haindl, A. Wartak, W. Trasischker, S. Holzer, B. Baumann, M. Pircher, C. Vass, and C. K. Hitzenberger, “Total retinal blood flow in healthy and glaucomatous human eyes measured with three beam Doppler optical coherence tomography,” OSA Technical Digest (online) (Optical Society of America, 2016), paper TTh1B.2 (2016).

4. P. Li, S. Ni, L. Zhang, S. Zeng, and Q. Luo, “Imaging cerebral blood flow through the intact rat skull with temporal laser speckle imaging,” Opt. Lett. 31, 1824–1826 (2006).

5. T. Durduran, G. Yu, M. G. Burnett, J. A. Detre, J. H. Greenberg, J. Wang, C. Zhou, and A. G. Yodh, “Diffuse optical measurement of blood flow, blood oxygenation, and metabolism in a human brain during sensorimotor cortex activation,” Opt. Lett. 29, 1766–1768 (2004).

6. E. Chaigneau, M. Oheim, E. Audinat, and S. Charpak, “Two-photon imaging of capillary blood flow in olfactory bulb glomeruli,” Proc. Natl. Acad. Sci. 100, 13081–13086 (2003).

7. A. Dubois, L. Vabre, A.-C. Boccara, and E. Beaurepaire, “High-resolution full-field optical coherence tomography with a Linnik microscope,” Appl. Opt. 41, 805–812 (2002).

8. R. A. Leitgeb, L. Schmetterer, W. Drexler, A. F. Fercher, R. J. Zawadzki, and T. Bajraszewski, “Real-time assessment of retinal blood flow with ultrafast acquisition by color Doppler Fourier domain optical coherence tomography,” Opt. Express 11, 3116–3121 (2003).

9. Y. Jia, J. C. Morrison, J. Tokayer, O. Tan, L. Lombardi, B. Baumann, C. D. Lu, W. Choi, J. G. Fujimoto, and D. Huang, “Quantitative OCT angiography of optic nerve head blood flow,” Biomed. Opt. Express 3, 3127–3137 (2012).

10. J. Lee, W. Wu, J. Y. Jiang, B. Zhu, and D. A. Boas, “Dynamic light scattering optical coherence tomography,” Opt. Express 20, 22262–22277 (2012).

11. J. Lee, W. Wu, F. Lesage, and D. A. Boas, “Multiple-capillary measurement of RBC speed, flux, and density with optical coherence tomography,” J. Cereb. Blood Flow Metab. 33, 1707–1710 (2013).

12. V. J. Srinivasan, H. Radhakrishnan, E. H. Lo, E. T. Mandeville, J. Y. Jiang, S. Barry, and A. E. Cable, “OCT methods for capillary velocimetry,” Biomed. Opt. Express 3, 612–629 (2012).

13. E. Meijering, O. Dzyubachyk, and I. Smal, “Methods for cell and particle tracking,” Methods Enzymol. 504, 183–200 (2012).

14. R. Benosman, S.-H. Ieng, C. Clercq, C. Bartolozzi, and M. Srinivasan, “Asynchronous frameless event-based optical flow,” Neural Netw. 27, 32–37 (2012).

15. Z. Ni, C. Pacoret, R. Benosman, S. Ieng, and S. Regnier, “Asynchronous event-based high speed vision for microparticle tracking,” J. Microsc. 245, 236–244 (2012).

16. R. Benosman, C. Clercq, X. Lagorce, S. H. Ieng, and C. Bartolozzi, “Event-based visual flow,” IEEE T. Neural Netw. 25, 407–417 (2014).

17. J. You, A. Li, C. Du, and Y. Pan, “Volumetric Doppler angle correction for ultrahigh-resolution optical coherence Doppler tomography,” Appl. Phys. Lett. 110, 011102 (2017).

18. X. Lagorce, C. Meyer, S. H. Ieng, D. Filliat, and R. Benosman, “Asynchronous event-based multikernel algorithm for high-speed visual features tracking,” IEEE T. Neural Netw. 26, 1710–1720 (2015).

19. T. Zhu, R. Cheng, and L. Mao, “Focusing microparticles in a microfluidic channel with ferrofluids,” Microfluid. Nanofluid. 11, 695–701 (2011).

20. S.-L. Chen, Z. Xie, P. L. Carson, X. Wang, and L. J. Guo, “In vivo flow speed measurement of capillaries by photoacoustic correlation spectroscopy,” Opt. Lett. 36, 4017–4019 (2011).
