
Lens-free motion analysis via neuromorphic laser speckle imaging


Abstract

Laser speckle imaging (LSI) is a powerful tool for motion analysis owing to the high sensitivity of laser speckles. Traditional LSI techniques rely on identifying changes from sequential intensity speckle patterns, where each pixel performs synchronous measurements. However, a large amount of redundant data from static speckles, which carry no motion information, is also recorded, resulting in considerable resource consumption for data processing and storage. Moreover, motion cues are inevitably lost during the “blind” time interval between successive frames. To tackle these challenges, we propose neuromorphic laser speckle imaging (NLSI) as an efficient alternative approach for motion analysis. Our method preserves the motion information while excluding the redundant data by exploiting a neuromorphic event sensor, which acquires only the relevant information of the moving parts and responds asynchronously at a much higher sampling rate. This neuromorphic data acquisition mechanism captures fast-moving objects at timescales on the order of microseconds. In the proposed NLSI method, the moving object is illuminated using a coherent light source, and the reflected high-frequency laser speckle patterns are captured with a bare neuromorphic event sensor. We present a data processing strategy to analyze motion from event-based laser speckles, and the experimental results demonstrate the feasibility of our method at different motion speeds.

© 2022 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Motion analysis aims to identify changes in the physical position of an object with respect to fixed coordinates. Traditional methods require physically attached sensors, such as inertial measurement units (IMU) and accelerometers [1]. However, they have low spatial resolution and often suffer from large drift errors. Optical motion analysis is a non-contact approach that estimates object movement by detecting features in sequential image frames [2-4]. In recent years, lens-free motion estimation has attracted much research interest due to its simplicity and low-cost structure without any lens element. It has facilitated various innovative applications in user interfaces, auto-navigation, and structural engineering [5-8].

Lens-free motion analysis generally requires coherent illumination. When coherent light, such as laser, is reflected by a moving object with an optically rough surface, a noise-like pattern, called laser speckle, can be observed using a bare image sensor due to the interference of the scattered coherent light [9]. The statistical properties of laser speckle patterns from a random medium are independent, and object motion or flow results in characteristic fluctuations of the speckle intensity with time. The laser speckles have extreme motion sensitivity due to their high frequency. Small motions of the object on the order of micrometers could result in large movements in the corresponding speckles, which can be used to estimate the surface motion [8,10,11]. Many approaches have been developed to retrieve motion information from a series of speckle patterns, such as dynamic light scattering (DLS) and dynamic speckle analysis (DSA) [12,13].

However, these methods often rely on intensity images captured by frame-based cameras with charge-coupled device (CCD) or complementary metal oxide semiconductor (CMOS) sensors. These sensors measure the intensity values of each pixel synchronously at a low sampling rate, which leads to a loss of dynamic scene information. Advances in high-speed camera technology have enabled the capture of fast-moving objects at higher frame rates, though at the expense of costly storage space and bandwidth. Moreover, frame-based sensors generate a large amount of redundant data for the static background, a situation that is more severe with high-speed cameras. A high computational cost is also incurred for processing the entire frame sequence, including those constant pixels that carry no additional motion information across successive frames.

In contrast to standard frame-based sensors that encode dynamic visual scenes as static images at a fixed rate, neuromorphic event sensors are inspired by biological retinas and only report local intensity changes as spiking events. These output events are recorded and transmitted asynchronously by each pixel as an event stream. Such a mechanism brings several advantages, including high temporal resolution, reduced data redundancy, and low power consumption. In recent years, this type of sensing technology has facilitated many applications including particle tracking [14,15], object recognition [16,17], and optical flow estimation [18-20]. Neuromorphic event sensors are well suited to capturing dynamic scenes. Inspired by motion-compensated event images [21], Stoffregen and Kleeman [22] first proposed a greedy algorithm in which motion-compensated images with fine edges and contours of the object are generated by clustering events. Further improvement was made to the clustering algorithm to achieve a more accurate joint estimation of the objects and their motion [23]. Combining deep learning and neuromorphic event sensors, Mitrokhin et al. [24] tackled the challenge of detecting and masking moving objects in harsh conditions (e.g., fast motion and poor lighting). Nevertheless, motion analysis remains fairly unexplored in the field of event-based vision.

In this work, we propose neuromorphic laser speckle imaging (NLSI) as an efficient alternative approach for lens-free motion analysis. The paper is organized as follows: first, existing methods of dynamic laser speckle imaging are reviewed in Section 2. Then, the working principle of the neuromorphic event sensor and the event-based data processing strategy are introduced in Section 3. Afterwards, the NLSI algorithm for motion analysis is explained in Section 4. The experimental setup and results are presented and evaluated in Section 5. The paper concludes in Section 6.

2. Existing approaches

2.1 Frame-based methods

Many frame-based approaches have been proposed for laser speckle imaging, including the time history speckle pattern (THSP) [25], co-occurrence matrix (COM) [26], inertial moment (IM) [27] and absolute value of the differences (AVD) [28]. The THSP method analyzes the dynamic speckles by selecting $M$ pixels from $N$ successive frames and reconstructing a new image of size $M \times N$. Each line in the reconstructed image represents the intensity variation of one pixel over time, and the THSP image becomes more random as the dynamic level increases. The COM method statistically summarizes the intensity transitions in the THSP and is defined as

$$\operatorname{COM}[i,j]=\sum_{m=1}^{M} \sum_{n=1}^{N-1}\left\{\begin{array}{ll} 1, & \text{ if } \operatorname{THSP}[m,n]=i \\ & \text{ and } \operatorname{THSP}[m,n+1]=j, \\ 0, & \text{ otherwise } \end{array}\right.$$
where $\operatorname {THSP}[m,n]$ represents the pixel $(m,n)$ in the THSP image and $i,j$ stand for the intensity value range varying from 0 to 255 for 8-bit images. The IM method quantifies activity from COM by calculating the second-order statistics and is defined as
$$\mathrm{IM}=\sum_{i} \sum_{j}\{\mathrm{COM}[i,j]*(i-j)^2\}.$$

Meanwhile, the AVD method is similar, though it quantifies activity from COM through calculating the first-order statistics and is defined as

$$\mathrm{AVD}=\sum_{i} \sum_{j}\{\mathrm{COM}[i,j]*|i-j|\}.$$

The computational complexity of these frame-based methods depends on the bit depth of the image intensities. If intensity images are captured with bit depth of $b$, the range for $i,j$ will be $2^b$. For THSP image with size $M \times N$, both IM and AVD have computational complexity of $\mathcal {O}\left (MN(2^b)^2\right )$.
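As a concrete illustration, the following sketch (ours, not the code of the cited works) computes THSP, COM, IM, and AVD with NumPy, assuming the frame sequence is available as an array of shape (N, H, W) with 8-bit intensities; function and variable names are illustrative only.

```python
# Minimal sketch of the frame-based descriptors in Section 2.1 (assumed data layout).
import numpy as np

def thsp(frames, num_pixels=512, rng=None):
    """Time history speckle pattern: M randomly chosen pixels tracked over N frames."""
    rng = np.random.default_rng(rng)
    n, h, w = frames.shape
    idx = rng.choice(h * w, size=num_pixels, replace=False)
    return frames.reshape(n, -1)[:, idx].T          # (M, N): one row per pixel history

def com(thsp_img, levels=256):
    """Co-occurrence matrix of successive intensity transitions in the THSP."""
    c = np.zeros((levels, levels), dtype=np.int64)
    np.add.at(c, (thsp_img[:, :-1].ravel(), thsp_img[:, 1:].ravel()), 1)
    return c

def inertia_moment(c):
    """IM: second-order statistic of the co-occurrence matrix."""
    i, j = np.indices(c.shape)
    return np.sum(c * (i - j) ** 2)

def avd(c):
    """AVD: first-order statistic of the co-occurrence matrix."""
    i, j = np.indices(c.shape)
    return np.sum(c * np.abs(i - j))

# Example usage with synthetic frames:
frames = np.random.randint(0, 256, size=(50, 512, 512), dtype=np.uint8)
c = com(thsp(frames))
print(inertia_moment(c), avd(c))
```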

2.2 Event-based methods

Event-based approaches mainly analyze the dynamic laser speckles based on 2D images integrated from the event stream. Techniques proposed in our previous work include the event time history speckle pattern (ETHSP), event co-occurrence matrix (ECOM), and event absolute summation (EAS) [29,30]. The ETHSP method analyzes the dynamic speckles from the event data by randomly selecting $M$ pixels from $N$ successive integrated event images and reconstructing a new image of size $M \times N$. Every pixel in the ETHSP has only three possible states $\in$ {positive, negative, none}, representing whether events are detected and their polarities. The ETHSP image has a higher density of detected events as the dynamic level increases. The ECOM method statistically summarizes the states in the ETHSP and is defined as

$$\operatorname{ECOM[s]}=\sum_{m=1}^{M} \sum_{n=1}^{N}\left\{\begin{array}{ll} 1, & \text{ if } \operatorname{ETHSP}[m,n] = positive \\ & \text{ or } \operatorname{ETHSP}[m,n] = negative \\ 0, & \text{ if } \operatorname{ETHSP}[m,n] = none \end{array}\right.$$
where $s$ represents the discrete event states $\in$ {positive, negative}. The EAS method quantifies activity from ECOM through calculating the first-order statistics and is defined as
$$\mathrm{EAS}=\sum_{s} \{\mathrm{ECOM}[s]\}.$$

For ETHSP image of size $M \times N$, event-based EAS method has computational complexity of $\mathcal {O}(MN)$. The computational complexity drops considerably compared with the intensity-based methods owing to the intensity change detection mechanism and binary output.
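A corresponding sketch for the event-based descriptors, again illustrative rather than the published implementation, assumes the integrated event images are encoded as an (N, H, W) array with values +1, -1, and 0 for positive, negative, and no event.

```python
# Minimal sketch of ETHSP, ECOM, and EAS under the assumed encoding above.
import numpy as np

def ethsp(event_images, num_pixels=512, rng=None):
    """Event time history speckle pattern: M random pixels over N event images."""
    rng = np.random.default_rng(rng)
    n, h, w = event_images.shape
    idx = rng.choice(h * w, size=num_pixels, replace=False)
    return event_images.reshape(n, -1)[:, idx].T        # (M, N)

def ecom(ethsp_img):
    """Event co-occurrence: count of ETHSP pixels in each non-'none' state."""
    return {"positive": int(np.sum(ethsp_img == 1)),
            "negative": int(np.sum(ethsp_img == -1))}

def eas(ecom_counts):
    """Event absolute summation: total number of detected events in the ETHSP."""
    return sum(ecom_counts.values())

# Example usage with synthetic event images:
events = np.random.choice([-1, 0, 1], size=(50, 512, 512), p=[0.1, 0.8, 0.1])
print(eas(ecom(ethsp(events))))
```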

3. Neuromorphic event sensor

3.1 Working principle

Neuromorphic event sensors, also called dynamic vision sensors, are biologically inspired to capture pixel-wise brightness changes as an asynchronous and sparse data stream [31]. Each pixel detects logarithmic intensity changes and sends out an event with its location, timestamp, and polarity. An independent event is generated once an intensity change exceeding the threshold is detected at a pixel. Therefore, neuromorphic event sensors achieve high temporal resolution and can capture fast motions without motion blur.

The differences in the working principles of the conventional frame-based sensor and the neuromorphic event sensor are compared in Fig. 1(a) and Fig. 1(b), where a time-varying intensity signal is sampled by the two types of sensors. As shown in Fig. 1(a), the frame-based sensor takes synchronous measurements of the intensity value regardless of how the signal changes. For example, the frame-based sensor makes two measurements at times $t_5$ and $t_6$ even though the intensity value barely changes within that interval. Such a sampling principle wastes data storage and power. In comparison, as shown in Fig. 1(b), the neuromorphic event sensor detects logarithmic intensity changes at a much higher sampling rate. Whenever the logarithmic intensity changes by more than the threshold, the sensor generates an asynchronous output called an event. Conversely, no events are generated when the logarithmic intensity changes remain within the threshold. The corresponding outputs are shown in Fig. 1(c) with binary polarities that denote logarithmic intensity increase or decrease. Moreover, in the frame-based sensor, all the pixels work simultaneously and every pixel value needs to be sampled and recorded. Hence, the temporal resolution of the frame-based sensor is quite low, with typically 20 to 30 frames sampled per second. In the neuromorphic event sensor, by contrast, every pixel works independently and only the active pixels that detect brightness changes generate outputs. As a result, it has a much higher sampling rate on the order of microseconds.


Fig. 1. (a) A time-varying intensity signal is recorded by a conventional frame-based sensor. The timestamps for capturing each frame are indicated by red dots. (b) The neuromorphic event sensor records logarithmic intensity changes. The timestamps for detecting each event are marked by blue step curves. (c) The corresponding event stream only contains binary polarities that denote intensity increase or decrease. Faster changes in the logarithmic intensity lead to a higher rate of event generation.
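To make the mechanism in Fig. 1 concrete, the following toy simulation (ours, not from the paper) shows how a single pixel would emit events from a time-varying intensity trace: an event fires whenever the log intensity drifts beyond a threshold from the last reference level, and the reference is then reset.

```python
# Illustrative single-pixel event generation, mirroring Fig. 1 (assumed threshold value).
import numpy as np

def simulate_pixel_events(intensity, threshold=0.2):
    """Return (sample index, polarity) pairs for one pixel's intensity trace."""
    log_i = np.log(intensity)
    ref = log_i[0]
    events = []
    for k, v in enumerate(log_i[1:], start=1):
        if abs(v - ref) > threshold:
            events.append((k, 1 if v > ref else -1))   # +1: brighter, -1: darker
            ref = v                                     # reset the reference level
    return events

# A fast-changing signal produces many events; a flat one produces none.
t = np.linspace(0, 1, 1000)
signal = 1.01 + 0.5 * np.sin(2 * np.pi * 5 * t) ** 2
print(len(simulate_pixel_events(signal)))
```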


3.2 Event data processing

In order to take advantage of the neuromorphic event sensor and extract useful information from the asynchronous event stream, it is necessary to develop specific methods to process such event data. Current processing approaches can be divided into two categories. The first converts the event stream into a sequence that is compatible with algorithms such as long short-term memory (LSTM) [32,33] or neural ordinary differential equations (NODE) [17,34]. These methods exploit the temporal attribute and can perform real-time tasks with very low latency, but at the expense of retaining little information about the global scene dynamics. The second maps the event stream to a dense representation (e.g., a 2D matrix) compatible with image-based methods such as convolutional neural networks (CNN) [35,36]. These approaches have simple structures and are widely used for various tasks [18,29,37-40].

In the neuromorphic event sensor, each pixel detects logarithmic intensity changes asynchronously and an event will be generated as soon as the intensity change exceeds the threshold [31]. The event generation process can be described mathematically as

$$\log \left[I(x,y,t)\right] - \log \left[I(x,y,t+\tau)\right] > \xi ,$$
where $I(x,y,t)$ represents the intensity value of the pixel $(x,y)$ at time $t$, and $\tau$ is the time interval between two measurements on the order of microseconds. The variable $\xi$ is a user-defined threshold. Each event is encapsulated into a four-dimensional tuple
$$e_k = (x_k,y_k,p_k,t_k),$$
where $p_k \in \{-1, 1\}$ is the polarity representing intensity decrease or increase. Here, we define a two-dimensional image $E(x, y)$ to represent the integration of events during a small time window $\Delta t$
$$E(x, y)=\sum_{e_k \in e'} \delta(x-x_k, y-y_k) \delta(t-t_k)$$
where $\delta$ is a Kronecker delta function, and $e'$ contains all the events within the time interval $(t, t+\Delta t)$. The motion of the sample can be obtained through analyzing the amount of intensity changes in each pixel during the time window. If the time window $\Delta t$ is small enough, the scene can be regarded as static and the intensity changes are triggered only by the moving texture [38].
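As an illustration, a minimal sketch of this integration step is given below, assuming the event stream is available as arrays of x, y coordinates and microsecond timestamps; the array names and window length are ours.

```python
# Accumulate events within a small time window into a 2D event image E(x, y).
import numpy as np

def integrate_events(xs, ys, ts, t_start, delta_t, height, width):
    """Count events falling in (t_start, t_start + delta_t) at each pixel."""
    mask = (ts > t_start) & (ts <= t_start + delta_t)
    e = np.zeros((height, width), dtype=np.int32)
    np.add.at(e, (ys[mask], xs[mask]), 1)       # each event increments its pixel once
    return e

# Example: a 30 ms window over a synthetic stream on a 512 x 512 sensor.
n = 100_000
xs = np.random.randint(0, 512, n)
ys = np.random.randint(0, 512, n)
ts = np.sort(np.random.uniform(0, 200_000, n))   # microsecond timestamps
E = integrate_events(xs, ys, ts, t_start=0, delta_t=30_000, height=512, width=512)
```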

4. Neuromorphic laser speckle imaging

4.1 Event-based laser speckles

Conventional LSI methods require continuous monitoring of high-resolution intensity-based speckle patterns. An example of a frame-based laser speckle sequence is shown in Fig. 2(a) with a spatial resolution of $512 \times 512$. It is captured using a normal CMOS sensor at 20 fps, so successive frames are separated by a 50 ms interval. Only 5 frames can be recorded during the 200 ms time interval, and the information between two successive frames is inevitably lost. The requirements for data acquisition are also very strict: the exposure time should be small enough to avoid averaging of the speckles, and the frame rate should be carefully set to keep successive speckle patterns correlated in time [41]. Such restrictions can be relaxed in our proposed NLSI method. Taking advantage of the neuromorphic event sensor, high temporal resolution can be achieved without setting an exposure time or frame rate.


Fig. 2. Dynamic laser speckles captured by a conventional frame-based CMOS sensor and event-based neuromorphic sensor. (a) The CMOS sensor with 20 fps records the absolute intensity of the dynamic laser speckles at a low sampling rate. (b) The neuromorphic event sensor records logarithmic intensity changes as binary events with a much faster sampling rate. The red dots and blue dots represent positive and negative events, respectively.


The recorded event-based laser speckles in the same 200 ms time interval are plotted in Fig. 2(b), where the events are marked as red dots and blue dots representing ON events (intensity increase) and OFF events (intensity decrease), respectively. The spatial resolution is $512 \times 512$, and each event is recorded asynchronously according to the nature of the neuromorphic event sensor. It records events at a temporal resolution of a few microseconds, which is thousands of times faster than conventional CMOS sensors.

4.2 Neuromorphic laser speckle correlation

In the proposed NLSI, the moving object is illuminated by a coherent light source and every point on its rough surface can be regarded as a secondary light source. The laser speckle pattern captured by the bare sensor is the superposition of all spherical wavefronts. The speckle imaging can be represented as a convolution process between the object and the point spread function (PSF) of the optical system

$$E^N(x,y)= E^O(x,y) * P(x,y),$$
where $E^N(x,y)$ denotes the value at pixel $(x,y)$ in the captured neuromorphic laser speckle pattern, $E^O(x,y)$ denotes the value at pixel $(x,y)$ in the moving object, $P(x,y)$ stands for the PSF, and $*$ is the convolution operation. The auto-correlation of the speckle pattern is given as
$$\begin{aligned} E^N(x,y) \star E^N(x,y) & = \left[E^O(x,y) * P(x,y)\right] \star \left[E^O(x,y) * P(x,y)\right]\\ & = \left[E^O(x,y) \star E^O(x,y)\right] * \left[P(x,y) \star P(x,y)\right]\\ & = E^O(x,y) \star E^O(x,y) + C, \end{aligned}$$
where $\star$ represents the correlation operation, and $C$ is the constant background term caused by the residual statistical speckle noise [42]. An example of neuromorphic laser speckle pattern is shown in Fig. 3(a) with a spatial resolution of $512 \times 512$ and 30 ms integration time. Positive events account for 45% and negative events account for 55%. Its auto-correlation result is shown in Fig. 3(c), which is approximately a Kronecker delta function with a sharp peak.


Fig. 3. Motion analysis via neuromorphic laser speckle imaging. (a) The original neuromorphic laser speckle pattern. (b) The neuromorphic laser speckle pattern after a small displacement. (c) The auto-correlation map of the speckle pattern. (d) The cross-correlation map of the two speckle patterns with a small displacement. (e) The results of the auto-correlation and cross-correlation are delta functions with peaks indicating the relative movement.


When the object is shifted by a small displacement $\Delta l$ in the lateral plane (i.e., with the moving direction parallel to the sensor plane), it causes a global translation of the corresponding speckle pattern. This phenomenon is known as the optical memory effect (OME). As long as the displacement $\Delta l$ is within the OME range, the two laser speckle patterns captured at times $t_1$ and $t_2$ have the same distribution except for a local shift [43-45]. The PSF can be regarded as shift-invariant within this range. The cross-correlation between the two neuromorphic laser speckles captured at timestamps $t_1$ and $t_2$ can be derived from Eq. (10) as

$$\begin{aligned} E^N_{t_1}(x,y) \star E^N_{t_2}(x,y) & =\left[E^O_{t_1}(x,y) * P(x,y)\right] \star \left[E^O_{t_2}(x,y) * P(x,y)\right]\\ & =\left[E^O_{t_1}(x,y) \star E^O_{t_2}(x,y)\right] * \left[ P(x,y) \star P(x,y)\right]\\ & =E^O_{t_1}(x,y) \star E^O_{t_2}(x,y) + C. \end{aligned}$$

The second neuromorphic laser speckle pattern, after a small displacement along the $x$-axis, is shown in Fig. 3(b), and the corresponding cross-correlation result is shown in Fig. 3(d). It has the same pattern as the auto-correlation result before object motion, but with a shifted peak. In order to obtain the pixel offset, we plot the horizontal lines through the central points of the auto-correlation and cross-correlation maps in Fig. 3(e). The pixel offset is the distance between the peak locations of the cross-correlation and auto-correlation. The motion of the object can then be estimated through

$$\Delta u^{\prime} =\beta \cdot \Delta L \cdot p,$$
where $\beta$ is a coefficient that compensates for the misalignment, $\Delta L$ is the pixel offset between the peaks of the auto-correlation and cross-correlation shown in Fig. 3(e), and $p$ is the pixel size of the imaging sensor.
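The overall estimation procedure can be summarized by the following sketch (ours, not the authors' released code): correlate two integrated event images, locate the correlation peaks, and convert the pixel offset into a displacement via Eq. (12). The correlations are computed with FFTs, as formalized by the cross-correlation theorem in Section 5, and the values of $\beta$ and the pixel size follow those quoted in Section 5.

```python
# Minimal sketch of the NLSI displacement estimate (assumed helper and parameter names).
import numpy as np

def correlate_fft(a, b):
    """Cross-correlation of two images via the Fourier domain, peak re-centred."""
    corr = np.fft.ifft2(np.conj(np.fft.fft2(a)) * np.fft.fft2(b))
    return np.fft.fftshift(np.real(corr))

def peak(corr_map):
    """(row, col) position of the correlation peak."""
    return np.unravel_index(np.argmax(corr_map), corr_map.shape)

def estimate_displacement(e_t1, e_t2, beta=0.8, pixel_size_um=9.8):
    """Estimate the lateral displacement (micrometers) between two event images."""
    auto_peak = peak(correlate_fft(e_t1, e_t1))      # reference peak (zero shift)
    cross_peak = peak(correlate_fft(e_t1, e_t2))     # shifted peak after motion
    dy = cross_peak[0] - auto_peak[0]
    dx = cross_peak[1] - auto_peak[1]
    delta_l = np.hypot(dx, dy)                       # pixel offset between the peaks
    return beta * delta_l * pixel_size_um            # Eq. (12)
```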

5. Experiments and results

The schematic of the experimental setup is shown in Fig. 4. The 532 nm laser source (LSR532NL, CivilLaser) is positioned 50 cm in front of the object. The output power is set to 50 mW using a circular neutral density filter (GCO-0704M, Daheng Optics), and the laser beam is collimated using a beam expander (BE10M-A, Thorlabs). The moving target is a piece of white paper mounted on a motorized translation stage (WN262TA20, Winner Optics). The sample is translated at uniform speeds ranging from 0.5 mm/s to 2.5 mm/s in steps of 0.5 mm/s. The neuromorphic event sensor (CeleX5, OmniVision) has a $1280 \times 800$ pixel spatial resolution and a 9.8 µm pixel size. It is positioned 40 cm away from the sample without any lens element between the sample and the sensor. We use the default settings of the sensor with the threshold value set to 171, and record the reflected dynamic laser speckles for 5 seconds. We repeat the experiment using a conventional CMOS sensor (MV-GE134GM-T, MindVision) with a $1280 \times 1024$ pixel spatial resolution and a 4.8 µm pixel size for comparison. The frame rate is 20 fps and the exposure time is 50 µs.


Fig. 4. Schematic of the laser speckle imaging setup. The sample is mounted on a motorized translation stage and its rough surface is illuminated by the coherent laser source. The reflected speckle patterns are captured using a bare event sensor without a lens. BE stands for beam expander and CNDF stands for circular neutral density filter.


To compare the data usage of the two types of sensors, each frame image set (20 fps, 5 seconds, $1280 \times 1024$ spatial resolution) has an uncompressed file size of 105 MB, containing 131,072,000 8-bit grayscale intensity values regardless of the motion speed. In contrast, the data storage of the event data sets depends on the motion speed. The uncompressed file sizes for the event streams (5 seconds, $1280 \times 800$ spatial resolution) vary from 29 MB to 156 MB, containing between 4,732,628 and 26,397,155 events with binary polarities. Although the saving in data storage is not significant here, it should be noted that when a high frame-rate camera is used to achieve a sampling rate similar to that of the neuromorphic event sensor, the data consumption for storing the intensity frames becomes tremendous, as the data usage is proportional to the frame rate.

The laser speckles captured using the CMOS sensor at different speeds are shown in the first row of Fig. 5(a). Since the frame-based sensor only records absolute intensities, it is hard to discern different speckle dynamics in a single image. The laser speckles captured using the neuromorphic event sensor at different speeds are presented in the bottom row of Fig. 5(a) with a 30 ms integration time. In comparison, the intensity changes can be easily detected by the neuromorphic event sensor, and the event density increases with the motion speed.


Fig. 5. (a) The laser speckles captured using the CMOS sensor and the neuromorphic event sensor at different speeds varying from 0.5 mm/s to 2.5 mm/s with a step size of 0.5 mm/s. The exposure time of the CMOS sensor is 50 µs and the integration time of the event data is 30 ms. (b) THSP images of the frame-based intensity speckle patterns. (c) ETHSP images of the event-based neuromorphic speckle patterns.


Next, we evaluate the dynamic laser speckles using the THSP and ETHSP methods introduced in Section 2. We select 512 pixels from 50 successive frame-based speckle patterns to form the THSP, and the results are shown in Fig. 5(b). Similarly, 512 pixels are selected from 50 successive neuromorphic speckle patterns to form the ETHSP, and the results are shown in Fig. 5(c). Comparing the results, correlation among the lines exists only in the first two THSP images; the other three THSP images are very noisy and their differences cannot be distinguished due to the low frame rate and the high-speed object motion. On the other hand, the neuromorphic method indicates the higher dynamic levels with a larger number of detected events in all scenarios. The experimental results using the numerical descriptors IM, AVD, and EAS are summarized in Table 1. The frame-based IM and AVD values grow with the motion speed at the beginning, but reach their maximum at 1.5 mm/s, beyond which the intensity frames are no longer correlated due to the low sampling rate. In contrast, the event-based EAS method still indicates the speckle dynamics correctly.


Table 1. Experimental results of IM, AVD and EAS for dynamic speckle analysis. All numerical descriptors indicate higher dynamic levels with larger values.

Then, we estimate the actual object motion using the proposed NLSI method. Two successive laser speckle patterns with a time interval of 200 ms are analyzed to obtain a large pixel offset. The correlation is calculated using Eq. (11), and the computational complexity can be reduced through the cross-correlation theorem

$$E^N_{t_1}(x,y) \star E^N_{t_2}(x,y)=\mathcal{F}^{{-}1}\left(\mathcal{F}\left(E^N_{t_1}(x,y)\right)^* \cdot \mathcal{F}\left(E^N_{t_2}(x,y)\right)\right),$$
where $\mathcal {F}$ and $\mathcal {F}^{-1}$ stand for the Fourier transform and inverse Fourier transform, $^*$ denotes the complex conjugate and $\cdot$ represents element-wise multiplication.
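A self-contained check of this Fourier-domain correlation on synthetic data (purely illustrative, not experimental data) shifts a random sparse "event image" by a known amount and verifies that the correlation peak recovers the shift.

```python
# Synthetic verification of the FFT-based cross-correlation and peak-offset recovery.
import numpy as np

rng = np.random.default_rng(0)
e1 = (rng.random((512, 512)) > 0.9).astype(float)    # sparse synthetic event image
e2 = np.roll(e1, shift=(0, 25), axis=(0, 1))          # known 25-pixel shift along x

cross = np.fft.fftshift(np.real(np.fft.ifft2(np.conj(np.fft.fft2(e1)) * np.fft.fft2(e2))))
auto = np.fft.fftshift(np.real(np.fft.ifft2(np.conj(np.fft.fft2(e1)) * np.fft.fft2(e1))))

peak_cross = np.unravel_index(np.argmax(cross), cross.shape)
peak_auto = np.unravel_index(np.argmax(auto), auto.shape)
print(peak_cross[1] - peak_auto[1])                   # prints 25: recovered pixel offset
```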

The detected speckle size on the imaging sensor mainly depends on the sensor pixel size, and is also affected by the imaging geometry, including the distance from the sensor to the sample and the angle between the incident and reflected light. In order to calibrate the imaging setup and determine the mapping between the speckle size and the sensor pixel size, we shift the object by 1 mm and use a CMOS sensor with a 9.8 µm pixel size to capture the two speckle patterns before and after the motion. The resulting speckle offset is 128 pixels, hence $\beta$ is set to 0.8 according to Eq. (12). The experimental results, obtained from three successive measurements, are summarized in Table 2. They show the validity and robustness of our proposed method over a wide range of motion speeds, with both the relative error rate and the total error rate below 2%.
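For reference, the calibration arithmetic quoted above can be reproduced directly:

```python
# Reproducing the calibration of beta: a 1 mm object shift gives a 128-pixel
# speckle offset on a 9.8 um pixel pitch, per Eq. (12).
shift_um = 1000.0            # known object displacement (1 mm)
offset_px = 128              # measured speckle offset in pixels
pixel_um = 9.8               # pixel pitch used for calibration
beta = shift_um / (offset_px * pixel_um)
print(round(beta, 2))        # 0.8
```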


Table 2. Experimental results of NLSI at different motion speeds.

6. Conclusion

In this work, we propose a new speckle imaging method to analyze motion with a concise lens-free implementation. Unlike conventional LSI methods that rely on intensity-based speckle images with low frame rates, our approach utilizes a neuromorphic event sensor, which captures binary information of the intensity changes with a much higher temporal resolution. A data processing strategy is presented to analyze motion from event-based laser speckles, and the experimental results demonstrate the feasibility of our method at different motion speeds. This work will be beneficial for detecting fast-moving objects in various fields, such as biomedical imaging and material science.

Funding

University of Hong Kong (104005864); University Grants Committee (17200019, 17201620, GRF 17201818).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. J. H. Migueles, C. Cadenas-Sanchez, U. Ekelund, C. D. Nyström, J. Mora-Gonzalez, M. Löf, I. Labayen, J. R. Ruiz, and F. B. Ortega, “Accelerometer data collection and processing criteria to assess physical activity and other outcomes: A systematic review and practical considerations,” Sports Med. 47(9), 1821–1845 (2017). [CrossRef]  

2. D. Cremers and S. Soatto, “Motion competition: A variational approach to piecewise parametric motion segmentation,” Int. J. Comput. Vis. 62(3), 249–265 (2005). [CrossRef]  

3. Z. Wang, K. Liao, J. Xiong, and Q. Zhang, “Moving object detection based on temporal information,” IEEE Signal Process. Lett. 21(11), 1403–1407 (2014). [CrossRef]  

4. Y. Yang, A. Loquercio, D. Scaramuzza, and S. Soatto, “Unsupervised moving object detection via contextual information separation,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, (2019), pp. 879–888.

5. T. Zeng and E. Y. Lam, “Robust reconstruction with deep learning to handle model mismatch in lensless imaging,” IEEE Trans. Comput. Imaging 7, 1080–1092 (2021). [CrossRef]  

6. T. Zeng and E. Y. Lam, “Model-based network architecture for image reconstruction in lensless imaging,” Proc. SPIE 11551, 115510B (2020). [CrossRef]  

7. J. Zizka, A. Olwal, and R. Raskar, “SpeckleSense: Fast, precise, low-cost and compact motion sensing using laser speckle,” in Proceedings of the 24th Annual ACM Symposium on User Interface Software and Technology, (2011), pp. 489–498.

8. B. M. Smith, P. Desai, V. Agarwal, and M. Gupta, “CoLux: Multi-object 3D micro-motion analysis using speckle imaging,” ACM Trans. Graph. 36(4), 1–12 (2017). [CrossRef]  

9. J. W. Goodman, Speckle Phenomena in Optics: Theory and Applications, 2 Edition (SPIE Press, 2020).

10. R. Ma, H. H. Zhang, E. Manuylovich, S. Sugavanam, H. Wu, W. L. Zhang, V. Dvoyrin, T. P. Hu, Z. J. Hu, Y. J. Rao, and S. K. Turitsyn, “Tailoring of spatial coherence in a multimode fiber by selectively exciting groups of eigenmodes,” Opt. Express 28(14), 20587–20597 (2020). [CrossRef]  

11. N. Wu and S. Haruyama, “Real-time audio detection and regeneration of moving sound source based on optical flow algorithm of laser speckle images,” Opt. Express 28(4), 4475–4488 (2020). [CrossRef]  

12. D. D. Postnov, J. Tang, S. E. Erdener, K. Kılıç, and D. A. Boas, “Dynamic light scattering imaging,” Sci. Adv. 6(45), 1 (2020). [CrossRef]  

13. H. J. Rabal and R. A. Braga Jr, Dynamic Laser Speckle and Applications (CRC Press, 2008).

14. Z. Ni, C. Pacoret, R. Benosman, S. Ieng, and S. Régnier, “Asynchronous event-based high speed vision for microparticle tracking,” J. Microsc. 245(3), 236–244 (2012). [CrossRef]  

15. J. Howell, T. C. Hammarton, Y. Altmann, and M. Jimenez, “High-speed particle detection and tracking in microfluidic devices using event-based sensing,” Lab Chip 20(16), 3024–3035 (2020). [CrossRef]  

16. Y. Bi, A. Chadha, A. Abbas, E. Bourtsoulatze, and Y. Andreopoulos, “Graph-based object classification for neuromorphic vision sensing,” in IEEE Conference on Computer Vision and Pattern Recognition, (2019), pp. 491–501.

17. G. Giannone, A. Anoosheh, A. Quaglino, P. D’Oro, M. Gallieri, and J. Masci, “Real-time classification from short event-camera streams using input-filtering neural ODEs,” arXiv preprint arXiv:2004.03156 (2020).

18. A. Z. Zhu, L. Yuan, K. Chaney, and K. Daniilidis, “EV-FlowNet: Self-supervised optical flow estimation for event-based cameras,” arXiv preprint arXiv:1802.06898 (2018).

19. M. Cannici, M. Ciccone, A. Romanoni, and M. Matteucci, “A differentiable recurrent surface for asynchronous event-based data,” in The European Conference on Computer Vision, (2020), pp. 136–152.

20. Z. Ge, Y. Zhu, Y. Zhang, and E. Y. Lam, “Dynamic speckle analysis using the event-based block matching algorithm,” in Advanced Sensor Systems and Applications XI, vol. 11901 (2021), pp. 131–136.

21. G. Gallego and D. Scaramuzza, “Accurate angular velocity estimation with an event camera,” IEEE Robot. Autom. Lett. 2(2), 632–639 (2017). [CrossRef]  

22. T. Stoffregen and L. Kleeman, “Simultaneous optical flow and segmentation (SOFAS) using dynamic vision sensor,” arXiv preprint arXiv:1805.12326 (2018).

23. T. Stoffregen, G. Gallego, T. Drummond, L. Kleeman, and D. Scaramuzza, “Event-based motion segmentation by motion compensation,” in Proceedings of the IEEE International Conference on Computer Vision, (2019), pp. 7244–7253.

24. A. Mitrokhin, C. Ye, C. Fermüller, Y. Aloimonos, and T. Delbruck, “EV-IMO: Motion segmentation dataset and learning pipeline for event cameras,” in Proceedings of the IEEE International Conference on Intelligent Robots and Systems, (2019), pp. 6105–6112.

25. R. Braga, W. Silva, T. Sáfadi, and C. Nobre, “Time history speckle pattern under statistical view,” Opt. Commun. 281(9), 2443–2448 (2008). [CrossRef]  

26. R. Arizaga, M. Trivi, and H. Rabal, “Speckle time evolution characterization by the co-occurrence matrix analysis,” Opt. Laser Technol. 31(2), 163–169 (1999). [CrossRef]  

27. C. Nobre, R. Braga Jr, A. Costa, R. Cardoso, W. Da Silva, and T. Sáfadi, “Biospeckle laser spectral analysis under inertia moment, entropy and cross-spectrum methods,” Opt. Commun. 282(11), 2236–2242 (2009). [CrossRef]  

28. R. Braga, C. Nobre, A. Costa, T. Sáfadi, and F. Da Costa, “Evaluation of activity through dynamic laser speckle using the absolute value of the differences,” Opt. Commun. 284(2), 646–650 (2011). [CrossRef]  

29. Z. Ge, N. Meng, L. Song, and E. Y. Lam, “Dynamic laser speckle analysis using the event sensor,” Appl. Opt. 60(1), 172–178 (2021). [CrossRef]  

30. Z. Ge, T. Zeng, and E. Y. Lam, “Lensless sensing using the event sensor,” in OSA Imaging and Applied Optics Congress, (2021), pp. ITu6B–5.

31. P. Lichtsteiner, C. Posch, and T. Delbruck, “A 128 × 128 120 dB 15 μs latency asynchronous temporal contrast vision sensor,” IEEE J. Solid-State Circuits 43(2), 566–576 (2008). [CrossRef]  

32. S. Hochreiter and J. Schmidhuber, “Long short-term memory,” Neural Comput. 9(8), 1735–1780 (1997). [CrossRef]  

33. D. Neil, M. Pfeiffer, and S.-C. Liu, “Phased LSTM: Accelerating recurrent network training for long or event-based sequences,” in Proceedings of the 30th International Conference on Neural Information Processing Systems, (2016), pp. 3889–3897.

34. R. T. Q. Chen, Y. Rubanova, J. Bettencourt, and D. Duvenaud, “Neural ordinary differential equations,” arXiv preprint arXiv:1806.07366 (2018).

35. Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, “Gradient-based learning applied to document recognition,” Proc. IEEE 86(11), 2278–2324 (1998). [CrossRef]  

36. A. I. Maqueda, A. Loquercio, G. Gallego, N. García, and D. Scaramuzza, “Event-based vision meets deep learning on steering prediction for self-driving cars,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, (2018), pp. 5419–5427.

37. Z. Ge, Y. Gao, H. K.-H. So, and E. Y. Lam, “Event-based laser speckle correlation for micro motion estimation,” Opt. Lett. 46(16), 3885–3888 (2021). [CrossRef]  

38. D. Gehrig, H. Rebecq, G. Gallego, and D. Scaramuzza, “EKLT: Asynchronous photometric feature tracking using events and frames,” Int. J. Comput. Vis. 128(3), 601–618 (2020). [CrossRef]  

39. H. Kim, A. Handa, R. Benosman, S.-H. Ieng, and A. Davison, “Simultaneous mosaicing and tracking with an event camera,” in Proceedings of the British Machine Vision Conference, (2014).

40. S. Tulyakov, F. Fleuret, M. Kiefel, P. Gehler, and M. Hirsch, “Learning an event sequence embedding for event-based deep stereo,” in Proceedings of the IEEE/CVF International Conference on Computer Vision, (2019), pp. 1527–1537.

41. E. Stoykova, D. Nazarova, L. Nedelchev, B. Ivanov, B. Blagoeva, K.-J. Oh, and J. Park, “Dynamic speckle analysis with coarse quantization of the raw data,” Appl. Opt. 59(9), 2810–2819 (2020). [CrossRef]  

42. O. Katz, P. Heidmann, M. Fink, and S. Gigan, “Non-invasive single-shot imaging through scattering layers and around corners via speckle correlations,” Nat. Photonics 8(10), 784–790 (2014). [CrossRef]  

43. G. Osnabrugge, R. Horstmeyer, I. N. Papadopoulos, B. Judkewitz, and I. M. Vellekoop, “Generalized optical memory effect,” Optica 4(8), 886–892 (2017). [CrossRef]  

44. B. Judkewitz, R. Horstmeyer, I. M. Vellekoop, I. N. Papadopoulos, and C. Yang, “Translation correlations in anisotropically scattering media,” Nat. Phys. 11(8), 684–689 (2015). [CrossRef]  

45. S. Schott, J. Bertolotti, J.-F. Léger, L. Bourdieu, and S. Gigan, “Characterization of the angular memory effect of scattered light in biological tissues,” Opt. Express 23(10), 13505–13516 (2015). [CrossRef]  
