Abstract

Sound detection by optical means is an appealing research topic. In this manuscript, we propose a laser microphone system that simultaneously detects and regenerates an audio signal by observing the movement of secondary speckle patterns. In the proposed system, an optical flow method, together with several denoising algorithms, is employed to obtain the motion information of the speckle sequence at high speed. As a result, the audio signal can be regenerated in real time with a simple optical setup, even when the sound source is moving. Experiments have been conducted, and the results show that the proposed system can restore high-quality audio signals in real time under various conditions.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Sound detection by optical means is an appealing research topic due to its broad application prospects, such as remote monitoring and rescue [1,2]. One of the approaches is detecting sound with laser speckle images. The principle of the laser speckle method is simple: when coherent light is reflected by an optically rough surface, a high-contrast grainy speckle pattern can be observed with an imaging device due to the interference of the multiple reflected light waves [3]. A major property of the speckle pattern is that its motion is very sensitive to the motion of the object [4,5]: the captured speckle pattern shows significant displacement even when the object moves slightly. Based on this property, sound vibrations can be detected with speckle images, and the audio signal can be recovered by extracting information from the movement of the captured speckle image sequence. Compared with other methods, such as interferometric or holographic measurement [6,7], the laser speckle detection method has a simple structure and low hardware cost and can achieve remote sound detection. Several previous studies have recovered sound with laser speckle, mainly focused on applications in remote monitoring. In [8], the authors proposed a remote sound extraction system based on laser speckle; the results show that they can record speech or heartbeats at a distance of up to 100 meters. In [9], the authors proposed an intensity-variance-based method for sound recovery via the gray-value variations of appropriately selected pixels in the laser speckle patterns. In these studies, a short video is usually recorded first and then analyzed offline to restore the audio signal.
Although these works successfully achieved sound regeneration with laser speckle images, real-time sound detection and regeneration have not been considered, nor has detection of a moving sound source, which greatly limits the potential applications of this technology.

In this manuscript, a real-time sound detection and regeneration system based on laser speckle images is proposed. Different from previous research, the proposed system takes into consideration, for the first time, the real-time processing and regeneration of the audio signal with a moving sound source. In our system, after capturing speckle images, a high-speed calculation is conducted immediately to obtain the displacement of the captured speckle images instead of storing the patterns in the computer. Thus, the system can output the audio signal in real time while sampling. To achieve this, only a small part of the imaging sensor is used to capture the speckle patterns. In this way, not only can high camera sampling rates be achieved even with a common industrial camera, but the computation time is also reduced because of the small image size. Moreover, an optical flow algorithm is adopted to obtain the displacement between two frames in a short time. These two points enable real-time processing speed and sub-pixel accuracy. In addition, several denoising algorithms are proposed to correct the calculation noise in real time. This not only improves the accuracy of the results, but also enables the system to regenerate the audio signal from a moving sound source. Compared with the previous systems, our system works more like a microphone than a recorder, which gives it a wider range of potential applications, such as meeting scenarios.

The structure of this paper is as follows: the flowchart of our system is initially introduced in Section 2, where the optical flow method and the denoising algorithms for the sampled signal are explained. The experimental results of our system are then shown in Section 3, including the results under different signal amplitudes and camera defocusing amounts, and the results of moving sound source detection. Finally, the conclusion of the paper is given in Section 4.

2. Methodologies

2.1 Farneback optical flow algorithm

According to the results of previous research, when a vibrating object is illuminated with a coherent laser source, the captured speckle vibrates periodically in one direction [10]. Several past studies recovered the audio signal via the gray-value variation of selected pixels [11]. The advantage of this approach is its small computational workload, which makes real-time calculation speed possible. However, the gray-value method requires a linear distribution of gray values within a certain pixel range along the direction of vibration; therefore, the quality of the result cannot be guaranteed when the amplitude of the audio signal changes. For this reason, we decided to regenerate the audio signal from the motion information of the speckle sequence. In the past, cross-correlation between images was widely used to calculate speckle motion [12,13]. However, it is difficult for the cross-correlation method to achieve high-speed calculation and sub-pixel accuracy at the same time. Besides, in our system the speckle image size is set to be very small, which reduces the available image information and makes most feature-point methods [14] unsuitable for our situation.

For the reasons stated above, the Farnebäck optical flow algorithm, proposed by Gunnar Farnebäck in 2003, is employed to analyze the speckle motion [15]. In the algorithm, each image is regarded as a 2D function $f(x, y)$. Specifically, by fitting the gray value of each pixel and its neighbors, a quadratic polynomial expansion based on the coordinate $(x, y)$ of the pixel of interest can be expressed as:

$$f(\textbf{x}) = {\textbf{x}^T}\textbf{Ax} + {\textbf{b}^T}\textbf{x} + c. $$
where $\textbf{x}$ represents the coordinate $(x, y)$, $\textbf{A} = \left( {\begin{array}{cc} {{r_4}}&{\frac{{{r_6}}}{2}}\\ {\frac{{{r_6}}}{2}}&{{r_5}} \end{array}} \right)$, $\textbf{b} = \left( {\begin{array}{c} {{r_2}}\\ {{r_3}} \end{array}} \right)$, $c = {r_1}$, and ${r_1}$–${r_6}$ are the coefficients of the quadratic polynomial fit. When the image undergoes a global shift $\textbf{d}$, the new signal can be expressed as:
$$\begin{aligned} f^{\prime}(\textbf{x}) &= f(\textbf{x} - \textbf{d}) \\ &= {(\textbf{x} - \textbf{d})^T} \textbf{A}(\textbf{x} - \textbf{d}) + {\textbf{b}^T}(\textbf{x} - \textbf{d}) + c \\ &= {\textbf{x}^T} \textbf{Ax} + {(\textbf{b} - 2\textbf{Ad})^T} \textbf{x} + {\textbf{d}^T} \textbf{Ad} - {\textbf{b}^T} \textbf{d} + c \\ &= {\textbf{x}^{T}} \textbf{A}^{\prime} \textbf{x} + {\textbf{b}^{{\prime}T}} \textbf{x} + c^{\prime} \end{aligned}$$
The optical flow method assumes that the brightness of corresponding points in the two images does not change; thus we have:
$$\textbf{A}^{\prime} = \textbf{A}. $$
$$\textbf{b}^{\prime} = \textbf{b} - 2\textbf{Ad}. $$
$$c^{\prime} = {\textbf{d}^T}\textbf{Ad} - {\textbf{b}^T}\textbf{d} + c. $$
According to Eq. (4), the displacement $\textbf{d}$ can be solved as:
$$\textbf{d} ={-} \frac{1}{2}{\textbf{A}^{ - 1}}(\textbf{b}^{\prime} - \textbf{b}). $$
The above description is the basic idea of the Farnebäck optical flow algorithm. In practice, a weighted estimation over a neighborhood of the pixel of interest is performed to reduce noise and obtain a reliable result. Figure 1 shows two speckle images and the optical flow result between them. Since the algorithm calculates the displacement pixelwise, a dense optical flow that represents the displacement between two frames can be obtained even when the image size is very small, as shown in Fig. 1(c).

Fig. 1. Two speckle images and the optical flow field between them. (a) Former frame. (b) Later frame. (c) Optical flow.
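The displacement solve of Eq. (6) can be sketched in a few lines of NumPy. This is an illustrative reconstruction, not the authors' implementation: `poly_expand` and `global_shift` are hypothetical helper names, a single whole-image fit replaces Farnebäck's Gaussian-weighted neighborhood expansions, and a production system would typically call a library routine such as OpenCV's `calcOpticalFlowFarneback` instead.

```python
import numpy as np

def poly_expand(patch):
    """Fit f(x, y) ~ r1 + r2*x + r3*y + r4*x^2 + r5*y^2 + r6*x*y
    over the whole patch (Eq. (1)); return the matrix A and vector b."""
    h, w = patch.shape
    ys, xs = np.mgrid[0:h, 0:w]
    x = (xs - w / 2.0).ravel()
    y = (ys - h / 2.0).ravel()
    G = np.column_stack([np.ones_like(x), x, y, x * x, y * y, x * y])
    r, *_ = np.linalg.lstsq(G, patch.astype(float).ravel(), rcond=None)
    A = np.array([[r[3], r[5] / 2.0], [r[5] / 2.0, r[4]]])
    b = np.array([r[1], r[2]])
    return A, b

def global_shift(f0, f1):
    """Eq. (6): d = -1/2 * A^{-1} (b' - b), averaging A over both frames."""
    A0, b0 = poly_expand(f0)
    A1, b1 = poly_expand(f1)
    A = 0.5 * (A0 + A1)
    return -0.5 * np.linalg.solve(A, b1 - b0)
```

For an exactly quadratic pattern shifted by a sub-pixel amount, this recovers the shift to numerical precision, which illustrates why the method achieves sub-pixel accuracy on small 32 × 32 windows.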

2.2 Real-time signal processing

The flowchart of the proposed system's algorithm is shown in Fig. 2. Each step of the process is described below.

Fig. 2. Flowchart of the real-time signal processing algorithm of our system.

Capture Images: The frame rate of the camera directly determines the detectable frequency range of the laser microphone system. In order to reduce computational cost and improve transmission speed, we use a common USB 3.0 camera to capture the speckle images. The window size is set to 32 × 32 pixels. At this resolution, the camera can reach a frame rate of 2300 fps.

Calculate displacement vector of frames: After obtaining the frame sequence, the Farnebäck optical flow algorithm is adopted to obtain the motion between frames. As shown in Fig. 1(c), the dense optical flow algorithm computes the motion vector of every pixel between two images. Here the parameter “Window Size” of the algorithm is set to 32, so that the results of all pixels are approximately the same. The average of all vectors is taken as the global shift $\textbf{d}$ between two frames.

Calculate sampling value: After obtaining the displacement vector $\textbf{d} = (x, y)$ between adjacent frames, its magnitude $|\textbf{d} |$ can be easily obtained as:

$$|\textbf{d} |= \sqrt {{x^2} + {y^2}}. $$
The angle $\alpha $ of the global vector is expressed as:
$$\alpha = {\tan ^{ - 1}}\frac{y}{x}. $$
According to the speckle motion model, the speckle sequence shows nearly linear vibration when the object of interest vibrates. Figure 3 is a statistical histogram of the displacement angles between every two frames over 10,000 pictures. From Fig. 3 we can see that the angles are clearly distributed in two different intervals when the speckle sequence linearly reciprocates. Therefore, the displacement $|\textbf{d} |$ is accumulated with a sign determined by the direction angle of the displacement vector, yielding a sinusoidal waveform that represents the original signal.

Fig. 3. Angle statistical histogram of 10 thousand vectors.
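The accumulation step can be sketched as follows. This is our reading of the method rather than the authors' code: `accumulate_signal` is a hypothetical name, and the estimate of the vibration axis from the mean direction (mod π) is an assumed heuristic for separating the two angle clusters of Fig. 3.

```python
import numpy as np

def accumulate_signal(flows, axis_angle=None):
    """Accumulate per-frame displacement magnitudes, signed by which of
    the two angle intervals (Fig. 3) each displacement falls into."""
    flows = np.asarray(flows, dtype=float)
    mag = np.hypot(flows[:, 0], flows[:, 1])    # |d| of Eq. (7)
    ang = np.arctan2(flows[:, 1], flows[:, 0])  # alpha of Eq. (8)
    if axis_angle is None:
        # assumed heuristic: dominant vibration axis, mean direction mod pi
        axis_angle = 0.5 * np.arctan2(np.sin(2 * ang).mean(),
                                      np.cos(2 * ang).mean())
    sign = np.where(np.cos(ang - axis_angle) >= 0, 1.0, -1.0)
    return np.cumsum(sign * mag)                # accumulated waveform
```

For a sequence that reciprocates along one line, displacements in one half-plane add to the waveform and displacements in the other subtract, producing the sinusoid described in the text.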

Fix accumulated deviation: Because of the noise in the image sequence, each displacement calculation brings a tiny deviation, and the high sampling rate of the system causes the deviations to accumulate very quickly. Let the deviation be ${e_i}$, the displacement between frame $i$ and frame $i + 1$ be ${|\textbf{d} |_i}$, and the accumulated displacement be ${s_i}$, which can be expressed as:

$${s_i} = \sum\limits_{j = 1}^{i - 1} {{{|\textbf{d} |}_j}} + \sum\limits_{j = 1}^{i - 1} {{e_j}}$$
Take a single-frequency audio signal as an example. Ideally, the accumulated result $\sum\nolimits_{j = 1}^{i - 1} {{{|\textbf{d} |}_j}}$ shows a sinusoidal waveform. However, due to the accumulated deviation $\sum\nolimits_{j = 1}^{i - 1} {{e_j}}$, the regenerated waveform constantly drifts. Figure 4 shows the waveform of a regenerated 50 Hz audio signal. As shown by the black line, the regenerated sinusoidal waveform drifts drastically in only 10 seconds.

Fig. 4. Waveform before and after fixing of accumulated drift.

Fixing the drift problem is crucial to the system. In our system, we always take the latest 100 points, in other words the sampling data of the latest 0.1 s, as samples to estimate the real-time drift slope $k$. Every time ${s_i}$ is obtained, the data from ${s_{i - 100}}$ to ${s_{i - 1}}$ are used to calculate the drift slope $k$, which corrects the drift and yields the fixed data ${S_i}$:

$${S_i} = {s_i} - k \times i. $$
The fixed waveform is shown by the red line in Fig. 4. From the result we can see that although the accumulated deviation causes drift, the drift can be corrected and a flat waveform can be obtained with the denoising algorithm. Moreover, real-time drift correction makes it possible to detect the audio signal from a moving sound source, which will be illustrated in a later section.
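A minimal sketch of the sliding-window drift correction follows. It assumes a least-squares estimate of the slope $k$ over the trailing 100 samples (the paper does not specify how $k$ is fitted), and `fix_drift` is a hypothetical name; samples with too little history are passed through unchanged.

```python
import numpy as np

def fix_drift(s, window=100):
    """Subtract linear drift: S_i = s_i - k * i, with the slope k
    re-estimated from the trailing `window` samples s_{i-window}..s_{i-1}."""
    s = np.asarray(s, dtype=float)
    S = np.empty_like(s)
    for i in range(len(s)):
        lo = max(0, i - window)
        if i - lo < 2:
            S[i] = s[i]  # not enough history yet; pass through
            continue
        t = np.arange(lo, i)
        k = np.polyfit(t, s[lo:i], 1)[0]  # least-squares drift slope k
        S[i] = s[i] - k * i
    return S
```

On a purely linear ramp the correction removes the drift entirely, leaving a flat waveform as in the red line of Fig. 4.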

Uniform sampling estimation: With real-time correction, the drift problem can be solved. However, because the time consumed by capturing an image and calculating the optical flow is not exactly equal for each sample, the sampling rate is not uniform. This leads to noise if the non-uniform sample data are replayed directly.

To deal with this problem, an estimation algorithm is proposed to obtain uniform sampling values. Since the sampling rate is high, the variation between two adjacent sample points can be approximated as a linear function. As shown in Fig. 5, every time two sample values ${S_i}$ and ${S_{i + 1}}$ are obtained at times ${t_i}$ and ${t_{i + 1}}$, we estimate the sample values ${S_{i\_u}}$ corresponding to all times ${T_i}$ in the interval $[{{t_i},{t_{i + 1}}} ]$, where ${T_i}$ lies on a uniform grid with an interval of 1 ms. In this way, uniform sampling values ${S_{i\_u}}$ are obtained.

Fig. 5. Part of the waveform of uniform sampling data estimation.
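The linear-interpolation step can be sketched with NumPy's `interp` (`resample_uniform` is a hypothetical helper; the 1 ms grid interval follows the text):

```python
import numpy as np

def resample_uniform(t, s, dt=1e-3):
    """Linearly interpolate non-uniform samples (t_i, S_i) onto a
    uniform time grid T with interval dt (1 ms)."""
    t = np.asarray(t, dtype=float)
    s = np.asarray(s, dtype=float)
    T = np.arange(t[0], t[-1], dt)
    return T, np.interp(T, t, s)
```

Because each pair of adjacent samples is treated as a line segment, the approximation error stays small as long as the raw sampling interval is much shorter than the signal period.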

The above describes the processing of each sample in the system. For our system, it takes about 1 ms from step I to step VI to obtain one sample. According to the Nyquist sampling theorem, this means that audio signals below 500 Hz can be regenerated in real time.

3. Experimental results

3.1 Single frequency test

In the first experiment, we tried to regenerate a single-frequency audio signal with our system. The setup of our laser microphone system is shown in Fig. 6, and the schematic of the whole system is shown in Fig. 7. An expanded laser beam with an output power of 100 mW at a wavelength of 650 nm illuminates the membrane of the speaker. A Point Grey GS3-U3-32S4C-C camera with an $f = -25{\textrm{mm}}$ lens is used to capture the speckle images. As mentioned above, the image resolution is set to 32 × 32 pixels, and the frame rate of the camera is 2300 fps. The camera is connected to a desktop computer controlled by Python code. Both the laser and the camera are positioned around 1 m away from the speaker.

Fig. 6. Experiment scenario of the proposed laser microphone system.

Fig. 7. Schematic diagram of the proposed laser microphone system.

Audio signals with frequencies of 50 Hz, 100 Hz, 150 Hz and 200 Hz are tested respectively. The waveforms of the regenerated audio signals are shown in Fig. 8, and Fig. 9 shows the spectrum after the Fourier transform of each result. From the results we can see that the information extracted from the speckle motion correctly represents the frequency of the signal source. Especially in the low-frequency region, the regenerated waveforms are very clear. As the frequency increases, the quality of the regenerated audio signal degrades. This is mainly caused by the limited sampling rate of the camera. It is foreseeable that if a higher-speed camera were employed, the system would be able to regenerate audio signals of higher frequencies.

Fig. 8. Regenerated waveform of different frequency audio signal. (a) 50 Hz audio signal. (b) 100 Hz audio signal. (c) 150 Hz audio signal. (d) 200 Hz audio signal.

Fig. 9. Frequency domain diagram of each result. (a) 50 Hz audio signal. (b) 100 Hz audio signal. (c) 150 Hz audio signal. (d) 200 Hz audio signal.

In the next experiment, we show that music can also be regenerated in real time with the proposed system. We used MATLAB to edit and generate the first forty seconds of “Moonlight Sonata No. 14” and played it. Figure 10 shows the spectrogram of the created music, and Fig. 11 shows the spectrogram of the regenerated music. The experiment shows that the music can be regenerated with high quality in real time by the laser speckle and the proposed algorithm. For reference, the audio files of both the original and regenerated music are provided as the result of this experiment.

Fig. 10. Spectrogram of the original audio signal (see also Visualization 1).

Fig. 11. Spectrogram of the regenerated audio signal (see also Visualization 2).

3.2 Effect of amplitude and defocusing on the result

For the proposed laser speckle detection system, it was found that the amplitude of the sound source poses a challenge to signal regeneration. An increasing amplitude of object vibration causes a larger displacement of the speckle, which in turn decreases the correlation between adjacent frames. When the displacement between adjacent frames is large enough, it cannot be correctly calculated because the correlation between the two frames is too small. As mentioned above, the image size is set to 32 × 32 pixels in our system. The small window size increases the sampling speed of the system, but it also weakens the ability to observe large speckle motion.

Here the amplitude of the audio signal is gradually increased, and the performance of the system is investigated. A 50 Hz audio signal is played by the speaker; its amplitude is adjusted via the computer volume to 48.3 dB, 54.3 dB, 57.9 dB and 61.0 dB respectively. Correspondingly, the signal-to-noise ratio (SNR) of the results is 28.76 dB, 31.01 dB, 16.83 dB, and 1.38 dB. Figure 12 shows part of the regenerated waveforms under different amplitudes. The result shows that it is easier to recover a clear sinusoidal waveform when the speckle motion is small. However, as the speckle motion becomes larger, the recovered waveform is gradually distorted and the SNR of the regenerated signal decreases.

Fig. 12. Regenerated audio signal with different amplitude. (a) Sound volume is 48.3 dB. (b) Sound volume is 54.3 dB. (c) Sound volume is 57.9 dB. (d) Sound volume is 61.0 dB.
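The SNR figures quoted in this section can be reproduced conceptually with a narrow-band spectral estimate. The paper does not state its exact SNR definition, so the following is one reasonable choice (`snr_db` is a hypothetical name, and the 2 Hz signal band is an assumption): power within a narrow band around the tone frequency versus power everywhere else.

```python
import numpy as np

def snr_db(x, fs, f0, band=2.0):
    """Estimate the SNR (dB) of a regenerated tone at frequency f0:
    spectral power within `band` Hz of f0 vs. power at all other bins."""
    x = np.asarray(x, dtype=float)
    X = np.abs(np.fft.rfft(x - x.mean())) ** 2  # one-sided power spectrum
    f = np.fft.rfftfreq(len(x), 1.0 / fs)
    near = np.abs(f - f0) <= band
    return 10.0 * np.log10(X[near].sum() / X[~near].sum())
```

For a 50 Hz tone with a weak interfering component 100 times smaller in amplitude, this returns approximately 40 dB, matching the usual power-ratio definition.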

One way to deal with this problem is to use a higher-speed camera to obtain a denser sequence of sampled images so that the displacement between every two frames is not too large. Alternatively, adjusting the defocusing amount of the imaging system can also solve this problem. When the camera focuses on the speckle in the near field of the investigated object (i.e., location ① in Fig. 6), the amount of defocus is small. In this situation, the speckles overlap on the image, and the speckle motion is not sensitive to the object motion. Conversely, when the camera focuses on the speckle in the far field (i.e., location ② in Fig. 6), the amount of defocus is large. In this situation, the bright and dark speckles are clearly distributed on the image, and the speckle motion is sensitive to the object motion.

In the next experiment, the distance between the camera and the object is set to 1 m, and the amount of camera defocus is set to 0 (in focus), 0.3 m, 0.5 m, 0.6 m, 0.7 m, 0.75 m and 0.85 m respectively. Figure 13 shows the images captured under different amounts of camera defocus. In our system, the wavelength of the laser source is 650 nm, and a color image sensor is used to capture the speckle pattern. Thus, the captured image shows a red grain pattern, as shown in Fig. 13(g). When the imaging system is focused, the light intensity received by a single pixel becomes stronger. Because the color sensor uses a four-channel Bayer pattern (red, green, green, blue), some pixels of the image appear colorful. The speaker plays a 50 Hz sine wave at different amplitudes (48.3 dB, 54.3 dB, 57.9 dB and 61.0 dB respectively). The SNR of the results in the different situations is shown in Fig. 14.

Fig. 13. Speckle image captured under different amount of camera defocusing $L$. (a) $L = 0$. (b) $L = 0.3m$. (c) $L = 0.5m$. (d) $L = 0.6m$. (e) $L = 0.7m$. (f) $L = 0.75m$. (g) $L = 0.85m$.

Fig. 14. SNR of the result with different amount of defocusing under different amplitude of audio signal.

First, when the camera focuses on the object (the amount of defocus equals zero), the speckles overlap and form a featureless bright spot, as shown in Fig. 13(a). In this situation the SNR of the result is meaningless because the object vibration cannot be observed through speckle motion. Regarding the defocused situations, the SNR of the results always remains high (over 20 dB) when the amplitude of the audio signal is small. In the case of a large amplitude, reducing the amount of defocus makes the motion of the speckle smaller, which in turn improves the result. For instance, at the large amplitude of 61.0 dB, if the camera is focused on the speckle field 0.5 m away from the object, the SNR of the result reaches its optimal value (30.30 dB). Figure 15 shows the regenerated waveform in this situation. Compared with Fig. 12(d), the distortion of the result is removed owing to the adjustment of the camera defocus.

Fig. 15. Regenerated waveform with 0.5 m camera defocusing under the amplitude of 61.0 dB.

3.3 Detection of moving sound source

In actual situations, the sound source usually cannot maintain absolute stillness. For example, when a person is talking, the body shows slight movement. Therefore, the detection of a moving sound source is investigated. First, we explain the speckle motion model. The six-degree-of-freedom spatial motion of an object can be divided into three categories: transverse, axial, and tilt. According to previous research [16], transverse and tilt motion cause a two-dimensional displacement of the captured speckle pattern, while axial motion causes a scaling variation of the captured speckle pattern, as shown in Fig. 16.

Fig. 16. Corresponding speckle motion caused by object motion.

Therefore, when the sound source undergoes transverse or tilt motion, the motion of the captured speckle consists of two parts, sinusoidal vibration and translational motion, and the calculated waveform will be a sine wave with drift. The drift caused by object motion can be corrected in real time by the algorithm described in Section 2.

In the next experiment, the speaker playing the 50 Hz audio signal is placed on a linear motor, which translates by 30 mm at a speed of 5 mm/s in the transverse direction and then returns to the original position. The black line in Fig. 17 shows the obtained waveform. The waveform clearly reflects the superposition of the sinusoidal vibration and the horizontal movement corresponding to the object motion. Meanwhile, the red line shows the waveform corrected by our proposed algorithms, which proves that our algorithm can output a clear sinusoidal waveform in real time without being affected by the motion of the object.

Fig. 17. Test result of transverse moving sound source.

Next, the speaker is placed on a rotation motor, which rotates by 5° at a speed of 0.5°/s and then returns to the original position. The black line in Fig. 18 shows the obtained waveform, and the red line shows the corrected waveform. The result also proves that our system can continuously output a clear audio signal during the tilt motion of the sound source.

Fig. 18. Test result of tilt moving sound source.

Finally, the axial motion is investigated. The motor translates by 30 mm at a speed of 5 mm/s in the axial (z) direction and then returns to the original position. The black line in Fig. 19 shows the obtained waveform. Different from the other two motions, axial motion has little effect on the speckle motion in the defocused situation. It can be seen from Fig. 19 that the waveform drifts only slightly. This drift can also be corrected with our algorithm, and the system can output a clear waveform continuously.

Fig. 19. Test result of axial moving sound source.

4. Conclusion

In this paper, a laser-speckle-based sound detection system has been proposed. In the proposed system, a laser speckle imaging approach is adopted to detect the vibration of the sound source, and an optical flow method, together with denoising algorithms proposed by the authors, is employed to achieve high-accuracy, real-time signal processing. The main advantages of the proposed system are real-time, high-quality audio signal regeneration and the ability to regenerate the audio signal of a moving sound source. These contributions broaden the potential applications of this technology, for example as a laser microphone. The experimental results prove that the proposed method is an efficient way to regenerate audio signals under various conditions.

Currently, the effective real-time sampling rate of the system is around 1 kHz, and the results show that it performs well in the low-frequency region. In the future, there is still room for further improvement of the system sampling rate. Faster imaging sensors and optimization of the algorithm can raise the sampling rate of the system so that human speech can be regenerated with this system.

Funding

Keio University.

Disclosures

The authors declare that there are no conflicts of interest related to this article.

References

1. M. Campbell, J. A. Cosgrove, C. A. Greated, S. Jack, and D. Rockliff, “Review of LDA and PIV applied to the measurement of sound and acoustic streaming,” Opt. Laser Technol. 32(7-8), 629–639 (2000). [CrossRef]  

2. Z. Christian, A. Brutti, and P. Svaizer, “Acoustic based surveillance system for intrusion detection,” in Proceedings of International Conference on Advanced Video and Signal Based Surveillance (IEEE, 2009), pp. 314–319.

3. J. W. Goodman, Speckle phenomena in optics: theory and applications (Roberts and Company, 2007).

4. B. M. Smith, P. Desai, V. Agarwal, and M. Gupta, “CoLux: Multi-object 3d micro-motion analysis using speckle imaging,” ACM Trans. Graph. (TOG) 36(4), 1–12 (2017). [CrossRef]  

5. B. M. Smith, M. O’Toole, and M. Gupta, “ Tracking multiple objects outside the line of sight using speckle imaging,” in Proceedings of Conference on Computer Vision and Pattern Recognition (IEEE, 2018), pp. 6258–6266.

6. O. Matoba, H. Inokuchi, K. Nitta, and Y. Awatsuji, “Optical voice recorder by off-axis digital holography,” Opt. Lett. 39(22), 6549–6552 (2014). [CrossRef]  

7. K. Ishikawa, R. Tanigawa, K. Yatabe, Y. Oikawa, T. Onuma, and H. Niwa, “Simultaneous imaging of flow and sound using high-speed parallel phase-shifting interferometry,” Opt. Lett. 43(5), 991–994 (2018). [CrossRef]  

8. Z. Zalevsky, Y. Beiderman, I. Margalit, S. Gingold, M. Teicher, V. Mico, and J. Garcia, “Simultaneous remote extraction of multiple speech sources and heart beats from secondary speckles pattern,” Opt. Express 17(24), 21566–21580 (2009). [CrossRef]  

9. G. Zhu, X. Yao, P. Qiu, W. Mahmood, W. Yu, Z. Sun, G. Zhai, and Q. Zhao, “Sound recovery via intensity variations of speckle pattern pixels selected with variance-based method,” Opt. Eng. 57(2), 1 (2018). [CrossRef]  

10. L. Li, F. A. Gubarev, M. S. Klenovskii, and A. I. Bloshkina, “Vibration measurement by means of digital speckle correlation,” in Proceedings of International Siberian Conference on Control and Communications (IEEE, 2016), pp. 1–5.

11. Z. Chen, C. Wang, C. Huang, H. Fu, H. Luo, and H. Wang, “Audio signal reconstruction based on adaptively selected seed points from laser speckle images,” Opt. Commun. 331, 6–13 (2014). [CrossRef]  

12. E. Archbold, J. M. Burch, and A. E. Ennos, “Recording of in-plane surface displacement by double-exposure speckle photography,” Opt. Acta 17(12), 883–898 (1970). [CrossRef]  

13. D. Amodio, G. B. Broggiato, F. Campana, and G. M. Newaz, “Digital speckle correlation for strain measurement by image analysis,” Exp. Mech. 43(4), 396–402 (2003). [CrossRef]  

14. T. O. H. Charrett, K. Kotowski, and R. P. Tatam, “Speckle tracking approaches in speckle sensing,” Proc. SPIE 10231, 102310L (2017). [CrossRef]  

15. G. Farnebäck, “Two-frame motion estimation based on polynomial expansion,” in Proceedings of Scandinavian conference on Image analysis. (Springer, 2003), pp 363–370.

16. J. Kensei, M. Gupta, and S. K. Nayar, “Spedo: 6 dof ego-motion sensor using speckle defocus imaging,” in Proceedings of the International Conference on Computer Vision (IEEE, 2015), pp. 4319–4327.

References

  • View by:
  • |
  • |
  • |

  1. M. Campbell, J. A. Cosgrove, C. A. Greated, S. Jack, and D. Rockliff, “Review of LDA and PIV applied to the measurement of sound and acoustic streaming,” Opt. Laser Technol. 32(7-8), 629–639 (2000).
    [Crossref]
  2. Z. Christian, A. Brutti, and P. Svaizer, “Acoustic based surveillance system for intrusion detection,” in Proceedings of International Conference on Advanced Video and Signal Based Surveillance (IEEE, 2009), pp. 314–319.
  3. J. W. Goodman, Speckle phenomena in optics: theory and applications (Roberts and Company, 2007).
  4. B. M. Smith, P. Desai, V. Agarwal, and M. Gupta, “CoLux: Multi-object 3D micro-motion analysis using speckle imaging,” ACM Trans. Graph. 36(4), 1–12 (2017).
  5. B. M. Smith, M. O’Toole, and M. Gupta, “Tracking multiple objects outside the line of sight using speckle imaging,” in Proceedings of Conference on Computer Vision and Pattern Recognition (IEEE, 2018), pp. 6258–6266.
  6. O. Matoba, H. Inokuchi, K. Nitta, and Y. Awatsuji, “Optical voice recorder by off-axis digital holography,” Opt. Lett. 39(22), 6549–6552 (2014).
  7. K. Ishikawa, R. Tanigawa, K. Yatabe, Y. Oikawa, T. Onuma, and H. Niwa, “Simultaneous imaging of flow and sound using high-speed parallel phase-shifting interferometry,” Opt. Lett. 43(5), 991–994 (2018).
  8. Z. Zalevsky, Y. Beiderman, I. Margalit, S. Gingold, M. Teicher, V. Mico, and J. Garcia, “Simultaneous remote extraction of multiple speech sources and heart beats from secondary speckles pattern,” Opt. Express 17(24), 21566–21580 (2009).
  9. G. Zhu, X. Yao, P. Qiu, W. Mahmood, W. Yu, Z. Sun, G. Zhai, and Q. Zhao, “Sound recovery via intensity variations of speckle pattern pixels selected with variance-based method,” Opt. Eng. 57(2), 1 (2018).
  10. L. Li, F. A. Gubarev, M. S. Klenovskii, and A. I. Bloshkina, “Vibration measurement by means of digital speckle correlation,” in Proceedings of International Siberian Conference on Control and Communications (IEEE, 2016), pp. 1–5.
  11. Z. Chen, C. Wang, C. Huang, H. Fu, H. Luo, and H. Wang, “Audio signal reconstruction based on adaptively selected seed points from laser speckle images,” Opt. Commun. 331, 6–13 (2014).
  12. E. Archbold, J. M. Burch, and A. E. Ennos, “Recording of in-plane surface displacement by double-exposure speckle photography,” Opt. Acta 17(12), 883–898 (1970).
  13. D. Amodio, G. B. Broggiato, F. Campana, and G. M. Newaz, “Digital speckle correlation for strain measurement by image analysis,” Exp. Mech. 43(4), 396–402 (2003).
  14. T. O. H. Charrett, K. Kotowski, and R. P. Tatam, “Speckle tracking approaches in speckle sensing,” Proc. SPIE 10231, 102310L (2017).
  15. G. Farnebäck, “Two-frame motion estimation based on polynomial expansion,” in Proceedings of Scandinavian Conference on Image Analysis (Springer, 2003), pp. 363–370.
  16. J. Kensei, M. Gupta, and S. K. Nayar, “SpeDo: 6 DOF ego-motion sensor using speckle defocus imaging,” in Proceedings of the International Conference on Computer Vision (IEEE, 2015), pp. 4319–4327.


Supplementary Material (2)

Visualization 1: Original audio file.
Visualization 2: Regenerated audio file.



Figures (19)

Fig. 1. Two speckle images and the optical flow field between them. (a) Former frame. (b) Later frame. (c) Optical flow.
Fig. 2. Flowchart of the real-time signal processing algorithm of our system.
Fig. 3. Angle histogram of 10,000 vectors.
Fig. 4. Waveform before and after correction of accumulated drift.
Fig. 5. Part of the waveform after uniform-sampling data estimation.
Fig. 6. Experiment scenario of the proposed laser microphone system.
Fig. 7. Schematic diagram of the proposed laser microphone system.
Fig. 8. Regenerated waveforms of audio signals of different frequencies. (a) 50 Hz audio signal. (b) 100 Hz audio signal. (c) 150 Hz audio signal. (d) 200 Hz audio signal.
Fig. 9. Frequency-domain diagram of each result. (a) 50 Hz audio signal. (b) 100 Hz audio signal. (c) 150 Hz audio signal. (d) 200 Hz audio signal.
Fig. 10. Spectrogram of the original audio signal (see also Visualization 1).
Fig. 11. Spectrogram of the regenerated audio signal (see also Visualization 2).
Fig. 12. Regenerated audio signals with different amplitudes. (a) Sound volume is 48.3 dB. (b) Sound volume is 54.3 dB. (c) Sound volume is 57.9 dB. (d) Sound volume is 61.0 dB.
Fig. 13. Speckle images captured under different amounts of camera defocusing $L$. (a) $L = 0$. (b) $L = 0.3$ m. (c) $L = 0.5$ m. (d) $L = 0.6$ m. (e) $L = 0.7$ m. (f) $L = 0.75$ m. (g) $L = 0.85$ m.
Fig. 14. SNR of the results with different amounts of defocusing under different amplitudes of the audio signal.
Fig. 15. Regenerated waveform with 0.5 m camera defocusing at an amplitude of 61.0 dB.
Fig. 16. Corresponding speckle motion caused by object motion.
Fig. 17. Test result for a transversely moving sound source.
Fig. 18. Test result for a tilt-moving sound source.
Fig. 19. Test result for an axially moving sound source.

Equations (10)

$$f(\mathbf{x}) = \mathbf{x}^{T} A \mathbf{x} + \mathbf{b}^{T} \mathbf{x} + c.$$
$$\tilde{f}(\mathbf{x}) = f(\mathbf{x} - \mathbf{d}) = (\mathbf{x} - \mathbf{d})^{T} A (\mathbf{x} - \mathbf{d}) + \mathbf{b}^{T} (\mathbf{x} - \mathbf{d}) + c = \mathbf{x}^{T} A \mathbf{x} + (\mathbf{b} - 2A\mathbf{d})^{T} \mathbf{x} + \mathbf{d}^{T} A \mathbf{d} - \mathbf{b}^{T} \mathbf{d} + c = \mathbf{x}^{T} \tilde{A} \mathbf{x} + \tilde{\mathbf{b}}^{T} \mathbf{x} + \tilde{c}.$$
$$\tilde{A} = A.$$
$$\tilde{\mathbf{b}} = \mathbf{b} - 2A\mathbf{d}.$$
$$\tilde{c} = \mathbf{d}^{T} A \mathbf{d} - \mathbf{b}^{T} \mathbf{d} + c.$$
$$\mathbf{d} = -\frac{1}{2} A^{-1} (\tilde{\mathbf{b}} - \mathbf{b}).$$
$$|\mathbf{d}| = \sqrt{x^{2} + y^{2}}.$$
$$\alpha = \tan^{-1} \frac{y}{x}.$$
$$s_{i} = \sum_{i'=1}^{i} |\mathbf{d}|_{i'} + \sum_{i'=1}^{i} e_{i'}.$$
$$S_{i} = s_{i} - k \times i.$$
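The relations above can be sketched numerically. The following minimal pure-Python illustration (function and variable names are ours, not from the paper) shows the scalar case of the polynomial-expansion displacement estimate $\mathbf{d} = -\frac{1}{2} A^{-1} (\tilde{\mathbf{b}} - \mathbf{b})$, the magnitude/angle conversion of a flow vector, and the linear drift removal $S_i = s_i - k \times i$:

```python
import math

def displacement(A, b, b_tilde):
    # Scalar (1-D) case of d = -1/2 * A^{-1} * (b_tilde - b):
    # the shift between two quadratic fits f and f~.
    return -0.5 * (b_tilde - b) / A

def polar(dx, dy):
    # |d| = sqrt(x^2 + y^2) and alpha = atan2(y, x) for a 2-D flow vector.
    return math.hypot(dx, dy), math.atan2(dy, dx)

def remove_drift(s, k):
    # S_i = s_i - k * i: subtract the linear drift that the accumulated
    # error terms e_i contribute to the summed displacement signal.
    return [s_i - k * i for i, s_i in enumerate(s)]

# Sanity check: shifting f(x) = A x^2 + b x + c by d gives
# b_tilde = b - 2 A d, so the estimator recovers d in the noise-free case.
A, b, d_true = 2.0, 1.0, 0.3
b_tilde = b - 2.0 * A * d_true
assert abs(displacement(A, b, b_tilde) - d_true) < 1e-9
```

In the 2-D case `A` becomes a 2×2 matrix and the division becomes a matrix inverse, but the algebra is identical; the drift slope `k` would in practice be fitted to the accumulated signal rather than given.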
