Optically imaging and tracking moving objects through scattering media is a challenge with important applications. However, previous works suffer from time-consuming recovery processes, limits on object complexity, or loss of object information. Here we present a method based on the speckle rotation decorrelation property. The rotational speckles detected at short intervals are uncorrelated and are multiplexed in a single-shot camera image. Object frames of the video are recovered by cross-correlation deconvolution of the camera image with a computationally rotated point spread function. The near real-time recovery provides sharp object image frames with accurate object relative positions, exact movement velocity, and continuous motion trails. This multiplexing technique has important implications for a wide range of real-world imaging scenarios.
© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement
Optical imaging is a prime means of collecting information about the inhomogeneity of complex samples such as biological tissues, atmospheric turbulence, and fog. Unfortunately, these samples scatter the light field in many directions and degrade the image quality. A host of techniques extract information from multiply scattered light, such as wavefront shaping [2–5], optical phase conjugation [6,7], and transmission matrix measurement [8,9]. Some recent works based on the inherent angular correlations in the speckle patterns, namely 'memory effects', provide novel imaging solutions. Based on this, the speckle correlation method retrieves the Fourier amplitude of the object from the autocorrelation of the detected speckles and recovers the lost Fourier phase of the object via an iterative phase-retrieval algorithm [11–22]. This ingenious method allows imaging through highly scattering media without detailed knowledge of the scattering sample, by ignoring the point spread function (PSF). However, it is limited by object complexity and by time-consuming iteration. Alternatively, as long as the PSF is measured, the object information can be reconstructed by deconvolution [23–28]. Notably, the deconvolution method can rapidly recover objects with exact relative positions only if the PSFs are correlated. Interestingly, if the PSFs are uncorrelated, each PSF deconvolves only its specific information and ignores the rest, acting like a filter. Recently, several works based on decorrelation have been proposed [18,27,28]. One of them exploits the spectral decorrelation property of PSFs and recovers multispectral images with the corresponding spectral PSFs from one monochromatic speckle. Another utilizes the decorrelation property of PSFs in different memory effect ranges and enlarges the imaging range. The last one generates uncorrelated PSFs with a coded aperture and provides a single-shot video of objects through scattering media.
The first two works realize fast, sharp imaging but require the PSFs to be measured more than once, since the PSFs are uncorrelated. The last work suffers from long recovery times, since the uncorrelated speckles are selected with a dictionary-learning approach and the images are retrieved with an iterative phase-retrieval algorithm, both of which are time-consuming.
Imaging and tracking moving objects through scattering media is significant in many applications. Four goals must be achieved: sharp imaging of the objects, exact localization of the object position, continuous recording of the motion trail, and fast or real-time recovery. Recently, several methods have been demonstrated to track moving objects through scattering media [18,29–32]. However, these methods are either incapable of imaging and of quantitatively measuring the trajectory, or suffer from time-consuming iterative algorithms and limits on the complexity of the object shape [18,30,31].
Here, we present a method for single-shot video of moving objects through scattering media that provides sharp image frames with accurate object relative positions, exact movement velocity, and continuous motion trails in near real time. This method is based on the speckle rotation decorrelation property, that is, speckle rotation around the principal optic axis gives rise to fast decorrelation of speckles. The rotational speckles detected at intervals are uncorrelated; therefore, they can be multiplexed in a single-shot camera image. Only a single PSF at the initial position needs to be measured, and the PSFs at other angular positions can be obtained computationally by rotation. Object frames of the video are recovered by cross-correlation deconvolution (CD) of the camera image with the computationally rotated PSFs. The processes of selecting the specific speckle and retrieving the object information are carried out in a single step, so the object frames can be reconstructed in near real time.
2. Speckle rotation decorrelation
Speckle motion has long been considered a difficulty to overcome, since speckle rotation and speckle shift cause decorrelation and thus decrease the imaging quality [18,24,30,33,34]. In particular, speckle rotation gives rise to decorrelation more easily than speckle shift. Theoretically, a simple shift of a real function does not affect the maximum value of the cross-correlation between the functions before and after the shift, but only changes its position. Nonetheless, in the case of imaging through scattering media, a change of the angle between the incident light and the scattering medium not only gives rise to a speckle shift, but also slowly changes the structure of the speckle, which finally results in decorrelation. Since the structural change of the speckle is slow, the decorrelation is gradual, giving rise to a 'memory effect' range. In contrast, the rotation of an image around its center greatly affects the cross-correlation between the images before and after the rotation, and the cross-correlation falls off rapidly if the image has no rotational symmetry. In imaging through scattering media, speckles are generally random and asymmetric. Therefore, speckle rotation results in extremely fast decorrelation.
To demonstrate this, a collimated beam of 532 nm coherent light with an 8 mm beam diameter is incident on a scattering sample (Newport, 10DKIT-C3-40°) and emerges as speckle. Rotational speckles are produced by rotating the scattering sample around the principal optic axis (inset in Fig. 1(a)) and are recorded at intervals by a 2/3” camera placed 80 mm in front of the scattering medium. The red solid line in Fig. 1(a) presents the maximum value of the intensity cross-correlation between the rotational speckles and the reference speckle detected at a fixed angle. Rotating the camera around the principal optic axis also generates rotational speckles, and the corresponding cross-correlations are plotted as the blue dotted line in Fig. 1(a). The two lines overlap because the rotations of the scattering sample and of the camera are relative motions. The correlation drops to 1/2 when the rotational angle is around 0.12 degrees, which we call the rotational decorrelation angle (RDA). This indicates that a slight speckle rotation causes decorrelation. We call this property speckle rotation decorrelation (SRD).
As a contrast, a sequence of speckles is obtained by rotating the scattering sample around the vertical axis (inset in Fig. 1(b)), which is the same way the memory effect angular range is measured. The speckle retains its structure and undergoes a slight shift for small rotation angles. Figure 1(b) shows the maximum value of the cross-correlation between the shifted speckles and the reference speckle taken at the initial position. The measured curve is fitted with the theoretical memory effect curve given by Feng et al. The corresponding decorrelation angle in Fig. 1(b), i.e., the memory effect angle, is about 3.6 degrees, almost 30 times the RDA in Fig. 1(a) under the same conditions. Therefore, speckle rotation causes decorrelation much more easily than speckle shift.
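The contrast between rotation and shift can be illustrated numerically. The toy script below is not the published measurement: the speckle synthesis, sizes, and angles are arbitrary choices of ours. It builds a synthetic speckle pattern and compares the peak normalized cross-correlation after a rotation and after a lateral shift:

```python
# Toy numerical illustration (not the published measurement): compare how the
# peak normalized cross-correlation of a synthetic speckle pattern decays
# under rotation versus lateral shift. Sizes and angles are arbitrary.
import numpy as np
from scipy.ndimage import gaussian_filter, rotate

rng = np.random.default_rng(0)

# Speckle-like intensity: intensity of low-pass-filtered complex Gaussian noise.
re = gaussian_filter(rng.normal(size=(512, 512)), sigma=2)
im = gaussian_filter(rng.normal(size=(512, 512)), sigma=2)
speckle = re**2 + im**2

def peak_corr(a, b):
    """Peak of the normalized circular cross-correlation, computed via FFT."""
    a = a - a.mean()
    b = b - b.mean()
    cc = np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))).real
    return cc.max() / np.sqrt((a**2).sum() * (b**2).sum())

# Rotation around the pattern centre decorrelates the speckle quickly...
rot = rotate(speckle, angle=5.0, reshape=False, order=1)
c_rot = peak_corr(speckle[128:-128, 128:-128], rot[128:-128, 128:-128])

# ...whereas a pure shift leaves the peak correlation essentially unchanged:
# only the position of the correlation peak moves.
shifted = np.roll(speckle, (5, 5), axis=(0, 1))
c_shift = peak_corr(speckle, shifted)

print(f"peak correlation after 5-degree rotation: {c_rot:.2f}")
print(f"peak correlation after 5-pixel shift:     {c_shift:.2f}")
```

The shifted pattern keeps a correlation peak near unity (it merely moves), while the rotated pattern retains correlation only in a small central region where the local displacement stays below the speckle grain size.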
It is worth mentioning that the value of the RDA caused by speckle rotation is not affected by lateral shifts of the camera position or by replacing the scattering sample. However, it increases slightly as the camera's sensing area is reduced or as the distance between the camera and the scattering sample is extended.
From a different perspective, the SRD property can be exploited for specific applications instead of being regarded as a difficulty. Here, we take advantage of SRD for single-shot video of moving objects hidden behind scattering layers. In this application, speckles detected at different times must not correlate with each other. The SRD is thus suitable and effective, since speckle rotation causes decorrelation quickly and easily.
3. Principle of single-shot video of a moving object
The principle of the experiments for SRD-based single-shot video of moving objects through scattering media is presented in Fig. 2. A dynamic object moving within the memory effect range is hidden behind a scattering medium and illuminated by a spatially incoherent, narrowband source. The scattered rotational speckles are produced by rotating the recording camera around the principal optic axis, as it is better in practical applications not to alter the scattering sample artificially. Since the object is located within the memory effect range, each point on the object generates a shift-invariant random speckle pattern on the camera, which can be regarded as the PSF of the imaging system in Fig. 2(a). Thus, the momentary camera image is the convolution of the PSF and the moving object at moment $t_1$, which can be expressed as:

$I_{t_1} = O_{t_1} * PSF_{\theta_1}$,  (1)

where $*$ denotes convolution, $O_{t_j}$ is the object at moment $t_j$, and $PSF_{\theta_j}$ is the PSF at the camera rotation angle $\theta_j$. During the single continuous exposure, the camera rotates and pauses at a sequence of angles $\theta_j$ with interval angles larger than the RDA, so the single-shot camera image is the superposition of uncorrelated rotational speckle patterns:

$I = \sum_j O_{t_j} * PSF_{\theta_j}$.  (2)

Two PSFs with a rotational interval angle smaller than the RDA are detected at nearly the same angle and are correlated, so their cross-correlation is a sharply peaked function. On the contrary, the correlation of two PSFs with an interval angle larger than the RDA rapidly vanishes. So we reach:

$PSF_{\theta_i} \star PSF_{\theta_j} \approx \begin{cases} \delta, & |\theta_i - \theta_j| < \theta_{RDA} \\ C, & |\theta_i - \theta_j| > \theta_{RDA} \end{cases}$  (3)

where $\star$ denotes cross-correlation, $\delta$ is a sharply peaked function, $C$ is a weak background term, and $\theta_{RDA}$ is the RDA. Combining Eq. (2) with Eq. (3), with the PSF generated by a point on the object plane at position $r_0$ and computationally rotated to the angle $\theta_j$, the cross-correlation between the computationally rotated PSF and the camera image $I$ yields:

$PSF_{\theta_j} \star I = \sum_k O_{t_k} * (PSF_{\theta_j} \star PSF_{\theta_k}) \approx O_{t_j} + C_b$,  (4)

where $C_b$ is the background noise contributed by the unmatched speckles. Thus each object frame is selected and recovered in a single CD step (Fig. 2(b)). Notably, the CD recovers images with higher quality than normal deconvolution: in normal deconvolution with a known PSF, the division by the PSF in the Fourier domain followed by the inverse Fourier transform amplifies the noise, which lowers the quality of the recovered images. Moreover, the corresponding time $t_j$ of each recovered frame can be calculated from the rotational angle $\theta_j$ of the PSF and the known rotation schedule of the camera; for uniform rotation at angular speed $\omega$,

$t_j = \theta_j / \omega$.  (5)

Thus, the motion trail of the moving object can be accurately obtained after retrieving the object image, corresponding time, and relative position of each frame. Meanwhile, the single-shot video of the moving object hidden behind the scattering medium is reconstructed.
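The multiplexing and CD selection principle can be sketched in a few lines of simulation. In the toy example below, synthetic speckle patterns stand in for the measured PSFs and point sources stand in for the object frames; all parameters are illustrative assumptions, not values from the experiment:

```python
# Toy simulation of the multiplexing principle: one "camera image" is the sum
# of two object frames, each convolved with an uncorrelated speckle PSF, and
# cross-correlating with one PSF selects the matching frame. Synthetic PSFs
# and point objects stand in for the measured data; all sizes are arbitrary.
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.signal import fftconvolve

rng = np.random.default_rng(1)
N = 256

def speckle_psf():
    # Positive random pattern with a sharp autocorrelation (idealized PSF).
    return gaussian_filter(rng.random((N, N)), sigma=1)

def cross_corr(img, psf):
    # Cross-correlation as convolution with the flipped, zero-mean PSF;
    # removing the mean suppresses the broad DC background of the correlation.
    k = psf - psf.mean()
    return fftconvolve(img, k[::-1, ::-1], mode='same')

# Two uncorrelated PSFs (in the experiment these come from camera rotation).
psf1, psf2 = speckle_psf(), speckle_psf()

# Two point-like object frames at different positions.
obj1 = np.zeros((N, N)); obj1[100, 100] = 1.0
obj2 = np.zeros((N, N)); obj2[140, 160] = 1.0

# Single-shot camera image: superposition of both speckle-encoded frames.
I = fftconvolve(obj1, psf1, mode='same') + fftconvolve(obj2, psf2, mode='same')

# Cross-correlation deconvolution with each PSF picks out its own frame.
p1 = np.unravel_index(np.argmax(cross_corr(I, psf1)), (N, N))
p2 = np.unravel_index(np.argmax(cross_corr(I, psf2)), (N, N))
print("recovered peak positions:", p1, p2)
print("relative displacement:", (p2[0] - p1[0], p2[1] - p1[1]))  # (40, 60)
```

The relative displacement of the two recovered peaks reproduces the true offset between the two object positions, which is the property used for locating and tracking.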
4. Experiments and results
4.1 Deconvolution of rotational speckles with angle-matching PSF
In this experiment, a stationary object "5" generated by a spatial light modulator (SLM, RealLight, RL-SLM-R1) is placed 472 mm behind the scattering medium (Thorlabs, DG100 × 100-120) and illuminated by a 532 nm spatially incoherent pseudothermal source. The imaging resolution of the system is 23.6 μm. The CMOS camera (Point Grey, GS3-U3-51S5M-C, 2448 × 2048 pixels, 3.45 μm pixel size), fixed on a motorized rotation stage (Zolix, MC600), is placed 80 mm in front of the scattering medium. The centers of the camera and the rotation stage are aligned to avoid displacement during rotation. Under these conditions, the RDA is around 0.12 degrees. Five speckle patterns are detected in sequence by rotating the camera from −1 to 1 degree in steps of 0.5 degrees. The PSF is detected at 0 degrees by substituting a point for the object on the SLM. The PSF is then cross-correlated with each speckle pattern detected at the different angles. As shown in Fig. 3(a), the object information is recovered via CD only at 0 degrees and is totally blurred at the other rotational angles of the camera. This again shows that the rotational decorrelation angle is so small that SRD is well suited for single-shot video of hidden objects.
In the experiment of single-shot video of hidden objects, multiple speckles detected at different angles are multiplexed. The unmatched speckles have little effect on the reconstructed images, leading only to a slight fall in PSNR. To prove this, two speckles are detected at 0 degrees (Speckle1) and 30 degrees (Speckle2) by rotating the camera, and the two speckles are then added together (Speckle1 + 2). The PSF detected at 0 degrees is cross-correlated with these three speckles in turn. Figure 3(b) shows that the object images can be reconstructed only when the rotational angles of the speckle and the PSF are matched. The right object image is almost the same as the left one, with about an 8% decrease in PSNR. This decrease is caused by the background noise from the unmatched Speckle2. The conditions in Fig. 3(c) are identical to those in Fig. 3(b) except that the PSF is computationally rotated to 30 degrees. Object images are recovered only with the angle-matching PSF (middle and right images in Fig. 3(c)). Since the speckle rotation is caused by rotating the camera, the object is also rotated in the view of the camera. Therefore, the objects recovered by deconvolution are rotated by 30 degrees as well (top-right insets in Fig. 3(c)). The final object images are reconstructed by a reverse rotation of the recovered objects by 30 degrees (Fig. 3(c)).
4.2 Single-shot video of moving objects
The system for SRD-based single-shot video of a hidden object is the same as in Section 4.1, except that the object is changed to a moving "5" with a height of about 650 μm on the SLM (Fig. 4(f)). The object translates from left to right within the memory effect range at a rate of 45.0 μm/s. The camera rotates at a speed of 20 degrees/s and pauses for 1.1 s at intervals of 5 degrees. A single camera image is shot, and the continuous exposure time is set to 20 s.
Each frame of the video is directly recovered by CD of the camera image with the computationally rotated PSF according to Eq. (4), without a time-consuming selection process. Notably, a single measurement of the PSF suffices to recover all frames, which makes the detection process simple and convenient. After the PSF intensity signal is computationally rotated, the edges of the PSF and the camera image are cut and the central 1449 × 1449 pixels are retained, so as to avoid the missing edge information caused by rotation. This operation increases the RDA to about 0.19 degrees. In our method, the number of pixels retained for computation is approximately 100 times that of the iterative phase-retrieval-based methods, while our reconstruction time is only about 1/40 of theirs on the same computer.
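The rotate-and-crop step can be sketched as follows. The array here is a random placeholder for a measured 2448 × 2048 camera frame, and the helper names `rotated_psf` and `center_crop` are our own:

```python
# Sketch of the rotate-and-crop step: the measured PSF is computationally
# rotated to a pause angle, then PSF and camera image are cropped to the
# central 1449 x 1449 pixels so that the empty corners introduced by rotation
# carry no information. The array below is a random placeholder for the
# measured 2448 x 2048 camera frame; the helper names are ours.
import numpy as np
from scipy.ndimage import rotate

def center_crop(img, size=1449):
    r0 = (img.shape[0] - size) // 2
    c0 = (img.shape[1] - size) // 2
    return img[r0:r0 + size, c0:c0 + size]

def rotated_psf(psf, angle_deg):
    # reshape=False keeps the array size; bilinear interpolation (order=1)
    # is adequate because a speckle grain spans several pixels.
    return center_crop(rotate(psf, angle=angle_deg, reshape=False, order=1))

psf = np.random.default_rng(2).random((2048, 2448))   # placeholder PSF frame
psf_30 = rotated_psf(psf, 30.0)
print(psf_30.shape)  # (1449, 1449)
```

The crop size of 1449 ≈ 2048/√2 is the largest centered square that stays inside the valid (non-padded) region of the 2048-pixel-tall frame for any rotation angle.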
Figures 4(a)-4(d) are frames 1, 4, 7, and 10 of the recovered video, which is composed of 11 frames. The playback speed of the video is set to about 3.5 times real time. As a comparison, when the camera is not rotated during detection, the recovered object image is blurred (Fig. 4(e)). The displacement between the retrieved objects in each pair of adjacent frames is 10.35 μm on the camera image plane. Therefore, under the system conditions mentioned in Section 4.1, the absolute translational velocity of the object is calculated to be 45.2 μm/s, in good agreement with the actual speed of 45.0 μm/s.
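The quoted velocity can be reproduced under two assumptions of ours that the text does not spell out: a geometric magnification equal to the ratio of the medium-to-camera and object-to-medium distances, and a frame interval equal to the pause time plus the rotation time between stops:

```python
# Back-of-the-envelope check of the quoted velocity. The magnification model
# (camera-to-medium distance over object-to-medium distance) and the frame
# interval breakdown (pause plus rotation time) are our assumptions.
d_camera_um = 10.35            # per-frame displacement on the camera plane (um)
M = 80.0 / 472.0               # magnification: camera 80 mm from the medium,
                               # object 472 mm behind it (Section 4.1)
d_object_um = d_camera_um / M  # per-frame displacement on the object plane

pause_s = 1.1                  # pause at each 5-degree stop
rotate_s = 5.0 / 20.0          # 5 degrees at 20 degrees/s between stops
frame_interval_s = pause_s + rotate_s

v = d_object_um / frame_interval_s
print(f"reconstructed velocity: {v:.1f} um/s")  # 45.2 um/s vs. actual 45.0 um/s
```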
4.3 Single-shot video of complicated dynamic objects
Firstly, the objects "Op", "ti", "cs", and "Optics" are generated on the SLM in sequence (Fig. 5(f)). The system conditions and the retrieval processes are the same as in Section 4.2. The recovered video contains 11 frames, and frames 3, 6, 8, and 11 are shown in Figs. 5(a)-5(d). The reconstructed dynamic objects are sharp. Secondly, our method is also effective for objects with various motion trails. To demonstrate this, a letter "O" generated on the SLM is cut into four parts that are shifted in four different directions (Fig. 5(l)). Figures 5(g)-5(j) are frames 1, 4, 7, and 10 of the 11-frame video. The objects' motion trails in every direction are clearly retrieved at every moment. As a comparison, the corresponding recovered objects are blurred when the camera is motionless during detection (Figs. 5(e) and 5(k)).
It is worth mentioning that, in our method, an object consisting of six letters in a single frame, such as "Optics" in Fig. 5(d), can be recovered in the same processing time as an object composed of a single letter, such as "O" in Fig. 5(g). Moreover, their imaging qualities are almost the same, with only a slightly lower PSNR in Fig. 5(d). By contrast, in the phase-retrieval-based speckle correlation method, the retrieval time of "Optics" is much longer than that of "O", and both take dozens of times longer than ours because of the time-consuming iteration. Besides, "Optics" is recovered with much lower imaging quality than "O", and sometimes it cannot be correctly recovered at all.
The frame number limit of a single-shot video is measured by calculating the PSNR of a series of videos with different frame numbers. As shown in Fig. 6, the PSNR decreases as the frame number increases. Since each frame is recovered from one of the uncorrelated speckles, the remaining uncorrelated speckles inherently cause background noise. The larger the frame number, the more uncorrelated speckles remain, and the lower the PSNR. Based on the trend of the curve in Fig. 6, the PSNR decreases to half of its maximum when the frame number increases to around 20 under our experimental conditions.
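This trade-off can be reproduced qualitatively in simulation. The sketch below (synthetic speckles and point objects; illustrative only, not the experimental measurement) multiplexes n frames and reports the peak-to-background ratio of one recovered frame, which falls as n grows:

```python
# Toy simulation of the frame-number trade-off: every extra multiplexed
# speckle adds uncorrelated background to each recovered frame, so the
# peak-to-background ratio falls as the frame count grows. Synthetic speckles
# and point objects; illustrative only.
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.signal import fftconvolve

rng = np.random.default_rng(3)
N = 256

def recovery_quality(n_frames):
    psfs = [gaussian_filter(rng.random((N, N)), sigma=1) for _ in range(n_frames)]
    I = np.zeros((N, N))
    for j, psf in enumerate(psfs):
        obj = np.zeros((N, N))
        obj[80 + 8 * j, 80 + 8 * j] = 1.0   # point object moving diagonally
        I += fftconvolve(obj, psf, mode='same')
    # Recover frame 0 by cross-correlation with its (zero-mean) PSF.
    k = psfs[0] - psfs[0].mean()
    rec = fftconvolve(I - I.mean(), k[::-1, ::-1], mode='same')
    return rec.max() / rec.std()            # peak-to-background ratio

q2, q10 = recovery_quality(2), recovery_quality(10)
print(f"peak/background: {q2:.1f} with 2 frames, {q10:.1f} with 10 frames")
```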
Several conditions in the camera rotation process should be explained. Firstly, the minimal pause time is limited by the mechanical pause of the motorized rotation stage rather than by the integration time for imaging; in fact, more than 10 ms is enough for imaging. Thus, the acquisition speed can be improved by replacing the motorized rotation stage with a higher-performance one. Secondly, a frame image can be recovered with any interval angle larger than the RDA; however, images of higher PSNR are recovered with larger interval angles, such as above 3 degrees under our experimental conditions. Thirdly, a faster camera rotation speed is better for imaging, improving both the acquisition speed and the imaging quality.
The speckle motion is used as a decorrelation tool in our experiment, but its impact on imaging still needs to be discussed. Firstly, if the scattering sample in our system rotates naturally, all object frames can still be recovered with the correct lateral relative distance from the point where the PSF was generated. However, the time sequence of the frames will be lost, because the simultaneous rotations of the scattering sample and the camera give rise to a resultant speckle rotation with unknown speed and direction. In this case, the rotational angle of the speckle is not equal to the negative of the rotational angle of the camera, so the corresponding time of each recovered frame can no longer be determined from Eq. (5). Another consequence is that the rotation of the recovered object caused by the camera rotation can no longer be compensated correctly. Secondly, if the scattering sample translates naturally within the memory effect range, the reconstruction of the video is, unlike the above, unaffected.
Notably, there is no restriction on the objects: they can be either dynamic or static. In addition, the method has the potential to be extended to three dimensions by shifting the point that generates the PSF in the axial direction.
Some challenges remain for this method, including increasing the frame number limit, enlarging the imaging field of view, and detecting the PSF. In practical applications, the PSF of the system can be obtained by using guide stars embedded in the medium at the object plane, by speckle pattern estimation, or by spatial correlation [23–28].
We have shown that a video of moving objects hidden behind scattering media can be derived from a single-shot camera image based on the SRD property. The imaging system is simple: only a spatially incoherent light source and a camera fixed on a motorized rotation stage are required. The camera image is the superposition of the rotational speckles recorded at intervals. Object frames of the video are selected and recovered by cross-correlating a computationally rotated PSF with the camera image. This deconvolution process is fast and simple, without any iteration or selection procedures. The near real-time recovery provides sharp image frames with accurate object relative positions, a proper time sequence, exact movement velocity, and continuous motion trails. The presented single-shot video recovery method is fast and robust, which will benefit the real-time observation and quantitative analysis of dynamic objects and their motion trails. This multiplexing technique is expected to increase the information storage density in biomedical and astronomical applications.
Graduate Innovation Foundation of Jiangsu (KYCX17_0247); National Natural Science Foundation of China (61675095).
1. J. W. Goodman, Speckle Phenomena in Optics: Theory and Applications (Roberts and Company Publishers, 2007).
4. A. P. Mosk, A. Lagendijk, G. Lerosey, and M. Fink, “Controlling waves in space and time for imaging and focusing in complex media,” Nat. Photonics 6(5), 283–292 (2012). [CrossRef]
5. O. Katz, E. Small, and Y. Silberberg, “Looking around corners and through thin turbid layers in real time with scattered incoherent light,” Nat. Photonics 6(8), 549–553 (2012). [CrossRef]
7. C. L. Hsieh, Y. Pu, R. Grange, G. Laporte, and D. Psaltis, “Imaging through turbid layers by scanning the phase conjugated second harmonic radiation from a nanoparticle,” Opt. Express 18(20), 20723–20731 (2010). [CrossRef] [PubMed]
9. S. M. Popoff, G. Lerosey, R. Carminati, M. Fink, A. C. Boccara, and S. Gigan, “Measuring the transmission matrix in optics: an approach to the study and control of light propagation in disordered media,” Phys. Rev. Lett. 104(10), 100601 (2010). [CrossRef] [PubMed]
12. O. Katz, P. Heidmann, M. Fink, and S. Gigan, “Non-invasive single-shot imaging through scattering layers and around corners via speckle correlations,” Nat. Photonics 8(10), 784–790 (2014). [CrossRef]
15. A. Porat, E. R. Andresen, H. Rigneault, D. Oron, S. Gigan, and O. Katz, “Widefield lensless imaging through a fiber bundle via speckle correlations,” Opt. Express 24(15), 16835–16855 (2016). [CrossRef] [PubMed]
16. Y. Shi, Y. Liu, J. Wang, and T. Wu, “Non-invasive depth-resolved imaging through scattering layers via speckle correlations and parallax,” Appl. Phys. Lett. 110(23), 231101 (2017). [CrossRef]
17. M. Hofer, C. Soeller, S. Brasselet, and J. Bertolotti, “Wide field fluorescence epi-microscopy behind a scattering medium enabled by speckle correlations,” Opt. Express 26(8), 9866–9881 (2018). [CrossRef] [PubMed]
19. Q. Chen, H. He, X. Xu, X. Xie, H. Zhuang, J. Ye, and Y. Guan, “Memory Effect Based Filter to Improve Imaging Quality Through Scattering Layers,” IEEE Photonics J. 10(5), 1–10 (2018). [CrossRef]
20. C. Guo, J. Liu, W. Li, T. Wu, L. Zhu, J. Wang, G. Wang, and X. Shao, “Imaging through scattering layers exceeding memory effect range by exploiting prior information,” Opt. Commun. 434, 203–208 (2019). [CrossRef]
22. B. Das, N. S. Bisht, R. V. Vinu, and R. K. Singh, “Lensless complex amplitude image retrieval through a visually opaque scattering medium,” Appl. Opt. 56(16), 4591–4597 (2017). [CrossRef] [PubMed]
26. X. Xu, X. Xie, H. He, H. Zhuang, J. Zhou, A. Thendiyammal, and A. P. Mosk, “Imaging objects through scattering layers and around corners by retrieval of the scattered point spread function,” Opt. Express 25(26), 32829–32840 (2017). [CrossRef]
27. L. Li, Q. Li, S. Sun, H. Z. Lin, W. T. Liu, and P. X. Chen, “Imaging through scattering layers exceeding memory effect range with spatial-correlation-achieved point-spread-function,” Opt. Lett. 43(8), 1670–1673 (2018). [CrossRef] [PubMed]
28. S. K. Sahoo, D. Tang, and C. Dang, “Single-shot multispectral imaging with a monochromatic camera,” Optica 4(10), 1209–1213 (2017). [CrossRef]
32. M. I. Akhlaghi and A. Dogariu, “Tracking hidden objects using stochastic probing,” Optica 4(4), 447–453 (2017). [CrossRef]
33. S. Sudarsanam, J. Mathew, S. Panigrahi, J. Fade, M. Alouini, and H. Ramachandran, “Real-time imaging through strongly scattering media: seeing through turbid media, instantly,” Sci. Rep. 6(1), 25033 (2016). [CrossRef] [PubMed]