Incoherently illuminated or luminescent objects give rise to a low-contrast speckle-like pattern when observed through a thin diffusive medium, as such a medium effectively convolves their shape with a speckle-like point spread function (PSF). This point spread function can be extracted in the presence of a reference object of known shape. Here it is shown that reference objects that are both spatially and spectrally separated from the object of interest can be used to obtain an approximation of the point spread function. The crucial observation, corroborated by analytical calculations, is that the spectrally shifted point spread function is strongly correlated to a spatially scaled one. With the approximate point spread function thus obtained, the speckle-like pattern is deconvolved to produce a clear and sharp image of the object on a speckle-like background of low intensity.
© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement
Imaging through scattering media [1–3] is a challenge as the wavefront of the incident light is distorted by scattering due to the inhomogeneous distribution of the refractive index of the media. Several methods have been put forward to compensate for the wavefront distortion and facilitate imaging. Wavefront shaping [4–6] and optical phase conjugation [7–10] are widely used to compensate for the heavy distortion caused by multiple scattering. Transmission matrix methods [11–13] have been applied to obtain an image through scattering media. Speckle correlation appears to be a very promising method relying on the so-called memory effect (ME), which is the angular tilt invariance of the speckle pattern when the incident light is rotated by a small angle [15–17]. Within the small ME range, the speckle pattern can be considered as the convolution of the point spread function (PSF) with the object’s intensity distribution. Methods based on speckle autocorrelation retrieval [18–25] can be exploited to reconstruct objects non-invasively. Coherent speckle-correlation methods [26,27] provide field-based information through thick scattering media. Deconvolution methods [28–32] have also been exploited to demonstrate fast, 3D, high-resolution and large field-of-view imaging. Among them, color imaging [28] is realized when the R, G, B wavelength components of the object are retrieved from their corresponding PSFs captured by a color camera. Single-shot multi-spectral imaging has also been demonstrated with a monochromatic camera [31]. It was found that an image of an object can be reconstructed by deconvolution with the PSF obtained using different illumination, but only if the illumination spectra overlap. To realize multi-spectral imaging, Sahoo et al. [31] recorded PSFs at different wavelengths in advance and assumed them to be uncorrelated. Hyperspectral imaging has been realized based on a nanowire scattering medium [32].
We have recently reported a deconvolution method to image objects at different depths by scaling the reference PSF according to the focal length of the imaging system [34]. Several other recent imaging techniques [35–37] have shown retrieval of depth information or even full 3D images using various methods to obtain the PSF.
In this paper, we introduce a method to retrieve the PSF of a scattering layer from the transmission pattern of a reference object that emits light at a different wavelength than the object of interest. The crucial ingredient is the strong correlation between the PSF at a shifted wavelength and the PSF with scaled spatial coordinates. The wavelength scaling is combined with spatial scaling to image incoherently illuminated or luminescent objects through a thin scattering layer, using a reference object that can have a non-overlapping spectrum and lie in a different depth plane than the object being imaged. Furthermore, the reference object can be physically extended, which is more robust and provides a higher signal-to-noise ratio (SNR) than the use of a point object in the case of incoherent illumination; in many cases the gain in signal strength obtained from an extended object outweighs the loss of contrast. We demonstrate the use of a color CCD to capture a spectrally separated object and reference in a single shot, which is promising for imaging through dynamic scattering media. The principle we demonstrate here may enable fluorescence imaging through turbid layers using reference objects emitting at a different wavelength.
This paper is structured as follows: in Sec. 2 we describe imaging through a diffuser using a PSF retrieved from a reference object that is separated in depth but not in wavelength; in Sec. 3 we consider objects that are separated in wavelength but not in depth. In Sec. 4 we demonstrate a single-shot method to capture the object and reference speckle patterns and apply it to an object and reference that are separated both in depth and in wavelength.
2. Depth correlation and PSF scaling at the same wavelength
We first describe image reconstruction using a reference object at the same wavelength but at a different depth, which we achieve by combining the methods described in [30] and [34]. The setup is shown in Fig. 1. Light is emitted by a light-emitting diode (LED) source (Daheng Optics, 1 W). The light passes through two diffusers (diffuser 1, Newport 10-degree light shaping diffuser; diffuser 2, 5-degree), and the object is placed between them. This setup models the situation of an object surrounded by a scattering medium so that all incident and outgoing light is scattered. In contrast to earlier work, we use an extended reference object instead of a point source to recover the PSF.
A reference object with intensity distribution OR [a cutout ‘H’, Fig. 2(a)] is inserted between the diffusers. Its speckle pattern IR [Fig. 2(c)] is recorded by the CCD (Basler ACA2040-90UM; pixel size, 5.5 µm). As the object is much larger than the transverse coherence length of the illumination, the speckle contrast degrades to about 0.1 [30,38].
Within the memory effect range, IR = OR*PSF (where * represents convolution). Thus, the PSF can be retrieved from the speckle pattern by Wiener deconvolution [39]. Without scaling, this PSF can only be used to reconstruct unknown objects that are in the same depth plane. When an unknown object OU [Fig. 2(b)] is inserted at a different depth plane, we capture a speckle pattern IU = OU*PSFU, where PSFU corresponds to the unknown object’s depth plane.
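The PSF retrieval step can be sketched in a few lines. The following is a minimal illustration (not the authors' code) of Wiener deconvolution with NumPy, assuming periodic boundary conditions and a scalar noise-to-signal parameter; in practice the regularization must be tuned to the measurement noise.

```python
import numpy as np

def wiener_deconvolve(measured, kernel, nsr=1e-3):
    """Wiener deconvolution in the Fourier domain.

    measured : 2-D pattern modelled as a circular convolution kernel * unknown
    kernel   : the known factor of the convolution (here the reference object O_R)
    nsr      : noise-to-signal ratio regularising the spectral division
    """
    M = np.fft.fft2(measured)
    K = np.fft.fft2(kernel, s=measured.shape)
    # K* / (|K|^2 + nsr) approximates 1/K while suppressing noise amplification
    est = M * np.conj(K) / (np.abs(K) ** 2 + nsr)
    return np.real(np.fft.ifft2(est))

# Synthetic check: build a speckle-like pattern I_R = O_R * PSF and recover the PSF.
rng = np.random.default_rng(0)
psf_true = rng.random((64, 64))                    # stand-in for the speckle PSF
obj_ref = np.zeros((64, 64))
obj_ref[20:40, 28:36] = 1.0                        # a bar-shaped reference object
speckle = np.real(np.fft.ifft2(np.fft.fft2(obj_ref) * np.fft.fft2(psf_true)))
psf_est = wiener_deconvolve(speckle, obj_ref, nsr=1e-6)
```

Frequencies at which the object's spectrum vanishes are suppressed rather than divided out, which is what produces the low-intensity speckle background in the reconstructions.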
As shown in [34], for a thin scattering layer PSFU is approximated by a coordinate scaling, PSFU(x, y) ≈ mz² PSF(mz x, mz y), where mz = (di/do + 1)/(di/do′ + 1). In our setup the distance from the reference object to diffuser 2 is do = 117 mm, that from the unknown object plane to diffuser 2 is do′ = 107 mm, and the CCD is at di = 22 mm from diffuser 2, leading to mz = 1/1.015. In Figs. 2(a) and 2(b) we show the reference and unknown objects, respectively, and in Figs. 2(c) and 2(d) the respective speckle patterns. We deconvolved the PSF [Fig. 2(e)] from the reference object’s speckle pattern and rescaled its coordinates to obtain PSFU [Fig. 2(f)]. A blurry image on a relatively high background [Fig. 2(g)] is reconstructed when the unscaled PSF is used to deconvolve the unknown object’s speckle pattern. As expected, a clear and sharp image on a much lower background [Fig. 2(h)] is recovered when PSFU is used.
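The depth-scaling step can be sketched as follows. This is a hypothetical implementation, not the authors' code, using scipy.ndimage.zoom for the coordinate stretch and the distances quoted above:

```python
import numpy as np
from scipy.ndimage import zoom

def axial_scaling_factor(d_i, d_o, d_o_prime):
    """m_z = (d_i/d_o + 1)/(d_i/d_o' + 1) for a thin scattering layer."""
    return (d_i / d_o + 1.0) / (d_i / d_o_prime + 1.0)

def rescale_psf(psf, m):
    """Approximate PSF_U(x, y) = m^2 PSF(m x, m y) on the original pixel grid."""
    # Sampling the PSF at coordinates (m x, m y) corresponds to zooming by 1/m;
    # the prefactor m^2 preserves the integrated intensity.
    zoomed = m ** 2 * zoom(psf, 1.0 / m, order=1)
    out = np.zeros_like(psf)
    h, w = psf.shape
    zh, zw = zoomed.shape
    if zh >= h:                                    # zoomed image larger: centre-crop
        r, c = (zh - h) // 2, (zw - w) // 2
        out[:] = zoomed[r:r + h, c:c + w]
    else:                                          # zoomed image smaller: centre-pad
        r, c = (h - zh) // 2, (w - zw) // 2
        out[r:r + zh, c:c + zw] = zoomed
    return out

m_z = axial_scaling_factor(d_i=22.0, d_o=117.0, d_o_prime=107.0)   # distances in mm
psf_ref = np.random.default_rng(1).random((128, 128))              # retrieved reference PSF
psf_u = rescale_psf(psf_ref, m_z)                                  # PSF at the unknown depth
```

The rescaled PSF is then used in place of the reference PSF in the Wiener deconvolution of the unknown object's speckle pattern.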
3. Spectral correlation and PSF scaling
In most previous work based on speckle deconvolution [28–30], the PSF and the speckle pattern of the unknown object are obtained under the same illumination source, which means that the spectra are identical. Recently, it was reported that even when the PSF and the speckle pattern are obtained with two different broadband sources, deconvolution still works provided there is some spectral overlap, which was referred to as the cross-talk effect [31]. However, when using spectrally separated narrow-band light sources, objects cannot be reconstructed with the PSF corresponding to a different wavelength. Here we show that even in the spectrally separated case a reconstruction is possible. In this case the spatial coordinates of the reference PSF are scaled by the factor m that maximizes the correlation function

C(m) = ⟨ΔPSFλ1(x, y) ΔPSFλ2(mx, my)⟩ / [⟨ΔPSFλ1²⟩⟨ΔPSFλ2²⟩]^(1/2), (1)

where ΔPSF = PSF − ⟨PSF⟩ and ⟨·⟩ denotes an average over the camera plane. The setup used to measure this correlation is shown in Fig. 3: a tunable monochromatic source at z = −do, with do = 180 mm, illuminates the diffuser, and a camera at the image plane, di = 58.5 mm, captures the resulting speckle pattern. In the experiment the point source is a 200-µm pinhole illuminated with halogen lamp light through a grating monochromator with a slit-defined bandwidth of 15 nm.
A calculation of the correlation function is presented in the Appendix. In Fig. 4(a) we show the measured correlation functions for different wavelength ratios, for the case where the beam size on the diffuser is defined by a 1-mm radius aperture. We also show the analytical result obtained in the Appendix, Eq. (13), which has no free parameters. For clarity the theoretical curves have been scaled down by a factor of 2, as the experimental correlations are weaker than the analytical theory predicts, possibly due to polarization effects which are not included in the theory. For wavelength shifts of a few percent there is a strong peak in the correlation function. In Fig. 4(b) we show the experimental peak positions versus wavelength ratio, and the corresponding result from the Appendix, Eq. (14). For the data taken with an aperture defining the beam size at the diffuser, the agreement with the zero-parameter theory is excellent; the theoretical line has a slope of 0.93. Without an aperture, the beam on the diffuser is roughly Gaussian with a radius of 1.75 mm at 1/e² intensity. The corresponding theory is a line with a slope of 0.77, whereas the data deviate slightly, with a slope of 0.63. We have varied the spectral width of the source and observed the same peak positions as long as the spectra do not overlap.
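The search for the correlation peak can be illustrated numerically. In the sketch below (our illustration, not the authors' code) the second PSF is synthesized as a coordinate-scaled copy of the first, so scanning the Pearson correlation over the scaling factor m should recover the known scale; with measured data the same scan locates the peak shown in Fig. 4.

```python
import numpy as np
from scipy.ndimage import zoom

def center_fit(img, shape):
    """Centre-crop (or zero-pad) img to the requested shape."""
    out = np.zeros(shape)
    h, w = shape
    ih, iw = img.shape
    dr, dc = (ih - h) // 2, (iw - w) // 2
    if ih >= h and iw >= w:
        out[:] = img[dr:dr + h, dc:dc + w]
    else:
        out[-dr:-dr + ih, -dc:-dc + iw] = img
    return out

def scale_psf(psf, m):
    """Sample psf at coordinates (m x, m y): zoom by 1/m, keep the original grid."""
    return center_fit(zoom(psf, 1.0 / m, order=1), psf.shape)

def pearson(a, b):
    return np.corrcoef(a.ravel(), b.ravel())[0, 1]

rng = np.random.default_rng(3)
psf_l1 = rng.random((96, 96))                 # stand-in for the PSF at wavelength 1
m_true = 0.96
psf_l2 = scale_psf(psf_l1, m_true)            # synthetic wavelength-shifted PSF

ms = np.linspace(0.90, 1.02, 13)              # candidate scaling factors
corr = [pearson(psf_l2, scale_psf(psf_l1, m)) for m in ms]
m_peak = ms[int(np.argmax(corr))]
```

In the experiment, the width and height of this correlation peak set how accurately m (and hence the wavelength ratio) must be known for a sharp reconstruction.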
Thanks to the presence of the correlation peak, we can approximate PSFλ2 by scaling the coordinates of PSFλ1. An object which emits at wavelength λ2 can be reconstructed even if the spectra have no overlap. The reconstruction is again a Wiener deconvolution with PSFλ2.
The experimental setup to demonstrate imaging using spectral correlation and PSF scaling is shown in Fig. 5. Two objects (cutout letters “H” and “G”) are illuminated by two LEDs (Daheng Optics, 1 W) with different but overlapping spectral profiles [Fig. 6(a)]. The distance from the objects to the diffuser is do1 = do2 = 180 mm and that from the diffuser to the chip of the CCD is di = 58.5 mm. In the first experiment, we use the unfiltered, overlapping spectra of the LEDs. Speckle patterns from the reference object [Fig. 6(b)] and the unknown object [Fig. 6(c)] are acquired by the CCD sequentially, and the object is reconstructed by the deconvolution method with the PSF retrieved from the reference speckle pattern, without scaling. In the second experiment, we placed narrowband filters before the LEDs so their spectra do not overlap [Fig. 6(e)]. The central wavelengths of the filtered light sources are 596 nm and 620 nm, respectively. According to Eq. (14) the correlation maximum is expected for our experimental conditions at m = 0.98. The speckle patterns of the reference and unknown objects are shown in Figs. 6(f) and 6(g), respectively. When the deconvolution method is used without scaling the PSF, only a blurry image is reconstructed [Fig. 6(h)]. However, upon scaling the retrieved PSF, a clear image is reconstructed [Fig. 6(i)]. This confirms that the strong correlation between the spectrally shifted and the coordinate-scaled PSF enables us to obtain a clear image of the object.
4. Spectral speckle pattern separation and single-shot imaging
In Sec. 2, the depth-dependent PSF is obtained by scaling the PSF according to the reference depth, whereas in Sec. 3 the wavelength dependence of the PSF scaling is studied. When both depth and spectral differences are present, the total scaling factor is the product of the depth and wavelength scaling factors, mtotal = mz mλ. To demonstrate this, the unknown object of Fig. 5 is moved towards the diffuser by 20 mm, so that the two objects are now at different depths and illuminated by spectrally separated sources with narrowband filters. In this case 1/mz is 1.03 according to the expression for mz given in Sec. 2. The reference PSF is retrieved by deconvolution from the reference speckle pattern. The PSF corresponding to the object plane and wavelength is obtained by the appropriate coordinate stretch.
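A minimal numerical check of the combined scaling, assuming the total stretch is the product mtotal = mz mλ of the two factors applied in sequence (our sketch, using the distances of this experiment):

```python
def axial_scaling_factor(d_i, d_o, d_o_prime):
    """Depth scaling m_z = (d_i/d_o + 1)/(d_i/d_o' + 1) from Sec. 2."""
    return (d_i / d_o + 1.0) / (d_i / d_o_prime + 1.0)

# Distances from the experiment of Sec. 4: the unknown object is moved
# 20 mm towards the diffuser (d_o: 180 mm -> 160 mm), camera at 58.5 mm.
m_z = axial_scaling_factor(d_i=58.5, d_o=180.0, d_o_prime=160.0)
m_lambda = 0.98           # spectral scaling at the correlation peak (Sec. 3)
m_total = m_z * m_lambda  # total coordinate stretch applied to the reference PSF
```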
A very convenient method to obtain reference and object speckle patterns in a real-time single-shot measurement is to use a color CCD. The monochromatic CCD in Fig. 5 is replaced by a color one (Basler ace acA2040-90uc). A Bayer filter [40] inside the camera, in front of the photosensitive chip, codes the wavelength response of each pixel. Using the spectral response curve of the CCD, speckle patterns at different wavelengths can be separated from one mixed speckle pattern. Suppose that there are two illuminating lights with different wavelengths and intensities Iλ1 and Iλ2. For any pixel of the color CCD, the R, G and B components of the captured speckle image [Fig. 7(a)] can be written as

R = R1 Iλ1 + R2 Iλ2, G = G1 Iλ1 + G2 Iλ2, B = B1 Iλ1 + B2 Iλ2, (3)

where Ri, Gi and Bi are the sensor responses at wavelength λi. Equation (3) is an overdetermined linear system, so Iλ1 and Iλ2 can be found using pseudo-inverse methods. For our orange-red wavelength range, simply using the first two equations suffices as the response of the B-coded pixels is very low. Once Iλ1 and Iλ2 are obtained, a reference PSF is obtained by deconvolution from Iλ1, and the PSF used to deconvolve the unknown object is obtained by rescaling the reference PSF with the total scaling factor mtotal. The spectral sensitivity of the coded pixels is obtained from the sensor manufacturer. The center wavelengths of the filtered light sources are λ1 = 596 nm and λ2 = 620 nm and the corresponding (normalized) coefficients are R1 = 0.67, G1 = 0.29, B1 = 0.04, R2 = 0.88, G2 = 0.08, and B2 = 0.04. The speckle patterns calculated according to Eq. (3) are shown in Figs. 7(b) and 7(c), and the reconstructed result is shown in Fig. 7(e). To test the scaling behavior, another object, the letter “F” [Fig. 7(d)], is used as unknown object and is reconstructed [Fig. 7(f)] from a different mixed speckle pattern.
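The per-pixel unmixing of Eq. (3) amounts to a small linear least-squares problem. A minimal sketch (our illustration, not the authors' code), using the response coefficients quoted above and NumPy's pseudo-inverse:

```python
import numpy as np

# Normalised sensor responses at the two centre wavelengths (values from the text):
# rows are the R, G, B coded pixels; columns are the 596 nm and 620 nm sources.
S = np.array([[0.67, 0.88],
              [0.29, 0.08],
              [0.04, 0.04]])

def unmix(rgb, S):
    """Recover the two single-wavelength patterns from an RGB speckle image.

    rgb : array of shape (3, H, W) holding the demosaiced R, G, B planes.
    Solves (R, G, B)^T = S (I_l1, I_l2)^T per pixel in the least-squares
    sense via the pseudo-inverse of the 3x2 response matrix.
    """
    h, w = rgb.shape[1:]
    I = np.linalg.pinv(S) @ rgb.reshape(3, -1)
    return I.reshape(2, h, w)

# Synthetic check: mix two known intensity patterns, then recover them.
rng = np.random.default_rng(2)
I_true = rng.random((2, 32, 32))
rgb = np.tensordot(S, I_true, axes=1)          # forward model of Eq. (3), no noise
I_rec = unmix(rgb, S)
```

Because the two response columns of S are far from parallel, the pseudo-inverse is well conditioned; for spectrally closer sources the unmixing would amplify noise.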
We now turn to the effect of the spectral width of the sources on the recovery of the PSF. For a spectrally broadened source, the sensor response coefficients are obtained by weighting the sensitivity curves with the source spectrum, i.e., by averaging R(λ), G(λ) and B(λ) over the measured spectrum of each source. Using these spectrally averaged coefficients in Eq. (3) yields the reconstructions shown in Figs. 7(g) and 7(h). These reconstructions have a higher signal-to-background ratio than Figs. 7(e) and 7(f), which are reconstructed with coefficients for the central wavelength only. The Bayer filter thus enables imaging of an object while simultaneously acquiring a spectrally separated reference in a single shot.
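The spectrally averaged coefficients can be computed by weighting a sensitivity curve with the source spectrum. The sketch below uses hypothetical Gaussian shapes for both curves purely for illustration; in the experiment the measured sensitivity curves from the sensor manufacturer and the filtered LED spectra are used.

```python
import numpy as np

lam = np.linspace(560.0, 660.0, 501)           # wavelength grid in nm

def gaussian(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2)

# Hypothetical stand-ins for the sensor sensitivity and the filtered LED spectrum.
r_sens = gaussian(lam, 610.0, 40.0)            # R-pixel sensitivity curve (assumed)
spectrum = gaussian(lam, 596.0, 8.0)           # source spectrum around 596 nm (assumed)

def effective_coefficient(sens, spec):
    """Spectrum-weighted average of a sensitivity curve on a uniform grid."""
    return np.sum(sens * spec) / np.sum(spec)

r_eff = effective_coefficient(r_sens, spectrum)
r_center = np.interp(596.0, lam, r_sens)       # coefficient at the centre wavelength
```

The difference between r_eff and r_center grows with the source bandwidth, which is why the spectrally averaged coefficients give a cleaner unmixing than the centre-wavelength values.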
We have considered two scaling factors to resize the reference PSF for different depths and different wavelengths, and presented a single-shot speckle imaging method. The single-shot method is fast, robust and practical since the reference object can be present simultaneously with the unknown object. In previous deconvolution-based works, the PSF is formed by a point source and the unknown object is removed while the PSF is captured. Those techniques are therefore not suitable for dynamic scattering media, as the PSF quickly loses its correlation in time.
We now discuss the field of view (FOV) attainable with our method. The FOV is limited by the optical memory effect, which restricts it to about do λ/(πL) [15–17], where L is the thickness of the scattering medium. For a holographic diffuser L is effectively very small, so that the memory effect can be used to image extended objects. In biological tissue the thickness of the scattering layer is typically much larger, so that only small objects can be imaged. New insights into the scattering properties of tissue promise to yield a potentially useful imaging range in some situations [14,17]. Within this range, the FOV can be enlarged by scanning methods or by using an axial lens system to collect speckle patterns. For large differences in depth and wavelength, the peak in the scaled correlation coefficient becomes much smaller than 1, as shown in Fig. 4(a), and as a result the deconvolution becomes noisy. This limits the usable depth and wavelength range, depending on the SNR of the raw data.
Non-invasive autocorrelation-based speckle scanning methods [18,19] can retrieve both the object shape and the PSF without the need for a reference object. However, these methods require a high SNR, depending on the complexity of the object; for low SNR, the iterative phase retrieval algorithms [41] that these methods are based on do not converge. Deconvolution-based methods such as ours require a reference object but, given one, have no threshold in SNR. As a very interesting prospect, one could combine our method with autocorrelation-based imaging to realize non-invasive depth and spectral imaging: if the speckle-autocorrelation method has sufficient SNR to reconstruct a single unknown object, this object can then serve as the reference, and other objects of interest can be retrieved by deconvolution with the scaled PSF.
We have shown that objects can be imaged through a thin diffusing layer using a reference PSF, which can be extracted from a known object that is located in a different plane and/or emits at a different wavelength than the object of interest. The key finding is that for a thin diffuser the wavelength-shifted PSF correlates strongly with a coordinate-scaled PSF. Analytical calculation yields the shape of the correlation function and the peak position, which depend on the correlation length of the scattering medium and on the size of the illuminated spot. The use of a spectrally separated reference object allows for single-shot imaging, where the reference and object are recorded in a single exposure. This method has been implemented in an especially convenient way using the Bayer filter of a color camera to separate the reference and object patterns. The presented single-shot reconstruction method is fast and robust, which opens prospects for imaging objects through thin but highly dynamic scattering media.
Appendix

Here we present an analytical calculation of the correlation function between PSFs at different wavelengths λ1 and λ2, with a spatial scaling factor m, as defined in Eq. (1). We consider the setup of Fig. 3, where the diffuser is in the plane z = 0 and the point source is at (x, y, z) = (0, 0, −do). The field incident on the diffuser is h(x, y, do) S, where h is the free-space propagation function and S is the strength of the point emitter. In the Fresnel approximation the propagation function reads h(x, y, z) = [exp(ikz)/(iλz)] exp[ik(x² + y²)/(2z)] [42]. We take into account the finite size of the beams and eventual apertures by introducing a Gaussian aperture function with 1/e² radius w at the diffuser; a Gaussian aperture allows us to evaluate the integrals analytically, while solid-disk or other apertures can be evaluated numerically to give similar results. The field just behind the diffuser is the product of the incident field, the aperture function, and the random phase screen of the diffuser. Substituting this field into Eq. (1) and performing the relevant integrations, noting that the integration over the diffuser plane gives rise to a Dirac delta function [42], yields the correlation function of Eq. (13), plotted in Fig. 4(a) with the parameters of our experiment: w = 1 mm, f = 44 mm, λ = 600 nm, ℓc = 2.6 µm. By taking the derivative of Eq. (13) we find that the correlation function peaks at the scaling factor given in Eq. (14).
Netherlands Organization for Scientific Research (Vici 68047618); Chinese National Natural Science Foundation (11534017 & 61575223); China Scholarship Council (201606380037).
The authors acknowledge S. Faez, J. Bosch and P. Pai for discussions.
References and links
1. I. Freund, “Looking through walls and around corners,” Physica A 168, 49–65 (1990).
2. A. P. Mosk, A. Lagendijk, G. Lerosey, and M. Fink, “Controlling waves in space and time for imaging and focusing in complex media,” Nat. Photonics 6, 283–292 (2012).
3. M. Gu, X. Gan, and X. Deng, Microscopic imaging through turbid media (Springer Berlin Heidelberg, 2015).
4. I. M. Vellekoop and A. P. Mosk, “Focusing coherent light through opaque strongly scattering media,” Opt. Lett. 32(16), 2309–2311 (2007).
5. O. Katz, E. Small, and Y. Silberberg, “Looking around corners and through thin turbid layers in real time with scattered incoherent light,” Nat. Photonics 6, 549–553 (2012).
6. H. He, Y. Guan, and J. Zhou, “Image restoration through thin turbid layers by correlation with a known object,” Opt. Express 21, 12539–12545 (2013).
7. C.-L. Hsieh, Y. Pu, R. Grange, G. Laporte, and D. Psaltis, “Imaging through turbid layers by scanning the phase conjugated second harmonic radiation from a nanoparticle,” Opt. Express 18, 20723–20731 (2010).
8. K. Si, R. Fiolka, and M. Cui, “Fluorescence imaging beyond the ballistic regime by ultrasound pulse guided digital phase conjugation,” Nat. Photonics 6, 657–661 (2012).
9. Y. M. Wang, B. Judkewitz, C. A. DiMarzio, and C. Yang, “Deep-tissue focal fluorescence imaging with digitally time-reversed ultrasound-encoded light,” Nat. Commun. 3, 928 (2012).
10. C. Ma, X. Xu, Y. Liu, and L. Wang, “Time-reversed adapted-perturbation (TRAP) optical focusing onto dynamic objects inside scattering media,” Nat. Photonics 8, 931–936 (2014).
11. S. Popoff, G. Lerosey, M. Fink, A. C. Boccara, and S. Gigan, “Image transmission through an opaque material,” Nat. Commun. 1, 81 (2010).
12. S. M. Popoff, G. Lerosey, R. Carminati, M. Fink, A. C. Boccara, and S. Gigan, “Measuring the transmission matrix in optics: an approach to the study and control of light propagation in disordered media,” Phys. Rev. Lett. 104, 100601 (2010).
13. Y. Choi, C. Yoon, M. Kim, T. D. Yang, C. Fang-Yen, R. R. Dasari, K. J. Lee, and W. Choi, “Scanner-Free and Wide-Field Endoscopic Imaging by Using a Single Multimode Optical Fiber,” Phys. Rev. Lett. 109, 203901 (2012).
14. B. Judkewitz, R. Horstmeyer, I. M. Vellekoop, I. N. Papadopoulos, and C. Yang, “Translation correlations in anisotropically scattering media,” Nat. Phys. 11, 684–689 (2015).
15. I. Freund, M. Rosenbluh, and S. Feng, “Memory Effects in Propagation of Optical Waves through Disordered Media,” Phys. Rev. Lett. 61, 2328–2331 (1988).
16. S. Feng, C. Kane, P. A. Lee, and A. D. Stone, “Correlations and Fluctuations of Coherent Wave Transmission through Disordered Media,” Phys. Rev. Lett. 61, 834–837 (1988).
17. S. Schott, J. Bertolotti, J. F. Leger, L. Bourdieu, and S. Gigan, “Characterization of the angular memory effect of scattered light in biological tissues,” Opt. Express 23, 13505–13516 (2015).
18. J. Bertolotti, E. G. van Putten, C. Blum, A. Lagendijk, W. L. Vos, and A. P. Mosk, “Non-invasive imaging through opaque scattering layers,” Nature 491, 232–234 (2012).
19. O. Katz, P. Heidmann, M. Fink, and S. Gigan, “Non-invasive single-shot imaging through scattering layers and around corners via speckle correlations,” Nat. Photonics 8, 784–790 (2014).
20. K. T. Takasaki and J. W. Fleischer, “Phase-space measurement for depth-resolved memory-effect imaging,” Opt. Express 22, 31426–31433 (2014).
21. E. Edrei and G. Scarcelli, “Optical imaging through dynamic turbid media using the Fourier-domain shower-curtain effect,” Optica 3, 71–74 (2016).
22. Y. Shi, Y. Liu, J. Wang, and T. Wu, “Non-invasive depth-resolved imaging through scattering layers via speckle correlations and parallax,” Appl. Phys. Lett. 110, 231101 (2017).
23. M. Cua, E. Zhou, and C. Yang, “Imaging moving targets through scattering media,” Opt. Express 25(12), 3935–3945 (2017).
24. T. Wu, J. Dong, X. Shao, and S. Gigan, “Imaging through a thin scattering layer and jointly retrieving the point-spread-function using phase-diversity,” Opt. Express 25, 27182–27194 (2017).
25. M. Hofer, C. Soeller, S. Brasselet, and J. Bertolotti, “Wide field fluorescence epi-microscopy behind a scattering medium enabled by speckle correlations,” Opt. Express 26, 9866–9881 (2018).
26. J. A. Newman and K. J. Webb, “Imaging optical fields through heavily scattering media,” Phys. Rev. Lett. 113, 263903 (2014).
27. J. A. Newman, Q. Luo, and K. J. Webb, “Imaging Hidden Objects with Spatial Speckle Intensity Correlations over Object Position,” Phys. Rev. Lett. 116, 073902 (2016).
28. H. Zhuang, H. He, X. Xie, and J. Zhou, “High speed color imaging through scattering media with a large field of view,” Sci. Rep. 6, 32696 (2016).
29. E. Edrei and G. Scarcelli, “Memory-effect based deconvolution microscopy for super-resolution imaging through scattering media,” Sci. Rep. 6, 33558 (2016).
30. X. Xu, X. Xie, H. He, H. Zhuang, J. Zhou, A. Thendiyammal, and A. P. Mosk, “Imaging objects through scattering layers and around corners by retrieval of the scattered point spread function,” Opt. Express 25, 32829–32840 (2017).
31. S. Sahoo, D. Tang, and C. Dang, “Single-shot multispectral imaging with a monochromatic camera,” Optica 4(10), 1209–1213 (2017).
32. R. French, S. Gigan, and O. L. Muskens, “Speckle-based hyperspectral imaging combining multiple scattering and compressive sensing in nanowire mats,” Opt. Lett. 42, 1820–1823 (2017).
33. X. Xie, Y. Chen, K. Yang, and J. Zhou, “Harnessing the point-spread function for high-resolution far-field optical microscopy,” Phys. Rev. Lett. 113, 263901 (2014).
34. X. Xie, H. Zhuang, H. He, X. Xu, H. Liang, Y. Liu, and J. Zhou, “Extended depth-resolved imaging through a thin scattering medium with PSF manipulation,” Sci. Rep. 8, 4585 (2018).
35. N. Antipa, G. Kuo, R. Heckel, B. Mildenhall, E. Bostan, R. Ng, and L. Waller, “DiffuserCam: lensless single-exposure 3D imaging,” Optica 5(1), 1–9 (2018).
36. A. K. Singh, D. N. Naik, G. Pedrini, M. Takeda, and W. Osten, “Exploiting scattering media for exploring 3D objects,” Light Sci. Appl. 6, e16219 (2017).
37. S. Mukherjee, A. Vijayakumar, M. Kumar, and J. Rosen, “3D imaging through scatterers with interferenceless optical system,” Sci. Rep. 8, 1134 (2018).
38. J. W. Goodman, Speckle Phenomena in Optics: Theory and Applications (Roberts and Co., 2007).
39. R. C. Gonzalez and R. E. Woods, Digital Image Processing, 3rd ed. (Prentice-Hall, Inc., 2006).
40. B. E. Bayer, U.S. Patent No. 3,971,065. Washington, DC: U.S. Patent and Trademark Office (1976).
41. J. R. Fienup, “Phase retrieval algorithms: a comparison,” Appl. Opt. 21, 2758–2769 (1982).
42. H. G. Booker, J. A. Ratcliffe, and D. H. Shinn, “Diffraction from an irregular screen with applications to ionospheric problems,” Philos. Trans. R. Soc. Lond. A 242, 579–607 (1950).