Multispectral imaging plays an important role in many applications, from astronomical imaging and earth observation to biomedical imaging. However, current technologies are complex with multiple alignment-sensitive components and spatial and spectral parameters predetermined by manufacturers. Here, we demonstrate a single-shot multispectral imaging technique that gives flexibility to end users with a very simple optical setup, thanks to spatial correlation and spectral decorrelation of speckle patterns. These seemingly random speckle patterns are point spread functions (PSFs) generated by light from point sources propagating through a strongly scattering medium. The spatial correlation of PSFs allows image recovery with deconvolution techniques, while the spectral decorrelation allows them to play the role of tunable spectral filters in the deconvolution process. Our demonstrations utilizing optical physics of strongly scattering media and computational imaging present a cost-effective approach for multispectral imaging with many advantages.
© 2017 Optical Society of America
24 October 2017: A correction was made to the author affiliations.
Multispectral imaging has developed rapidly [1,2] and become an important technology for various applications [3–5] because, in addition to the spatial domain, the spectral domain contains a significant amount of information about objects [6,7]. One can accomplish multispectral imaging simply by taking multiple shots (time multiplexing) with multiple filters in front of a monochromatic camera [8]. With the development of high-resolution cameras, it is practical to trade off spatial resolution to gain spectral information in a single-shot imaging technique (space multiplexing). One version of a multispectral imaging device is the color camera, in which multiple spectral filters are spatially distributed on a 2D detector array. One can even give up spatial resolution entirely to achieve higher spectral resolution with a large number of spectral filters, yielding a compact spectrometer [9]. Fabricating these different spectral filters is difficult, and, therefore, high spectral resolution is challenging. Utilizing optical diffractive or refractive components with computational techniques can achieve higher spectral resolution [10–12]. However, the trade-off between spectral resolution and spectral range can limit their performance, especially when the spectrum is just a few narrow lines in a broad range. These technologies all require special optical devices and critical alignment with a camera in a complex optical system, in which the trade-offs between spatial and spectral information, and between spectral resolution and spectral range, are pre-set at manufacture [13,14]. In this article, we demonstrate a simple method that utilizes a strongly scattering medium to retrieve both spatial and spectral information from a single 2D image. The trade-off between spatial and spectral information, including spectral resolution and spectral range, is left to the analyzer's choice, utilizing the full capacity of the 2D imager.
Light passing through a scattering medium produces a random speckle pattern [15], which has seemingly no spatial or spectral information about the underlying object [Figs. 1(a) and 2(b)]. For imaging through scattering media, a number of techniques have been introduced to reduce or eliminate the scattering effect [16,17], such as adaptive optics [18], wave-front shaping [19], multiphoton fluorescence imaging [20], and optical coherence tomography (OCT) [21]. Recently, the transfer-matrix inversion method, which maps inputs to outputs of a scattering medium by a series of measurements, has utilized the scattering medium as a scattering lens for imaging [22,23]. The ability of a scattering lens to capture light with large transverse momentum allows very high-resolution imaging [24]. However, there is an enormous number of propagation modes for light transmitting through a scattering medium. Therefore, good characterization of these modes, even for a single wavelength, makes the mapping process time consuming and data intensive, and heavy computation is then required for the image reconstruction process.
However, the light transmitted through a scattering medium is not completely random; in fact, it “remembers” the original direction within a limited range [15,25,26]. This memory effect implies that when the incident angle of the light changes by Δθ within the memory-effect range, the speckle pattern at a distance v from the scattering medium shifts linearly: Δx = vΔθ. In other words, the speckle pattern generated by a point source [i.e., the point spread function (PSF)] shifts by (v/u)Δs if the point source shifts transversely by Δs, where u is the distance from the point source to the scattering medium. The shift invariance of PSFs, or spatial correlation of speckle patterns, allows a phase retrieval algorithm to recover the object through a scattering medium without medium characterization [27–30]. Spatial correlation of PSFs, or PSF shift invariance, is the key point in these demonstrations, but the wavelength-dependent response of a scattering medium produces decorrelated PSFs with respect to wavelength and largely degrades the image reconstruction quality. Therefore, a monochromatic pseudo-incoherent light source (i.e., dynamically diffused laser light) was preferred in these experiments. With only one PSF measurement, the object, O, behind the scattering medium can be recovered from its speckle-pattern intensity, I, by the deconvolution method [31,32]. In this case, a broadband image of an object can be reconstructed from its speckle pattern, in which the scattering medium plays the role of an imaging lens. Multiple shots with multiple optical filters, or a single shot with integrated multiple filters as in color cameras, could be used for spectral imaging as in conventional technologies. Here, we achieve single-shot multispectral imaging with just a monochromatic camera and no optical filter by utilizing the spectral decorrelation effect in scattering media.
Within the memory-effect region of a scattering medium, the PSF is linearly shift invariant. Therefore, the captured speckle pattern after the scattering medium is the convolution (denoted as ∗) of an object with the PSF of the optical system. For a single spectral band, we can express it as I_λ = O_λ ∗ S_λ. The object can be recovered from its speckle pattern by deconvolution, O_λ = deconv(I_λ, S_λ), if we know the PSF S_λ of the optical system at wavelength λ. As a result of spectral decorrelation in strongly scattering media, the speckle patterns are wavelength dependent, and spectrally separated light sources produce uncorrelated speckle patterns. Using this spectrum-dependent behavior of strongly scattering media, spectrometers with high spectral resolution have been demonstrated [33–35]. This spectral decorrelation can be mathematically expressed as

S_λm ⋆ S_λn ≈ 0 for m ≠ n, (1)

where ⋆ denotes cross-correlation. A multispectral object produces a superposition of spectral speckle patterns on the monochromatic camera [Fig. 1(a)]: I = Σ_n O_λn ∗ S_λn. Using the aforementioned spectral decorrelation property [Eq. (1)], the following relationship can be deduced (details of this principle are presented in Section 1, Supplement 1): each spectral band of the object can be reconstructed from a single monochromatic image as follows:

O_λn ≈ deconv(I, S_λn). (2)
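The convolution/deconvolution relationship described here can be sketched numerically. The following is a minimal NumPy sketch for a single spectral band, using a simulated Rayleigh-distributed speckle pattern as the PSF and Wiener deconvolution as the recovery step; all sizes, positions, and the regularization value are illustrative, not the authors' experimental parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64  # image size (illustrative)

# Random speckle-like PSF: Rayleigh-distributed intensity, as in a fully
# developed speckle pattern from a strongly scattering medium.
S = rng.rayleigh(size=(N, N))
S /= S.sum()

# A simple two-point object at a single wavelength.
O = np.zeros((N, N))
O[20, 20] = 1.0
O[40, 45] = 0.5

# Forward model within the memory-effect range: I = O * S
# (circular convolution via FFT for simplicity).
I = np.real(np.fft.ifft2(np.fft.fft2(O) * np.fft.fft2(S)))

# Wiener deconvolution: O_hat = F^-1[ F(I) conj(F(S)) / (|F(S)|^2 + eps) ].
FS = np.fft.fft2(S)
eps = 1e-6  # regularization; in practice set by the noise level
O_hat = np.real(np.fft.ifft2(np.fft.fft2(I) * np.conj(FS) / (np.abs(FS) ** 2 + eps)))

# The two object points dominate the reconstruction.
peaks = np.argsort(O_hat.ravel())[-2:]
print(sorted(tuple(map(int, np.unravel_index(p, (N, N)))) for p in peaks))
# → [(20, 20), (40, 45)]
```

In a noiseless simulation like this, the Wiener filter recovers the object almost exactly; with measured PSFs, the regularizer must be matched to the camera noise.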
It is worth noting that the single wavelength λ presented in this principle can be extended to a spectral band, and different values of λ represent different disjoint bands.
In essence, each spectral PSF [Fig. 1(b)] is not only used for image reconstruction by deconvolution but also plays the role of a spectral filter. More importantly, the single speckle image of an object essentially contains both the spatial and spectral information, which are multiplexed orthogonally via the multispectral PSFs [Fig. 1(a)]. The deconvolution algorithm de-multiplexes and recovers the hidden spectral images. Figure 1(c) presents nine individual spectral images recovered with very high fidelity from the nine corresponding spectral PSFs [Fig. 1(b)]. A full-spectrum image of the object, presented in Fig. 1(d), is obtained by superimposing these multispectral images. An ideal noiseless scenario is taken for the simulation, where nine random PSFs for the nine distinct spectral components were generated independently from a Rayleigh distribution. The final speckle pattern is formed by superposition of the spectral speckles produced via convolution of the spectral objects with their respective spectral PSFs. The details pertaining to the practical scenario of the proposed technique are described in Sections 7–10, Supplement 1.
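The ideal simulation described above can be sketched end to end as follows. This is a reduced version with three spectral bands instead of nine and single-point objects per band; the Rayleigh PSFs, positions, and regularizer are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
N, n_bands = 64, 3

def conv2(a, b):
    # Circular 2D convolution via FFT.
    return np.real(np.fft.ifft2(np.fft.fft2(a) * np.fft.fft2(b)))

def wiener(img, psf, eps=1e-6):
    # Wiener deconvolution with a small regularizer.
    F = np.fft.fft2(psf)
    return np.real(np.fft.ifft2(np.fft.fft2(img) * np.conj(F) / (np.abs(F) ** 2 + eps)))

# Independent Rayleigh-distributed PSFs model spectrally decorrelated speckle.
psfs = [rng.rayleigh(size=(N, N)) for _ in range(n_bands)]
psfs = [p / p.sum() for p in psfs]

# One point object per band, at different positions.
objs = [np.zeros((N, N)) for _ in range(n_bands)]
for k, obj in enumerate(objs):
    obj[10 + 15 * k, 10 + 15 * k] = 1.0

# Single monochromatic camera frame: superposition of per-band convolutions.
frame = sum(conv2(o, p) for o, p in zip(objs, psfs))

# Each spectral PSF acts as a filter: deconvolving the one frame with
# PSF k recovers band k, while the other bands wash out into weak background.
for k in range(n_bands):
    rec = wiener(frame, psfs[k])
    print(k, tuple(map(int, np.unravel_index(np.argmax(rec), (N, N)))))
```

The key point the sketch illustrates is that a single grayscale frame is deconvolved three times, once per spectral PSF, and each pass isolates one band.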
For demonstration, we generate various 2D multispectral objects [Fig. 2(a)] with three spectral bands corresponding to the three primary colors (RGB) of projector display technology (details about the experiments are presented in Section 2, Supplement 1). Three letters, NTU, in red, green, and blue, respectively, are chosen for the first experiment, and their raw speckle pattern as captured by the monochromatic camera is presented in Fig. 2(b). The size of this three-letter object is 1.5 by 0.4 mm, which is well inside the field of view (3.2 mm) of our optical imaging system defined by the memory effect of the scattering medium (Fig. S1, Supplement 1). We display four different colors (RGB and white) on the central pixel of the projector while shutting down all the other pixels to generate effective multispectral point sources. The camera then records their speckle images as the multispectral PSFs of the imaging system. Figure 2(c) (upper) shows the multispectral images recovered by deconvolving the speckle pattern in Fig. 2(b) with the corresponding spectral PSFs. Each spectral PSF successfully reconstructs its respective letter. The broadest spectral PSF (the white PSF) reconstructs all three letters, NTU. This is expected because, in projector display technology, the white pattern and the white PSF are compositions of the three individual RGB patterns and RGB PSFs, respectively. We note that a dim letter U appears in the green spectral image, and a dim letter T appears in the blue spectral image. This is confirmed in the intensity of the central row [Fig. 2(c), lower] and in Figs. S2(a)–S2(c), Supplement 1. We superimpose the three RGB spectral images and achieve the “full color” image in Fig. 2(d), successfully recovering the original multispectral object NTU with the expected magnification.
The cross-talk effect between the green and blue channels in fact comes from a relatively high cross-correlation coefficient of 0.1941 between the blue and green spectral PSFs [Fig. 3(a)]. On the other hand, the small cross-correlation coefficients of the red spectral PSF with the green and blue ones (0.0691 and 0.0255, respectively) contribute only to the background noise of the spectral images. From our mathematical principle in Eq. (1), spectral PSFs play the role of spectral filters, in which the cross-correlation coefficient between two spectral PSFs represents the transmission of one spectral band through the other band's filter. This principle explains the observations in the recovered white spectral image. The brightest letter, T, corresponds to the highest cross-correlation coefficient, between the green and white spectral PSFs (0.8660). Similar cross-correlation coefficients of the white spectral PSF with the red and blue ones (0.4536 and 0.4374, respectively) result in similar intensities for the reconstructed letters N and U.
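The cross-correlation coefficient used throughout this discussion is the standard zero-mean normalized (Pearson) correlation of the two PSF images at zero shift. A minimal sketch, using simulated speckle patterns as stand-ins for measured PSFs (the mixture weight modeling spectral overlap is an illustrative assumption):

```python
import numpy as np

def corrcoef2d(a, b):
    # Zero-mean normalized cross-correlation at zero shift (Pearson r).
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

rng = np.random.default_rng(2)
N = 256

# Two independent speckle patterns (decorrelated "spectral" PSFs) and a
# third built as a mixture, mimicking spectral overlap between bands.
S_red = rng.rayleigh(size=(N, N))
S_green = rng.rayleigh(size=(N, N))
S_blue = 0.5 * S_green + 0.5 * rng.rayleigh(size=(N, N))  # overlaps green

print(round(corrcoef2d(S_red, S_green), 3))   # near 0: disjoint bands
print(round(corrcoef2d(S_green, S_blue), 3))  # sizable: overlapping bands
```

As in the measured PSFs, spectral overlap between bands shows up directly as a nonzero correlation coefficient, which in turn predicts the cross-talk level in the deconvolved images.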
To understand the nature of the cross talk in our PSFs, we analyze the spectra of the different bands produced by our projector [Fig. 3(b)]. The white spectrum is indeed the sum of the three RGB spectral bands. However, the RGB bands are not spectrally disjoint. The blue band has a significant spectral overlap with the green band, explaining their high PSF correlation. There is no spectral overlap between the blue and red bands, making the cross-correlation coefficient of their PSFs negligible. The green spectrum contributes the most intensity to the white spectrum, and, therefore, the green PSF has the highest cross-correlation coefficient with the white PSF. The appearance of the dim letters U and T in the recovered green and blue spectral images, respectively, illustrates the ability of our technique to spatially and spectrally resolve a weak signal from a multispectral object. To demonstrate the spectral filtering role of the PSFs further, we used narrower RGB bandpass filters in front of the camera when measuring the PSFs, to avoid any spectral overlap [Fig. 3(b)]. These narrow spectral PSFs show negligible cross-correlation coefficients [Fig. 3(c)], as expected from their disjoint spectra and the spectral decorrelation effect in our strongly scattering medium. Figure 4(a) shows the reconstructed RGB spectral images, deconvolved from the same speckle image in Fig. 2(b) with the new narrow spectral PSFs. Three clean letters, NTU, appear exactly in their corresponding spectral images without any cross talk, as illustrated by the intensity of the central row [Fig. 4(b)] as well as by the heat-map version [Figs. S2(d)–S2(f), Supplement 1]. More importantly, this demonstration shows that multispectral images in any bands are embedded in the raw speckle image [Fig. 2(b)]; we just need the multispectral PSFs to retrieve the desired multispectral images.
Each optical configuration (an optical diffuser and a monochromatic camera) has unique and unchanging spectral PSFs, which need to be measured only once for all samples and analyses. The spectral response of the imaging system is essentially defined by the camera and the absorption coefficient of the scattering medium. Using our current camera, we can perform single-shot multispectral imaging of any spectral band within a broad range from UV (250 nm) to near IR (1100 nm) just by adding a glass diffuser. These advantages differentiate our technique from existing spectral imaging technologies. Live reconstruction of the multispectral objects from their single-shot grayscale speckle images is shown in Visualization 1. We also demonstrate our multispectral imaging with a fluorescent sample in Section 11, Supplement 1.
Our approach provides the additional dimension of spectrum to a 2D imager, which has an upper bound on the information that can be captured and then segregated. Within this limit, we have to trade off between spatial and spectral information, and between spectral range and spectral resolution. However, the scattering medium gives us the choice of a flexible implementation that utilizes the full capacity of the 2D imager. We can limit the spectral information to a single narrow line and get maximum spatial information, i.e., the maximum number of spatial pixels and dynamic range. The number of active points and the number of decorrelated spectral bands in the object determine the speckle contrast and, therefore, the recovery quality of our technique. As a result, we can easily retrieve a large amount of spatial information with high spectral resolution for sparse spectral lines in a broad spectral range. At the other extreme, we can limit our spatial information to a single point object and then retrieve the highest spectral resolution for the point source. We numerically simulate and present this spectrometer application of our optical system in Fig. S4, Supplement 1. The reconstructed spectra of a white light-emitting diode (LED) at different spectral samplings (including 10 nm) illustrate a potential spectroscopy application of our flexible multispectral imaging device. It is important to note that our multispectral imaging technique is insensitive to optical alignment because there are no moving parts and no focusing optics. Because of the memory effect, a small shift in the optical setup causes only a small shift of the reconstructed multispectral images, and we do not need to retake the multispectral PSFs. Our prototype, with a diffuser and an iris mounted to the camera through a lens tube, has been working well for more than a month after a one-time measurement of the multispectral PSFs at the beginning.
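In the single-point (spectrometer) limit described above, the captured speckle is just a weighted sum of the spectral PSFs, so the spectrum can be estimated by fitting the frame against a dictionary of measured PSFs. The following is a minimal sketch of this idea using least squares; the band count, dictionary, and spectrum weights are made up for illustration and are not the paper's simulation.

```python
import numpy as np

rng = np.random.default_rng(3)
N, n_bands = 64, 8

# Dictionary of decorrelated spectral PSFs, one flattened PSF per column.
D = rng.rayleigh(size=(N * N, n_bands))

# Unknown spectrum of a point source (illustrative weights).
w_true = np.array([0.0, 0.2, 1.0, 0.4, 0.0, 0.0, 0.6, 0.1])

# Captured speckle of the point source: weighted sum of spectral PSFs.
speckle = D @ w_true

# Recover the spectrum by least squares against the PSF dictionary.
w_est, *_ = np.linalg.lstsq(D, speckle, rcond=None)
print(np.round(w_est, 3))  # recovers w_true in this noiseless case
```

Because decorrelated speckle PSFs are nearly orthogonal, the fit is well conditioned; with camera noise, a regularized or non-negative solver would replace the plain least squares.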
The level of spectral decorrelation and the spatial correlation effect, together with the resolution (pixel size and pixel count), dynamic range, and noise level of the monochromatic camera, define the performance of our technique (Sections 7–10, Supplement 1). Further investigation of new scattering media with stronger spectral decorrelation and spatial correlation will improve this cost-effective technology significantly and broaden its impact across many important applications.
We demonstrate a multispectral imaging technique that requires just adding a strongly scattering medium in front of a monochromatic camera. A single-shot speckle pattern essentially contains the spatial and spectral information of the object. Multispectral images in any spectral bands can be retrieved with the corresponding spectral PSFs. These spectral PSFs are fixed and need to be recorded only once for all future use. The deconvolution algorithm utilizes the shift invariance (i.e., spatial correlation) of the PSFs for image reconstruction, while the orthogonality between the spectral PSFs (i.e., spectral decorrelation) makes them play the role of spectral filters. Our demonstrations present a simple technique for spectroscopy and multispectral imaging that allows flexible trade-offs between spatial and spectral information, as well as between spectral range and spectral resolution, to be captured and retrieved in a single gray-scale image.
Singapore Ministry of Health’s National Medical Research Council (NMRC) (CBRG-NIG (NMRC/BNIG/2039/2015)); Ministry of Education—Singapore (MOE) (MOE-AcRF Tier-1 (RG70/15)); Nanyang Technological University (NTU).
We would like to thank Yu Tian, Vinh Tran, and Dr. Balasubramanian Padmanabhan for fruitful discussions and useful feedback.
See Supplement 1 for supporting content.
1. H. Park and K. B. Crozier, “Multispectral imaging with vertical silicon nanowires,” Sci. Rep. 3, 2460 (2013). [CrossRef]
2. J. W. Stewart, G. M. Akselrod, D. R. Smith, and M. H. Mikkelsen, “Toward multispectral imaging with colloidal metasurface pixels,” Adv. Mater. 29, 1602971 (2017). [CrossRef]
3. M. J. S. Belton, R. Greeley, R. Greenberg, P. Geissler, A. McEwen, K. P. Klaasen, C. Heffernan, H. Breneman, T. V. Johnson, J. W. Head, C. Pieters, G. Neukum, C. R. Chapman, C. Anger, M. H. Carr, M. E. Davies, F. P. Fanale, P. J. Gierasch, W. R. Thompson, J. Veverka, C. Sagan, A. P. Ingersoll, and C. B. Pilcher, “Galileo multispectral imaging of the north polar and eastern limb regions of the moon,” Science 264, 1112–1115 (1994). [CrossRef]
4. S. Rapinel, L. Hubert-Moy, and B. Clément, “Combined use of LiDAR data and multispectral earth observation imagery for wetland habitat mapping,” Int. J. Appl. Earth Observ. Geoinf. 37, 56–64 (2015). [CrossRef]
5. W. Li, W. Mo, X. Zhang, J. J. Squiers, Y. Lu, E. W. Sellke, W. Fan, J. M. DiMaio, and J. E. Thatcher, “Outlier detection and removal improves accuracy of machine learning approach to multispectral burn diagnostic imaging,” J. Biomed. Opt. 20, 121305 (2015). [CrossRef]
6. L. E. MacKenzie, T. R. Choudhary, A. I. McNaught, and A. R. Harvey, “In vivo oximetry of human bulbar conjunctival and episcleral microvasculature using snapshot multispectral imaging,” Exp. Eye Res. 149, 48–58 (2016). [CrossRef]
7. D.-W. Sun, Computer Vision Technology for Food Quality Evaluation (Academic, 2016).
8. C. Zhang, M. Rosenberger, A. Breitbarth, and G. Notni, “A novel 3D multispectral vision system based on filter wheel cameras,” in IEEE International Conference on Imaging Systems and Techniques (IST) (2016), pp. 267–272.
9. J. Bao and M. G. Bawendi, “A colloidal quantum dot spectrometer,” Nature 523, 67–70 (2015). [CrossRef]
10. A. Orth, M. J. Tomaszewski, R. N. Ghosh, and E. Schonbrun, “Gigapixel multispectral microscopy,” Optica 2, 654–662 (2015). [CrossRef]
11. L. Gao, R. T. Kester, N. Hagen, and T. S. Tkaczyk, “Snapshot image mapping spectrometer (IMS) with high sampling density for hyperspectral microscopy,” Opt. Express 18, 14330–14344 (2010). [CrossRef]
12. M. E. Gehm, R. John, D. J. Brady, R. M. Willett, and T. J. Schulz, “Single-shot compressive spectral imaging with a dual-disperser architecture,” Opt. Express 15, 14013–14027 (2007). [CrossRef]
13. W. Huang, J. Li, Q. Wang, and L. Chen, “Development of a multispectral imaging system for online detection of bruises on apples,” J. Food Eng. 146, 62–71 (2015). [CrossRef]
14. C. D. Tran, “Development and analytical applications of multispectral imaging techniques: an overview,” Fresenius’ J. Anal. Chem. 369, 313–319 (2001). [CrossRef]
15. J. W. Goodman, Speckle Phenomena in Optics: Theory and Applications (Roberts & Company, 2007).
16. M. Gu, X. Gan, and X. Deng, Microscopic Imaging Through Turbid Media (Springer, 2015).
17. J. A. Newman, Q. Luo, and K. J. Webb, “Imaging hidden objects with spatial speckle intensity correlations over object position,” Phys. Rev. Lett. 116, 073902 (2016). [CrossRef]
18. P. Godara, A. M. Dubis, A. Roorda, J. L. Duncan, and J. Carroll, “Adaptive optics retinal imaging: emerging clinical applications,” Optom. Vis. Sci. 87, 930–941 (2010). [CrossRef]
19. O. Katz, E. Small, and Y. Silberberg, “Looking around corners and through thin turbid layers in real time with scattered incoherent light,” Nat. Photonics 6, 549–553 (2012). [CrossRef]
20. D. R. Larson, W. R. Zipfel, R. M. Williams, S. W. Clark, M. P. Bruchez, F. W. Wise, and W. W. Webb, “Water-soluble quantum dots for multiphoton fluorescence imaging in vivo,” Science 300, 1434–1436 (2003). [CrossRef]
21. L. Liu, J. A. Gardecki, S. K. Nadkarni, J. D. Toussaint, Y. Yagi, B. E. Bouma, and G. J. Tearney, “Imaging the subcellular structure of human coronary atherosclerosis using micro-optical coherence tomography,” Nat. Med. 17, 1010–1014 (2011).
22. J.-H. Park, C. Park, H. Yu, J. Park, S. Han, J. Shin, S. H. Ko, K. T. Nam, Y.-H. Cho, and Y. Park, “Subwavelength light focusing using random nanoparticles,” Nat. Photonics 7, 454–458 (2013). [CrossRef]
23. C. Park, J.-H. Park, C. Rodriguez, H. Yu, M. Kim, K. Jin, S. Han, J. Shin, S. H. Ko, K. T. Nam, Y.-H. Lee, Y.-H. Cho, and Y. Park, “Full-field subwavelength imaging using a scattering superlens,” Phys. Rev. Lett. 113, 113901 (2014). [CrossRef]
24. H. Yilmaz, E. G. van Putten, J. Bertolotti, A. Lagendijk, W. L. Vos, and A. P. Mosk, “Speckle correlation resolution enhancement of wide-field fluorescence imaging,” Optica 2, 424–429 (2015). [CrossRef]
25. J. Bertolotti, E. G. van Putten, C. Blum, A. Lagendijk, W. L. Vos, and A. P. Mosk, “Non-invasive imaging through opaque scattering layers,” Nature 491, 232–234 (2012). [CrossRef]
26. I. Freund, M. Rosenbluh, and S. Feng, “Memory effects in propagation of optical waves through disordered media,” Phys. Rev. Lett. 61, 2328–2331 (1988). [CrossRef]
27. O. Katz, P. Heidmann, M. Fink, and S. Gigan, “Non-invasive single-shot imaging through scattering layers and around corners via speckle correlations,” Nat. Photonics 8, 784–790 (2014). [CrossRef]
28. A. Porat, E. R. Andresen, H. Rigneault, D. Oron, S. Gigan, and O. Katz, “Widefield lensless imaging through a fiber bundle via speckle correlations,” Opt. Express 24, 16835–16855 (2016). [CrossRef]
29. E. Edrei and G. Scarcelli, “Optical imaging through dynamic turbid media using the Fourier-domain shower-curtain effect,” Optica 3, 71–74 (2016). [CrossRef]
30. A. K. Singh, D. N. Naik, G. Pedrini, M. Takeda, and W. Osten, “Exploiting scattering media for exploring 3D objects,” Light Sci. Appl. 6, e16219 (2017). [CrossRef]
31. E. Edrei and G. Scarcelli, “Memory-effect based deconvolution microscopy for super-resolution imaging through scattering media,” Sci. Rep. 6, 33558 (2016). [CrossRef]
32. H. Zhuang, H. He, X. Xie, and J. Zhou, “High speed color imaging through scattering media with a large field of view,” Sci. Rep. 6, 32696 (2016). [CrossRef]
33. B. Redding, S. F. Liew, R. Sarma, and H. Cao, “Compact spectrometer based on a disordered photonic chip,” Nat. Photonics 7, 746–751 (2013). [CrossRef]
34. M. Chakrabarti, M. L. Jakobsen, and S. G. Hanson, “Speckle-based spectrometer,” Opt. Lett. 40, 3264–3267 (2015). [CrossRef]
35. M. Mazilu, T. Vettenburg, A. Di Falco, and K. Dholakia, “Random super-prism wavelength meter,” Opt. Lett. 39, 96–99 (2014). [CrossRef]