Optica Publishing Group

Compressive imaging in scattering media

Open Access

Abstract

One challenge that has long held the attention of scientists is that of clearly seeing objects hidden by turbid media, such as smoke, fog or biological tissue, which has major implications in fields such as remote sensing and the early diagnosis of diseases. Here, we combine structured incoherent illumination and bucket detection for imaging an absorbing object completely embedded in a scattering medium. A sequence of low-intensity microstructured light patterns is launched onto the object, whose image is accurately reconstructed from the light fluctuations measured by a single-pixel detector. Our technique is noninvasive, does not require coherent sources, raster scanning or time-gated detection, and benefits from a compressive sensing strategy. As a proof of concept, we experimentally retrieve the image of a transilluminated target both sandwiched between two holographic diffusers and embedded in a 6 mm-thick sample of chicken breast.

© 2015 Optical Society of America

1. Introduction

The most widespread principle for high-resolution imaging through inhomogeneous media is the isolation of photons that have not experienced scattering. This idea is at the heart of approaches as diverse as scanning multiphoton microscopy [1] and imaging techniques based on time-resolved [2], coherence-gated [3] and polarization-sensitive detection [4]. The penetration depth of these methods, irrespective of their operation principle, is limited by the fact that the intensity of the unscattered light decreases exponentially with distance. As a consequence, imaging inside a turbid medium becomes difficult or unfeasible at penetration depths larger than the transport mean free path, the mean distance that photons travel before they become diffuse [5]. Imaging techniques that model the random propagation of diffuse photons, such as multispectral optoacoustic tomography [6] or hybrid fluorescence molecular tomography [7], can reach greater penetration depths (>1 cm in tissue), but they only enable macroscopic imaging due to their relatively low resolution [8]. Recently, a non-invasive approach that uses diffuse photons while preserving optical resolution has been demonstrated. This method is based on the angular correlation (“memory effect”) inherent to the speckle patterns generated by photon scattering [9]. However, current implementations are restricted to thin scattering media [10], since the memory-effect range is inversely proportional to the medium thickness.

In this paper, we present a “scattering-free” wide-field imaging approach based on the concept of the single-pixel camera [11–15]. It combines variable structured illumination, coming from an incoherent light source, with non-pixelated “bucket” photodetection. Spatially modulated illumination has previously been used in diffuse optical imaging (DOI) for quantitative mapping of the scattering properties of an inhomogeneous medium [16, 17]. Such a DOI approach assumes a transport-model-based analysis to describe diffuse photon propagation (forward problem) and provides results after an elaborate reconstruction algorithm (inverse problem). Instead, our method relies on the “image-bearing” light component that preserves the illumination structure along the propagation path. The interaction between this component, which is masked by a strong diffuse background, and an absorbing object embedded in the scattering medium produces a fluctuating bucket signal. The object image is simply reconstructed from the correlations between the measured light fluctuations and the sequentially generated illumination patterns. In this way, a detailed characterization of the scattering parameters of the medium is not required.

Our technique uses non-diffuse photons at the illumination stage, but takes advantage of the total photon flux at the detection stage. This feature distinguishes our single-pixel camera from time-gating or coherence imaging techniques, where the useful signal is composed exclusively of “image-bearing” photons arriving at the detector after passing through a scattering medium. In that case, the strong decay of this useful signal over a round trip dramatically limits the medium thickness. In contrast, our approach works even though the weakly scattered photons transmitted by the object eventually become diffuse on the way out. Recently, we demonstrated that single-pixel detection of diffuse photons is crucial for image transmission through scattering media, even when they are dynamic [18]. However, in our previous experiments we considered an illumination stage that was free of scattering. Here, we overcome this strong limitation, demonstrating non-invasive imaging of an absorbing object completely embedded in an inhomogeneous medium. Such an advance represents a significant step towards the use of single-pixel imaging in more realistic scenarios, especially in biophotonics applications.

As a proof of concept, we present experimental results for an object hidden by a pair of holographic diffusers. To test the potential of our approach for imaging in tissue, these diffusers are then replaced by two 3 mm-thick layers of chicken breast. For such a tissue thickness, multiple scattering is the predominant effect. Image reconstructions inside the “diagnostic window” (from 650 nm to 1000 nm) are presented. In order to harness the advantages offered by single-pixel imaging, we apply compressive sensing to speed up the data acquisition process.

2. Single-pixel imaging in scattering media

Our single-pixel scheme is sketched in Fig. 1(a). Spatially and temporally incoherent light coming from a white-light source impinges onto a digital micromirror device (DMD), which is composed of an array of electronically controlled micromirrors that can tilt between two angular directions. The DMD produces a set of illumination patterns, which are projected by an optical relay system onto a high-contrast object embedded in a non-absorbing inhomogeneous medium. The light that hits the object consists of two superimposed components: a strong diffuse halo, the result of averaging many uncorrelated and noninterfering speckle patterns generated by the incoherent source, plus a forward-scattered weak signal, which is a “ghost” illumination pattern with a spatial structure similar to the light pattern “sculpted” by the DMD (Fig. 1(b)). Finally, in our transillumination geometry, the light emerging from the back portion of the scattering medium is concentrated by a collecting lens onto the large active surface of a bucket detector, here a photodiode (PD).

Fig. 1 Operation principle. (a) Schematic of the optical setup. Upper left inset: examples of projected patterns. Upper right inset: weighted superposition of the diffuse background and the illumination pattern. The contribution of the latter has been artificially increased to make it visible. Lower inset: binary amplitude object. (b) A light-ray representation corresponding to a bright point of an illumination pattern. For a medium that produces multiple scattering, photons are randomly deviated from the directions they would follow in a homogeneous medium (dashed lines). These diffuse light rays are colored red in our scheme. This forms a diffuse halo on the object plane, as rays arrive at multiple positions instead of the desired one. However, a fraction of the incoming rays follows a zigzagging path close to the dashed one, even after several interactions with the scattering centres. These quasi-ballistic or “snake” rays are colored green in our scheme. If such rays are transmitted by the object, they eventually become diffuse as a consequence of the second scattering process. Detection consists of collecting all the rays emerging from the ensemble in any direction.

The purpose of the above optical system is to obtain the correlation between the weak structured patterns sampling the object and the tiny fluctuations that appear in the bucket signal. Such a correlation allows one to get rid of the prevalent noisy background interacting with the object due to scattering. As the DMD projects onto the object a sequence of M masks {I_i(x)} (i = 1,…,M), the intensity at each position x of the object fluctuates from one pattern to the next. In turn, the bucket detector measures a signal {Y_i}, where each Y_i is a photocurrent proportional to the total light power emerging from the sample for the i-th mask. The image T(x) is then built from the spatially resolved cross-correlation T(x) = Σ_i ΔI_i(x) ΔY_i between the pattern-to-pattern intensity fluctuations, ΔI_i(x) = I_i(x) − ⟨I(x)⟩, and those registered by the bucket detector, ΔY_i = Y_i − ⟨Y⟩. Interestingly, for a given illumination pattern, the bucket detector provides a weighted sum of the fluctuations arising from all the pixels of the pattern, and the weights of this sum correspond to the object’s transmission at each spatial position. Hence, when the pattern-to-pattern fluctuations that occur at each pixel are correlated with the photocurrent signal provided by the bucket detector, the strength of the correlation is proportional to the object’s transmission at the corresponding position. This result is not affected by the diffuse background, since it is uncorrelated with the “ghost” structured pattern that impinges onto the object.
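This correlation reconstruction can be sketched numerically. The following minimal Python example (all names and parameter values are illustrative, not taken from the experiment) simulates a binary object sampled by random binary masks and recovers its image from a single “bucket” signal:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 16                     # image side; the object has N = n*n pixels
N, M = n * n, 2000         # pixels and number of projected masks

# hypothetical binary amplitude object: a bright square on a dark background
T = np.zeros((n, n))
T[5:11, 5:11] = 1.0
T = T.ravel()

# random binary illumination masks I_i(x), a stand-in for the DMD patterns
I = rng.integers(0, 2, size=(M, N)).astype(float)

# bucket signal: each Y_i is the total light power transmitted for mask i
Y = I @ T

# correlate pattern-to-pattern fluctuations with the bucket fluctuations
dI = I - I.mean(axis=0)
dY = Y - Y.mean()
T_rec = dI.T @ dY / M      # proportional to the object's transmission T(x)

# the reconstruction is strongly correlated with the true object
corr = float(np.corrcoef(T_rec, T)[0, 1])
print(corr > 0.8)
```

With random masks the recovery is only statistical; with the Walsh-Hadamard masks used in the paper it becomes exact, as the algebra below shows.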

Throughout the paper, we consider a reconstruction basis formed by a set of 2D Walsh-Hadamard matrices {I_i(x)}_{i=1}^{N}, whose entries are either +1 or −1. The entire reconstruction basis can be expressed through a Hadamard matrix H,

H = \begin{pmatrix} I_1(x_1) & \cdots & I_1(x_N) \\ \vdots & \ddots & \vdots \\ I_N(x_1) & \cdots & I_N(x_N) \end{pmatrix},   (1)

where each row of H is a Walsh-Hadamard matrix rearranged in the form of a 1 × N vector. Note that I_i(x_j) denotes the element of the i-th component of the Walsh-Hadamard basis placed at the position x_j. Taking into account the definition of the Hadamard matrices, it is straightforward to show that

H Hᵀ = N Id,   (2)

where Id is the identity matrix of rank N. If we represent the amplitude transmission function of the object by an N × 1 vector T, the sequence of measurements taken by the bucket detector is given by

Y = H T.   (3)

The cross-correlation (at zero shift) between the pattern-to-pattern fluctuations and those registered in the bucket detector signal is given by

corr[ΔI(x), ΔY] = Σ_{i=1}^{N} ΔI_i(x) ΔY_i,   (4)

where ΔI_i(x) = I_i(x) − ⟨I(x)⟩ and ΔY_i = Y_i − ⟨Y⟩. Here, the angle brackets ⟨·⟩ ≡ (1/N) Σ_i denote an ensemble average over the entire set of measurements, and the position x takes N discrete values. From the properties of the Walsh-Hadamard matrices, Σ_{i=1}^{N} I_i(x) turns out to be a vector composed of zeros, with the exception of the first component, which takes the value N. This exception is due to the fact that the first Walsh-Hadamard matrix is composed exclusively of ones. Aside from this, the only remaining term in the above cross-correlation is Σ_{i=1}^{N} I_i(x) Y_i = Hᵀ Y and, in accordance with Eqs. (3) and (2), Hᵀ Y = N T. As a consequence,

corr[ΔI(x), ΔY] = N T*,   (5)

where T* is a version of the object’s image whose first “pixel” has an additional constant term proportional to N, which can be easily corrected. In conclusion, the cross-correlation expressed in Eq. (5) allows one, except for an irrelevant scale factor, to retrieve the object’s transmission T.
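These algebraic properties are easy to verify numerically. The sketch below (a standalone illustration; the Sylvester recursion is one standard way of building H) checks the orthogonality relation of Eq. (2) and the exact recovery Hᵀ Y = N T for a random transmission vector:

```python
import numpy as np

def hadamard(N):
    """Sylvester construction of the N x N Hadamard matrix (N a power of 2)."""
    H = np.array([[1.0]])
    while H.shape[0] < N:
        H = np.block([[H, H], [H, -H]])
    return H

N = 64
H = hadamard(N)

# orthogonality of the Hadamard basis: H H^T = N Id, as in Eq. (2)
assert np.allclose(H @ H.T, N * np.eye(N))

# simulated bucket measurements Y = H T, as in Eq. (3)
rng = np.random.default_rng(1)
T = rng.random(N)
Y = H @ T

# correlation recovery: H^T Y / N returns the transmission vector exactly
T_rec = H.T @ Y / N
print(np.allclose(T_rec, T))
```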

In our approach, the object is completely embedded in a scattering medium. The light impinging onto the object is composed of a weak structured component with a strong superimposed diffuse background. Such a noisy background, in principle, does not prevent the measurement of the bucket signal fluctuations that lead to the image reconstruction. Let us consider the measured data as the sum of a signal S plus a random noise n, Y = S + n. The cross-correlation expressed in Eq. (4) can then be written as

corr[ΔI(x), ΔY] = corr[ΔI(x), ΔS] + Σ_{i=1}^{N} I_i(x) (n_i − ⟨n⟩).   (6)

As I and n are completely uncorrelated with each other,

Σ_{i=1}^{N} I_i(x) n_i = (Σ_{i=1}^{N} I_i(x)) (Σ_{i=1}^{N} n_i) = N² ⟨I(x)⟩ ⟨n⟩.   (7)

Substituting Eq. (7) into Eq. (6) gives

corr[ΔI, ΔY] = corr[ΔI, ΔS] + (N² − N) ⟨I(x)⟩ ⟨n⟩.   (8)

Taking into account that the ensemble average of the Walsh-Hadamard matrices is null,

corr[ΔI, ΔY] = corr[ΔI, ΔS],   (9)

that is, the cross-correlation between ΔI and the fluctuations of the bucket signal, ΔS, is insensitive to the diffuse background noise. Therefore, the object’s image can be recovered provided that the sequence of fluctuations ΔS (which originate from the structured illumination) can be resolved by the detector employed.
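The insensitivity expressed by Eq. (9) can also be checked directly. In the sketch below (a crude model, for illustration only), a constant diffuse offset is added to every bucket measurement; the fluctuation correlation is unchanged, because the offset cancels exactly when the mean is subtracted:

```python
import numpy as np

def hadamard(N):
    """Sylvester construction of the N x N Hadamard matrix (N a power of 2)."""
    H = np.array([[1.0]])
    while H.shape[0] < N:
        H = np.block([[H, H], [H, -H]])
    return H

N = 64
H = hadamard(N)
rng = np.random.default_rng(2)
T = rng.random(N)

def fluct_corr(Y, H):
    """corr[dI(x), dY] of Eq. (4): correlate pattern and bucket fluctuations."""
    dI = H - H.mean(axis=0)
    dY = Y - Y.mean()
    return dI.T @ dY

clean = fluct_corr(H @ T, H)
# strong constant diffuse background added to every bucket measurement
noisy = fluct_corr(H @ T + 1e4, H)

print(np.allclose(clean, noisy))   # the background drops out exactly
```

A randomly varying (but pattern-uncorrelated) background is not removed exactly in a single finite realization; as Eq. (9) indicates, it only averages out over the ensemble of measurements.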

Our approach can be reformulated in the framework of computational single-pixel imaging, where the N-pixel image T of an object is recovered from its M incoherent projections onto a proper function basis. This formulation leads to the algebraic problem Y = Θ · T, where Θ is a matrix constructed by arranging each illuminating pattern as a successive row. To form a completely determined set of measurements, the rank of Θ must equal the object’s data dimension. Our procedure requires the sequential delivery of N illumination patterns, so the refresh rate of the DMD limits the system’s temporal resolution (i.e., the time necessary to acquire one image). To speed up the data acquisition, we can take advantage of the theory of compressive sensing (CS), which makes it possible to recover an N-pixel image from M < N measurements [19], as shown in Sec. 3. Another aspect to be considered is the choice of the multiplexing masks, which is key to achieving high-fidelity imaging. We implement patterns derived from the Hadamard matrices [15], which have been proven to improve the signal-to-noise ratio (SNR) provided by standard raster scanning by a factor of N [20]. In addition, the efficiency of our scheme benefits from the use of a single-element detector that simultaneously detects all the photons transmitted by the object, instead of spreading them out over an array of sensors in a pixelated device.

3. Methods and experimental results

We used the experimental setup shown schematically in Fig. 1(a). The light source was a high-power Xe lamp (Model 66002, Oriel Corporation). The SLM was a DMD with a pixel pitch of 13.7 μm (DLP Discovery 4200, Texas Instruments). The unit cell of the binary intensity patterns displayed on the modulator was composed of 4 × 4 DMD pixels. Two thin lenses projected the illuminating patterns onto the object and enlarged them with a lateral magnification of 2.7, so the side length of the unit cell became 148 μm. An optical collecting system formed by two condenser lenses (25.4 mm × 25.4 mm FL Condenser Lens, Edmund) ensured that the light was focused within the sensor area (13 mm²) of a photodiode (DET36A Si Biased Detector, Thorlabs). An analog-to-digital converter (NI USB-6003, National Instruments) digitized the photodetected signal. Custom software written in LabVIEW was used to control the measurement process.

To assess the capability of our technique for imaging through scattering media, we retrieved an image of a binary amplitude object sandwiched between two thin holographic diffusers (Edmund Optics #T48–002 and #T54–497). The object was a square portion of a resolution target (NBS 1963A) with a size of about 1 cm. The projected binary amplitude patterns were shifted and rescaled versions of the Walsh-Hadamard matrices. The number of illumination patterns was N = 4096, so the resolution of the recovered images was 64 × 64 pixels. In order to take a conventional picture, we removed both the photodiode and the collecting lens. The object plane was imaged via a photographic objective (Nikkor 28mm f/2.8, Nikon) onto a region of 280 × 280 pixels of a digital camera (UI-3480CP-M, Imaging Development Systems). Both conventional and single-pixel pictures are presented in Fig. 2. Figure 2(a) shows the spectrum of our Xe lamp. We used a set of dichroic filters to select three spectral bands in the green (Green Dichroic Filter #46–139, Edmund), red (Red Dichroic Filter #30–634, Edmund) and infrared (Colored Glass Filter RDG665, Melles Griot). Even though the conventional images have a higher resolution, the halo originated by the thin diffusers makes it impossible to recover any spatial information about the scene. Meanwhile, the single-pixel approach allows us to recover the object information. We also studied the spectral behavior of the diffusers. As can be seen in Fig. 2(b), our technique is able to recover information in all three bands with very similar quality. As a consequence, it can work with the full spectrum delivered by the light source.

Fig. 2 (a), Power spectrum of the Xe lamp used in the experiments. In order to select different spectral bands, several combinations of dichroic filters were used. (b), Set of pictures captured both with a traditional camera and the single-pixel configuration when the object is sandwiched between two holographic diffusing layers. The pseudocolored picture (last column) corresponds to the full spectrum image (without filters). The other pictures show the results in the green, red and infrared bands, respectively.

Once the method had been proved to work with holographic diffusers, we also tried to recover an object embedded in a volume diffuser. In this experiment, we chose chicken breast tissue as the biological scattering medium. The results can be seen in Fig. 3. A reference image of the object is shown in Fig. 3(a). The object was sandwiched between two layers of chicken breast with thicknesses of 2.84 mm and 2.92 mm, respectively (Fig. 3(b)). No chemical or mechanical processing was applied to the tissue layers. They were placed between two transparent plastic films in order to fix them to the object under study. The image reconstruction is presented in Fig. 3(c). A sample of the measured light fluctuations, corresponding to the first 500 Hadamard patterns, is shown in Fig. 3(d). Such fluctuations make it possible to reconstruct the object image, as can be observed by comparing them with those obtained with the object immersed in air.

Fig. 3 Imaging inside a biological tissue: experimental results. (a), Single-pixel reconstruction of a binary amplitude object. For this reconstruction, the object was immersed in air. (b), Photograph of one of the 3-mm samples of chicken breast that were used in this experiment. (c), Image reconstruction when the object was embedded in chicken breast. (d), Normalized intensity fluctuations corresponding to the first 500 Hadamard patterns used to recover the object in (a) and (c).

The capability of our approach for imaging through tissue across the light source spectrum is analyzed in Fig. 4. In this case, the object was placed between two chicken samples with thicknesses of 2.4 mm and 2.9 mm, respectively. In thick biological tissues, photons undergo multiple scattering as they migrate through the medium. This multiple scattering mechanism is quite different from that observed in the thin holographic diffusers. Now, the reduced scattering coefficient (the inverse of the transport mean free path) varies across the spectrum. For chicken breast, it decreases smoothly with increasing wavelength [21]. As a consequence, the retrieved images degrade progressively towards the shorter wavelengths of the visible spectrum, as can be observed in Fig. 4. For the millimeter tissue thicknesses considered here, multiple scattering dominates over optical absorption, which means that a collimated beam impinging onto the tissue is essentially scattered out of the incident direction [22]. The improvement of the image reconstructions from 650 nm onwards shows the potential of our approach for imaging in the tissue optical window (650–1350 nm), where light reaches its maximum penetration depth [5].

Fig. 4 Spectral study of a biological tissue. From the left to the right column: images for green, red and infrared bands. The reconstruction for the full source spectrum is also shown in the last column (pseudocolored image).

Another feature of our single-pixel camera is that it can take advantage of the theory of CS. With this technique, we are able to sample the scene by projecting only a fraction of the employed basis of functions. This procedure can speed up the acquisition process without notably decreasing the quality of the reconstruction. The fundamentals of single-pixel imaging by CS can be briefly presented as follows. Let us consider an object whose N-pixel image is arranged in an N × 1 column vector x. This image is assumed to be compressible when it is expressed in a basis of functions Ψ = {Ψ_l} (l = 1,…,N). From a mathematical point of view, x can be written as x = Ψ · s, where Ψ is an N × N matrix that has the vectors {Ψ_l} as columns and s is the N × 1 vector composed of the expansion coefficients in the above basis. We assume that the image is sparse in Ψ, which implies that only a small group of the expansion coefficients is nonzero. In order to determine x, we implement an experimental system that generates a set of M light patterns φ_m (m = 1,…,M) of N-pixel resolution. These patterns enable us to measure a subset of M projections of the object onto the basis Ψ. Mathematically, such subsampling is expressed through an M × N sensing matrix Φ, whose rows are elements of the basis Ψ. The CS acquisition process can then be written as [19]

y = Φ x = Φ (Ψ s) = Θ s,   (10)

where y is the M × 1 vector formed by the measured projections and the product of Φ and Ψ gives an M × N matrix Θ acting on the Ψ domain. The formalism of CS states that there is a high probability of reconstructing x when a random subsampling in the Ψ basis is carried out. In other words, the key point is that the matrix Θ “picks” a subset of the object projections at random. As Eq. (10) constitutes an underdetermined matrix relation (M < N), it must be solved by means of a proper reconstruction algorithm. The best-known strategy for this step is based on the minimization of the l1-norm of s subject to the restriction given by Eq. (10). As the measurements y = {y_m} are affected by noise, the CS recovery process is usually reformulated with inequality constraints [11, 19]. In this case, the proposed reconstruction x* is given by x* = Ψ · s*, where s* is the solution of the optimization program

min ‖s‖_{l1}   such that   ‖y − Θ s‖_{l2} < ε,   (11)

where ε is an upper bound on the noise magnitude and the l2-norm is used to express the measurement restriction.
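Recovery from M < N measurements can be illustrated with a small numerical sketch. Here, orthogonal matching pursuit serves as a simple greedy stand-in for the l1 program of Eq. (11) (the Gaussian sensing matrix, sparsity level and all parameter values are illustrative; the paper itself uses the l1eq-pd solver):

```python
import numpy as np

rng = np.random.default_rng(3)
N, M, K = 128, 60, 5       # signal length, measurements (M < N), sparsity

# K-sparse coefficient vector s and a random Gaussian sensing matrix Theta
s = np.zeros(N)
support = rng.choice(N, size=K, replace=False)
s[support] = rng.standard_normal(K)
Theta = rng.standard_normal((M, N)) / np.sqrt(M)
y = Theta @ s              # compressive measurements, as in Eq. (10)

def omp(Theta, y, K):
    """Orthogonal matching pursuit: greedily pick atoms, refit by least squares."""
    residual, idx = y.copy(), []
    coef = np.array([])
    for _ in range(K):
        idx.append(int(np.argmax(np.abs(Theta.T @ residual))))
        A = Theta[:, idx]
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        residual = y - A @ coef
    s_hat = np.zeros(Theta.shape[1])
    s_hat[idx] = coef
    return s_hat

s_hat = omp(Theta, y, K)
print(np.allclose(s_hat, s, atol=1e-6))
```

For sufficiently sparse signals and enough random measurements, the greedy solution coincides with the l1 solution with high probability; robust noise handling is precisely why the inequality-constrained form of Eq. (11) is preferred in practice.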

In order to test these ideas, we repeated the first experiment with the holographic diffusers for different compression ratios (CR), given by CR = N/M. The fidelity of the reconstructed images was estimated by calculating the mean square error (MSE),

MSE = (1/N) Σ_{i,j} [I(i,j) − I_ref(i,j)]²,   (12)

where I(i, j) is the lossy image obtained for a given CR and I_ref(i, j) is the non-compressed reference image. The quality of the reconstructed images was quantified by the peak signal-to-noise ratio (PSNR), defined as

PSNR = 10 log₁₀ (I²_max / MSE),   (13)
where I_max is the maximum possible pixel value of the reference image. Figure 5(a) illustrates that compression ratios around CR = 2 can be achieved while the PSNR of the recovered picture remains higher than 20 dB, indicating high image fidelity. In Fig. 5(b) we show a comparison between the picture obtained without CS and a set of pictures recovered with different CR. Without a significant loss in quality, CS can reduce the acquisition time by at least a factor of 2. The reconstruction code used to obtain the optimal solution was the so-called l1eq-pd, which solves the standard basis-pursuit problem using a primal-dual algorithm [23]. This code belongs to a collection of MATLAB routines and is a well-tested algorithm for CS problems. However, other choices are possible and, in fact, the search for improved CS algorithms (more robust to data noise, with lower computation time, etc.) is an active area in the field of convex optimization [24].
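The two figures of merit of Eqs. (12)–(13) can be computed as follows (a generic sketch; the array names and noise level are illustrative):

```python
import numpy as np

def mse(img, ref):
    """Mean square error of Eq. (12), averaged over all pixels."""
    return float(np.mean((img - ref) ** 2))

def psnr(img, ref, i_max=None):
    """Peak signal-to-noise ratio of Eq. (13), in dB."""
    if i_max is None:
        i_max = ref.max()   # maximum possible pixel value of the reference
    return 10.0 * np.log10(i_max ** 2 / mse(img, ref))

# illustrative reference image and a slightly degraded reconstruction
rng = np.random.default_rng(4)
ref = rng.random((64, 64))
lossy = np.clip(ref + 0.01 * rng.standard_normal(ref.shape), 0.0, 1.0)

print(psnr(lossy, ref) > 20.0)   # above the 20 dB fidelity threshold of the text
```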

Fig. 5 Compressive sensing study. (a), Plot of the quality of the recovered pictures in dB as a function of the compression ratio. (b), Comparison between the reference image (without compression) and five of the pictures with different compression ratios. From left to right: reference picture, CR = 2, CR = 2.5, CR = 3.3, CR = 5 and CR = 10.

4. Conclusions

In conclusion, we have presented a technique that merges Hadamard illumination and single-pixel photodetection for non-invasive imaging through scattering media. In our approach, images free of scattering noise are reconstructed from the correlation between a set of illumination patterns and the corresponding intensity fluctuations, which are measured by a “bucket” detector. We have obtained results for an absorbing object sandwiched between two opaque scattering layers, as well as for the same object embedded in a millimeter-thick sample of chicken breast tissue, where the light experienced multiple scattering. The spectral behavior of the scattering medium has been analyzed in both cases, demonstrating imaging through tissue slices inside the “therapeutic window”. The above goals have been reached without increasing the cost and complexity of the imaging system, as is evidenced by the use of a white-light lamp and an off-the-shelf DMD. In our optical scheme, image resolution and field of view are controlled by the DMD parameters, and the data acquisition process can be sped up by the application of compressive sensing. In addition, the freedom to select a proper bucket detector could enable hyperspectral or polarimetric imaging even in the presence of scattering media [25].

Acknowledgments

This work was supported in part by MINECO (grant FIS2013-40666-P), Generalitat Valenciana (grants PROMETEO2012-021 and ISIC 2012/013), and Universitat Jaume I (P1-1B2012-55). E.I. and F.S. were partially supported by a Generalitat Valenciana research fellowship.

References and links

1. F. Helmchen and W. Denk, “Deep tissue two-photon microscopy,” Nat. Methods 2, 932–940 (2005). [CrossRef]   [PubMed]  

2. L. Wang, P. P. Ho, C. Liu, G. Zhang, and R. R. Alfano, “Ballistic 2-D imaging through scattering walls using an ultrafast optical Kerr gate,” Science 253, 769–771 (1991). [CrossRef]   [PubMed]  

3. D. Huang, E. A. Swanson, C. P. Lin, J. S. Schuman, W. G. Stinson, W. Chang, M. R. Hee, T. Flotte, K. Gregory, and C. A. Puliafito, “Optical coherence tomography,” Science 254, 1178–1181 (1991). [CrossRef]   [PubMed]  

4. M. P. Rowe, E. N. Pugh, J. S. Tyo, and N. Engheta, “Polarization-difference imaging: a biologically inspired technique for observation through scattering media,” Opt. Lett. 20, 608–610 (1995). [CrossRef]   [PubMed]  

5. V. Tuchin, Tissue Optics (SPIE, 2007). [CrossRef]  

6. D. Razansky, M. Distel, C. Vinegoni, R. Ma, N. Perrimon, R. W. Köster, and V. Ntziachristos, “Multispectral opto-acoustic tomography of deep-seated fluorescent proteins in vivo,” Nat. Photonics 3, 412–417 (2009). [CrossRef]  

7. C. Vinegoni, C. Pitsouli, D. Razansky, N. Perrimon, and V. Ntziachristos, “In vivo imaging of Drosophila melanogaster pupae with mesoscopic fluorescence tomography,” Nat. Methods 5, 45–47 (2008). [CrossRef]  

8. V. Ntziachristos, J. Ripoll, L. V. Wang, and R. Weissleder, “Looking and listening to light: the evolution of whole-body photonic imaging,” Nat. Biotechnol. 23, 313–320 (2005). [CrossRef]   [PubMed]  

9. J. Bertolotti, E. G. van Putten, C. Blum, A. Lagendijk, W. L. Vos, and A. P. Mosk, “Non-invasive imaging through opaque scattering layers,” Nature 491, 232–234 (2012). [CrossRef]   [PubMed]  

10. O. Katz, P. Heidmann, M. Fink, and S. Gigan, “Non-invasive single-shot imaging through scattering layers and around corners via speckle correlations,” Nat. Photonics 8, 784–790 (2014). [CrossRef]  

11. M. F. Duarte, M. A. Davenport, D. Takhar, J. N. Laska, K. F. Kelly, and R. Baraniuk, “Single-pixel imaging via compressive sampling,” IEEE Sign. Process. Mag. 25, 83–91 (2008). [CrossRef]  

12. J. Shapiro, “Computational ghost imaging,” Phys. Rev. A 78, 061802 (2008). [CrossRef]  

13. P. Clemente, V. Durán, E. Tajahuerce, V. Torres-Company, and J. Lancis, “Single-pixel digital ghost holography,” Phys. Rev. A 86, 041803 (2012). [CrossRef]  

14. J. Hunt, T. Driscoll, A. Mrozack, G. Lipworth, M. Reynolds, D. Brady, and D. R. Smith, “Metamaterial apertures for computational imaging,” Science 339, 310–313 (2013). [CrossRef]   [PubMed]  

15. C. M. Watts, D. Shrekenhamer, J. Montoya, G. Lipworth, J. Hunt, T. Sleasman, S. Krishna, D. R. Smith, and W. J. Padilla, “Terahertz compressive imaging with metamaterial spatial light modulators,” Nat. Photonics 8, 605–609 (2014). [CrossRef]  

16. D. J. Cuccia, F. Bevilacqua, A. J. Durkin, and B. J. Tromberg, “Modulated imaging: quantitative analysis and tomography of turbid media in the spatial-frequency domain,” Opt. Lett. 30, 1354–1356 (2005). [CrossRef]   [PubMed]  

17. D. J. Cuccia, F. Bevilacqua, A. J. Durkin, F. R. Ayers, and B. J. Tromberg, “Quantitation and mapping of tissue optical properties using modulated imaging,” J. Biomed. Opt. 14, 024012 (2009). [CrossRef]   [PubMed]  

18. E. Tajahuerce, V. Durán, P. Clemente, E. Irles, F. Soldevila, P. Andrés, and J. Lancis, “Image transmission through dynamic scattering media by single-pixel photodetection,” Opt. Express 22, 16945 (2014). [CrossRef]   [PubMed]  

19. E. J. Candes and M. Wakin, “An introduction to compressive sampling,” IEEE Signal Process. Mag. 25, 21–30 (2008). [CrossRef]  

20. N. J. A. Sloane, Hadamard Transform Optics (Academic, 1979).

21. G. Marquez, L. V. Wang, S. P. Lin, J. A. Schwartz, and S. L. Thomsen, “Anisotropy in the absorption and scattering spectra of chicken breast tissue,” Appl. Opt. 37, 798–804 (1998). [CrossRef]  

22. B. J. Tromberg, N. Shah, R. Lanning, A. Cerussi, J. Espinoza, T. Pham, L. Svaasand, and J. Butler, “Non-invasive in vivo characterization of breast tumors using photon migration spectroscopy,” Neoplasia 2, 26–40 (2000). [CrossRef]   [PubMed]  

23. l1-magic software package, http://users.ece.gatech.edu/justin/l1magic/.

24. M. T. Figueiredo, R. D. Nowak, and S. J. Wright, “Gradient projection for sparse reconstruction: application to compressed sensing and other inverse problems,” IEEE J. Sel. Top. Signal Process. 1, 586–597 (2007). [CrossRef]  

25. F. Soldevila, E. Irles, V. Durán, P. Clemente, M. Fernández-Alonso, E. Tajahuerce, and J. Lancis, “Single-pixel polarimetric imaging spectrometer by compressive sensing,” Appl. Phys. B 113, 551–558 (2013). [CrossRef]  
