## Abstract

Compressed sensing is a theory that can reconstruct an image almost perfectly from only a few measurements by finding its sparsest representation. However, for large images the computation time may be several hours or more. In this work, we demonstrate, both theoretically and experimentally, a method that combines the advantages of adaptive computational ghost imaging and compressed sensing, which we call adaptive compressive ghost imaging, whereby both the reconstruction time and the number of measurements required can be significantly reduced for any image size. The technique can be used to improve the performance of all computational ghost imaging protocols, especially when measuring ultra-weak or noisy signals, and can be extended to imaging applications at any wavelength.

© 2014 Optical Society of America

## 1. Introduction

Ghost imaging (GI) [1–4] has attracted considerable attention due to its counter-intuitive imaging mechanism. Initially, two photo-detectors were used to produce a GI image: a bucket detector without spatial resolution that collects the light field coming from an object, and a detector with high spatial resolution that records the intensity distribution of the source. The image can be retrieved only by correlating the signals of these two detectors, not from either one alone.

Later, computational ghost imaging (CGI) was proposed by Shapiro [5] and the first such experiment was demonstrated by Bromberg [6], in which the ghost image was retrieved with only a single-pixel detector. Since then, attention has increasingly focused on CGI due to its flexible optical design, its compatibility with computational techniques such as compressed sensing (CS) [7,8], and its high signal-to-noise ratio (SNR) compared with conventional GI. Many novel applications have since emerged, such as single-pixel cameras [9,10], dual photography [11], and optical encryption [12,13]. Combined with the technology available to date, CGI thus has brighter application prospects than traditional GI.

Since the first CS algorithms were reported by Candès, Tao and Donoho [14–16], various papers have integrated CS into GI protocols in order to recover higher quality images [17,18]. But CS also has shortcomings: it takes much more time than the calculation of the second-order correlation, the time consumed grows rapidly with the size of the image to be recovered, and it may even fail to work under specific circumstances. Fortunately, these shortcomings can be overcome by a method named compressive adaptive computational ghost imaging (CCGI) reported by Aßmann and Bayer [19], similar to the approach first proposed by Averbuch et al. [20], which directly uses the patterns that form the sparse basis in place of classical random patterns. In [19] it was demonstrated experimentally that one can take a coarse image at low resolution, perform a one-step wavelet transform, then adaptively determine regions of large coefficients and scan these areas at higher resolution. After all measurements have been taken, a new finer image can be reconstructed. This method requires fewer measurements than CS, incurs no computational overhead, and can retrieve images of any size. However, we find that it is not yet optimal, as the amount of measured data can be reduced further.

In this paper, we propose a strategy that combines the advantages of the above methods while avoiding their shortcomings, which we call adaptive compressive ghost imaging (ACGI). This method has four main advantages: first, very large images can be retrieved without stringent hardware restrictions, which is a major problem for CS algorithms; second, the number of measurements can be significantly smaller than in other CGI methods [1–4, 21]; third, the time needed to retrieve the image is reduced compared with traditional CS algorithms, making it applicable to real-time computational ghost imaging such as compressive fluorescence microscopy [22]; fourth, by absorbing the advantages of CS, weak signals or images in noisy environments can also be recovered with high quality. The paper is organized as follows. In Sec. 2, we describe the ACGI model and illustrate how it works with both numerical simulations and experimental results. Some discussion of the results and prospective applications is given in Sec. 3. Finally, a brief summary is presented in Sec. 4.

## 2. Adaptive compressive ghost imaging model and results

The CCGI protocol adaptively determines which parts of the image need to be scanned at lower or higher resolution, block by block or pixel by pixel, instead of acquiring all the data. Its main advantage is therefore that it can retrieve the image with fewer measurements and without any computational overhead. However, point-scanning the areas of interest with a spatial light modulator (SLM) takes a very long time. Moreover, the regions to be scanned at finer resolution may themselves have a sparse representation in some basis, so that their effective information content is lower than their number of pixels, and a full data acquisition is therefore unnecessary.

Aiming to combine the advantages of both CCGI and CS without their shortcomings, we modify the sampling mechanism and replace the point-scanning in each target scale with single-pixel sampling according to their adaptive thresholds. We use a digital micro-mirror device (DMD) instead of an SLM, which greatly speeds up the data acquisition rate.

To illustrate our strategy, we first define a square image [Fig. 1(a)] consisting of *q* × *q* pixels and then convert it from the space domain to the wavelet domain. The wavelet decomposition procedure [23, 24] is shown in Fig. 1(b). For the first-level wavelet decomposition, the upper right, lower left and lower right sectors of the transform correspond to horizontal, vertical and diagonal edges, respectively. The upper left sector itself consists of four quadrants: the upper left quadrant represents a coarse version of the original image, i.e. the mean intensity of the picture, while the other three quadrants again contain information about horizontal, vertical and diagonal edges; this is the second-level wavelet decomposition. When we perform a wavelet transform on a natural image, which is typically compressive or has a sparse representation, only a small portion of the wavelet coefficients, corresponding to sharp edges, are large; the coefficients in the upper left coarse quadrant that represent the outline of the image are also large, so together they are enough to approximately recover the full image.
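The one-step decomposition into a coarse quadrant plus horizontal, vertical and diagonal detail quadrants can be sketched with a plain orthonormal Haar transform (an illustration of our own, not the authors' code):

```python
import numpy as np

def haar2d_level(img):
    """One-level 2-D Haar decomposition of an even-sized image.

    Returns the coarse approximation (upper-left quadrant) and the
    horizontal, vertical and diagonal detail quadrants."""
    a = img[0::2, 0::2]  # top-left pixel of each 2x2 block
    b = img[0::2, 1::2]  # top-right
    c = img[1::2, 0::2]  # bottom-left
    d = img[1::2, 1::2]  # bottom-right
    ll = (a + b + c + d) / 2.0   # coarse version (mean intensity)
    lh = (a - b + c - d) / 2.0   # horizontal edges
    hl = (a + b - c - d) / 2.0   # vertical edges
    hh = (a - b - c + d) / 2.0   # diagonal edges
    return ll, lh, hl, hh

# A flat image has zero detail coefficients: all information sits in LL.
flat = np.ones((8, 8))
ll, lh, hl, hh = haar2d_level(flat)
print(np.abs(lh).max(), np.abs(hl).max(), np.abs(hh).max())  # 0.0 0.0 0.0
```

Because the transform is orthonormal, the total energy of the image is preserved across the four quadrants, which is what makes thresholding the detail coefficients meaningful.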

Wavelet decomposition provides a hierarchical data structure for image representation, called a wavelet tree, composed of the coefficients at different resolutions and different orientations but with the same spatial location [20,24–28], as shown in Fig. 1(c). Each wavelet coefficient of the high frequency components in the third decomposition corresponds to a set of 2 × 2 wavelet coefficients at the adjacent finer scale in the second decomposition with the same spatial location, which in turn corresponds to a set of 4 × 4 wavelet coefficients at the next scale. If the low frequency components in the third decomposition are not taken into account, the three wavelet coefficients in the horizontal, vertical and diagonal orientations, together with the 2 × 2 and 4 × 4 coefficient sets on the larger scales, form a hierarchical data structure of size 63. The three points in the third decomposition are called the root (parent) nodes, the next sets child nodes, and the last sets leaf nodes. It follows that a 256 × 256 image has 1,024 wavelet subtrees if we perform a 3-level wavelet decomposition.
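The counts quoted above are easy to check (helper functions of our own):

```python
def subtree_size(levels, orientations=3):
    """Coefficients per wavelet subtree: one root per orientation,
    with 4x more children at each finer scale."""
    return orientations * sum(4 ** s for s in range(levels))

def num_subtrees(q, levels):
    """One subtree per root-scale location: (q / 2**levels)**2."""
    return (q // 2 ** levels) ** 2

print(subtree_size(3))       # 3 * (1 + 4 + 16) = 63
print(num_subtrees(256, 3))  # (256 / 8)**2 = 1024
```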

Our strategy is based on wavelet transformation theory [23, 24]. Unlike Aßmann’s method, for a *q* × *q* pixel image and a total *L*-level wavelet transform, where *q* is the *k*th power of two, we first produce *m*_{j} (*m*_{j} < *q*^{2}) full-screen scattered speckles of 2^{j−1} × 2^{j−1} pixel size (*j* = *L*) by using sparse random matrices. After the $\frac{q}{{2}^{j-1}}\times \frac{q}{{2}^{j-1}}$ pixel coarse image is reconstructed by a CS algorithm, we perform a one-step wavelet transform on it. Since the coefficients of the tree or subtree correspond to the wavelet coefficients in the same directions but on different scales, and these coefficients correspond to the same spatial regions of the original image, they have a strong correlation [24]. We now search the three child-quadrants for wavelet coefficients whose absolute values are larger than a predefined threshold, mark the locations in the real space image corresponding to these points, and enlarge the set union of these areas fourfold on its scale. Next, in these union areas of the DMD, we load pre-generated sparse speckles of 2^{j−1} × 2^{j−1} pixel size (*j* = *L* − 1). The number of frames *m*_{j} is *b*_{j}% (5 ≤ *b*_{j} ≤ 30) of the number of elements in the union. Each frame corresponds to a measurement value recorded by the point detector. Once all the measurements on the finer scale have been taken, a finer area image can be retrieved by a CS reconstruction algorithm. We then perform another one-level wavelet decomposition on this finer image, and find the large coefficients against a new threshold in the three child-quadrants that represent the information of sharp edges. This entire process is repeated until the speckles finally reach a size of 1 × 1 pixel (*j* = 1). After that, the wavelet transform of the image is complete, and is converted back to a real space image using the inverse transform. A flowchart of the whole process is shown in Fig. 2.
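The adaptive region-selection step at the heart of this loop — threshold the three detail quadrants, then enlarge each selected location fourfold for the next finer scan — can be sketched as follows (an illustration with our own function and toy data, not the authors' code):

```python
import numpy as np

def next_active_region(lh, hl, hh, threshold):
    """Mark the finer-scale region to be re-measured: every detail
    coefficient whose magnitude exceeds the threshold maps to a 2 x 2
    block on the next finer grid (the fourfold enlargement)."""
    large = (np.abs(lh) > threshold) | (np.abs(hl) > threshold) | (np.abs(hh) > threshold)
    # enlarge each selected coefficient to a 2x2 block on the finer grid
    return np.kron(large.astype(int), np.ones((2, 2), dtype=int)).astype(bool)

# Toy detail quadrants with a single strong edge coefficient at (1, 2):
lh = np.zeros((4, 4)); lh[1, 2] = 5.0
hl = np.zeros((4, 4)); hh = np.zeros((4, 4))
region = next_active_region(lh, hl, hh, threshold=1.0)
print(region.shape, region.sum())  # (8, 8) 4
```

Only the pixels inside `region` receive the finer sparse speckles in the next stage; everything outside it is never measured again, which is where the measurement savings come from.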

Different from the method described in [19], instead of measuring the signal *x* by point-scanning with an SLM, we measure the scalar product of the signal with truly random speckles:

$$y=\Phi x+e=\Phi \Psi {x}^{\prime}+e, \qquad (1)$$

where *y* is the measurement vector of dimension *M*, Φ is an *M* × *N* sensing matrix, *e* is the noise vector of dimension *M*, and *x′* is the vector of sparse representation coefficients. When *M* < *N*, the problem is ill-conditioned [15, 16], but we note that the signal of interest is sparse or compressive in a certain basis Ψ. Many methods [7, 8, 29, 30] have been developed for this recovery problem during the last few years. Usually the *l*_{1}-norm serves as a measure of sparsity [16], and by minimizing it we can obtain the optimal solution of such a problem:

$$\min {\Vert {x}^{\prime}\Vert}_{1}\quad \text{subject to}\quad {\Vert y-\Phi \Psi {x}^{\prime}\Vert}_{2}\le \varepsilon, \qquad (2)$$

where *ε* is the allowed error, and the *l*_{p} norm is defined as ${\Vert x\Vert}_{p}={\left(\sum _{i=1}^{N}{|{x}_{i}|}^{p}\right)}^{1/p}$. The standard method to solve the *l*_{1} minimization problem is the so-called basis pursuit [29]. There are also many other algorithms, such as OMP, IST and l1magic, but they demand high sparsity of the signal. Unfortunately, in most cases natural images only have approximately sparse representations in which the number of large coefficients is rather small, so it is difficult for these algorithms to recover high quality images. The ACGI strategy that we use is based on the wavelet tree, similar to the wavelet basis used in conventional CS algorithms, so if we also used CS algorithms based on wavelet transforms, the effect of the secondary compression would not be obvious. Only a basis that is independent of the wavelet transformation can solve this problem when used for the secondary compression; total variation is a good choice. TVAL3 [30] is a total variation method that does not require $\Phi {\Phi}^{\prime}=I$, has good generality as well as noise robustness, and is thus very suitable for our imaging system; its solution can be approximately sparse. Given that, we choose TVAL3 as the compressive reconstruction algorithm, instead of other standard algorithms, to address the optimization problem of Eq. (2) and to smooth the noisy image. Here, we perform total variation on the union of interest at each level to acquire a considerable reduction ratio. In addition, we use sparse speckles to obtain better image quality.
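To make the recovery problem of Eq. (2) concrete, the following sketch recovers a sparse vector from *M* < *N* random measurements using plain iterative soft thresholding (the IST family mentioned above) in place of TVAL3, which we do not reproduce here; all names and parameter values are our own:

```python
import numpy as np

rng = np.random.default_rng(1)

def ista(Phi, y, lam=0.01, iters=3000):
    """Iterative soft thresholding for min 0.5*||y - Phi x||_2^2 + lam*||x||_1,
    a simple stand-in for the solvers cited in the text (not TVAL3)."""
    L = np.linalg.norm(Phi, 2) ** 2        # Lipschitz constant of the gradient
    x = np.zeros(Phi.shape[1])
    for _ in range(iters):
        g = x + Phi.T @ (y - Phi @ x) / L  # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # shrinkage
    return x

# M < N measurements of an N-dimensional signal with only k nonzeros
N, M, k = 128, 64, 5
x_true = np.zeros(N)
x_true[rng.choice(N, k, replace=False)] = rng.standard_normal(k)
Phi = rng.standard_normal((M, N)) / np.sqrt(M)
y = Phi @ x_true
x_hat = ista(Phi, y)
print(np.max(np.abs(x_hat - x_true)))  # small: the sparse signal is recovered
```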

The simulation procedure for the gray-scale map of living cells is presented in Fig. 3. The top left three images [Figs. 3(a)–3(c)] correspond to CCGI, and the bottom left three images [Figs. 3(d)–3(f)] to our algorithm. By presetting thresholds at each stage, both approaches can find the regions of large wavelet coefficients. Picking a reasonable threshold depends strongly on the nature of the image, and also on experience. Suppose the number of elements in the union is *c*_{j}. When the coefficients are sorted in descending order, we find that only the first *b*_{j}% of the coefficients are relatively large. The *a*th coefficient of this sequence is the threshold to be set, where *a* = *c*_{j} × *b*_{j}%. According to the data structure of the wavelet subtree, we put all the locations of the selected wavelet coefficients in the *j*th decomposition together to form a union set. The next step is to measure these areas in the real image. However, unlike the former detection approach, our strategy regards these regions as a whole, generates sparse speckles using a much smaller number of measurements than the total number of pixels in these regions, and then calculates the inner product of the pattern matrix and the target to simulate the total light intensity measured in the *j*th decomposition. An area image can then be obtained by applying CS instead of SLM point-scanning. Once all the sampling steps have been completed, a wavelet transform [Fig. 3(g)] of the image is created. The real image is recovered by performing the inverse wavelet transform, as shown in Fig. 3(h).
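The threshold rule *a* = *c*_{j} × *b*_{j}% can be written as a small helper (an illustration of our own):

```python
import numpy as np

def adaptive_threshold(coeffs, b_percent):
    """Pick the threshold so that only the largest b_percent% of the
    coefficient magnitudes in the union (c_j of them) are kept: the a-th
    value of the descending sequence, with a = c_j * b_percent / 100."""
    mags = np.sort(np.abs(np.ravel(coeffs)))[::-1]  # descending order
    c = mags.size
    a = max(1, int(round(c * b_percent / 100.0)))
    return mags[a - 1]

coeffs = np.array([9.0, -7.0, 0.5, 0.2, -0.1, 0.05, 0.02, 0.01, 0.0, 0.0])
t = adaptive_threshold(coeffs, 20.0)  # keep the top 20% (a = 2)
print(t)                              # 7.0
kept = np.abs(coeffs) >= t            # ">=" so the a-th coefficient survives
print(kept.sum())                     # 2
```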

For a quantitative comparison of the image quality, we introduce the peak signal-to-noise ratio (PSNR) and the mean square error (MSE) as figures of merit:

$$\text{PSNR}=10{\mathrm{log}}_{10}\frac{{255}^{2}}{\text{MSE}},\qquad \text{MSE}=\frac{1}{st}\sum _{i=1}^{s}\sum _{j=1}^{t}{\left[{T}_{o}(i,j)-\tilde{T}(i,j)\right]}^{2},$$

where *T*_{o} represents the original image consisting of *s* × *t* pixels, and *T̃* the retrieved image. Naturally, the larger the PSNR value, the better the quality of the recovered image. The PSNR of Fig. 3(h) is 28.055 dB. The whole three-level wavelet decomposition corresponds to 64 × 64, 128 × 128 and 256 × 256 pixel square regions, respectively, so the total signal dimension is 86,016. In this case, after setting appropriate thresholds of 81.000 (*j* = 2, *b*_{j} = 24.697%) and 60.221 (*j* = 1, *b*_{j} = 16.787%), there are (4,096 + 4,496 + 12,224) = 20,816 patterns that need to be scanned in CCGI. However, our method can perform 60%, 90% and 90% compressive sampling on each respective square region, which decreases the sampling rate by 3.847%, while *b*_{j} can also be changed by choosing suitable thresholds. We can see that the total acquisition rate is roughly proportional to the area of detail, i.e. the number of nonzero high frequency wavelet coefficients.

To check the validity of our theory and numerical simulation results, an experiment was performed. The optical setup is shown in Fig. 4. It is based on the single-pixel camera, which uses a point detector without spatial resolution. Light from a 55 W halogen lamp illuminates the object, which is imaged directly onto a DMD consisting of 1,024 × 768 pixels of size 13.68 × 13.68 *μ*m^{2}. The frame modulation frequency can reach 32,552 Hz [31]. To obtain better image quality, the DMD was programmed with sparse randomly distributed speckles, generated by a computer controlled region selector. The total light intensity was collected by a bucket (single-pixel) detector. From the measured values, using speckles of 4 × 4 pixel size, the first coarse image was reconstructed by a CS algorithm [30]. According to this result, the region selector could adaptively find the regions of interest in real space corresponding to large detail coefficients in the wavelet domain of the previous coarse image, and magnify these areas four times. In the next imaging step, the region selector then loaded the sparse speckles of finer size only onto the micro-mirrors in these magnified areas of the DMD. The process was then repeated in the same manner, as described above.
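The PSNR figure of merit can be computed directly (a small helper of our own, assuming 8-bit gray levels so that the peak value is 255):

```python
import numpy as np

def psnr(original, retrieved):
    """Peak signal-to-noise ratio in dB for 8-bit images."""
    mse = np.mean((original.astype(float) - retrieved.astype(float)) ** 2)
    return 10.0 * np.log10(255.0 ** 2 / mse)

# A uniform error of 5 gray levels gives MSE = 25:
a = np.zeros((4, 4))
b = np.full((4, 4), 5.0)
print(round(psnr(a, b), 3))  # 34.151
```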

For simplicity but without loss of generality, only a gray scale object is considered here. We chose a black-and-white 1951 USAF resolution test chart as the object, illustrated in Fig. 5(a), where the black part of the mask blocks the light and the white part transmits it. We used a photomultiplier tube (PMT) (Hamamatsu H7468-20) as the bucket (single-pixel) detector for ultra-weak light detection, which can be operated at up to 450 Hz. Although our technique is limited by this readout frequency, it is still reasonably fast. The PMT bias voltage was set at 500 V, and the integration time over the duration of each pattern was set at 1800 *μ*s, with a dead time of 200 *μ*s, to ensure that the transmitted light intensity did not exceed the saturation limit of the PMT. The PMT is active only when the rising edges of the pattern trigger signals arrive. Here we again used a total 3-level wavelet transform and performed 45.117% and 22.559% finer measurements, respectively, for a series of largest wavelet coefficients on the 2 × 2 and 1 × 1 scales by setting different thresholds. In this case we had *m*_{3} = 2,464 for the first 64 × 64 coarse image, *m*_{2} = 7,392 for the second stage, and *m*_{1} = 14,784 for the third step, so the total number of measurements needed was 24,640, approximately 28.646% of 86,016 and 37.598% of the 65,536 original image pixels. Of course, we could have fixed the number of finer measurements for each scale beforehand, but some image quality would then have been sacrificed. The diameter *D* of the imaging lens we used was 25.4 mm, with a focal length *f* of 100 mm and a central wavelength *λ* of 550 nm. As is well known, the minimum angular resolution is ${\theta}_{\text{min}}=\frac{d/2}{f}=1.22\frac{\lambda}{D}$, so the Airy disk radius can be written as $\frac{d}{2}=\frac{1.22\lambda f}{D}$, where *d* is its diameter. Considering the magnification *β* and the micro-mirror size *ρ*, we obtain the inequality $\beta \frac{d}{2}=\frac{1.22\beta \lambda f}{D}\ge \rho $. If $\beta \ge \frac{\rho D}{1.22\lambda f}\approx 5.178$, the achievable resolution of the system depends on the spatial resolution of the lens; otherwise, it depends on the micro-mirror size *ρ*. Since the DMD actually replaces the function of a traditional charge-coupled device (CCD) detector, the image should match the size of the screen to obtain optimal results. For very small objects, it would therefore be necessary to magnify the image first with a microscope. The reconstructed image of the test chart is given in Fig. 5(b).
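Plugging in the quoted experimental values confirms the magnification bound (a numerical check of our own):

```python
# Airy-disk matching to the DMD mirror pitch, using the values in the text:
# lens diameter D, focal length f, wavelength lam, micro-mirror size rho (m).
D, f, lam, rho = 25.4e-3, 100e-3, 550e-9, 13.68e-6

airy_radius = 1.22 * lam * f / D   # d/2 at the image plane
beta_min = rho / airy_radius       # magnification above which the lens,
                                   # not the mirror pitch, sets the resolution
print(round(beta_min, 3))  # 5.178
```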

As this method is adaptive and compressive, we only need to load sparse binary random speckles onto the DMD at the beginning of each stage, so it is no longer necessary to precompute
${q}^{2}\left(1+\frac{1}{4}+\dots +{\left(\frac{1}{4}\right)}^{L-1}\right)=\frac{4{q}^{2}}{3}\left(1-{\left(\frac{1}{4}\right)}^{L}\right)$ patterns, or compute them on the fly, for SLM point-scanning as in [19]. Although the sampled area may seem enlarged, we achieve further sub-sampling on the basis of CCGI. Moreover, the speckle regions can be changed by choosing optimal thresholds to obtain the best imaging quality. The exact number of measurements needed may be hard to predict, as it depends on how sparse an image is in the wavelet basis, as well as on the inter-relevance and further compressibility of details. For larger images, the advantage of this strategy lies in realizing high sub-sampling in real time. Thus our algorithm may provide some inspiration for practical applications that employ single-pixel detectors to record large images, or data known *a priori* to be compressive.
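The pattern count for full point-scanning follows from a geometric series over the *L* scales (a numerical check of our own; level *j* uses speckles of size 2^{j−1}, hence *q*²/4^{j−1} scan positions):

```python
def point_scan_patterns(q, L):
    """Total point-scanning patterns over all L scales."""
    return sum(q * q // 4 ** (j - 1) for j in range(1, L + 1))

q, L = 256, 3
total = point_scan_patterns(q, L)
print(total)  # 86016
# Closed form of the geometric series:
print(total == round(4 * q * q / 3 * (1 - 0.25 ** L)))  # True
```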

## 3. Discussion

#### 3.1. Analysis of results

We now discuss a more practical condition in which noise is added, as shown in Figs. 6(a)–6(i) and Figs. 7(a)–7(b).

For a fair comparison, we calculate the PSNR at a fixed total acquisition rate and the same noise level. It is evident that as the standard deviation of the Gaussian white noise or the mean of the Poisson noise increases, the PSNR of ACGI gradually becomes better than that of CCGI, which owes much to the excellent robustness of CS against noise. To demonstrate the advantages of ACGI further, the PSNRs for different total acquisition rates under the same Poisson noise of mean value 40 are given in Fig. 7(c). The PSNR is seen to be almost linearly proportional to the total acquisition rate, and ACGI is able to retrieve an image of better quality than CCGI with much less data manipulation and sampling time. From the above analysis, our method is also more suitable for measuring images containing strong noise or weak signals. It is interesting that the PSNR curves all exhibit slight oscillations. We attribute this partly to the randomness of the noise fluctuations, and partly to the fact that, as a consequence, the retrieved images at each level change as well.

Furthermore, we compare ACGI with standard CS algorithms that reconstruct the full image, as shown in Fig. 8. All four figures use the same sampling rate of 25% of the original 65,536 image pixels. Figure 8(a) shows the CCGI result, while Figs. 8(b)–8(d) show the images recovered by the RecPF [32], TVAL3 and SSMP [33] software toolkits, respectively. Of the many algorithms and approaches available, most (e.g., OMP, BP, IST, SpaRSA) have huge computational overhead when dealing with large images, and the time consumed increases drastically with image size, taking perhaps several hours or more. Even if the large images are finally reconstructed, the results are far from satisfactory, so a comparison with these algorithms is unnecessary. A few optimization algorithms have appeared in recent years that can deal with large objects under some specific conditions, such as RecPF and SSMP: the former is based on partial Fourier data, while the latter decreases storage space and computation time by using sparse random matrices. Figures 8(c)–8(d) both used sparse random speckles and can reconstruct the full image in a relatively short time; our ACGI also benefits from this great merit of sparse random speckles. From Fig. 8 we see that although ACGI gives a slightly lower quality image than RecPF, it is better than the other two, while taking much less computation time than all of them. ACGI is thus an intermediate approach to computational ghost imaging, benefiting from both CCGI and CS.

#### 3.2. Potential applications

Our new method offers a general approach applicable to all traditional GI based on thermal light. Here we present some simulation results by which we can verify its validity and feasibility in potential applications, such as time-resolved and multi-wavelength applications of correlation imaging.

For time-resolved simulation, we also chose photos of time-resolved living cells from the picture gallery of Matlab as the objects, as shown in Figs. 9(a)–9(d). As the cells are sparse in both space-domain and wavelet-domain, our method is especially applicable for these types of time-resolved images. The results are illustrated in Figs. 9(e)–9(h). The computation time, including both the data input time and the core computation time, for each retrieved image at different times is shorter than 14 s with an Intel Core i5-2400 CPU. If a supercomputer with parallel processing were used, the computation time would be much shorter than 0.1 s. With an avalanche photodiode of GHz response rate and a DMD with a frame rate up to 32,552 Hz, it is conceivable that real-time applications on the order of seconds could be realized.

In order to investigate the feasibility of color ACGI, we took a 256 × 256 pixel color picture of flowers that we photographed ourselves. The simulated ACGI results are shown in Fig. 10. The original image [Fig. 10(a)] is split into R (red), G (green) and B (blue) planes to simulate the effect of RGB color filters, as shown in Figs. 10(b)–10(d). By overlaying the R, G and B adaptive compressive ghost images shown in Figs. 10(e)–10(g), we obtain a colored ghost image, Fig. 10(h). These results show that a multi-wavelength composite image can be reconstructed clearly with 255 tones and without color distortion.
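The plane-splitting and overlaying used in the colour simulation amounts to the following (a sketch of our own; the per-plane ghost-imaging step is replaced by a pass-through):

```python
import numpy as np

rng = np.random.default_rng(2)

# Split a colour image into R, G, B planes, "image" each plane
# independently, then overlay them again into one colour image.
img = rng.integers(0, 256, size=(8, 8, 3), dtype=np.uint8)
planes = [img[:, :, c] for c in range(3)]   # simulate RGB colour filters
recovered = [p.copy() for p in planes]      # per-plane ACGI would go here
color = np.stack(recovered, axis=-1)        # overlay into a colour image
print(np.array_equal(color, img))           # True: lossless recombination
```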

For the future, we will continue to experimentally realize the adaptive compressive spectral imaging with a dispersion grating and by collecting the signals containing visible and near-infrared wavelength information, for applications such as fluorescence imaging. A more extended experimental setup and quantitative analysis will be presented in a future paper.

## 4. Summary

In this work, we have proposed and demonstrated a new method named adaptive compressive ghost imaging which combines the advantages of both adaptive computational ghost imaging and compressed sensing protocols. Both numerical simulations and experimental realizations have been used to demonstrate its exceptional features. First, very large images can be retrieved without stringent hardware restrictions; second, the number of measurements is much less than for CGI or pure CS methods; third, the computational time is greatly reduced, compared with conventional CS algorithms; fourth, it is robust against noise. It is therefore a far better choice for retrieving high quality images from ultra-weak or noisy signals. Furthermore, this technique can be used to improve all computational ghost imaging or single-pixel imaging protocols where the image has sparse representation in the wavelet basis, and can be extended to imaging applications at any wavelength, including non-visible wavebands.

## Acknowledgments

This work was supported by the National Key Scientific Instrument and Equipment Development Project of China (Grant No. 2013YQ030595), the National High Technology Research and Development Program of China (Grant No. 2011AA120102), the State Key Development Program for Basic Research of China (Grant No. 2010CB922904), and the National Natural Science Foundation of China (Grant Nos. 61274024 and 11375224).

## References and links

**1. **D. V. Strekalov, A. V. Sergienko, D. N. Klyshko, and Y. H. Shih, “Observation of two-photon “ghost” interference and diffraction,” Phys. Rev. Lett. **74**, 3600–3603 (1995). [CrossRef] [PubMed]

**2. **A. Gatti, E. Brambilla, M. Bache, and L. A. Lugiato, “Ghost imaging with thermal light: comparing entanglement and classical correlation,” Phys. Rev. Lett. **93**, 093602 (2004). [CrossRef] [PubMed]

**3. **D. Zhang, Y. H. Zhai, L. A. Wu, and X. H. Chen, “Correlated two-photon imaging with true thermal light,” Opt. Lett. **30**(18), 2354–2356 (2005). [CrossRef] [PubMed]

**4. **B. I. Erkmen and J. H. Shapiro, “Unified theory of ghost imaging with Gaussian-state light,” Phys. Rev. A **77**(4), 043809 (2008). [CrossRef]

**5. **J. H. Shapiro, “Computational ghost imaging,” Phys. Rev. A **78**, 061802 (2008). [CrossRef]

**6. **Y. Bromberg, O. Katz, and Y. Silberberg, “Ghost imaging with a single detector,” Phys. Rev. A **79**(5), 053840 (2009). [CrossRef]

**7. **M. Fornasier and H. Rauhut, “Iterative thresholding algorithms,” Appl. Comput. Harmon. Anal. **25**(2), 187–208 (2008). [CrossRef]

**8. **N. B. Karahanoglu and H. Erdogan, “A* orthogonal matching pursuit: best-first search for compressed sensing signal recovery,” Digit. Sig. Process. **22**(4), 555–568 (2012). [CrossRef]

**9. **M. F. Duarte, M. A. Davenport, D. Takhar, J. N. Laska, T. Sun, K. F. Kelly, and R. G. Baraniuk, “Single-pixel imaging via compressive sampling,” IEEE Signal Proc. Mag. **25**(2), 83–91 (2008). [CrossRef]

**10. **W. L. Chan, K. Charan, D. Takhar, K. F. Kelly, R. G. Baraniuk, and D. M. Mittleman, “A single-pixel terahertz imaging system based on compressed sensing,” Appl. Phys. Lett. **93**(12), 121105 (2008). [CrossRef]

**11. **P. Sen and S. Darabi, “Compressive dual photography,” Computer Graphics Forum **28**(2), 609–618 (2009). [CrossRef]

**12. **S. Li, X. R. Yao, W. K. Yu, L. A. Wu, and G. J. Zhai, “High-speed secure key distribution over an optical network based on computational correlation imaging,” Opt. Lett. **38**(12), 2144–2146 (2013). [CrossRef] [PubMed]

**13. **W. K. Yu, S. Li, X. R. Yao, X. F. Liu, L. A. Wu, and G. J. Zhai, “Protocol based on compressed sensing for high-speed authentication and cryptographic key distribution over a multiparty optical network,” Appl. Opt. **52**(33), 7882–7888 (2013). [CrossRef]

**14. **E. J. Candès, J. Romberg, and T. Tao, “Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information,” IEEE Trans. Inf. Theory **52**(2), 489–509 (2006). [CrossRef]

**15. **D. Donoho, “Compressed sensing,” IEEE Trans. Inf. Theory **52**(4), 1289–1306 (2006). [CrossRef]

**16. **E. J. Candès, “Compressive sampling,” in *Proc. Int. Cong. Math*, (European Mathematical Society, Madrid, Spain, 2006), 3, pp. 1433–1452.

**17. **O. Katz, Y. Bromberg, and Y. Silberberg, “Compressive ghost imaging,” Appl. Phys. Lett. **95**(13), 131110 (2009). [CrossRef]

**18. **W. L. Gong and S. S. Han, “Experimental investigation of the quality of lensless super-resolution ghost imaging via sparsity constraints,” Phys. Lett. A **376**(17), 1519–1522 (2012). [CrossRef]

**19. **M. Aßmann and M. Bayer, “Compressive adaptive computational ghost imaging,” Sci. Rep. **3**, 1545 (2013).

**20. **A. Averbuch, S. Dekel, and S. Deutsch, “Adaptive compressed image sensing using dictionaries,” SIAM J. Imaging Sci. **5**(1), 57–89 (2012). [CrossRef]

**21. **M. F. Li, Y. R. Zhang, X. F. Liu, X. R. Yao, K. H. Luo, H. Fan, and L. A. Wu, “A double-threshold technique for fast time-correspondence imaging,” Appl. Phys. Lett. **103**, 211119 (2013). [CrossRef]

**22. **V. Studer, J. Bobin, M. Chahid, H. Moussavi, E. J. Candès, and M. Dahan, “Compressive fluorescence microscopy for biological and hyperspectral imaging,” in Proceedings of the National Academy of Sciences, (2012), 109(26), E1679–E1687. [CrossRef]

**23. **S. Mallat, “A theory for multiresolution signal decomposition: the wavelet representation,” IEEE Trans. Pattern Anal. **11**(7), 674–693 (1989). [CrossRef]

**24. **S. Mallat, *A wavelet tour of signal processing, the sparse way* (Elsevier, 2009), pp. 340–346.

**25. **J. Shapiro, “Embedded image coding using zerotrees of wavelet coefficients,” IEEE Trans. Signal Proces. **41**(12), 3445–3462 (1993). [CrossRef]

**26. **A. Said and W. Pearlman, “A new, fast, and efficient image codec based on set partitioning in hierarchical trees,” IEEE Trans. Circ. Syst. Video Technol. **6**(3), 243–250 (1996). [CrossRef]

**27. **S. G. Chang, B. Yu, and M. Vetterli, “Adaptive wavelet thresholding for image denoising and compression,” IEEE Trans. Image Process. **9**(9), 1532–1546 (2000). [CrossRef]

**28. **J. Haupt, R. Nowak, and R. Castro, “Adaptive sensing for sparse signal recovery,” in Proceedings of the 2009 IEEE Digital Signal Processing Workshop and 5th IEEE Signal Processing Education Workshop, (Marco Island, FL, Jan., 2009), 702–707.

**29. **S. S. Chen, D. L. Donoho, and M. A. Saunders, “Atomic decomposition by basis pursuit,” SIAM J. Sci. Comput. **20**(1), 33–61 (1998). [CrossRef]

**30. **C. B. Li, “An efficient algorithm for total variation regularization with applications to the single pixel camera and compressive sensing,” Master Thesis, Rice University, (2010).

**31. **Texas Instruments, “DLP Discovery 4100 chipset data sheet (Rev. A),” (2013), http://www.ti.com/lit/er/dlpu008a/dlpu008a.pdf.

**32. **J. Yang, Y. Zhang, and W. Yin, “A fast alternating direction method for TVL1-L2 signal reconstruction from partial Fourier data,” IEEE J. Sel. Top. Signal Processing **4**(2), 288–297 (2010). [CrossRef]

**33. **R. Berinde and P. Indyk, “Sequential sparse matching pursuit,” in Proc. 47th Annu. Allerton Conf. Commun. Control Comput., (2009), 36–43.