## Abstract

A structured image reconstruction method has been proposed to obtain high-quality images in three-dimensional ghost imaging lidar. By considering the spatial structure relationship between the recovered images of scene slices at different longitudinal distances, an orthogonality constraint has been incorporated to reconstruct three-dimensional scenes in remote sensing. Numerical simulations demonstrate that scene slices with various sparse ratios can be recovered more accurately by applying the orthogonality constraint, and the enhancement is especially significant for ghost imaging with fewer measurements. A simulated three-dimensional city scene has been successfully reconstructed by using structured image reconstruction in three-dimensional ghost imaging lidar.

© 2015 Optical Society of America

## 1. Introduction

Ghost imaging has attracted much attention in recent years for its ability to acquire signals with fewer samples than the Nyquist limit requires when combined with compressive sensing [1–4], and for its potential applications in remote sensing, super-resolution microscopy, morphology component analysis, diffraction imaging, etc. [5–19]. To accurately determine unknown targets in remote sensing, a large number of measurements is required in ghost imaging lidar, which significantly limits its practical application. To improve sampling efficiency and image quality, ghost imaging via sparsity constraints (GISC) has been developed [20–24], and high-lateral-resolution images can be obtained with two-dimensional GISC lidar [25]. However, natural scenes in remote sensing are three-dimensional, and the longitudinal information of the scenes is not accessible in two-dimensional GISC lidar. On the other hand, based on the single-pixel camera scheme, a parametric signal model has been provided to recover a set of depths present in the scene by considering the scene's true parametric response and the detector's impulse response [26], and a binocular stereo vision method has also been used to recover depth information [27]. Recently, three-dimensional ghost imaging lidar has been realized by introducing time-resolved measurement into the detection of the reflected signal [28], and high-quality image reconstruction in both the lateral and longitudinal directions becomes more pressing for the practical utilization of three-dimensional ghost imaging lidar.

Structured compressive sensing (SCS) is an emerging framework that goes beyond the "random measurement/sparsity model" paradigm of basic compressive sensing [29]. More general signal recovery problems in practice can be treated with SCS theory by introducing structure prior information and more elaborate signal models [30–33]. Thus, integrating structure into GISC is a feasible way to obtain high-quality images in three-dimensional GISC lidar. Several kinds of structured signal models have been proposed in the literature, including multiple measurement vectors [30], unions of subspaces [31,32], and the low-rank model [33]. However, these models are designed to recover sparse signals with structural similarity, such as a common sparse support in the multiple-measurement-vectors model or a tree-structured sparse support in the finite-union-of-subspaces model. In remote sensing, by contrast, a three-dimensional scene can be divided into slices at different longitudinal distances that are spatially different from each other, so there is no structural similarity among the recovered signals. Therefore, the signal models mentioned above cannot be directly applied to three-dimensional GISC lidar. But this non-similarity of the recovered images also provides structure prior information, and it can still be exploited as an extra structure constraint to reconstruct natural scenes in three-dimensional GISC lidar.

The main contribution of this paper is to demonstrate how structure information can be exploited to obtain high-quality images in three-dimensional GISC lidar. In section 2, the model for three-dimensional ghost imaging via sparsity constraints (3D GISC) is described and the structured image reconstruction method incorporating an orthogonality constraint is proposed. In section 3, simulations are performed to explore the performance of structured image reconstruction in 3D GISC. Section 4 concludes the paper.

## 2. The model and method

The schematic for 3D GISC is shown in Fig. 1. In contrast to two-dimensional GISC, the reflected signals from the target are recorded by a time-resolved bucket detector (TBD), and the three-dimensional target scene can be divided into slices at distances ${d}_{1},\dots ,{d}_{i},\dots ,{d}_{K}$. When the pseudo-thermal light reaches a scene slice, the light encountering obstacles in the slice is reflected and the rest propagates to the next slice. In each measurement, an incident light pulse is partially reflected by the different slices successively, and the reflected light from the different slices reaches the detector at times ${t}_{1},\dots ,{t}_{i},\dots ,{t}_{K}$ successively, producing a time-resolved signal with $K$ elements. Each element of the time-resolved signal can be used to reconstruct the image of the corresponding scene slice, similar to two-dimensional GISC.
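The mapping from slice distance to detection time can be illustrated with a short numerical sketch; the slice distances below are illustrative toy values, not taken from the paper.

```python
# Round-trip time of flight: the slice at distance d_i returns light at
# t_i = 2 * d_i / c, so a time-resolved bucket signal with K bins can
# separate the K slices.
c = 3e8                                   # speed of light, m/s
d = [30.0, 32.0, 34.0, 36.0]              # slice distances d_1..d_K in meters (toy values)
t = [2 * di / c for di in d]              # arrival times t_1..t_K in seconds
print([round(ti * 1e9, 2) for ti in t])   # arrival times in nanoseconds
```

Slices 2 m apart therefore arrive roughly 13 ns apart, which sets the time resolution the TBD must provide.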

The speckle intensity distribution ($m \times n$ pixels) recorded by the CCD in each measurement can be reshaped into a row vector of length $N$ ($N = m \times n$) to form a single row of the measurement matrix, denoted by $A$. The time-resolved signal recorded by the TBD in each measurement is a vector of length $K$, and it forms a row of the signal matrix, denoted by $Y$. After $M$ measurements ($M \ll N$), the $M \times N$ measurement matrix $A$ and the $M \times K$ signal matrix $Y$ are obtained. The unknown scene slice at distance ${d}_{i}$ can be denoted by a column vector ${x}_{i}$ of length $N$, and each column vector ${y}_{i}$ of the signal matrix $Y$ consists of the $M$ measurement signals of ${x}_{i}$. Then, the two-dimensional GISC of scene slice ${x}_{i}$ can be expressed as ${y}_{i}=A{x}_{i}$. According to compressive sensing theory, ${x}_{i}$ can be represented as ${x}_{i}=\Psi {\theta}_{i}$, where $\Psi$ is the transform operator to the sparse representation basis. The three-dimensional scene as a whole, denoted by $X$, can be written as $X=[{x}_{1},\dots ,{x}_{i},\dots ,{x}_{K}]$, and image reconstruction analogous to two-dimensional GISC can be regarded as solving the following optimization:

$${\hat{\theta}}_{i}=\underset{{\theta}_{i}}{\arg \min}\left\{\frac{1}{2}{\left\Vert {y}_{i}-A\Psi {\theta}_{i}\right\Vert}_{2}^{2}+{\tau}_{i}{\left\Vert {\theta}_{i}\right\Vert}_{1}\right\},\qquad i=1,\dots ,K,$$

where ${\left\Vert v\right\Vert}_{2}$ denotes the ${l}_{2}$ (Euclidean) norm of a vector $v$, ${\left\Vert v\right\Vert}_{1}$ denotes the ${l}_{1}$ norm of $v$, and ${\tau}_{i}$ is a nonnegative parameter. In this way, the standard sparsity constraint is incorporated.

However, the structure information between scene slices has not yet been taken into account. In 3D GISC, the pulsed pseudo-thermal light illuminates the target scene and is reflected by the scene slices at different distances successively, as shown in Fig. 2. The objects in the slices at distances ${d}_{1}$, ${d}_{2}$, and ${d}_{3}$ are drawn in different colors. The light reflected by an object in the slice at a certain distance cannot reach the objects in the slices behind it, such as the part of the green object and the part of the blue object circled in Fig. 2. Therefore, the images of scene slices at different distances theoretically have no spatial overlap, which means the column vectors of $X$ are incoherent. This orthogonal characteristic can be exploited as a structure constraint to reconstruct $X$.

The coherence between any two column vectors of $X$ is defined as

$$\mu (X)=\underset{1\le i<j\le K}{\max}\frac{\left|\langle {x}_{i},{x}_{j}\rangle \right|}{{\left\Vert {x}_{i}\right\Vert}_{2}{\left\Vert {x}_{j}\right\Vert}_{2}},$$

where $X$ is the $N \times K$ target matrix to be reconstructed, $A$ is the $M \times N$ measurement matrix, $Y=[{y}_{1},\dots ,{y}_{i},\dots ,{y}_{K}]$ is the $M \times K$ signal matrix, ${x}_{i}$ is the unknown image of the scene slice at distance ${d}_{i}$, and ${y}_{i}$ contains the $M$ measurement signals of ${x}_{i}$.
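The coherence measures can be computed directly from the Gram matrix of the slice images. The sketch below uses our own helper names and the normalized maximum coherence defined above, plus the average absolute inner product that the paper later uses as a surrogate.

```python
import numpy as np

def max_coherence(X):
    """Maximum normalized |<x_i, x_j>| over distinct columns of X (the mu(X) above)."""
    Xn = X / np.linalg.norm(X, axis=0, keepdims=True)
    G = np.abs(Xn.T @ Xn)            # Gram matrix of normalized columns
    np.fill_diagonal(G, 0.0)         # exclude self-coherence
    return G.max()

def avg_abs_inner(X):
    """Average |<x_i, x_j>| over distinct column pairs (the surrogate used later)."""
    K = X.shape[1]
    G = np.abs(X.T @ X)
    np.fill_diagonal(G, 0.0)
    return G.sum() / (K * (K - 1))   # mean over ordered pairs i != j

# Slices with no spatial overlap give orthogonal columns and zero coherence.
X = np.zeros((9, 3))
X[0:3, 0] = 1.0; X[3:6, 1] = 1.0; X[6:9, 2] = 1.0
print(max_coherence(X), avg_abs_inner(X))    # 0.0 0.0
X[3, 0] = 1.0                                 # introduce overlap between slices 1 and 2
print(max_coherence(X) > 0.0)                 # True
```

Any spatial overlap between two slice images immediately shows up as a nonzero off-diagonal Gram entry, which is what the orthogonality constraint penalizes.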

The quality of the reconstructed images can be improved not only because the extra structure information reduces the sampling requirements, but also because the noise distribution is not orthogonal and the influence of the laser pulse width can be included by incorporating the orthogonality constraint. The laser pulse can be described by a time-varying function $h(t)$, which can be discretized into a row vector $h$ of length ${K}_{h}$. In each measurement, the signal recorded by the time-resolved bucket detector is the convolution of $h$ and the reflected signal from the scene slices, and the length of each recorded signal is ${K}^{\prime}=K+{K}_{h}-1$. Therefore, the 3D GISC considering the laser pulse width can be described by

$$Y=AXH,$$

where $H$ is the $K\times {K}^{\prime}$ convolution matrix constructed from $h$, the discretized form of $h(t)$, which is a row vector of length ${K}_{h}$.

If we set ${X}^{\prime}=XH$, each row vector of ${X}^{\prime}$ is the convolution of the corresponding row vector of $X$ with $h$. As a result of the convolution, the column vectors of ${X}^{\prime}$ are not completely incoherent, and partial spatial overlap exists in the reconstructed images of neighboring slices. To include this effect, $\mu ({X}^{\prime})$ should be minimized. However, this minimization problem is difficult to solve. An alternative is the average value of the absolute inner products between any two columns of ${X}^{\prime}$, which can be expressed as

$$g({X}^{\prime})=\frac{2}{{K}^{\prime}({K}^{\prime}-1)}\sum _{1\le i<j\le {K}^{\prime}}\left|\langle {{x}^{\prime}}_{i},{{x}^{\prime}}_{j}\rangle \right|,$$

and this average coherence is added to the sparsity objective as a penalty term weighted by ${\tau}_{c}$.

Here ${\tau}_{i}$ and ${\tau}_{c}$ are nonnegative parameters: ${\tau}_{i}$ should be chosen according to the sparsity of the transform coefficients, as in other compressive sensing cases, and ${\tau}_{c}$ should be chosen according to the coherence between the images of neighboring slices.

Since ${X}^{\prime}=XH$ and $H$ is a $K\times {K}^{\prime}$ matrix, ${X}^{\prime}$ is a matrix consisting of ${K}^{\prime}$ column vectors. In fact, the target scene is divided into $K$ slices, which means only $K$ column vectors need to be reconstructed; the extra columns in ${X}^{\prime}$ arise from the convolution with $h$. Therefore, we can reconstruct the central $K$ column vectors of ${X}^{\prime}$, denoted by ${X}^{\u2033}$, using the following optimization:

$$\underset{\{{\theta}_{i}\}}{\min}\;\sum _{i=1}^{K}\left(\frac{1}{2}{\left\Vert {y}_{i}-A\Psi {\theta}_{i}\right\Vert}_{2}^{2}+{\tau}_{i}{\left\Vert {\theta}_{i}\right\Vert}_{1}\right)+{\tau}_{c}\,g({X}^{\u2033}),\qquad {{x}^{\u2033}}_{i}=\Psi {\theta}_{i},$$

with ${K}_{h}=1,3,5,\dots$ so that the central $K$ columns are well defined. When the laser pulse width is narrow enough, a Dirac function can be used to describe the laser pulses. Then ${K}_{h}=1$, ${K}^{\prime}=K$, and $H$ is a $K\times K$ matrix; thus we have ${X}^{\u2033}={X}^{\prime}=X$. In this special case, the quality of the reconstructed images can still benefit from the noise suppression provided by the orthogonality constraint.
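The $K\times {K}^{\prime}$ convolution matrix $H$ can be built row by row from the discretized pulse $h$; a minimal sketch (the helper name is ours):

```python
import numpy as np

def convolution_matrix(h, K):
    """K x K' matrix H (K' = K + Kh - 1) such that row @ H == np.convolve(row, h)."""
    Kh = len(h)
    H = np.zeros((K, K + Kh - 1))
    for i in range(K):
        H[i, i:i + Kh] = h               # each row holds a shifted copy of the pulse
    return H

h = np.array([0.25, 0.5, 0.25])          # toy discretized pulse, Kh = 3 (odd, as required)
K = 5
H = convolution_matrix(h, K)
row = np.arange(1.0, K + 1)              # one row of X: a pixel's depth profile
print(H.shape)                            # (5, 7)
print(np.allclose(row @ H, np.convolve(row, h)))  # True
```

Right-multiplying $X$ by this $H$ convolves every pixel's depth profile with the pulse, which is exactly how the pulse width smears neighboring slices into each other.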

_{h}We employ the fast iterative shrinkage-threshold algorithm (FISTA) [34] to complete the reconstructions for its convenience to be extended to multiple vectors processing. And other standard reconstruction algorithms also can be adopted to complete the reconstructions. Some necessary modifications have been made to recover multiple vectors of ${X}^{\u2033}$simultaneously. The functions needed in our multi-vector FISTA can be defined as

*L*is the Lipschitz constant of $\nabla f$.

The main iteration procedure of our recovery algorithm can be summarized as follows.

Initialization: ${X}_{0}=0$, ${Z}_{1}=0$, ${t}_{1}=1$.

Iteration $k$: compute

$${X}_{k}={p}_{L}({Z}_{k}),\qquad {t}_{k+1}=\frac{1+\sqrt{1+4{t}_{k}^{2}}}{2},\qquad {Z}_{k+1}={X}_{k}+\frac{{t}_{k}-1}{{t}_{k+1}}\left({X}_{k}-{X}_{k-1}\right).$$
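This recursion can be sketched numerically. The example below is a minimal multi-vector FISTA for the sparsity terms alone: the coherence penalty and the DCT basis are omitted for brevity, a Gaussian random matrix stands in for real speckle patterns, and all sizes are toy values.

```python
import numpy as np

rng = np.random.default_rng(1)
M, N, K = 80, 128, 4                          # toy sizes, M << N
A = rng.standard_normal((M, N)) / np.sqrt(M)  # stand-in for the speckle measurement matrix
X_true = np.zeros((N, K))
for j in range(K):                            # sparse slice images as columns
    X_true[rng.choice(N, 6, replace=False), j] = 1.0
Y = A @ X_true                                # noiseless signal matrix

tau = 0.01
L = np.linalg.norm(A, 2) ** 2                 # Lipschitz constant of grad f, f(X) = 0.5||Y - AX||_F^2
X = np.zeros((N, K)); Xp = X.copy(); Z = X.copy(); t = 1.0
for _ in range(300):
    G = Z - A.T @ (A @ Z - Y) / L             # gradient step on all K columns at once
    X = np.sign(G) * np.maximum(np.abs(G) - tau / L, 0.0)  # soft threshold: prox of the l1 term
    t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
    Z = X + ((t - 1.0) / t_new) * (X - Xp)    # momentum extrapolation
    Xp, t = X, t_new

err = np.linalg.norm(X - X_true) / np.linalg.norm(X_true)
print(err)                                     # small relative error in this noiseless toy case
```

Because the gradient and the soft threshold act on the whole matrix $X$ at once, extending this to the structured objective amounts to adding the (sub)gradient of the coherence penalty to the proximal step.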

## 3. Results and discussion

We performed numerical experiments to explore the performance of the proposed method. The speckle distributions used in our simulation are produced by a 532 nm solid-state pulsed laser passing through a rotating ground glass, and recorded by a CCD camera with 128 × 128 pixels. In each measurement, the recorded speckle distribution is reshaped into a row vector of length *N* (*N* = 128 × 128) to form a single row of the measurement matrix. After *M* measurements, the *M × N* measurement matrix is obtained. By adjusting the number of measurements, measurement matrices of different sizes (*M* = 1000, 500, and 200) are generated. The pulse width is 10 ns, and a Gaussian function $h(t)=\mathrm{exp}(-\frac{{t}^{2}}{2{\sigma}^{2}})$ is used to simulate the laser pulse. The signal matrix is obtained according to Eq. (5), and Gaussian noise of different levels (SNR = 15 dB, 20 dB, 25 dB, 30 dB) is added. The sparse representation basis adopted in our reconstruction is the two-dimensional discrete cosine transform (DCT) basis.
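Two of these simulation ingredients can be sketched directly. Treating the 10 ns pulse width as the FWHM of the Gaussian is our assumption for illustration, and the noise helper below is our own name, not code from the paper.

```python
import numpy as np

# Gaussian pulse h(t) = exp(-t^2 / (2 sigma^2)); interpreting the 10 ns pulse
# width as the FWHM is an assumption made for this sketch.
fwhm = 10e-9
sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
t = np.linspace(-2.0 * fwhm, 2.0 * fwhm, 41)
h = np.exp(-t**2 / (2.0 * sigma**2))

def add_noise(signal, snr_db, rng):
    """Add white Gaussian noise so the result has the requested SNR in dB."""
    p_signal = np.mean(signal**2)
    p_noise = p_signal / 10.0**(snr_db / 10.0)
    return signal + rng.standard_normal(signal.shape) * np.sqrt(p_noise)

rng = np.random.default_rng(2)
y = np.ones(10000)
y_noisy = add_noise(y, 30.0, rng)
snr_est = 10.0 * np.log10(np.mean(y**2) / np.mean((y_noisy - y)**2))
print(round(snr_est, 1))   # close to 30
```

The same `add_noise` call, applied row-wise to the signal matrix, reproduces the 15-30 dB noise levels used in the simulations.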

The first scene we simulated is a structure consisting of four slices, as shown in Fig. 3(a). The real-space sparse ratios [21] of the four slices from left to right are 0.086, 0.173, 0.259, and 0.346, and there is no spatial overlap between any two slices. The distance between two neighboring slices is 2 m. When the separation between slices is smaller than 3 m (3 × 10^{8} m/s × 10 ns), the influence of the laser pulse width should be considered. Figure 3 shows the reconstructed results of the four slices. Figures 3(b)-3(d) are the reconstructed results without considering structure, and Figs. 3(e)-3(g) are the results of structured reconstruction using the orthogonality constraint. The results for 1000, 500, and 200 measurements are shown in Figs. 3(b) and 3(e), 3(c) and 3(f), and 3(d) and 3(g), respectively. Whether the orthogonality constraint is used or not, the contrast of the stripes in the four slices degrades as the number of measurements decreases. It is clear that the images in Figs. 3(e)-3(g) are better than those in Figs. 3(b)-3(d), especially for the results with 200 measurements. It can also be noticed that the stripes in Figs. 3(b)-3(d) are wider than the corresponding stripes in Fig. 3(a), while the widths of the stripes in Figs. 3(e)-3(g) are almost the same as those in Fig. 3(a), which means the scene slices can be reconstructed more accurately using the orthogonality constraint.

The maximum coherence and average coherence between any two different recovered images of the four slices can be calculated according to Eq. (3) and Eq. (7), respectively. Figure 4 shows how the maximum coherence and average coherence decrease as the iteration number grows. The solid lines are the results of reconstruction using the orthogonality constraint, and the dashed lines are the results of standard reconstruction, in which only the standard sparsity constraint is used and structure is not considered. The results for 1000, 500, and 200 measurements are shown in Fig. 4 with green, blue, and red lines, respectively. The solid lines decrease much faster than the dashed lines in both Fig. 4(a) and Fig. 4(b), which shows that image reconstruction using the orthogonality constraint works well in reducing the coherence between any two different recovered images, especially for the results with 200 measurements. Although the values of the maximum coherence are larger than those of the average coherence, the similar decreasing trends of the curves in Fig. 4(a) and Fig. 4(b) demonstrate that the average coherence can be used as a substitute for the maximum coherence in image reconstruction.

To quantitatively evaluate the quality of the recovered images, the mean squared error (MSE) can be calculated as $MSE=\frac{1}{N}{\displaystyle \sum _{i=1}^{m}{\displaystyle \sum _{j=1}^{n}{({{I}^{\prime}}_{i,j}-{I}_{i,j})}^{2}}}$, where ${{I}^{\prime}}_{i,j}$ and ${I}_{i,j}$ are the pixel values of the recovered and original images, respectively. Figure 5 shows the MSE curves of the recovered images of the four slices. The MSE values for different iteration numbers, different measurement numbers, and different noise levels are shown in Figs. 5(a)-5(c), respectively. The MSE results of both kinds of reconstruction increase from slice 1 to slice 4, which means the MSE values increase with the slice sparse ratio; this is consistent with the results in reference [21]. In Fig. 5(a), the MSE values of the results reconstructed using the orthogonality constraint decrease faster than those of the standard reconstruction, especially for the slices with smaller sparse ratios. In Fig. 5(b) and Fig. 5(c), we can see that the MSE values of the results reconstructed using the orthogonality constraint are smaller than those of the standard reconstruction. The MSE difference between the two reconstruction methods decreases as the number of measurements grows, and increases as the SNR grows. Therefore, incorporating the orthogonality constraint in image reconstruction can improve the quality of the recovered slice images in 3D GISC, and the improvement is especially significant for ghost imaging with fewer measurements. Meanwhile, a high SNR and a high slice sparse ratio will help further enhance the performance of structured image reconstruction in 3D GISC applications.
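The MSE definition above is simply a mean over all N = m × n pixels; a short check with illustrative array values:

```python
import numpy as np

def mse(recovered, original):
    """Mean squared pixel error over all N = m*n pixels, as defined above."""
    return float(np.mean((np.asarray(recovered) - np.asarray(original)) ** 2))

I_orig = np.zeros((4, 4)); I_orig[1:3, 1:3] = 1.0   # toy 4x4 "slice"
I_rec = I_orig.copy(); I_rec[0, 0] = 0.4            # one erroneous pixel
print(round(mse(I_rec, I_orig), 6))                  # 0.4^2 / 16 = 0.01
```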

The second scene we simulated is a city scene in remote sensing. As Fig. 6(a) shows, there are several kinds of city buildings in this scene, and a road divides the scene into two parts. In the left part, there are three main buildings with different heights and many small houses. In the right part, there are four buildings arranged in order and a pair of twin towers nearby. The maximum height of the buildings in the simulated city scene is 100 m. To simulate aerial photography, the city scene is assumed to be illuminated from above and layered into slices from top to bottom. Figure 6(b) shows some characteristic slices of the three-dimensional city scene at different heights. The image size of each slice is 128 × 128 pixels. The number of measurements is 1000, and the noise level is 30 dB. Figure 6(c) and Fig. 6(d) are the results of structured reconstruction. Figure 6(c) is the simulated aerial-photography result of 3D GISC lidar; different colors in Fig. 6(c) indicate different heights in the city scene. Figure 6(d) is the reconstructed 3D scene. As a reference, the reconstructed results of the characteristic slices at different heights using structured reconstruction and standard reconstruction are shown in Fig. 6(e) and Fig. 6(f), respectively. Although the first slice in Fig. 6(f) is clear because of its high sparsity, most of the images in Fig. 6(f) are so blurred that the small houses in the left part of the scene cannot be determined accurately. The contours of the twin towers and the road are also difficult to confirm.

Although the MSE of each slice provides quality information about image reconstruction in 3D GISC, a more comprehensive evaluation of the reconstructed 3D scene as a whole can be obtained with the three-dimensional MSE, calculated as $MS{E}_{3D}=\frac{1}{NK}{\displaystyle \sum _{p=1}^{K}{\displaystyle \sum _{i=1}^{m}{\displaystyle \sum _{j=1}^{n}{({{I}^{\prime}}_{i,j,p}-{I}_{i,j,p})}^{2}}}}$, where ${{I}^{\prime}}_{i,j,p}$ and ${I}_{i,j,p}$ are the pixel values of the recovered and original images of the p-th slice, respectively. The MSE_{3D} of the structured reconstruction results in Fig. 6 is 0.0084, and the MSE_{3D} of the standard reconstruction is 0.0467.

Figure 7 shows the MSE_{3D} curves for different numbers of measurements and different numbers of slices. The solid red lines are the results of structured reconstruction using the orthogonality constraint, and the dashed blue lines are the results of standard reconstruction. In Fig. 7(a), the MSE_{3D} difference between the two reconstruction methods decreases as the number of measurements grows, and the MSE_{3D} value of the reconstruction using the orthogonality constraint at 1000 measurements is almost equal to the MSE_{3D} value of the standard reconstruction at 4000 measurements. Thus, structured image reconstruction using the orthogonality constraint can reduce the sampling requirements in 3D GISC lidar. In Fig. 7(b), the MSE_{3D} values of both reconstruction methods decrease as the number of slices grows, and the MSE_{3D} difference between the two methods increases with the number of slices. As we know, image quality increases with the sparsity of the reconstructed images, and the sparsity of the slices in natural scenes increases as the number of slices grows. But the sparsity of the reconstructed images and the sparsity of the slices are not always equivalent. In fact, the sparsity of the reconstructed images is determined by the sparsity of the slices and by the influence of the laser pulse width. When the number of slices is small, the separation between neighboring slices is large, so the influence of the laser pulse width is not significant, and the sparsity of the reconstructed images is determined by the sparsity of the slices. But when the number of slices is large, the separation between neighboring slices is small, and the influence of the laser pulse width must be considered. Partial spatial overlap then exists in the reconstructed images, and the overlap increases with the laser pulse width, which means the influence of the laser pulse width becomes dominant and the sparse ratios of the reconstructed images are larger than those of the original slices.
However, the influence of the laser pulse width is not considered in standard reconstruction. Thus, the image quality of standard reconstruction cannot be improved significantly simply by increasing the number of slices. In structured image reconstruction, by contrast, the influence of the laser pulse width is included, and the image-quality improvement results from the sparsity enhancement that comes with more slices together with the noise suppression obtained through the orthogonality constraint. Therefore, structured image reconstruction using the orthogonality constraint includes the influence of the laser pulse width and can accurately reconstruct three-dimensional scenes in 3D GISC lidar.

A comparison between three-dimensional ghost imaging and imaging laser radar has been performed in reference [6]. Suppose a traditional lidar works in the focused mode and the focus size is the diffraction limit of the transmitting aperture; then 128 × 128 measurements are needed to accomplish the 128 × 128 image reconstruction of the city scene at different ranges. In ghost imaging, the average intensity of the received light in each measurement is proportional to the reflectivity of the target as a whole and inversely proportional to the square of the distance between the ghost imager and the target [6]. In traditional lidar, by contrast, the intensity of the received light in each measurement is proportional to the target reflectivity at a certain scanning point and inversely proportional to the square of the distance between the imager and the target. Therefore, the energy required in each measurement of 3D GISC lidar is almost the same as the energy needed in each measurement of a traditional scanning imaging lidar. In the case of our simulated three-dimensional city scene, the total energy required by the traditional lidar is about 128 × 128/1000 (the number of measurements of the traditional lidar divided by the number of measurements of the 3D GISC lidar) ≈ 16.4 times the total energy of the 3D GISC lidar.
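The quoted energy ratio follows directly from the two measurement counts; a trivial check:

```python
# Scanning lidar: one measurement per pixel of the 128 x 128 image;
# 3D GISC lidar: 1000 measurements; per-measurement energies taken as equal.
scanning_measurements = 128 * 128
gisc_measurements = 1000
ratio = scanning_measurements / gisc_measurements
print(round(ratio, 1))   # 16.4
```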

## 4. Conclusion

A structured image reconstruction method has been proposed to improve sampling efficiency and achieve high-quality 3D reconstruction of natural scenes in 3D GISC lidar. By considering the structure relationship between the recovered images of scene slices at different longitudinal distances, an orthogonality constraint has been incorporated into image reconstruction to accurately determine three-dimensional scenes in remote sensing. Numerical experiments have been performed to explore the performance of structured image reconstruction using the orthogonality constraint. Scene slices with various sparse ratios can be recovered more accurately by applying the orthogonality constraint, and the enhancement is especially significant for ghost imaging with fewer measurements. A simulated three-dimensional city scene with 128 × 128 pixels per slice has been successfully reconstructed using the proposed method with 1000 measurements in 3D GISC, and the three-dimensional mean squared error of the reconstructed result is 0.0084.

## Acknowledgments

The work was supported by the Hi-Tech Research and Development Program of China under Grant Projects No.2013AA122901 and No.2013AA122902, and the National Natural Science Foundation of China under Grant Projects No. 11105205 and No. 11175227.

## References and links

**1. **O. Katz, Y. Bromberg, and Y. Silberberg, “Compressive ghost imaging,” Appl. Phys. Lett. **95**(13), 131110 (2009). [CrossRef]

**2. **P. Zerom, K. W. C. Chan, J. C. Howell, and R. W. Boyd, “Entangled-photon compressive ghost imaging,” Phys. Rev. A **84**(6), 061804 (2011). [CrossRef]

**3. **E. J. Candes, J. Romberg, and T. Tao, “Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information,” IEEE Trans. Inf. Theory **52**(2), 489–509 (2006). [CrossRef]

**4. **D. L. Donoho, “Compressed Sensing,” IEEE Trans. Inf. Theory **52**(4), 1289–1306 (2006). [CrossRef]

**5. **M. Bina, D. Magatti, M. Molteni, A. Gatti, L. A. Lugiato, and F. Ferri, “Backscattering differential ghost imaging in turbid media,” Phys. Rev. Lett. **110**(8), 083901 (2013). [CrossRef] [PubMed]

**6. **N. D. Hardy and J. H. Shapiro, “Computational ghost imaging versus imaging laser radar for three-dimensional imaging,” Phys. Rev. A **87**(2), 023820 (2013). [CrossRef]

**7. **B. I. Erkmen, “Computational ghost imaging for remote sensing,” J. Opt. Soc. Am. A **29**(5), 782–789 (2012). [CrossRef] [PubMed]

**8. **E. Meyers, K. S. Deacon, and Y. Shih, “Turbulence free ghost imaging,” Appl. Phys. Lett. **98**(11), 111115 (2011). [CrossRef]

**9. **P. Zhang, W. Gong, X. Shen, and S. Han, “Correlated imaging through atmospheric turbulence,” Phys. Rev. A **82**(3), 033817 (2010). [CrossRef]

**10. **J. Cheng, “Ghost imaging through turbulent atmosphere,” Opt. Express **17**(10), 7916–7921 (2009). [CrossRef] [PubMed]

**11. **D. Shi, C. Fan, P. Zhang, H. Shen, J. Zhang, C. Qiao, and Y. Wang, “Two-wavelength ghost imaging through atmospheric turbulence,” Opt. Express **21**(2), 2050–2064 (2013). [CrossRef] [PubMed]

**12. **W. K. Yu, M. F. Li, X. R. Yao, X. F. Liu, L. A. Wu, and G. J. Zhai, “Adaptive compressive ghost imaging based on wavelet trees and sparse representation,” Opt. Express **22**(6), 7133–7144 (2014). [CrossRef] [PubMed]

**13. **S. Gazit, A. Szameit, Y. C. Eldar, and M. Segev, “Super-resolution and reconstruction of sparse sub-wavelength images,” Opt. Express **17**(26), 23920–23946 (2009). [CrossRef] [PubMed]

**14. **W. Gong and S. Han, “Super-resolution ghost imaging via compressive sampling reconstruction,” arXiv preprint arXiv: 0910.4823v1 (2009).

**15. **H. Wang, S. Han, and M. I. Kolobov, “Quantum limits of super-resolution of optical sparse objects via sparsity constraint,” Opt. Express **20**(21), 23235–23252 (2012). [CrossRef] [PubMed]

**16. **J. Bobin, J. L. Starck, J. M. Fadili, Y. Moudden, and D. L. Donoho, “Morphological component analysis: An adaptive thresholding strategy,” IEEE Trans. Image Process. **16**(11), 2675–2681 (2007). [CrossRef] [PubMed]

**17. **X. Xu, E. Li, H. Yu, W. Gong, and S. Han, “Morphology separation in ghost imaging via sparsity constraint,” Opt. Express **22**(12), 14375–14381 (2014). [CrossRef] [PubMed]

**18. **J. Cheng and S. Han, “Incoherent coincidence imaging and its applicability in x-ray diffraction,” Phys. Rev. Lett. **92**(9), 093903 (2004). [CrossRef] [PubMed]

**19. **H. Wang and S. Han, “Coherent ghost imaging based on sparsity constraint without phase-sensitive detection,” Europhys. Lett. **98**(2), 24003 (2012). [CrossRef]

**20. **W. Gong and S. Han, “Experimental investigation of the quality of lensless super-resolution ghost imaging via sparsity constraints,” Phys. Lett. A **376**(17), 1519–1522 (2012). [CrossRef]

**21. **J. Du, W. Gong, and S. Han, “The influence of sparsity property of images on ghost imaging with thermal light,” Opt. Lett. **37**(6), 1067–1069 (2012). [CrossRef] [PubMed]

**22. **W. Gong and S. Han, “Multiple-input ghost imaging via sparsity constraints,” J. Opt. Soc. Am. A **29**(8), 1571–1579 (2012). [CrossRef] [PubMed]

**23. **W. Gong, Z. Bo, E. Li, and S. Han, “Experimental investigation of the quality of ghost imaging via sparsity constraints,” Appl. Opt. **52**(15), 3510–3515 (2013). [CrossRef] [PubMed]

**24. **M. Chen, E. Li, and S. Han, “Application of multi-correlation-scale measurement matrices in ghost imaging via sparsity constraints,” Appl. Opt. **53**(13), 2924–2928 (2014). [CrossRef] [PubMed]

**25. **C. Zhao, W. Gong, M. Chen, E. Li, H. Wang, W. Xu, and S. Han, “Ghost imaging lidar via sparsity constraints,” Appl. Phys. Lett. **101**(14), 141123 (2012). [CrossRef]

**26. **A. Kirmani, A. Colaço, F. N. C. Wong, and V. K. Goyal, “Exploiting sparsity in time-of-flight range acquisition using a single time-resolved sensor,” Opt. Express **19**(22), 21485–21507 (2011). [CrossRef] [PubMed]

**27. **W.-K. Yu, X.-R. Yao, X.-F. Liu, L.-Z. Li, and G.-J. Zhai, “Three-dimensional single-pixel compressive reflectivity imaging based on complementary modulation,” Appl. Opt. **54**(3), 363–367 (2015). [CrossRef]

**28. **W. Gong, C. Zhao, J. Jiao, E. Li, M. Chen, H. Wang, W. Xu, and S. Han, “Three-dimensional ghost imaging ladar,” arXiv preprint arXiv:1301.5767 (2013).

**29. **M. F. Duarte and Y. C. Eldar, “Structured compressed sensing: from theory to applications,” IEEE Trans. Signal Process. **59**(9), 4053–4085 (2011). [CrossRef]

**30. **S. F. Cotter, B. D. Rao, K. Engan, and K. Kreutz-Delgado, “Sparse solutions to linear inverse problems with multiple measurement vectors,” IEEE Trans. Signal Process. **53**(7), 2477–2488 (2005). [CrossRef]

**31. **Y. C. Eldar and M. Mishali, “Robust recovery of signals from a structured union of subspaces,” IEEE Trans. Inf. Theory **55**(11), 5302–5316 (2009). [CrossRef]

**32. **R. G. Baraniuk, V. Cevher, M. F. Duarte, and C. Hegde, “Model-based compressive sensing,” IEEE Trans. Inf. Theory **56**(4), 1982–2001 (2010). [CrossRef]

**33. **M. E. Davies and Y. C. Eldar, “Rank awareness in joint sparse recovery,” IEEE Trans. Inf. Theory **58**(2), 1135–1146 (2012). [CrossRef]

**34. **A. Beck and M. Teboulle, “A fast iterative shrinkage-thresholding algorithm for linear inverse problems,” SIAM J. Imaging Sci. **2**(1), 183–202 (2009). [CrossRef]