## Abstract

Ghost imaging LiDAR via sparsity constraints using push-broom scanning is proposed. It can image a stationary target scene continuously along the scanning direction by taking advantage of the relative movement between the platform and the target scene. Compared to conventional ghost imaging LiDAR, which requires multiple speckle patterns staring at the target, ghost imaging LiDAR via sparsity constraints using push-broom scanning not only simplifies the imaging system but also reduces the number of samples required. Numerical simulations and experiments demonstrate its effectiveness.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

## 1. Introduction

LiDAR [1] is a detection tool that has been widely applied in remote sensing. Most commercial LiDAR systems are whiskbroom LiDARs, which obtain the target information through an optical-mechanical scanning mechanism: a whiskbroom mirror scans in one dimension across the aircraft flight path, while the physical motion of the aircraft itself scans along the flight path. Whiskbroom LiDAR is very sensitive and low-cost, but heavily suffers from optical-mechanical pointing errors and poor real-time imaging of moving targets [2]. Another type, known as push-broom LiDAR, replaces the optical-mechanical scanning with a line array detector oriented across the aircraft flight path. Eliminating the optical-mechanical scanning greatly improves system stability and real-time performance; however, push-broom LiDAR is less sensitive than whiskbroom LiDAR and more easily influenced by background noise [2]. In both LiDARs, moreover, the target information is acquired by a point-to-point correspondence between the target and the detector plane, so the correlation between pixels of the target is not exploited in the imaging process. In addition, both imaging methods require sampling at or beyond the Nyquist limit. This redundant sampling greatly limits the further development of LiDAR techniques, especially for high-resolution imaging at long range.

As a novel time-resolved single-pixel LiDAR method, ghost imaging LiDAR via sparsity constraints (GISC LiDAR) [3–17] takes advantage of the sparsity of natural targets and the compressive sensing framework [18–21] to make compressive sampling possible [22–25]. On the basis of its unique information-acquisition mechanism, GISC LiDAR realizes high-resolution imaging while keeping the merits of high sensitivity and low cost, and it has the potential for real-time imaging of moving targets [26–32]. However, as a typical staring imaging scheme, traditional single-pixel GISC LiDAR always requires a large number of measurements for practical remote sensing, and the relative movement caused by aircraft flight introduces a motion-blur problem [33]. One successful strategy, applied in the reported airborne GISC LiDAR system [33], is motion compensation based on opto-electronic tracking. However, this mechanism limits the imaging field of view to the focusing area of the motion-compensation system, which makes continuous remote sensing hard to realize, and the opto-electronic tracking method may fail at night or in low-visibility weather. Another possible way to solve the motion-blur problem is to borrow from the traditional push-broom mechanism. In this letter, we propose ghost imaging LiDAR via sparsity constraints using push-broom scanning (push-broom GISC LiDAR). In this method, the target scene is imaged continuously along the scanning direction by taking advantage of the relative movement between the platform and the target scene; the size of the target is no longer limited by the field of view, and no motion compensation is required.

## 2. Model and methods

Figure 1 presents the schematic of the proposed push-broom GISC LiDAR. The push-broom GISC LiDAR system, mounted on a moving platform, emits a predetermined and calibrated rectangular speckle pattern to cover a certain region of the target scene (the blue area in Fig. 1(a)) in each laser pulse. A receiving system then focuses the rectangular receiving speckle pattern onto a line array detector through an orthogonal cylindrical lens group. Across the flight direction, the imaging region is divided into many strips. The signal of each pixel of the line array detector corresponds to a distinct strip of this target region; as the system moves along the flight path strip by strip (push-broom scanning), every strip is scanned and sampled sequentially by each pixel of the linear detector, i.e., multiple times in total. By synchronizing the emission system and the receiving system, continuous sensing along the flight direction can be realized.

The system may use either an invariant illuminating pattern or multiple illuminating patterns during the sampling process, denoted method-1 and method-2, respectively (Fig. 1(b)). In method-1, the emitted illuminating pattern remains fixed during the whole push-broom scanning process, whereas in method-2 the pattern changes every time a new strip enters the focusing region along the flight direction. Nonetheless, the two emission schemes can be described in a common framework. For simplicity, we assume that during sampling the platform moves in a fixed direction parallel to the line array detector. Suppose the transverse resolution across the flight direction is *n* and the number of pixels of the line array detector in the receiving system is *m*; the sampling rate of the push-broom GISC LiDAR system can then be defined as *η* = *m*/*n*. To sense *q* strips along the flight direction, a push-broom scan over *q* + *m* − 1 strips is required so that each sensed strip is sampled exactly *m* times in the process (Fig. 1(c)). We use an *n*-by-1 column vector *X _{i}* to represent the target in the *i*-th strip, denote the *k*-th *n* × *m* illuminating pattern as *I*^{(k)}, and denote the *k*-th detection signal of the *m*-pixel line array detector as *D*^{(k)} (an *m*-by-1 vector). With the geometry of the system, the detection signal *Y _{i}* relates to *X _{i}* in the general form

$${Y}_{i} = {A}_{i} {X}_{i}, \tag{1}$$

where ${Y}_{i}={\left[\begin{array}{cccc}{D}_{1}^{(i)}& {D}_{2}^{(i+1)}& \cdots & {D}_{m}^{(i+m-1)}\end{array}\right]}^{\mathrm{T}}$ is an *m*-by-1 column vector, with ${D}_{j}^{(i+j-1)}$ the *j*-th element of *D*^{(i+j−1)}, and *A _{i}* is an *m*-by-*n* measurement matrix, which can be represented in vector form as

$${A}_{i}={\left[\begin{array}{cccc}{I}_{1}^{(i)}& {I}_{2}^{(i+1)}& \cdots & {I}_{m}^{(i+m-1)}\end{array}\right]}^{\mathrm{T}}, \tag{2}$$

where the *n*-element column vector ${I}_{j}^{(i+j-1)}$ (1 ≤ *j* ≤ *m*) denotes the *j*-th column of the (*i* + *j* − 1)-th illuminating pattern *I*^{(i+j−1)}.
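The per-strip bookkeeping described above can be checked numerically. The sketch below is a minimal model with hypothetical sizes (*n* = 16, *m* = 8, *q* = 5) and random binary patterns; in 0-based indexing, pixel *j* of pulse *k* views strip *k* − *j*, and gathering those samples reproduces the per-strip relation between the detection vector and the measurement matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, q = 16, 8, 5        # strip resolution, detector pixels, strips to sense
K = q + m - 1             # pulses needed so every strip is sampled m times

# K random binary illuminating patterns, each n-by-m (method-2 style)
I = rng.integers(0, 2, size=(K, n, m)).astype(float)
# target: q strips, each an n-element column
X = rng.random((q, n))

# forward model: in 0-based indexing, pixel j of pulse k views strip i = k - j
D = np.zeros((K, m))
for k in range(K):
    for j in range(m):
        i = k - j
        if 0 <= i < q:
            D[k, j] = I[k][:, j] @ X[i]

# gather the m samples of strip i and check the per-strip relation Y_i = A_i X_i
for i in range(q):
    Y_i = np.array([D[i + j, j] for j in range(m)])      # m-element signal
    A_i = np.stack([I[i + j][:, j] for j in range(m)])   # m-by-n measurement matrix
    assert np.allclose(Y_i, A_i @ X[i])
```

The indexing shows why *q* + *m* − 1 pulses suffice: the last sample of the last strip is taken by pixel *m* − 1 at pulse *q* + *m* − 2 (0-based).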

Analogous to the above analysis, after the push-broom scan over *q* + *m* − 1 strips (*q* ≥ 1), one may write the general equation relating the total detection signal *Y* to the total information *X* of these *q* strips as follows,

$$Y = AX, \tag{3}$$

where *Y* and *X* can be respectively represented as

$$Y={\left[\begin{array}{ccccc}{Y}_{1}^{\mathrm{T}}& \cdots & {Y}_{i}^{\mathrm{T}}& \cdots & {Y}_{q}^{\mathrm{T}}\end{array}\right]}^{\mathrm{T}} \tag{4}$$

and

$$X={\left[\begin{array}{ccccc}{X}_{1}^{\mathrm{T}}& \cdots & {X}_{i}^{\mathrm{T}}& \cdots & {X}_{q}^{\mathrm{T}}\end{array}\right]}^{\mathrm{T}}, \tag{5}$$

and the total measurement matrix *A* is the block-diagonal combination of the *A _{i}*:

$$A=\operatorname{diag}\left({A}_{1}, \cdots, {A}_{i}, \cdots, {A}_{q}\right). \tag{6}$$

In particular, for method-1 the emitted illuminating pattern remains fixed during the whole push-broom scanning process, so the measurement matrix is the same for every strip. The sensing formulation for method-1 can therefore be simplified to

$${Y}^{\prime} = {A}^{\prime} {X}^{\prime}, \tag{7}$$

where ${Y}^{\prime}=\left[\begin{array}{ccccc}{Y}_{1}& \cdots & {Y}_{i}& \cdots & {Y}_{q}\end{array}\right]$ is an *m*-by-*q* matrix, ${X}^{\prime}=\left[\begin{array}{ccccc}{X}_{1}& \cdots & {X}_{i}& \cdots & {X}_{q}\end{array}\right]$ is an *n*-by-*q* matrix corresponding to the target scene, and *A′* is the invariant *m*-by-*n* measurement matrix determined by the fixed illuminating pattern used in method-1 push-broom GISC LiDAR.
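With an invariant pattern, the whole scan collapses to a single matrix product, as the method-1 simplification *Y′* = *A′X′* states. The sketch below checks this with hypothetical sizes; the identification of *A′* with the transpose of the fixed pattern follows from the per-strip measurement matrix when all patterns are identical.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, q = 16, 8, 5
K = q + m - 1

I0 = rng.integers(0, 2, size=(n, m)).astype(float)  # single fixed pattern (method-1)
Ap = I0.T                                           # invariant measurement matrix A' (m-by-n)
Xp = rng.random((n, q))                             # X' = [X_1 ... X_q]

# per-pulse detection with the fixed pattern (pixel j of pulse k views strip k - j)
D = np.zeros((K, m))
for k in range(K):
    for j in range(m):
        i = k - j
        if 0 <= i < q:
            D[k, j] = I0[:, j] @ Xp[:, i]

# gather Y' = [Y_1 ... Y_q]; entry (j, i) is the j-th sample of strip i
Yp = np.array([[D[i + j, j] for i in range(q)] for j in range(m)])

assert np.allclose(Yp, Ap @ Xp)   # the whole scan collapses to Y' = A' X'
```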

To reconstruct *X* from *Y* according to Eq. (3), we exploit the sparsity of the target scene image and adopt total variation (TV) regularization, solving the following optimization program [34], namely,

$$\hat{X}=\underset{X}{\arg\min}\;\mathrm{TV}(X) \quad \text{subject to} \quad Y=AX, \qquad \mathrm{TV}(X)=\sum_{i}\left|{x}_{i+1}-{x}_{i}\right|, \tag{8}$$

where *x _{i}* denotes the *i*-th element of the column vector *X*.
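A minimal stand-in for the TV reconstruction can be sketched with plain gradient descent; the paper's exact solver is not specified here, so the sketch uses the common unconstrained (Lagrangian) variant, minimizing ‖*Y* − *AX*‖² + λ·TV(*X*) with a smoothed 1-D TV on a hypothetical under-sampled strip (η = *m*/*n* = 0.625):

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 32, 20                         # sampling rate eta = m/n = 0.625
A = rng.integers(0, 2, (m, n)).astype(float)

x_true = np.zeros(n)                  # piecewise-constant (TV-sparse) strip
x_true[8:20] = 1.0
y = A @ x_true

def tv_grad(x, eps=1e-6):
    """Gradient of the smoothed 1-D TV: sum_i sqrt((x_{i+1}-x_i)^2 + eps)."""
    d = np.diff(x)
    w = d / np.sqrt(d * d + eps)
    g = np.zeros_like(x)
    g[:-1] -= w
    g[1:] += w
    return g

# gradient descent on 0.5*||y - A x||^2 + lam * TV_smooth(x)
x = np.zeros(n)
lam = 0.05
step = 0.5 / np.linalg.norm(A, 2) ** 2   # safe step from the data-term Lipschitz constant
for _ in range(20000):
    x -= step * (A.T @ (A @ x - y) + lam * tv_grad(x))
```

Even with only 20 measurements for 32 unknowns, the TV prior steers the solution toward the piecewise-constant target, which is the compressive sampling capability exploited throughout this letter.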

To reconstruct *X′* from *Y′* according to Eq. (7), the TV regularization program can be represented as

$$\hat{X}^{\prime}=\underset{{X}^{\prime}}{\arg\min}\;\mathrm{TV}({X}^{\prime}) \quad \text{subject to} \quad {Y}^{\prime}={A}^{\prime}{X}^{\prime}, \qquad \mathrm{TV}({X}^{\prime})=\sum_{i,j}\sqrt{{\left({x}^{\prime}_{i+1,j}-{x}^{\prime}_{i,j}\right)}^{2}+{\left({x}^{\prime}_{i,j+1}-{x}^{\prime}_{i,j}\right)}^{2}}, \tag{9}$$

where *x′* _{i,j} denotes the (*i*, *j*)-th element of the matrix *X′*.
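The method-1 reconstruction can be sketched the same way. The code below is again a hedged stand-in for the paper's solver: it uses the anisotropic (sum-of-absolute-differences) 2-D TV rather than the isotropic form, in its smoothed unconstrained variant, on hypothetical sizes:

```python
import numpy as np

rng = np.random.default_rng(3)
n, m, q = 24, 15, 12                             # eta = m/n = 0.625
Ap = rng.integers(0, 2, (m, n)).astype(float)    # invariant measurement matrix A'

Xt = np.zeros((n, q))                            # piecewise-constant scene X'
Xt[6:16, 3:9] = 1.0
Yp = Ap @ Xt

def tv2_grad(X, eps=1e-6):
    """Gradient of a smoothed anisotropic 2-D total variation."""
    g = np.zeros_like(X)
    dv = np.diff(X, axis=0)
    wv = dv / np.sqrt(dv * dv + eps)
    g[:-1, :] -= wv
    g[1:, :] += wv
    dh = np.diff(X, axis=1)
    wh = dh / np.sqrt(dh * dh + eps)
    g[:, :-1] -= wh
    g[:, 1:] += wh
    return g

# gradient descent on 0.5*||Y' - A' X'||_F^2 + lam * TV_smooth(X')
Xr = np.zeros((n, q))
lam = 0.05
step = 0.5 / np.linalg.norm(Ap, 2) ** 2
for _ in range(20000):
    Xr -= step * (Ap.T @ (Ap @ Xr - Yp) + lam * tv2_grad(Xr))
```

Because every column (strip) shares the same *A′*, the 2-D TV term is what couples neighboring strips and counteracts the correlated noise discussed in Section 4.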

## 3. Simulation and experimental results

To verify the proposed imaging methods, numerical simulations and experiments were performed. In the push-broom scanning scheme of Fig. 1, the push-broom GISC LiDAR system is mounted on a moving platform while the target scene remains static. In our experiment, for convenience, a moving-target strategy was substituted for the moving platform to emulate the continuous relative movement. The experimental setup is illustrated in Fig. 2. A halogen lamp emitted light through a Köhler illumination system composed of lens1 and lens2, uniformly illuminating a digital micromirror device (DMD) (XD-ED01(N)). The focal lengths of lens1 and lens2 were 10 cm and 15 cm, respectively, and a 10 nm bandwidth filter centered at 655 nm was used to improve the system's image quality. The coded patterns were prefabricated by modulating the mirrors of the DMD, which has 1024 × 768 pixels of size 13.68 μm × 13.68 μm. We chose 256 × 256 DMD pixels for the experiments, with 4 × 4 pixels merged into one pattern element; each pattern element took the value 0 or 1. The speckle pattern reflected by the DMD was imaged onto the target scene by camera lens1 (Canon EF-S 55-250mm F/4-5.6 IS II). The target scene region was 1.59 cm × 1.59 cm and the resolution of each strip was *r* = 248 μm. The light reflected from the pattern projected onto the target was imaged onto a CCD by camera lens2 (Tamron AF 70-300mm F/4-5.6). The test target scenes were images printed on paper, divided into strips. The CCD energy was integrated in the direction perpendicular to the motion, so that the CCD worked as a line array detector. Since the sampling frequency was set to 2 Hz, the stepper-motor velocity was *v* = 496 μm/s according to the synchronization requirement.
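The synchronization requirement simply ties the scan speed to the strip resolution and the sampling frequency, so that exactly one strip advances per sample:

```python
r_um = 248.0          # strip resolution r (μm), from the setup above
f_hz = 2.0            # sampling frequency (Hz)
v_um_s = r_um * f_hz  # speed at which exactly one strip advances per sample
print(v_um_s)         # prints 496.0, matching the stepper-motor speed used
```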

First, the compressive sampling capability of the proposed push-broom GISC LiDAR is investigated both in simulation (Fig. 3) and experimentally (Fig. 4). In both Figs. 3 and 4, the images in column (1) are three typical targets, from binary to grayscale and from simple to complex, and the results in columns (2)–(7) correspond to sampling rates *η* of 0.25, 0.5, 0.625, 0.75, 0.875 and 1, respectively. Rows (a), (c), and (e) are for method-1 and rows (b), (d), and (f) are for method-2.

Here, we use the peak signal-to-noise ratio (PSNR) to evaluate the quality of the reconstructed images quantitatively. The PSNR, in dB, is the logarithmic ratio of the squared peak value (2*^{n}* − 1)^{2} (with *n* the bit depth of the image) to the mean square error (MSE) between the original and reconstructed images, namely,

$$\mathrm{PSNR}=10\log_{10}\frac{{\left({2}^{n}-1\right)}^{2}}{\mathrm{MSE}}.$$
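The definition above translates directly into code; for instance, a uniform error of 5 grey levels on an 8-bit image gives MSE = 25:

```python
import numpy as np

def psnr(ref, rec, bit_depth=8):
    """PSNR in dB between a reference image and its reconstruction."""
    mse = np.mean((ref.astype(float) - rec.astype(float)) ** 2)
    return 10.0 * np.log10((2 ** bit_depth - 1) ** 2 / mse)

ref = np.full((8, 8), 200.0)
rec = ref + 5.0                    # uniform error of 5 grey levels -> MSE = 25
print(round(psnr(ref, rec), 2))    # prints 34.15
```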

After that, a simulation on a continuous scene image with *η* = 0.625 was performed to further demonstrate the performance of push-broom GISC LiDAR. The simulated image is a "Great Wall" picture (Fig. 6(a)) of 64 × 192 pixels. The results of method-1 and method-2 are shown in Figs. 6(b) and 6(c), respectively. Note that method-1 takes less time for reconstruction with the same computing resources (85.5987 seconds and 96.9540 seconds for method-1 and method-2, respectively, on an Intel(R) Core(TM) i3-2120 CPU @ 3.3 GHz with 6.00 GB RAM).

## 4. Discussion

As depicted in Figs. 3, 4, and 5, first, all the imaging qualities degrade as the sampling rate decreases, but the reconstructions remain acceptable under compressive sampling. Second, both methods of the proposed push-broom GISC LiDAR are demonstrated on three typical types of objects, representing target scenes from simple to complex, in both simulations and experiments. Third, as shown in Figs. 3 and 4, the method-1 reconstructions at low sampling rates exhibit an obvious regular noise, as if transverse streaks were overlaid on the results, whereas no such phenomenon appears in the corresponding method-2 reconstructions. This can be explained as follows. In method-1, all strips share the same measurement matrix; therefore, if the target scenes in different strips are very similar to each other, for example the letter "I" and some parts of "M", the reconstruction noise is also very similar from strip to strip, which produces the regular visual noise. In method-2, by contrast, every strip has its own measurement matrix, so even for similar target scenes the reconstruction noise is usually uncorrelated between strips. Therefore, although method-1 requires no dynamic light-modulation device, which makes it more stable and economical and saves computation, its imaging quality still needs further improvement. In addition, the simulated results in Fig. 6 show that the proposed push-broom GISC LiDAR can realize remote sensing of a continuous target scene along the scanning direction, indicating that this scheme has the potential to develop into a novel LiDAR technique for remote sensing. In short, method-2 achieves the better reconstruction, while method-1 is more advantageous in computation time.

For traditional push-broom LiDAR, because the line array detector lies across the flight direction, the number of detector pixels must equal the resolution number of every strip. In contrast, our push-broom GISC LiDAR can realize the same resolution *n* with a line array detector of only *m* pixels (*m* ≤ *n*), owing to its compressive sampling capability. Push-broom GISC LiDAR could therefore be more economical and easier to extend to higher-resolution remote sensing than traditional push-broom LiDAR. Compared with the previous GISC LiDAR system based on motion compensation, the target scene can now be imaged continuously along the scanning direction rather than only within a fixed focusing area. Moreover, the line-array detection mechanism turns the previous direct two-dimensional reconstruction of a rectangular area into a sequence of one-dimensional reconstructions of single strips, which dramatically reduces the measurements required per strip. Finally, without a complicated motion-compensation system there is no tracking-accuracy problem, and the performance under low-visibility conditions can also be improved.

Clearly, much optimization of the proposed push-broom GISC LiDAR system remains to be done, and we will pay close attention to the following aspects in future work. First, for method-2, the emission scheme can be optimized to reduce the computational scale by repeating the speckle patterns with some period of strips. Second, although method-1 is more convenient to realize and faster in computation, its imaging quality needs further improvement, for example through speckle-pattern optimization [35]. Moreover, because both platform jitter and variations of the flight speed during sampling can heavily degrade the final reconstruction, reducing these impacts will also be addressed in future work.

## 5. Conclusion

In conclusion, a push-broom GISC LiDAR is proposed and demonstrated experimentally. Owing to its compressive sensing capability, higher imaging resolution across the push-broom direction can be realized with a line array detector of fewer pixels. With further improvements, the push-broom GISC system may find many useful applications, such as automotive LiDAR and airborne LiDAR.

## Funding

Hi-Tech Research and Development Program of China (2013AA122902, 2013AA122901, 2011AA120102, 2011AA120101).

## References

**1. **C. Elachi, *Spaceborne radar remote sensing: applications and techniques* (IEEE, 1988).

**2. **N. R. Council, *Laser Radar: Progress and Opportunities in Active Electro-Optical Sensing* (National Academies, 2014).

**3. **B. Sun, S. S. Welsh, M. P. Edgar, J. H. Shapiro, and M. J. Padgett, “Normalized ghost imaging,” Opt. Express **20**, 16892–16901 (2012). [CrossRef]

**4. **J. H. Shapiro and R. W. Boyd, “The physics of ghost imaging,” Quantum Inf. Process. **11**, 949–993 (2012). [CrossRef]

**5. **J. H. Shapiro, “Computational ghost imaging,” Phys. Rev. A. **78**, 061802 (2008). [CrossRef]

**6. **T. A. Smith and Y. Shih, “Turbulence-free double-slit interferometer,” Phys. Rev. Lett. **120**, 063606 (2018). [CrossRef] [PubMed]

**7. **Y. Shih, “The physics of ghost imaging: nonlocal interference or local intensity fluctuation correlation?” Quantum Inf. Process. **11**, 995–1001 (2012). [CrossRef]

**8. **F. Ferri, D. Magatti, L. Lugiato, and A. Gatti, “Differential ghost imaging,” Phys. Rev. Lett. **104**, 253603 (2010). [CrossRef] [PubMed]

**9. **J. Cheng and S. Han, “Incoherent coincidence imaging and its applicability in x-ray diffraction,” Phys. Rev. Lett. **92**, 093903 (2004). [CrossRef] [PubMed]

**10. **D. Zhang, Y.-H. Zhai, L.-A. Wu, and X.-H. Chen, “Correlated two-photon imaging with true thermal light,” Opt. Lett. **30**, 2354–2356 (2005). [CrossRef] [PubMed]

**11. **S. Han, H. Yu, X. Shen, H. Liu, W. Gong, and Z. Liu, “A review of ghost imaging via sparsity constraints,” Appl. Sci. **8**, 1379 (2018). [CrossRef]

**12. **W. Gong, C. Zhao, H. Yu, M. Chen, W. Xu, and S. Han, “Three-dimensional ghost imaging lidar via sparsity constraint,” Sci. Rep. **6**, 26133 (2016). [CrossRef]

**13. **Z. Liu, S. Tan, J. Wu, E. Li, X. Shen, and S. Han, “Spectral camera based on ghost imaging via sparsity constraints,” Sci. Rep. **6**, 25718 (2016). [CrossRef]

**14. **H. Yu, R. Lu, S. Han, H. Xie, G. Du, T. Xiao, and D. Zhu, “Fourier-transform ghost imaging with hard x rays,” Phys. Rev. Lett. **117**, 113901 (2016). [CrossRef] [PubMed]

**15. **Z. Liu, X. Shen, H. Liu, and S. Han, “Lensless wiener-khinchin telescope based on high-order spatial autocorrelation of light field,” arXiv preprint arXiv:1804.01270 (2018).

**16. **C. Zhao, W. Gong, M. Chen, E. Li, H. Wang, W. Xu, and S. Han, “Ghost imaging lidar via sparsity constraints,” Appl. Phys. Lett. **101**, 141123 (2012). [CrossRef]

**17. **M. Chen, E. Li, W. Gong, Z. Bo, X. Xu, C. Zhao, X. Shen, W. Xu, and S. Han, “Ghost imaging lidar via sparsity constraints in real atmosphere,” Opt. Photonics J. **3**, 83 (2013). [CrossRef]

**18. **E. J. Candès and M. B. Wakin, “An introduction to compressive sampling,” IEEE Trans. Signal Process. **25**, 21–30 (2008). [CrossRef]

**19. **D. L. Donoho, “Compressed sensing,” IEEE Trans. Inf. Theory **52**, 1289–1306 (2006). [CrossRef]

**20. **A. Stern, Y. Rivenson, and B. Javidi, “Single-shot compressive imaging,” Int. Soc. Opt. Photonics **6778**, 67780 (2007).

**21. **O. Katz, Y. Bromberg, and Y. Silberberg, “Compressive ghost imaging,” Appl. Phys. Lett. **95**, 131110 (2009). [CrossRef]

**22. **P. Zerom, K. W. C. Chan, J. C. Howell, and R. W. Boyd, “Entangled-photon compressive ghost imaging,” Phys. Rev. A. **84**, 061804 (2011). [CrossRef]

**23. **W. Gong and S. Han, “Experimental investigation of the quality of lensless super-resolution ghost imaging via sparsity constraints,” Phys. Lett. A. **376**, 1519–1522 (2012). [CrossRef]

**24. **W. Gong and S. Han, “Multiple-input ghost imaging via sparsity constraints,” JOSA A. **29**, 1571–1579 (2012). [CrossRef] [PubMed]

**25. **W. Gong, Z. Bo, E. Li, and S. Han, “Experimental investigation of the quality of ghost imaging via sparsity constraints,” Appl. Opt. **52**, 3510–3515 (2013). [CrossRef] [PubMed]

**26. **E. Li, Z. Bo, M. Chen, W. Gong, and S. Han, “Ghost imaging of a moving target with an unknown constant speed,” Appl. Phys. Lett. **104**, 251120 (2014). [CrossRef]

**27. **C. Zhang, W. Gong, and S. Han, “Improving imaging resolution of shaking targets by fourier-transform ghost diffraction,” Appl. Phys. Lett. **102**, 021111 (2013). [CrossRef]

**28. **X. Li, C. Deng, M. Chen, W. Gong, and S. Han, “Ghost imaging for an axially moving target with an unknown constant speed,” Photonics Res. **3**, 153–157 (2015). [CrossRef]

**29. **W. Yu, X. Liu, X. Yao, C. Wang, G. Zhai, and Q. Zhao, “Single-photon compressive imaging with some performance benefits over raster scanning,” Phys. Lett. A **378**, 3406–3411 (2014). [CrossRef]

**30. **W. Yu, X. Yao, X. Liu, R. Lan, L. Wu, G. Zhai, and Q. Zhao, “Compressive microscopic imaging with positive–negative light modulation,” Opt. Commun. **371**, 105–111 (2016). [CrossRef]

**31. **P. Wang, X. Yao, X. Liu, W. Yu, P. Qiu, and G. Zhai, “Moving target compressive imaging based on improved row scanning measurement matrix,” Acta Phys. Sin. **66**, 014201 (2017).

**32. **W. Yu, X. Yao, X. Liu, L. Li, and G. Zhai, “Compressive moving target tracking with thermal light based on complementary sampling,” Appl. Opt. **54**, 4249–4254 (2015). [CrossRef]

**33. **C. Wang, X. Mei, L. Pan, P. Wang, W. Li, X. Gao, Z. Bo, M. Chen, W. Gong, and S. Han, “Airborne near infrared three-dimensional ghost imaging lidar via sparsity constraint,” Remote Sens. **10**, 732 (2018). [CrossRef]

**34. **E. J. Candès, J. Romberg, and T. Tao, “Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information,” IEEE Trans. Inf. Theory **52**, 489–509 (2006). [CrossRef]

**35. **M. Chen, E. Li, and S. Han, “Application of multi-correlation-scale measurement matrices in ghost imaging via sparsity constraints,” Appl. Opt. **53**, 2924–2928 (2014). [CrossRef] [PubMed]