## Abstract

Ghost imaging (GI) is an imaging technique that uses the correlation between two light beams to reconstruct the image of an object. Conventional GI algorithms require large memory space to store the measured data and perform complicated offline calculations, limiting practical applications of GI. Here we develop an instant ghost imaging (IGI) technique with a differential algorithm and implement a high-speed on-chip IGI hardware system. This algorithm uses the differential signal between consecutive temporal measurements to reduce the memory requirements without degradation of image quality compared with conventional GI algorithms. The on-chip IGI system reconstructs the image immediately once the measurement finishes; there is no need to rely on post-processing or offline reconstruction. This system can be developed into a realtime imaging system. These features make IGI a faster, cheaper, and more compact alternative to a conventional GI system and make it viable for practical applications of GI.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

## 1. Introduction

Ghost imaging (GI) is an imaging technology that reconstructs the image of an object by calculating the correlation between two beams (test and reference). The test beam interacts with the object and is collected by a bucket detector without spatial resolution, while the reference light field is detected by a space-resolving detector without going through the object. It has been demonstrated that correlations of both quantum-entangled [1] and thermal light sources [2–5] can be used to achieve GI. The image can be formed without a lens (lensless ghost imaging) [6–8] or by using only a single-pixel detector (computational ghost imaging, CGI) [9–11]. Owing to the underlying physics and potential applications in many fields, including lidar [12], tomography [13], and medical imaging [14–16], GI has attracted much attention in recent years [17–22]. It has also been extended to other correlation domains and degrees of freedom, including the atomic domain [23,24], the time domain [25–27], and spiral imaging [28,29].

A significant obstacle to practical applications of GI is that reconstructing an image requires a massive number of temporal measurements, which necessitates huge memory space and high space complexity. This limitation stems from conventional GI algorithms. For example, the background subtraction algorithm requires calculating the second-order correlation function [3,4,10,16]

$$G(x) = \left\langle S I(x) \right\rangle - \left\langle S \right\rangle \left\langle I(x) \right\rangle, \tag{1}$$

where $S$ is the bucket signal of the test beam, $I(x)$ is the intensity distribution of the reference beam, and $\left\langle \cdot \right\rangle$ denotes the ensemble average over measurements.

Compressive sensing [36,37], a convex optimization procedure, reduces the number of acquisitions required for GI while maintaining good image quality [38–40]. However, this comes at the cost of more computing resources, which increases GI's dependence on the computer. Single-pixel imaging uses a complete orthogonal basis, such as the Fourier basis [41,42] or the Hadamard basis [43,44], to obtain a perfect image of the object. This solution requires complete sampling; for example, a $256\times 256$ image requires 65,536 measurements. A computer is therefore needed to perform the inverse transform, especially for large images. To date, on-chip GI has not been realized because of this high space complexity.

We first proposed a sequence differential GI algorithm in 2015. It uses the differential signals between two consecutive temporal measurements, the $(n+1)^{th}$ and the $n^{th}$, in the test and reference beams, ${S_{n + 1}} - {S_n}$ and ${I_{n + 1}}(x) - {I_n}(x)$, to reconstruct the image of the object [45–47]. Jun-Lin Li introduced the algorithm into computer-based experiments as sequence differential ghost imaging (SDGI) in his doctoral dissertation in 2016 [45]. Ya-Xin Li et al. also discussed a virtually identical algorithm in detail with computer-based experiments [48]. However, the strong dependence on a computer remains an obstacle to practical applications of GI, and previous related works [45–48] did not solve this problem. In this work, to demonstrate the validity and hardware feasibility of the SDGI algorithm, we developed a prototype on-chip hardware system using a single field-programmable gate array (FPGA) without any external memory; it can process 500 measurements per second online. We named this system instant ghost imaging (IGI) because of its most significant advantage: the image reconstruction time is almost zero, and the image is formed immediately once the temporal measurements are complete. The on-chip IGI system makes GI computer-independent for the first time.

IGI offers the following advantages:

- IGI can drastically reduce memory requirements and space complexity without increasing computation.
- IGI does not reduce image quality compared to the background subtraction algorithm.
- IGI is a generalized GI algorithm that can be used for lensless ghost imaging and CGI.
- The on-chip IGI hardware system measures the signal and reconstructs the image online: it does not rely on post-processing or offline reconstruction.
- The structure of the on-chip IGI system is compact and much smaller than the computers needed to calculate the correlation function in conventional GI procedures. Moreover, the IGI hardware system could be developed into a realtime imaging system at a frame rate of more than 24 frames per second. These features make IGI a faster, cheaper, and more compact alternative to a conventional GI system and make it viable for practical applications of GI.

## 2. Methods

#### 2.1 Instant ghost imaging algorithm

Experimentally, we can use $N$ measurements to estimate Eq. (1) of the background subtraction algorithm as

$$G(x) = \frac{1}{N}\sum_{n = 1}^{N} S_n I_n(x) - \frac{1}{N^2}\sum_{n = 1}^{N} S_n \sum_{n = 1}^{N} I_n(x). \tag{2}$$

The IGI algorithm we propose differs from Eq. (2) in using $(N+1)$ measurements:

$$G_{IGI}(x) = \frac{1}{2N}\sum_{n = 1}^{N} ({S_{n + 1}} - {S_n})[{I_{n + 1}}(x) - {I_n}(x)], \tag{3}$$

where ${S_{n + 1}}\;-\;{S_n}$ and ${I_{n + 1}}(x) - {I_n}(x)$ are the temporal differential signals between two consecutive measurements of the bucket detector and the reference detector. We can demonstrate that Eq. (3) of the IGI algorithm is equivalent to Eq. (2) of the background subtraction algorithm when $N$ is rather large. Expanding the product shows that Eq. (3) has four terms:

$$G_{IGI}(x) = \frac{1}{2N}\sum_{n = 1}^{N} \left[ S_{n+1} I_{n+1}(x) + S_n I_n(x) - S_{n+1} I_n(x) - S_n I_{n+1}(x) \right].$$

For large $N$, the first two terms each converge to $\left\langle S I(x) \right\rangle$, while the two cross terms, which involve statistically independent measurements, each converge to $\left\langle S \right\rangle \left\langle I(x) \right\rangle$; Eq. (3) therefore reproduces Eq. (1).
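This equivalence can also be checked numerically. Below is a minimal sketch with simulated pseudo-thermal data; the 1-D object, pixel count, and number of measurements are illustrative assumptions, not the experimental values:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (not the experimental ones)
N, npix = 20000, 64
obj = np.zeros(npix)
obj[20:44] = 1.0                              # simple 1-D transmissive object

# Pseudo-thermal reference patterns and the corresponding bucket signals
I = rng.exponential(1.0, size=(N + 1, npix))  # I_n(x), n = 1 .. N+1
S = I @ obj                                   # S_n = sum_x I_n(x) T(x)

# Background subtraction estimator from the first N measurements
G_bs = (S[:N] @ I[:N]) / N - S[:N].mean() * I[:N].mean(axis=0)

# IGI estimator built only from consecutive differences
dS = np.diff(S)                               # S_{n+1} - S_n
dI = np.diff(I, axis=0)                       # I_{n+1}(x) - I_n(x)
G_igi = (dS @ dI) / (2 * N)

# For large N the two reconstructions agree up to statistical noise
print(np.corrcoef(G_bs, G_igi)[0, 1])
```

Both estimators recover the object profile, and their pixel-wise correlation is close to 1 for this $N$.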

#### 2.2 Experimental setup

The schematic of the experimental setup is shown in Fig. 1(a). A 532 nm laser beam passes through a slowly rotating ground-glass disk to produce pseudo-thermal light. A beam splitter (BS) divides the light into two beams, the test beam and the reference beam. A binary mask object of the letters TH is placed in the test beam 300 mm downstream of the disk. The mask is close to a complementary metal-oxide-semiconductor sensor, CMOS1 (PYTHON300), which is used to simulate the bucket detector; the bucket signal $S$ is calculated by summing all the light intensities detected by CMOS1. Another detector, CMOS2, is placed in the reference beam at a distance of 300 mm from the ground-glass disk. Each CMOS can carry out 500 measurements per second. The hardware specifications of the experimental setup can be found in the Methods section.

The entire calculation required for image reconstruction is performed in the IGI hardware, which consists of two CMOSs, an FPGA (Xilinx Kintex-7 XC7K325T), and a monitor. The FPGA computes the temporal differential signals ${S_{n + 1}}-{S_n}$ and ${I_{n + 1}}(x)-{I_n}(x)$, and their product $({S_{n + 1}} - {S_n})[{I_{n + 1}}(x) - {I_n}(x)]$; it can process all 500 measurements per second made by each CMOS. The monitor shows the intermediate results of IGI at a fixed interval, typically four times per second. The IGI hardware system is completely on-chip because the two CMOSs, the FPGA, and the monitor are integrated on a printed circuit board (PCB). This results in a smaller and much more compact configuration than conventional GI systems. We also emphasize that the system contains only a single FPGA without any external memory.

We now introduce the framework and workflow of the IGI hardware system, as shown in Fig. 1(b). After the $n^{th}$ measurement has been processed, $S_n$, ${I_{n}}(x)$, and $\mathcal {G}_{n-1}(x)$, defined as ${\cal G}_{n - 1}(x) = \sum \nolimits _{i = 1}^{n - 1} {({S_{i + 1}} - {S_{i}})[{I_{i + 1}}(x) - {I_{i}}(x)]}$, are stored in the corresponding registers $R_S$, $R_I$, and $R_{\mathcal {G}}$. When the $(n+1)^{th}$ signal is detected by the two CMOSs, giving ${S_{n+1}}$ and ${I_{n+1}}(x)$, the FPGA computes the differential signals ${S_{n + 1}} - {S_n}$ and ${I_{n + 1}}(x) - {I_n}(x)$ using ${S_{n}}$ and ${I_{n}}(x)$ from $R_S$ and $R_I$; $S_{n+1}$ and ${I_{n+1}}(x)$ then overwrite $S_n$ and ${I_{n}}(x)$ in $R_S$ and $R_I$. The product $({S_{n + 1}} - {S_n})[{I_{n + 1}}(x) - {I_n}(x)]$ is then calculated and added to $\mathcal {G}_{n-1}(x)$ to give $\mathcal {G}_{n}(x)$, which overwrites $\mathcal {G}_{n-1}(x)$ in $R_{\mathcal {G}}$. This workflow for a single measurement shows that the on-chip IGI system can make a pair of measurements and process them immediately before the next measurement is made. Every 125 measurements (i.e., four times per second), the monitor shows the intermediate result $\mathcal {G}_{n}(x)/(2N)$. When the number of measurements $n$ reaches the preset $N$, the reconstructed image of the object is immediately available without any post-processing (hence the Instant in IGI).
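The register workflow above can be sketched in software; the class and method names below are hypothetical, and the real system implements this update in FPGA logic rather than Python:

```python
import numpy as np

class IGIAccumulator:
    """Software sketch of the on-chip IGI update (names are illustrative).

    Only three registers are kept, mirroring R_S, R_I, and R_G:
    the previous bucket value, the previous reference frame, and the
    running differential sum G_n(x). Memory use is therefore constant
    in the number of measurements."""

    def __init__(self, npix):
        self.prev_S = None            # R_S
        self.prev_I = None            # R_I
        self.G = np.zeros(npix)       # R_G
        self.n = 0                    # measurements processed so far

    def push(self, S, I):
        """Process one measurement as soon as it arrives."""
        if self.prev_S is not None:
            # Accumulate (S_{n+1} - S_n)[I_{n+1}(x) - I_n(x)] into R_G
            self.G += (S - self.prev_S) * (I - self.prev_I)
        self.prev_S, self.prev_I = S, I.copy()  # overwrite R_S and R_I
        self.n += 1

    def image(self):
        """Intermediate or final reconstruction, G_n(x) / (2N)."""
        return self.G / (2 * max(self.n - 1, 1))
```

Calling `push` once per measurement reproduces the batch differential sum while storing only a single reference frame.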

## 3. Results

#### 3.1 Hanbury Brown and Twiss effect

GI is based on the second-order point-to-point correlation between the test beam and the reference beam. We demonstrate this correlation by conducting the Hanbury Brown and Twiss (HBT) experiment, whose IGI form is

$$G_{HBT}^{IGI}({x_t},{x_r}) = \frac{1}{2N}\sum_{n = 1}^{N} [{I_{n + 1}}({x_t}) - {I_n}({x_t})][{I_{n + 1}}({x_r}) - {I_n}({x_r})],$$

where $x_t$ and $x_r$ denote pixels in the test and reference beams, respectively (see the Appendix for the conventional HBT algorithm).

The HBT experiment is conducted on the setup shown in Fig. 1(a) to verify the accuracy of the $G_{HBT}^{IGI}$ algorithm. The mask object is removed, and one pixel of the test beam is fixed, $x_t = x_{t0}$. The experiment is conducted using both offline and online methods. For the offline experiment, we take 15,000 measurements at a rate of 25 measurements per second, store them in a computer, and use both the $G_{HBT}$ algorithm and the $G_{HBT}^{IGI}$ algorithm to process these data offline. The results, with an image resolution of 400$\times$280, are shown in Figs. 2(a)–2(b). The two algorithms produce almost identical results.

For the online experiment, we use the on-chip IGI system to process the measured data online at a rate of 500 measurements per second. The results, annotated with the elapsed time and number of measurements, are shown in Figs. 2(c)–2(h); as time increases, the HBT effect becomes clearer. Note that at 30 s, all 15,000 measurements have been made and the final result is immediately available. The movie shown on the monitor of the IGI hardware system can be found in Visualization 1.

The experimental results show that the $G_{HBT}^{IGI}$ algorithm accurately calculates the second-order correlation for the two beams, thus providing a solid foundation for IGI to successfully reconstruct the image of the object.
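The HBT measurement can also be simulated with the differential estimator. In this sketch, each reference pixel is modelled as an independent thermal (exponential-intensity) mode, which ignores the finite speckle size of the real experiment; all parameters are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative parameters (not the experimental ones)
N, npix, x_t0 = 30000, 32, 16
I = rng.exponential(1.0, size=(N + 1, npix))  # thermal-like intensities

dIt = np.diff(I[:, x_t0])     # differential signal at the fixed test pixel
dI = np.diff(I, axis=0)       # differential reference frames
G = (dIt @ dI) / (2 * N)      # differential estimate of G_HBT(x_t0, x_r)

# The second-order correlation peaks at x_r = x_t0 (the HBT bunching peak)
print(int(np.argmax(G)))
```

The estimated correlation is maximal at the fixed test pixel and fluctuates around zero elsewhere, mirroring the point-to-point correlation seen in Fig. 2.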

#### 3.2 Image of the object

In a similar process, we use both the offline and online methods to reconstruct the image of the object (the letters TH), which is located in the test beam extremely close to CMOS1 [Fig. 3(a)]. The image directly captured by CMOS1 is shown in Fig. 3(b). For the offline experiment, 30,000 measurements are made at a rate of 25 measurements per second and stored in a computer. The background subtraction algorithm and the IGI algorithm are used to process these data offline. The results, given in Figs. 3(c)–3(d), show that both algorithms reconstruct a clear image of the object at a resolution of 400$\times$280.

For the online experiment, we use the on-chip IGI system to directly measure and process the data online at a rate of 500 measurements per second. Figures 3(e)–3(n) show intermediate images produced by the IGI system; as time increases, the ghost image becomes clearer. The image appears within 5 s, after 2,500 measurements have been processed by the IGI system [Fig. 3(i)]; it becomes much more finely resolved at 60 s, after 30,000 measurements [Fig. 3(n)]. The movie shown on the monitor of the IGI system can be found in Visualization 2.

#### 3.3 Two variants of the IGI

We further propose two variants of the IGI algorithm

#### 3.4 Analysis of the IGI

To determine why IGI reduces the memory requirement and is feasible in hardware, we plotted the values of $\sum \nolimits _{i = 1}^n {{S_i}{I_i}} (x)$ and ${{\cal G}_n}(x) = \sum \nolimits _{i = 1}^n {({S_{i + 1}} - {S_i})} [{I_{i + 1}}(x) - {I_i}(x)]$ as the number of measurements increased (Fig. 5). Note that the value in each case is the average over all the pixels in one image. $\sum \nolimits _{i = 1}^n {{S_i}{I_i}} (x)$ increases much more quickly than ${{\cal G}_n}(x)$. As Fig. 5 shows, after 30,000 measurements the GI algorithm needs to store, per pixel on average, a value of $9.93 \times {10^{11}}$, which requires 40 bits of memory (${2^{40}} = 1.09 \times {10^{12}}$), whereas the IGI algorithm needs to store a value of $1.25\times {10^{8}}$, which requires 27 bits of memory (${2^{27}} = 1.34 \times {10^{8}}$). Compared with GI, IGI therefore saves $400\times 280\times 13$ bits, about 1.5 M-bits of memory, for a $400\times 280$ picture. The advantage of the IGI algorithm is that it does not need to store the meaningless average values of $S$ and $I(x)$, i.e., the direct-current (DC) terms. The memory of the chip is instead used to store the fluctuations of the thermal light; the fluctuation term of the different patterns is the crucial part of GI.
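The register-width bookkeeping in this paragraph can be verified in a few lines; the accumulator values below are the per-pixel averages quoted above:

```python
import math

def bits_needed(value):
    """Smallest unsigned register width (in bits) that can hold `value`."""
    return max(1, math.ceil(math.log2(value + 1)))

gi_acc = 9.93e11    # average per-pixel GI accumulator after 30,000 measurements
igi_acc = 1.25e8    # average per-pixel IGI accumulator after 30,000 measurements

gi_bits = bits_needed(gi_acc)     # 40-bit register suffices
igi_bits = bits_needed(igi_acc)   # 27-bit register suffices

# Savings for a 400 x 280 image: 13 bits per pixel, about 1.5 M-bits in total
saving_bits = 400 * 280 * (gi_bits - igi_bits)
print(gi_bits, igi_bits, saving_bits)
```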

A conventional GI algorithm needs to store the values of $\sum \nolimits _{i = 1}^n {{S_i}{I_i}} (x)$, $\sum \nolimits _{i = 1}^n {{S_i}}$, and $\sum \nolimits _{i = 1}^n {{I_i}} (x)$. In contrast, IGI needs to store only the values of ${{\cal G}_n}(x)$, ${{S_n}}$, and ${{I_n}} (x)$. In Fig. 5, we used only the comparison of $\sum \nolimits _{i = 1}^n {{S_i}{I_i}} (x)$ and ${{\cal G}_n}(x)$ to illustrate the memory advantage of IGI. In fact, for GI, 26.9 G-bits ($30000\times 400\times 280\times 8$ bits) of memory space would be needed to store 30,000 measured $I(x)$; IGI needs only 896 K-bits ($400\times 280\times 8$ bits) of memory space to store one measurement $I_n (x)$. The FPGA (Xilinx XC7K325T) has only 16 M-bits of on-chip storage capacity, so only IGI can be implemented on-chip; GI cannot. Furthermore, the memory requirement of GI increases rapidly as the number of measurements increases, while that of IGI grows much more slowly, so IGI needs much less memory overall.

## 4. Discussion and conclusion

In this study, we conducted offline and online experiments to investigate both the HBT effect and lensless ghost imaging. The offline experiments validated the IGI algorithm, showing that it provides the same image quality as the background subtraction algorithm. The online experiments demonstrated the feasibility of implementing the IGI algorithm in hardware and showed the capability of the on-chip IGI system and its two variants. The on-chip IGI system can process 500 measurements per second, and the image is reconstructed immediately after the measurement without any post-processing. Assuming that reconstructing a 400 $\times$ 280 image requires 10,000 measurements, this on-chip hardware system needs 20 s to obtain one image. We note that there are high-speed schemes in the field of computational GI and single-pixel imaging [42]; for example, Xu et al. demonstrated single-pixel imaging at 1,000 frames per second with an image size of 32 $\times$ 32 [44]. However, the limiting factor of our imaging speed is the CMOS (PYTHON300, whose operating frequency is 500 measurements per second), not the speed of the FPGA. Assuming that reconstructing an image requires 10,000 measurements, reaching 24 frames per second would require processing 240,000 measurements per second. This is not demanding for the FPGA, which operates at 100 MHz, roughly 400 times the required rate. The measurement speed of our system can be further increased by faster CMOSs or high-speed photodiode arrays.
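The frame-rate arithmetic in this paragraph can be restated explicitly; all numbers are taken from the text above:

```python
# Throughput needed for realtime (24 fps) operation
frames_per_s = 24
meas_per_image = 10_000
required_meas_per_s = frames_per_s * meas_per_image   # 240,000 per second

cmos_rate = 500               # PYTHON300 measurement rate: the bottleneck
fpga_clock = 100_000_000      # FPGA operating at 100 MHz

headroom = fpga_clock // required_meas_per_s          # roughly 400x headroom
print(required_meas_per_s, headroom)
```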

The reasons why IGI can drastically reduce the memory requirement and the space complexity of GI are as follows. Firstly, the use of temporal differential signals removes the need for the space-hungry background term in the data acquisition step. Secondly, IGI requires only one frame of data from each of the test and reference beams; hence, it needs less memory space to store the differential signals than conventional GI algorithms, such as the background subtraction algorithm or its normalized version. Thirdly, an empirical rule of digital circuit design is that fewer bits to process means fewer hardware computational resources are required to process a signal.

In summary, we have developed a novel IGI algorithm that significantly reduces the memory requirement of conventional GI, without additional computational resources or degradation of image quality, by using the differential signals between two consecutive temporal measurements of each beam. Although we used a lensless thermal-light ghost imaging system to illustrate the capability of IGI, IGI can be directly incorporated into a CGI system; it is therefore applicable to GI in general. We also conclude that on-chip IGI is feasible and that all the main components, such as the FPGA, CMOSs, and monitor, can be integrated onto a PCB. The on-chip implementation of IGI is significantly cheaper, smaller, and more compact than a conventional GI system, which requires computers and other digital components. These advantages pave the way for practical applications of GI. Our next step is to develop this proof-of-principle setup into a realtime imaging system that operates at more than 24 frames per second.

## 5. Appendix: Hanbury Brown and Twiss algorithm

The conventional HBT algorithm is ${G_{HBT}}({x_t},{x_r}) = \left \langle {[I({x_t}) - \left \langle {I({x_t})} \right \rangle ][I({x_r}) - \left \langle {I({x_r})} \right \rangle ]} \right \rangle .$ Experimentally, we can use $N$ measurements to calculate the HBT effect by

$$G_{HBT}({x_t},{x_r}) = \frac{1}{N}\sum_{n = 1}^{N} I_n(x_t) I_n(x_r) - \frac{1}{N^2}\sum_{n = 1}^{N} I_n(x_t) \sum_{n = 1}^{N} I_n(x_r). \tag{11}$$

We can demonstrate that Eq. (12) of the $G_{HBT}^{IGI}({x_t},{x_r})$ algorithm,

$$G_{HBT}^{IGI}({x_t},{x_r}) = \frac{1}{2N}\sum_{n = 1}^{N} [{I_{n + 1}}({x_t}) - {I_n}({x_t})][{I_{n + 1}}({x_r}) - {I_n}({x_r})], \tag{12}$$

is equivalent to Eq. (11) of the conventional $G_{HBT}({x_t},{x_r})$ algorithm when $N$ is rather large. Expanding the product shows that Eq. (12) has four terms:

$$G_{HBT}^{IGI}({x_t},{x_r}) = \frac{1}{2N}\sum_{n = 1}^{N} \left[ I_{n+1}(x_t) I_{n+1}(x_r) + I_n(x_t) I_n(x_r) - I_{n+1}(x_t) I_n(x_r) - I_n(x_t) I_{n+1}(x_r) \right];$$

for large $N$, the first two terms each converge to $\left\langle I(x_t) I(x_r) \right\rangle$, while the two cross terms each converge to $\left\langle I(x_t) \right\rangle \left\langle I(x_r) \right\rangle$.

## Funding

National Natural Science Foundation of China (NSFC) (51727805).

## Acknowledgment

We thank Prof. Kai-Li Jiang for helpful discussions.

## Disclosures

The authors declare no conflicts of interest.

## References

**1. **T. B. Pittman, Y. H. Shih, D. V. Strekalov, and A. V. Sergienko, “Optical imaging by means of two-photon quantum entanglement,” Phys. Rev. A **52**(5), R3429–R3432 (1995). [CrossRef]

**2. **R. S. Bennink, S. J. Bentley, and R. W. Boyd, “‘Two-photon’ coincidence imaging with a classical source,” Phys. Rev. Lett. **89**(11), 113601 (2002). [CrossRef]

**3. **A. Gatti, E. Brambilla, M. Bache, and L. A. Lugiato, “Ghost imaging with thermal light: comparing entanglement and classical correlation,” Phys. Rev. Lett. **93**(9), 093602 (2004). [CrossRef]

**4. **F. Ferri, D. Magatti, A. Gatti, M. Bache, E. A. Brambilla, and L. Lugiato, “High-resolution ghost image and ghost diffraction experiments with thermal light,” Phys. Rev. Lett. **94**(18), 183602 (2005). [CrossRef]

**5. **A. Valencia, G. Scarcelli, M. D’Angelo, and Y. H. Shih, “Two-photon imaging with thermal light,” Phys. Rev. Lett. **94**(6), 063601 (2005). [CrossRef]

**6. **D. Z. Cao, J. Xiong, and K. Wang, “Geometrical optics in correlated imaging systems,” Phys. Rev. A **71**(1), 013801 (2005). [CrossRef]

**7. **G. Scarcelli, V. Berardi, and Y. Shih, “Can two-photon correlation of chaotic light be considered as correlation of intensity fluctuations?” Phys. Rev. Lett. **96**(6), 063602 (2006). [CrossRef]

**8. **L. Basano and P. Ottonello, “Experiment in lensless ghost imaging with thermal light,” Appl. Phys. Lett. **89**(9), 091109 (2006). [CrossRef]

**9. **J. H. Shapiro, “Computational ghost imaging,” Phys. Rev. A **78**(6), 061802 (2008). [CrossRef]

**10. **B. Sun, M. P. Edgar, R. Bowman, L. E. Vittert, S. Welsh, A. Bowman, and M. J. Padgett, “3D computational imaging with single-pixel detectors,” Science **340**(6134), 844–847 (2013). [CrossRef]

**11. **M. J. Sun, M. P. Edgar, G. M. Gibson, B. Sun, N. Radwell, R. Lamb, and M. J. Padgett, “Single-pixel three-dimensional imaging with time-based depth resolution,” Nat. Commun. **7**(1), 12010 (2016). [CrossRef]

**12. **W. Gong, C. Zhao, H. Yu, M. Chen, W. Xu, and S. Han, “Three-dimensional ghost imaging lidar via sparsity constraint,” Sci. Rep. **6**(1), 26133 (2016). [CrossRef]

**13. **A. M. Kingston, D. Pelliccia, A. Rack, M. P. Olbinado, Y. Cheng, G. R. Myers, and D. M. Paganin, “Ghost tomography,” Optica **5**(12), 1516–1520 (2018). [CrossRef]

**14. **D. Pelliccia, A. Rack, M. Scheel, V. Cantelli, and D. M. Paganin, “Experimental x-ray ghost imaging,” Phys. Rev. Lett. **117**(11), 113902 (2016). [CrossRef]

**15. **H. Yu, R. Lu, S. Han, H. Xie, G. Du, T. Xiao, and D. Zhu, “Fourier-transform ghost imaging with hard X rays,” Phys. Rev. Lett. **117**(11), 113901 (2016). [CrossRef]

**16. **A. X. Zhang, Y. H. He, L. A. Wu, L. M. Chen, and B. B. Wang, “Tabletop x-ray ghost imaging with ultra-low radiation,” Optica **5**(4), 374–377 (2018). [CrossRef]

**17. **M. Bina, D. Magatti, M. Molteni, A. Gatti, L. A. Lugiato, and F. Ferri, “Backscattering differential ghost imaging in turbid media,” Phys. Rev. Lett. **110**(8), 083901 (2013). [CrossRef]

**18. **P. A. Morris, R. S. Aspden, J. E. Bell, R. W. Boyd, and M. J. Padgett, “Imaging with a small number of photons,” Nat. Commun. **6**(1), 5913 (2015). [CrossRef]

**19. **A. M. Paniagua-Diaz, I. Starshynov, N. Fayard, A. Goetschy, R. Pierrat, R. Carminati, and J. Bertolotti, “Blind ghost imaging,” Optica **6**(4), 460–464 (2019). [CrossRef]

**20. **Z. Yang, L. Zhao, X. Zhao, W. Qin, and J. Li, “Lensless ghost imaging through the strongly scattering medium,” Chin. Phys. B **25**(2), 024202 (2016). [CrossRef]

**21. **A. V. Diebold, M. F. Imani, T. Sleasman, and D. R. Smith, “Phaseless coherent and incoherent microwave ghost imaging with dynamic metasurface apertures,” Optica **5**(12), 1529–1541 (2018). [CrossRef]

**22. **G. Barbastathis, A. Ozcan, and G. Situ, “On the use of deep learning for computational imaging,” Optica **6**(8), 921–943 (2019). [CrossRef]

**23. **R. I. Khakimov, B. M. Henson, D. K. Shin, S. S. Hodgman, R. G. Dall, K. G. H. Baldwin, and A. G. Truscott, “Ghost imaging with atoms,” Nature **540**(7631), 100–103 (2016). [CrossRef]

**24. **S. S. Hodgman, W. Bu, S. B. Mann, R. I. Khakimov, and A. G. Truscott, “Higher-Order Quantum Ghost Imaging with Ultracold Atoms,” Phys. Rev. Lett. **122**(23), 233601 (2019). [CrossRef]

**25. **P. Ryczkowski, M. Barbier, A. T. Friberg, J. M. Dudley, and G. Genty, “Ghost imaging in the time domain,” Nat. Photonics **10**(3), 167–170 (2016). [CrossRef]

**26. **F. Devaux, P. A. Moreau, S. Denis, and E. Lantz, “Computational temporal ghost imaging,” Optica **3**(7), 698–701 (2016). [CrossRef]

**27. **H. Wu, P. Ryczkowski, A. T. Friberg, J. M. Dudley, and G. Genty, “Temporal ghost imaging using wavelength conversion and two-color detection,” Optica **6**(7), 902–906 (2019). [CrossRef]

**28. **L. Chen, J. Lei, and J. Romero, “Quantum digital spiral imaging,” Light: Sci. Appl. **3**(3), e153 (2014). [CrossRef]

**29. **Z. Yang, O. S. Magaña-Loaiza, M. Mirhosseini, Y. Zhou, B. Gao, L. Gao, S. M. H. Rafsanjani, G. L. Long, and R. W. Boyd, “Digital spiral object identification using random light,” Light: Sci. Appl. **6**(7), e17013 (2017). [CrossRef]

**30. **F. Ferri, D. Magatti, L. A. Lugiato, and A. Gatti, “Differential ghost imaging,” Phys. Rev. Lett. **104**(25), 253603 (2010). [CrossRef]

**31. **Y. O-oka and S. Fukatsu, “Differential ghost imaging in time domain,” Appl. Phys. Lett. **111**(6), 061106 (2017). [CrossRef]

**32. **W. Wang, Y. P. Wang, J. Li, X. Yang, and Y. Wu, “Iterative ghost imaging,” Opt. Lett. **39**(17), 5150–5153 (2014). [CrossRef]

**33. **X. R. Yao, W. K. Yu, X. F. Liu, L. Z. Li, M. F. Li, L. A. Wu, and G. J. Zhai, “Iterative denoising of ghost imaging,” Opt. Express **22**(20), 24268–24275 (2014). [CrossRef]

**34. **K. W. C. Chan, M. N. O’Sullivan, and R. W. Boyd, “High-order thermal ghost imaging,” Opt. Lett. **34**(21), 3343–3345 (2009). [CrossRef]

**35. **S. S. Hodgman, W. Bu, S. B. Mann, R. I. Khakimov, and A. G. Truscott, “Higher-Order Quantum Ghost Imaging with Ultracold Atoms,” Phys. Rev. Lett. **122**(23), 233601 (2019). [CrossRef]

**36. **O. Katz, Y. Bromberg, and Y. Silberberg, “Compressive ghost imaging,” Appl. Phys. Lett. **95**(13), 131110 (2009). [CrossRef]

**37. **Y. Bromberg, O. Katz, and Y. Silberberg, “Ghost imaging with a single detector,” Phys. Rev. A **79**(5), 053840 (2009). [CrossRef]

**38. **H. Huang, C. Zhou, T. Tian, D. Liu, and L. Song, “High-quality compressive ghost imaging,” Opt. Commun. **412**, 60–65 (2018). [CrossRef]

**39. **X. Shi, X. Huang, S. Nan, H. Li, Y. Bai, and X. Fu, “Image quality enhancement in low-light-level ghost imaging using modified compressive sensing method,” Laser Phys. Lett. **15**(4), 045204 (2018). [CrossRef]

**40. **Y. Wang, Y. Liu, J. Suo, G. Situ, C. Qiao, and Q. Dai, “High speed computational ghost imaging via spatial sweeping,” Sci. Rep. **7**(1), 45325 (2017). [CrossRef]

**41. **Z. Zhang, X. Ma, and J. Zhong, “Single-pixel imaging by means of Fourier spectrum acquisition,” Nat. Commun. **6**(1), 6225 (2015). [CrossRef]

**42. **K. M. Czajkowski, A. Pastuszczak, and R. Kotynski, “Real-time single-pixel video imaging with Fourier domain regularization,” Opt. Express **26**(16), 20009–20022 (2018). [CrossRef]

**43. **L. Wang and S. Zhao, “Fast reconstructed and high-quality ghost imaging with fast Walsh-Hadamard transform,” Photonics Res. **4**(6), 240–244 (2016). [CrossRef]

**44. **Z. H. Xu, W. Chen, J. Penuelas, M. J. Padgett, and M. J. Sun, “1000 fps computational ghost imaging using LED-based structured illumination,” Opt. Express **26**(3), 2427–2434 (2018). [CrossRef]

**45. **J. L. Li, “Study on second-order correlated imaging with pseudo-thermal light,” Doctoral dissertation, Tsinghua University, (2016) (In Chinese).

**46. **J. L. Li, Z. Yang, and L. L. Long, Patent CN201510151008.2 (2015) (In Chinese).

**47. **Z. Yang, J. L. Li, and W. X. Zhang, Patent CN201910795456.4 (2019) (In Chinese).

**48. **Y. X. Li, W. K. Yu, J. Leng, and S. F. Wang, “Pseudo-thermal imaging by using sequential-deviations for real-time image reconstruction,” Opt. Express **27**(24), 35166–35181 (2019). [CrossRef]

**49. **J. W. Goodman, *Statistical Optics* (Wiley, 1985).