Abstract

Research toward practical applications of ghost imaging has attracted increasing attention in recent years. The signal-to-noise ratio (SNR) of the bucket results, and hence the quality of the images, can be severely degraded by environmental noise such as strong background light. We introduce temporal cross-correlation into the typical ghost imaging scheme to improve the SNR of the bucket value, taking the temporal profile of the illumination pulses as prior information. Experiments at sunny noontime verified our method, with the imaging quality greatly improved for an object at a distance of 1.3 km. We also experimentally demonstrate the possibility of 3-dimensional imaging.

© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Ghost imaging (GI) is a widely studied imaging technique that can provide images with only a single-pixel detector [1–4]. GI was first demonstrated by means of two-photon coincidence based on quantum entanglement [5]. The idea was later extended to thermal ghost imaging, which takes a thermal or pseudo-thermal source as the illumination [6]. In a typical thermal GI configuration, speckle fields are employed as the illumination source and divided into two paths. In the object path, the field illuminates the object, and the reflected (or transmitted) light is collected by a detector with no spatial resolution, called the bucket detector. In the reference path, the intensity distribution of the field is recorded by a spatially resolving detector that does not look toward the object. Neither detector alone can offer an image of the object; the image emerges only when their results are combined by correlation.

Nowadays, much attention is being paid to improving the performance of ghost imaging toward possible practical applications. Efforts have been devoted to speckle design [7,8], imaging algorithms [9–15], high-order correlation [16–18], and machine learning [19–21], among others. On the other hand, researchers are also trying to expand the boundaries of GI, for example by considering its capability at extremely weak light levels. In particular, images of the object can be achieved even when, on average, less than one photon per pixel is detected [22,23]. However, in practical applications, noise from the environment and from the detector itself reduces the signal-to-noise ratio (SNR) of the bucket detection, and the quality of the images decreases accordingly. Sometimes the noise is so strong that the echoes of the illumination pulses cannot even be located in the time domain; the bucket value then cannot be correctly identified, and no image of the object can be recovered.

In addressing this issue, space-time duality [24,25] is inspiring. In general, space-time duality is based on the fact that the diffraction equation of monochromatic light in space and the dispersion equation of narrow pulses in time share an equivalent mathematical description [26]. This comparison and analogy have since been extended: considering space-time duality, many techniques have been generalized from the spatial domain to the temporal domain, or vice versa. As an example, temporal ghost imaging (TGI) [27,28], extended from the spatial version, has been demonstrated; in this case, the temporal and spatial dimensions are considered separately. In another computational temporal ghost imaging configuration [29], both dimensions are combined: multiple spatial points are utilized to reconstruct temporal signals, reducing the sampling requirement to a single-shot measurement. Similarly, promoting applications of spatial ghost imaging by using temporal information has also been considered [30,31]. As a coherent version, chirped amplitude modulation heterodyne ghost imaging (CAM-HGI) [32] was proposed, which improves performance under strong background light via coherent detection. However, coherent detection critically requires monochromaticity and frequency stability, which limits its practicability in complicated environments.

To enhance robustness against harsh environmental noise, we propose temporal cross-correlation ghost imaging (TCGI), inspired by space-time duality. Using temporal cross-correlation, the temporal location of the echoes from the object can be precisely recovered within the typical GI configuration, even if they are immersed in noise. Furthermore, the bucket value is replaced with the maximum of the temporal correlation between the echoes and the illumination pulses, which further improves the robustness of GI against noise. Our method is confirmed by experiments at sunny noontime, with the object set at a distance of 1.3 km.

2. Theoretical analysis

For typical GI with pulsed illumination, the image is

$$G_0(x)=\left\langle\int_{t_0}^{t_0+T_0} i_2(t)dt \int_{t_0}^{t_0+T_0} I_r(x,t)dt\right\rangle_N - \left\langle\int_{t_0}^{t_0+T_0} i_2(t)dt\right\rangle_N \left\langle \int_{t_0}^{t_0+T_0} I_r(x,t) dt\right\rangle_N.$$

Here, $\langle \cdot \rangle _N$ denotes the ensemble average over $N$ different illumination patterns, and $t_0$ and $T_0$ define the start time and duration of each pattern. $I_r(x,t)=I_r(x)s(t)$ is the intensity distribution of the illuminating field at time $t$, with $s(t)$ the temporal profile of the total illumination power. $i_2(t)\propto \int dx I_r(x)O(x)s(t)+n(t)$ represents the bucket detection result, with $O(x)$ the reflectivity of the object and $n(t)$ the noise from the environment and from the detector itself. Generally, the photodetector is time resolving, and the integral of $i_2(t)$ over $T_0$ is usually used as the bucket value. In practical cases of strong noise, it can be hard to find $t_0$ of the bucket detection for each pattern. Even if the noise is not strong enough to submerge the valuable signal, a low SNR of the bucket detection will still degrade the imaging quality. To address this, we propose to replace the bucket value with the maximum of the cross-correlation between the bucket results and the profile of the illumination pulses. The cross-correlation function can be written as [33,34]

$$C(\tau)=s(t) \star i_2(t) = \int s(t)i_2(t+\tau)dt=s(t)\star \left[i_2(t)-n(t)\right]+s(t)\star n(t)$$
$$\propto s(t)\star s(t)\int I_r(x)O(x)dx +s(t)\star n(t).$$

Since the noise is not correlated with the illumination, the last term in Eq. (2) is not sensitive to the value of $\tau$. Accordingly, $\tau _0=\arg \max C(\tau )$ gives the correct location $t_0$ of the echoes. The distance of the object can also be determined as $z=\frac {1}{2}c \tau _0$. From Eq. (3), the spatial information of the object is linearly transferred into the temporal correlation. Therefore, we can take the maximum $C_{max}(\tau )$ as the new bucket value. The image turns out to be

$$G_1(x)=\langle C_{max}(\tau) I_r(x)\rangle_N - \langle C_{max}(\tau)\rangle_N \langle I_r(x)\rangle_N$$

At the same time, since $C(\tau )$ is maximized, $s(t)\star n(t)\vert _{\tau _0}$ is expected to be much smaller than $s(t)\star s(t)\vert _{\tau _0}$, which means the influence of noise is reduced when using the maximum of the correlation. Therefore, the quality of the image is improved.
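As a numerical illustration of this locating step, the following sketch (with all waveform parameters hypothetical, not those of our experiment) buries a delayed copy of a random pulse train in strong additive noise and recovers the delay, and hence the range, from the maximum of the cross-correlation:

```python
import numpy as np

rng = np.random.default_rng(0)
c = 3e8      # speed of light, m/s
dt = 1e-9    # 1 ns sampling interval (hypothetical)

# s(t): a train of 5 ns binary pulses at random positions within 5 us.
s = np.zeros(5000)
for k in rng.choice(np.arange(0, 4995, 20), size=32, replace=False):
    s[k:k + 5] = 1.0

# i2(t): the echo, delayed by the round trip and immersed in noise n(t).
delay = 8827                                   # true delay, in samples
i2 = rng.normal(0.0, 1.0, size=20000)          # additive noise
i2[delay:delay + s.size] += s                  # echo from the object

# C(tau) = sum_t s(t) i2(t + tau); its argmax gives tau_0 and the range.
C = np.correlate(i2, s, mode="valid")
tau0 = int(np.argmax(C))
z = 0.5 * c * tau0 * dt                        # z ~ 1.32 km here
```

Even though the echo is invisible against the noise in the raw trace, the correlation peak stands out because it sums all pulse samples coherently while the noise adds incoherently.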

The discussion above assumes planar objects. If there are multiple objects at different distances, or objects with thickness, multiple peaks or a broadened peak will appear in the correlation function. By analyzing these in detail, the distances of different parts can be measured from the temporal correlation, which can be used to achieve 3-dimensional (3D) imaging.

Temporal correlation enhances the robustness of GI in two ways. On the one hand, it helps to temporally locate the reflected pulses. On the other hand, the uncorrelated noise is effectively filtered out, which improves the SNR of the bucket value. Consider a special case in which each illumination pattern carries a train of binary pulses: using temporal correlation then picks out exactly those parts of the bucket detection that match the temporal profile of the illumination pulses. In this case, our results can be taken as equivalent to those of traditional GI with the position of every pulse given, as implied by comparing Eq. (1) and Eq. (4). For convenience, we analyze the image quality for this special case in terms of the contrast-to-noise ratio (CNR) of the images, defined as [16]

$$CNR_{G}=\frac{\langle G(x_{in})\rangle -\langle G(x_{out})\rangle }{\sqrt{\frac{1}{2}[\Delta^2G(x_{in})+\Delta^2G(x_{out})]}},$$
for binary objects, where $x_{in}$ and $x_{out}$ denote spatial positions within and outside the object area. The variance of the image at $x$ reads
$$\Delta^2G(x)=\frac{1}{N}[\langle G(x)^2 \rangle-\langle G(x) \rangle^2].$$
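The CNR of Eq. (5) can be estimated from a single reconstruction by replacing the ensemble variance of Eq. (6) with the empirical variance over pixels inside and outside the (known, binary) object area. A minimal Python sketch, with the image `G` and the object `mask` as hypothetical inputs:

```python
import numpy as np

def cnr(G, mask):
    """CNR of Eq. (5) for a binary object: contrast between the mean of G
    over object pixels (mask True) and background pixels, normalized by
    their pooled standard deviation (an empirical stand-in for Eq. (6))."""
    g_in, g_out = G[mask], G[~mask]
    return (g_in.mean() - g_out.mean()) / np.sqrt(
        0.5 * (g_in.var() + g_out.var()))
```

For an ideal reconstruction where object pixels sit well above the background fluctuation this value is large; it drops toward zero as noise dominates.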

Take the average size of the speckles illuminating the object as the size of one pixel of the image, and assume the average illumination intensity on the object surface is uniformly distributed, with the intensity of every pixel following a negative-exponential probability distribution. Suppose the object area contains $P$ speckles (pixels) and all reflected light is received by the bucket detector. Consider only additive noise, statistically independent of the valuable signal. With the spatial positions pixelated, the $m^{th}$ bucket detection signal $B_m$ can be rewritten as

$$B_m=\sum_{x_{in}}^{P}I_r(x_{in})+n_m.$$

Then the image can be rewritten as,

$$G(x)=\langle (B_m -\langle B_m \rangle_N)( I_r(x)-\langle I_r(x) \rangle_N) \rangle_N.$$

Therefore,

$$\begin{aligned}\langle G(x_{in}) \rangle &=\left\langle\Big( \sum_{x'}^{P}I_r(x')+n_m\Big)I_r(x_{in}) \right\rangle-\left\langle \sum_{x'}^{P}I_r(x')+n_m \right\rangle \langle I_r(x_{in}) \rangle\\ &=\left\langle \Big(\sum_{x'}^{P}I_r(x')\Big)I_r(x_{in}) \right\rangle+\langle n_mI_r(x_{in}) \rangle- \langle I_r(x_{in}) \rangle \left\langle\sum_{x'}^{P}I_r(x')\right\rangle-\langle n_m\rangle\langle I_r(x_{in}) \rangle, \end{aligned}$$
where $x'$ runs over the $P$ object pixels.

Since the illumination intensities on different pixels are independent and identically distributed (i.i.d.), we can obtain

$$\langle I(x_1)I(x_2)\rangle=\begin{cases} \langle I^2(x_1)\rangle, & x_1=x_2,\\ \langle I(x_1)\rangle\langle I(x_2)\rangle, & x_1\neq x_2.\end{cases}$$

Thus,

$$\langle G(x_{in}) \rangle= \langle I_r(x_{in})^2 \rangle-\langle I_r(x_{in}) \rangle ^2, \ \ \ \langle G(x_{out}) \rangle= 0.$$

Then, we can get

$$CNR_{G} = \sqrt{\frac{N}{P[1+7/2P+\frac{\langle n^2 \rangle-\langle n\rangle^2}{P(\langle I_r(x_0)^2 \rangle-\langle I_r(x_0)\rangle^2)}]}},$$
with $x_0$ being any pixel within the illumination field. It shows that, for a given number of samplings and a certain object, the image quality is mainly determined by the signal-to-noise ratio of the bucket detection (BSNR), defined as the ratio between the fluctuation caused by reflection from the object and that of the noise. Namely,
$$BSNR =\frac{\langle B_0^2 \rangle-\langle B_0\rangle^2}{\langle n^2 \rangle-\langle n\rangle^2}=\frac{P(\langle I_r(x_0)^2 \rangle-\langle I_r(x_0)\rangle^2)}{\langle n^2 \rangle-\langle n\rangle^2}.$$

Here, $B_0$ is the bucket value excluding noise. The total fluctuation of the $P$ speckles on the object surface is taken as the fluctuation caused by reflection from the object. With i.i.d. illumination at different pixels, $\sum _{x_{in}}\langle I_r(x_{in})^2 \rangle =P\langle I_r(x_0)^2 \rangle$.
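Eq. (13) can be checked numerically: with i.i.d. negative-exponential speckle intensities, the variance of the noiseless bucket value is $P$ times the single-pixel variance. A sketch with hypothetical parameters:

```python
import numpy as np

rng = np.random.default_rng(1)
P, N = 50, 200_000

# Fully developed speckle: negative-exponential intensity, i.i.d. per pixel.
I = rng.exponential(1.0, size=(N, P))
B0 = I.sum(axis=1)                 # noiseless bucket value (Eq. (7), n_m = 0)
n = rng.normal(0.0, 5.0, size=N)   # additive detection noise

bsnr = B0.var() / n.var()              # definition in Eq. (13)
bsnr_theory = P * I.var() / n.var()    # P * Var(I_r) / Var(n)
# Both come out near 50 * 1 / 25 = 2 for these parameters.
```

The agreement of the two quantities reflects the i.i.d. assumption: variances of independent pixels simply add in the bucket sum.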

From the above analysis, the higher the BSNR, the better the achieved image. Taking the maximum of the temporal cross-correlation as the bucket value reduces the influence of noise in the bucket value and improves the BSNR. This also points to further improvement through profile design. As a simple example, consider how the pulse width influences image quality, in accordance with Eq. (13). If the pulse width is made $m$ ($m>1$) times larger with the amplitude unchanged, the average intensity of the patterns becomes $m$ times larger, $\langle I'_r(x) \rangle =m \langle I_r(x)\rangle$, while the variance becomes $m^2$ times larger, $\langle I'_r(x)^2 \rangle =m^2\langle I_r(x)^2 \rangle$. At the same time, for i.i.d. noise, only the variance becomes $m$ times larger, $\langle n'^2 \rangle =m\langle n^2 \rangle$, since more temporal pixels within the detection duration are involved. Thus, the BSNR becomes $m$ times larger: $BSNR'=mBSNR$.

Note that when the fluctuations of the noise increase, the CNR decreases. However, if the average amplitude of the noise is large while its fluctuations are small, both traditional GI and our method are able to recover the image of objects: the effect of such high-power, low-fluctuation noise can be eliminated via fluctuation correlation. In practice, though, such noise reduces the effective dynamic range of the detection, or even saturates the detector, which will certainly affect the imaging quality.

3. Experiments and results

To verify our TCGI method, and especially to confirm its robustness, we performed experiments within a typical GI configuration, with the schematic diagram shown in Fig. 1. A pulsed laser with a wavelength of 1064 nm is directed onto a rotating ground glass (RGG), which serves as a pseudo-thermal source. The refreshing rate of the illumination patterns is set to $5000$ Hz. The pseudo-thermal light is then divided into two paths by a beam splitter (BS). On the reference path, the patterns created by the RGG are imaged onto a CMOS camera (Mikrotron Eosens 3CL) by a lens $L_1$ with focal length $f_1=20$ cm. The data acquisition field of the camera is set to $128\times128$ pixels, with a capture rate of $5000$ frames per second. On the other path, the patterns are imaged onto the object plane, 1.3 km away from the system. Through the imaging lens $L_2$ ($f_2=75$ cm), the average speckle size on the object surface is about 4 cm. The object is made of high-reflection paper (commonly used for traffic signs), as shown at the bottom right of Fig. 1. The reflected light is collected by a telescope $L_3$ (MEADE $LX90$ ACF Telescope, $D=30.48$ cm, $f_3=304.8$ cm) located beside the laser, and focused into a photomultiplier tube (PMT, $800$ MHz) fitted with a spectral filter (center wavelength 1064 nm, FWHM = 10 nm).


Fig. 1. The experimental configuration of temporal cross-correlation ghost imaging. The generated speckle patterns are split into two paths by a beam splitter (BS). On the reference arm, the patterns are imaged onto the CMOS camera by $L_1$ ($f_1=20$ cm). On the object path, the patterns are imaged onto the object surface by $L_2$ ($f_2=75$ cm) and the reflected light is collected by $L_3$ into a photomultiplier tube (PMT). The object is shown at the bottom right corner. The profile of the pulse train is controlled and recorded by a computer.


To perform the temporal correlation of our method, the laser launches a pulse train containing 32 pulses of 5 ns for each speckle pattern, with a time span of 5 $\mu s$ and the profile recorded. Once the echoes from the object are detected, the temporal correlation is calculated between the PMT signal and the recorded pulse-train profile. The intervals between pulses within the train are designed based on two considerations. First, the expected cross-correlation should be a single-peak function within the time span of each pattern; this ensures that we can measure the distance by searching for the maximum of the cross-correlation. Second, the obtained cross-correlation should exhibit a high peak-to-sidelobe ratio, which makes it easier to extract the signal from complex echoes. In addition, since the duration of every pulse train (5 $\mu s$) is much smaller than the refreshing interval of the patterns (1/5 kHz = 200 $\mu s$), the pattern recorded by the CMOS camera can be taken as static.
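The two design criteria can be checked directly on a candidate train. The sketch below uses a hypothetical interval rule (random non-overlapping positions on a coarse grid; the actual intervals used in the experiment are not specified here) and verifies a single dominant autocorrelation peak while computing the peak-to-sidelobe ratio:

```python
import numpy as np

rng = np.random.default_rng(2)

# Candidate train: 32 pulses of 5 samples (5 ns on a 1 ns grid) within
# 5000 samples (5 us), at random non-overlapping positions on a
# 20-sample grid (hypothetical interval rule).
s = np.zeros(5000)
for k in rng.choice(np.arange(0, 4995, 20), size=32, replace=False):
    s[k:k + 5] = 1.0

ac = np.correlate(s, s, mode="full")
mid = s.size - 1                    # zero-lag index of the autocorrelation

# Criterion 1: a single dominant peak, located at zero lag.
peak = ac[mid]                      # equals the total number of "on" samples
# Criterion 2: peak-to-sidelobe ratio, excluding the triangular mainlobe.
sidelobes = np.delete(ac, np.arange(mid - 5, mid + 6))
pslr = peak / sidelobes.max()
```

Trains whose `pslr` comes out too low can simply be redrawn; this rejection-sampling approach is one plausible way to realize the stated criteria.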

The experiments were conducted at noontime on sunny days in May 2021, with high background noise. By controlling the launched laser power, different values of BSNR can be achieved. The values of BSNR are obtained from the bucket detection results: after cross-correlation, the locations of the pulses are determined, and the raw data acquired from the PMT in each frame is divided into two parts. One part, covering the region where the pulses exist, is taken as the valuable signal; the rest is taken as noise. The BSNR is then calculated as the ratio between the variance of the signal part and that of the noise part.
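This BSNR estimation can be sketched as follows (a hypothetical helper; in the real processing the window boundaries follow the located pulse train):

```python
import numpy as np

def bsnr_from_trace(trace, train_len, tau0):
    """Raw-data BSNR estimate: variance of the located signal window
    [tau0, tau0 + train_len) divided by the variance of the remaining
    (noise-only) samples of the trace."""
    sig = trace[tau0:tau0 + train_len]
    noise = np.concatenate([trace[:tau0], trace[tau0 + train_len:]])
    return sig.var() / noise.var()
```

For a pure-noise trace this ratio is close to 1; pulses inside the window raise the signal-part variance and hence the estimate.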

By calculating the cross-correlation between the detection results and the recorded profile, the distance of the object is measured as $d=1.324$ km. The maximum of the cross-correlation is then taken as the new bucket value, and images are obtained according to Eq. (4). For comparison, images from traditional GI are also obtained with Eq. (1). The results are shown in Fig. 2. Images by TCGI and GI at different levels of BSNR are reconstructed from $N=7000$ illumination patterns, with the values of BSNR shown alongside the images.


Fig. 2. Experimental results. (a) Images obtained with both methods, with the duty ratio of the pulse train being 0.044. Images in the first and second rows are obtained by TCGI; images in the third and fourth rows are from GI. The first and third rows are reconstructed by compressed sensing (CS); the second and fourth rows are recovered by fluctuation correlation. For each column, the $BSNR$ is shown on top. (b) Images obtained by traditional GI at different duty ratios, with the duty ratio shown on top of each image. The number of speckle patterns used for each image is $N=7000$.


The duty ratio of each pulse train is first set to $0.044$. With different launched laser powers, the achieved BSNR of the raw PMT data ranges from 0.344 to 1.957. From Fig. 2(a), both methods work well when the raw-data BSNR is as high as 1.957, with TCGI offering better image quality than traditional GI. With lower raw-data BSNR, traditional GI can no longer provide an acceptable image, whereas TCGI yields clear images throughout. That is, TCGI enhances the robustness of GI in sunny weather, maintaining good performance even when the raw-data BSNR goes down to 0.344. Moreover, from Fig. 2(a), compared with the fluctuation-correlation algorithm, compressed sensing (CS) helps to improve image quality when the BSNR is high; at rather low BSNR, neither CS nor fluctuation correlation provides an acceptable image, so CS offers no extra advantage there.

Temporal cross-correlation makes it possible to identify the valuable signal even when it is flooded by noise. To illustrate this, we show a segment of raw PMT data in Fig. 3(a). Within the 62.5 $\mu s$ of data, the periodic peaks that clearly stand out (marked with circles) are actually caused by the electronic circuits of the PMT and the amplifier. Temporal cross-correlation indicates that the valuable signal is located in the first 5 $\mu s$; the profile of the pulse train is shown in Fig. 3(c), and the temporal cross-correlation between the profile and the raw data in Fig. 3(d). As Fig. 3 shows, it is hard to tell what the bucket value should be from the raw data alone.


Fig. 3. Experimental results of bucket values. (a) Raw data from the PMT. (b) Raw data located by temporal correlation for one illumination pattern. (c) Profile of the illumination pulse train. (d) Temporal cross-correlation between the raw data and the pulse train. The stand-out peaks in (a) are caused by the circuits of the PMT and the amplifier, and are not signals from the object.


The BSNR can be greatly improved by temporal correlation, as seen by comparing Fig. 3(b) and (d). For the images shown above, we also investigated the improvement of the BSNR. From the obtained temporal correlation, as shown in Fig. 3(d), the BSNRs are calculated: the fluctuations of the maximum values over different patterns and the fluctuations of the noise part of the correlation function are computed, and their ratio is taken as the improved BSNR. The results are shown in Table 1. Since the noise is filtered out by the temporal correlation, the BSNR in each case is increased by about two orders of magnitude.


Table 1. Enhancement of BSNR with temporal cross-correlation. For different levels of launched power, the BSNRs of the raw data are shown in the first row, and the improved BSNRs with temporal cross-correlation in the second row.

To explore the possibility of 3D imaging, we performed experiments with multiple objects at different distances. Owing to our design of the pulse train, when there is one planar object at a certain distance, only one peak appears in the cross-correlation function. The width of the peak depends on the duration of a single pulse, rather than that of the whole train; it is also related to the bandwidth of the detector, so the distance resolution is determined by both. With our devices, we demonstrated imaging of multiple objects at different distances: several peaks show up in the correlation function, and by analyzing those peaks the distance of each object can be obtained. Three traffic signs were fixed at 1.378 km, 1.380 km, and 1.382 km, respectively. The imaging results obtained with 30000 illumination patterns are shown in Fig. 4, with different colors marking the differences in distance. The result shows that TCGI not only enhances the robustness of GI, but is also capable of distinguishing objects at different distances. For non-planar objects, higher resolution would be required to obtain meaningful images.


Fig. 4. Experimental results of 3D imaging. Three traffic signs are set at 1.378 km, 1.380 km, and 1.382 km, respectively. By analyzing the peaks in the temporal correlation function, the distances of the three objects are determined; the image is then achieved with 30000 illumination patterns. To mark the differences in distance, different colors are assigned to different parts of the image.


To further explore why TCGI provides better performance, we carried out numerical experiments with the raw data that produced the middle image in Fig. 2(a), with BSNR = 0.518. Based on the pulse locations obtained via temporal cross-correlation, we extracted the valuable signals from the raw data. For TCGI, only such data is actually employed for image reconstruction, because the bucket value is replaced with the maximum of the correlation; for traditional GI, the noise parts are also involved in the reconstruction, since the noise is not filtered. Therefore, according to Eq. (1), the duty ratio of the illumination pulses influences the imaging quality, since the percentage of valuable signal changes accordingly. To simulate different duty ratios, we numerically pick data segments of varying length from the raw data, including the pulse positions and their neighborhood; the wider the picked region, the lower the simulated duty ratio. Images from traditional GI are shown in Fig. 2(b), with the effective duty ratio labeled. It can be seen that the lower the duty ratio, the worse the image quality. TCGI thus works well because it matches the temporal position of the real pulses even at very low SNR, which reduces the noise involved in the image reconstruction.
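The duty-ratio trend can be reproduced in a toy simulation (all parameters hypothetical): a fluctuation-correlation reconstruction whose bucket integrates extra noise-only samples, mimicking a lower duty ratio, yields a lower-contrast image:

```python
import numpy as np

rng = np.random.default_rng(3)
P, N = 16, 4000
obj = np.zeros(P)
obj[:4] = 1.0                      # binary object covering 4 of 16 pixels

def gi_cnr(extra):
    """Fluctuation-correlation GI in the style of Eq. (1); the bucket
    integrates the echo plus `extra` noise-only samples (lower duty
    ratio = larger `extra`). Returns the CNR estimated over pixels."""
    I = rng.exponential(1.0, size=(N, P))       # speckle patterns
    n = rng.normal(0.0, 1.0, size=(N, extra)).sum(axis=1) if extra else 0.0
    B = I @ obj + n                             # bucket values
    G = ((B - B.mean())[:, None] * (I - I.mean(axis=0))).mean(axis=0)
    g_in, g_out = G[obj > 0], G[obj == 0]
    return (g_in.mean() - g_out.mean()) / np.sqrt(
        0.5 * (g_in.var() + g_out.var()))

# Integrating more noise (lower duty ratio) degrades the image contrast.
high_duty, low_duty = gi_cnr(0), gi_cnr(400)
```

Here `extra = 0` plays the role of TCGI, which keeps only the located pulse samples, while `extra = 400` mimics traditional GI integrating over a long, noisy window.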

Conversely, noise that happens to match the temporal position of the pulses cannot be filtered out, in accordance with the second term of Eq. (2); if that term were removed, the imaging quality would be further enhanced. Although such noise cannot be removed entirely, and we only demonstrated a binary pulse train, optimization of the illumination profile can help. In practical applications, the noise in certain scenarios is colored and usually exhibits specific features; designing illumination profiles according to those features can further improve the performance of TCGI. In this respect, coherent detection can also be efficient: in CAM-HGI [32], the chirped waveform is received and coherently mixed with a local oscillator, so the noise can be efficiently filtered. However, coherent detection is hard to adapt to different situations. By contrast, TCGI does not require coherent detection, taking advantage of intensity correlation in the time domain, which makes it easier to apply.

In practice, different kinds of noise can influence the imaging quality of a GI system. For additive noise, such as background light and electrical noise in the bucket detector, temporal cross-correlation can reduce the influence; for other kinds, such as fluctuations caused by turbulence or spatially related noise in the reference arm, temporal correlation may not work well. At the same time, since the information of the object is linearly transferred into the temporal correlation, our method can easily be combined with existing GI algorithms by simply replacing the bucket value with the maximum of the temporal correlation, now with higher SNR. Besides, in practice the objects of interest can be much more complex and the background can be a natural scene; to achieve images in those cases, a higher original SNR and a larger number of samplings are required. Since our theoretical analysis places no requirements on the object or the scene, our method remains effective for improving the imaging quality of GI. In effect, our method extends the tolerable SNR of GI, as verified in our experiments.

In addition, from Eq. (12), it is the intensity fluctuations that determine the quality of the valuable signal and of the image. This stems from the difference between GI and other imaging techniques. Traditional imaging concentrates on the mean intensity rather than the fluctuations, whereas in GI, since the images are reconstructed by averaging over different illumination patterns, the mean intensity only provides a background in the recovered image. Patterns with larger fluctuations imply a greater match or mismatch with the spatial distribution of the object, so samplings related to those patterns carry more information about the object. Accordingly, the illumination fields are expected to exhibit higher fluctuations.

4. Conclusion

In conclusion, we introduced temporal cross-correlation into typical GI to enhance its robustness. Through the temporal correlation between the bucket detection results and the profile of the illumination patterns, the position of the echoes can be correctly located. By replacing the bucket value with the maximum of the temporal correlation, the SNR of the bucket value is greatly improved, and the quality of the images is significantly enhanced. Experiments were conducted at sunny noontime, and the results confirm our method well. The method is easy to combine with existing GI algorithms, and with optimization of the illumination profile, the performance of TCGI can be further improved. Our method helps to extend the boundary of GI, especially in situations with weak echoes. We believe it helps to pave the way toward practical applications of GI, especially in noisy environments.

Funding

National Natural Science Foundation of China (11774431, 61701511).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. Y. Bromberg, O. Katz, and Y. Silberberg, “Ghost imaging with a single detector,” Phys. Rev. A 79(5), 053840 (2009). [CrossRef]  

2. J. Cheng and S. S. Han, “Incoherent coincidence imaging and its applicability in x-ray diffraction,” Phys. Rev. Lett. 92(9), 093903 (2004). [CrossRef]  

3. D. Z. Cao, J. Xiong, and K. Wang, “Geometrical optics in correlated imaging systems,” Phys. Rev. A 71(1), 013801 (2005). [CrossRef]  

4. J. H. Shapiro, “Computational ghost imaging,” Phys. Rev. A 78(6), 061802 (2008). [CrossRef]  

5. T. B. Pittman, Y. H. Shih, D. V. Strekalov, and A. V. Sergienko, “Optical imaging by means of two-photon quantum entanglement,” Phys. Rev. A 52(5), R3429–R3432 (1995). [CrossRef]  

6. A. Valencia, G. Scarcelli, M. D’Angelo, and Y. Shih, “Two-photon imaging with thermal light,” Phys. Rev. Lett. 94(6), 063601 (2005). [CrossRef]  

7. X. Y. Xu, E. R. Li, X. Shen, and S. Han, “Optimization of speckle patterns in ghost imaging via sparse constraints by mutual coherence minimization,” Chin. Opt. Lett. 13(7), 071101 (2015). [CrossRef]  

8. E. F. Zhang, W.-T. Liu, and P.-X. Chen, “Ghost imaging with non-negative exponential speckle patterns,” J. Opt. 17(8), 085602 (2015). [CrossRef]  

9. O. Katz, Y. Bromberg, and Y. Silberberg, “Compressive ghost imaging,” Appl. Phys. Lett. 95(13), 131110 (2009). [CrossRef]  

10. S. Sun, W. T. Liu, H. Z. Lin, E. F. Zhang, J. Y. Liu, Q. Li, and P. X. Chen, “Multi-scale adaptive computational ghost imaging,” Sci. Rep. 6(1), 37013 (2016). [CrossRef]  

11. F. Ferri, D. Magatti, L. Lugiato, and A. Gatti, “Differential ghost imaging,” Phys. Rev. Lett. 104(25), 253603 (2010). [CrossRef]  

12. B. Q. Sun, S. S. Welsh, M. P. Edgar, J. H. Shapiro, and M. J. Padgett, “Normalized ghost imaging,” Opt. Express 20(15), 16892–16901 (2012). [CrossRef]  

13. E. F. Zhang, H. Z. Lin, W. T. Liu, Q. Li, and P. X. Chen, “Sub-rayleigh-diffraction imaging via modulating classical light,” Opt. Express 23(26), 33506–33513 (2015). [CrossRef]  

14. S. Sun, W. T. Liu, J. H. Gu, H. Z. Lin, L. Jiang, Y. K. Xu, and P. X. Chen, “Ghost imaging normalized by second-order coherence,” Opt. Lett. 44(24), 5993–5996 (2019). [CrossRef]  

15. S. Sun, J. H. Gu, H. Z. Lin, L. Jiang, and W. T. Liu, “Gradual ghost imaging of moving objects by tracking based on cross correlation,” Opt. Lett. 44(22), 5594–5597 (2019). [CrossRef]  

16. K. W. C. Chan, M. N. O’Sullivan, and R. W. Boyd, “Optimization of thermal ghost imaging: high-order correlations vs. background subtraction,” Opt. Express 18(6), 5562–5573 (2010). [CrossRef]  

17. K. Kuplicki and K. W. C. Chan, “High-order ghost imaging using non-rayleigh speckle sources,” Opt. Express 24(23), 26766–26776 (2016). [CrossRef]  

18. D. Z. Cao, J. Xiong, S. H. Zhang, L. F. Lin, L. Gao, and K. Wang, “Enhancing visibility and resolution in nth-order intensity correlation of thermal light,” Appl. Phys. Lett. 92(20), 201102 (2008). [CrossRef]  

19. H. K. Hu, S. Sun, H. Z. Lin, L. Jiang, and W. T. Liu, “Denoising ghost imaging under a small sampling rate via deep learning for tracking and imaging moving objects,” Opt. Express 28(25), 37284–37293 (2020). [CrossRef]  

20. T. Shimobaba, Y. Endo, T. Nishitsuji, T. Takahashi, Y. Nagahama, S. Hasegawa, M. Sano, R. Hirayama, T. Kakue, and A. Shiraki, “Computational ghost imaging using deep learning,” Opt. Commun. 413, 147–151 (2018). [CrossRef]  

21. M. Lyu, W. Wang, H. Wang, H. Wang, G. Li, N. Chen, and G. Situ, “Deep-learning-based ghost imaging,” Sci. Rep. 7(1), 17865 (2017). [CrossRef]  

22. X. L. Liu, J. H. Shi, X. Y. Wu, and G. H. Zeng, “Fast first-photon ghost imaging,” Sci. Rep. 8(1), 5012 (2018). [CrossRef]  

23. X. L. Liu, J. H. Shi, L. Sun, Y. H. Li, J. P. Fan, and G. H. Zeng, “Photon-limited single-pixel imaging,” Opt. Express 28(6), 8132–8144 (2020). [CrossRef]  

24. B. H. Kolner, “Space-time duality and the theory of temporal imaging,” IEEE J. Quantum Electron. 30(8), 1951–1963 (1994). [CrossRef]  

25. B. H. Kolner and M. Nazarathy, “Temporal imaging with a time lens,” Opt. Lett. 14(12), 630 (1989). [CrossRef]  

26. R. Salem, M. A. Foster, and A. L. Gaeta, “Application of space–time duality to ultrahigh-speed optical signal processing,” Adv. Opt. Photonics 5(3), 274 (2013). [CrossRef]  

27. P. Ryczkowski, M. Barbier, A. T. Friberg, J. M. Dudley, and G. Genty, “Ghost imaging in the time domain,” Nat. Photonics 10(3), 167–170 (2016). [CrossRef]  

28. Y. K. Xu, S. H. Sun, W. T. Liu, G. Z. Tang, and P. X. Chen, “Detecting fast signals beyond bandwidth of detectors based on computational temporal ghost imaging,” Opt. Express 26(1), 99 (2018). [CrossRef]  

29. F. Devaux, P.-A. Moreau, S. Denis, and E. Lantz, “Computational temporal ghost imaging,” Optica 3(7), 698–701 (2016). [CrossRef]  

30. C. J. Deng, L. Pan, C. L. Wang, X. Gao, W. L. Gong, and S. Han, “Performance analysis of ghost imaging lidar in background light environment,” Photonics Res. 5(5), 431–435 (2017). [CrossRef]  

31. Y. Yang, J. H. Shi, F. Cao, J. Y. Peng, and G. H. Zeng, “Computational imaging based on time-correlated single-photon-counting technique at low light level,” Appl. Opt. 54(31), 9277–9283 (2015). [CrossRef]  

32. L. Pan, C. J. Deng, Z. W. Bo, X. Yuan, D. Zhu, W. Gong, and S. S. Han, “Experimental investigation of chirped amplitude modulation heterodyne ghost imaging,” Opt. Express 24, 26766–26776 (2020). [CrossRef]  

33. R. Bracewell, The Fourier Transform and Its Applications (McGraw-Hill, 1999).

34. A. Papoulis, The Fourier Integral and Its Applications (McGraw-Hill, 1962).


Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.



Figures (4)

Fig. 1. The experimental configuration of temporal cross-correlation ghost imaging. The generated speckle patterns are split into two paths by a beam splitter (BS). On the reference arm, the patterns are imaged onto the CMOS camera by $L_1$ ($f_1=20\,$cm). On the object path, the patterns are imaged onto the object surface by $L_2$ ($f_2=75\,$cm), and the reflected light is collected by $L_3$ into a photomultiplier tube (PMT). The object is shown in the bottom right corner. The profile of the pulse train is controlled and recorded by a computer.
Fig. 2. Experimental results. (a) Images obtained with both methods, with the duty ratio of the pulse train being 0.044. Images in the first and second rows are obtained by TCGI, and images in the third and fourth rows are from GI. The first and third rows are reconstructed by compressed sensing (CS); the second and fourth rows are recovered by fluctuation correlation. For each column, the $BSNR$ is shown at the top. (b) Images obtained by traditional GI at different duty ratios. The duty ratio is shown at the top of each image. The number of speckle patterns used for each image is $M=7000$.
Fig. 3. Experimental results of bucket values. (a) Raw data from the PMT. (b) Raw data located by temporal correlation for one illumination pattern. (c) Profile of the illumination pulse train. (d) Temporal cross-correlation between the raw data and the pulse train. The stand-out peaks are actually caused by the circuitry of the PMT and the amplifier, not by signals from the object.
Fig. 4. Experimental results of 3D imaging. Three traffic signs are set at 1.378 km, 1.380 km, and 1.382 km, respectively. By analyzing the peaks in the temporal correlation function, the distances of the three objects are determined. The image of them is then achieved with 30000 illumination patterns. To mark the differences in distance, different parts of the image are given different colors.
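The ranging step described in the Fig. 4 caption reduces to converting the delay of each correlation peak into a one-way distance via $d = c\tau/2$. The sketch below illustrates this conversion; the peak-delay values are hypothetical numbers chosen to land near the reported 1.378–1.382 km targets, not data from the experiment.

```python
# Sketch (not the authors' code): convert round-trip time-of-flight
# delays of correlation peaks into one-way target distances, d = c*tau/2.
C_LIGHT = 3.0e8  # speed of light in vacuum, m/s


def delay_to_distance(tau_s: float) -> float:
    """Round-trip time-of-flight (seconds) -> one-way distance (metres)."""
    return C_LIGHT * tau_s / 2.0


# Hypothetical correlation-peak delays near 9.2 us (~1.38 km round trip).
peak_delays_s = [9.1867e-6, 9.2000e-6, 9.2133e-6]
distances_m = [delay_to_distance(t) for t in peak_delays_s]
# distances_m is approximately [1378.0, 1380.0, 1382.0] metres
```

A 2 m separation between targets thus corresponds to a delay difference of only about 13 ns, which is why the temporal resolution of the detection chain sets the depth resolution of the 3D image.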

Tables (1)

Table 1. Enhancement of BSNR with temporal cross-correlation. For different levels of launched power, the SNRs of the raw data are shown in the first row, and the improved BSNRs obtained with temporal cross-correlation are shown in the second row.

Equations (13)

$$G_0(x) = \left\langle \int_{t_0}^{t_0+T_0} i_2(t)\,\mathrm{d}t \int_{t_0}^{t_0+T_0} I_r(x,t)\,\mathrm{d}t \right\rangle_N - \left\langle \int_{t_0}^{t_0+T_0} i_2(t)\,\mathrm{d}t \right\rangle_N \left\langle \int_{t_0}^{t_0+T_0} I_r(x,t)\,\mathrm{d}t \right\rangle_N.$$

$$C(\tau) = s(t) \star i_2(t) = \int s(t)\, i_2(t+\tau)\,\mathrm{d}t$$

$$= s(t)\star s(t)\int I_r(x)\,O(x)\,\mathrm{d}x + s(t)\star n(t).$$

$$G_1(x) = \left\langle C_{\max}(\tau)\, I_r(x) \right\rangle_N - \left\langle C_{\max}(\tau)\right\rangle_N \left\langle I_r(x)\right\rangle_N$$

$$CNR_G = \frac{\langle G(x_{in})\rangle - \langle G(x_{out})\rangle}{\sqrt{\frac{1}{2}\left[\Delta^2 G(x_{in}) + \Delta^2 G(x_{out})\right]}},$$

$$\Delta^2 G(x) = \frac{1}{N}\left[\langle G(x)^2\rangle - \langle G(x)\rangle^2\right].$$

$$B_m = \sum_{x_{in}}^{P} I_r(x_{in}) + n_m.$$

$$G(x) = \left\langle \left(B_m - \langle B_m\rangle_N\right)\left(I_r(x) - \langle I_r(x)\rangle_N\right)\right\rangle_N$$

$$G(x_{in}) = \left\langle \left(\sum_{x_{in}}^{P} I_r(x_{in}) + n_m\right) I_r(x_{in}) \right\rangle - \left\langle \sum_{x_{in}}^{P} I_r(x_{in}) + n_m \right\rangle \left\langle I_r(x_{in})\right\rangle$$

$$= \left\langle \left(\sum_{x_{in}}^{P} I_r(x_{in})\right) I_r(x_{in}) \right\rangle + \left\langle n_m I_r(x_{in})\right\rangle - \left\langle I_r(x_{in})\right\rangle \left\langle \sum_{x_{in}}^{P} I_r(x_{in}) \right\rangle - \left\langle n_m\right\rangle \left\langle I_r(x_{in})\right\rangle.$$

$$\langle I(x_1) I(x_2)\rangle = \begin{cases} \langle I^2(x_1)\rangle & x_1 = x_2 \\ \langle I(x_1)\rangle \langle I(x_2)\rangle & x_1 \neq x_2 \end{cases}$$

$$G(x_{in}) = \langle I_r(x_{in})^2\rangle - \langle I_r(x_{in})\rangle^2, \qquad G(x_{out}) = 0.$$

$$CNR_G = \sqrt{\frac{N}{P\left[1 + \dfrac{7}{2P} + \dfrac{\langle n^2\rangle - \langle n\rangle^2}{P\left(\langle I_r(x_0)^2\rangle - \langle I_r(x_0)\rangle^2\right)}\right]}},$$

$$BSNR = \frac{\langle B_0^2\rangle - \langle B_0\rangle^2}{\langle n^2\rangle - \langle n\rangle^2} = \frac{P\left(\langle I_r(x_0)^2\rangle - \langle I_r(x_0)\rangle^2\right)}{\langle n^2\rangle - \langle n\rangle^2}.$$
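The core of the method, cross-correlating the noisy detector trace $i_2(t)$ with the known pulse-train profile $s(t)$ and taking the correlation maximum as the denoised bucket value, can be sketched numerically. This is an assumed illustration with synthetic data, not the authors' implementation; the pulse widths, delay, and noise level are hypothetical.

```python
# Sketch of temporal cross-correlation bucket extraction:
# C(tau) = sum_t s(t) * i2(t + tau); the peak locates the echo and
# C_max(tau) serves as the bucket value for the current speckle pattern.
import numpy as np

rng = np.random.default_rng(0)

# Known pulse-train profile s(t): five pulses of width 8 samples,
# period 200 samples (duty ratio 0.04, comparable to the 0.044 of Fig. 2).
s = np.zeros(1000)
for k in range(5):
    s[k * 200 : k * 200 + 8] = 1.0

# Simulated PMT trace i2(t): a delayed, attenuated echo of s(t)
# buried in additive background noise n(t) (all values hypothetical).
true_delay, echo = 137, 0.3
i2 = np.zeros(1500)
i2[true_delay : true_delay + s.size] += echo * s
i2 += rng.normal(0.0, 0.25, i2.size)

# 'valid' mode slides the shorter template s over the trace, so
# C[tau] = sum_n s[n] * i2[n + tau] for tau = 0 .. len(i2) - len(s).
C = np.correlate(i2, s, mode="valid")
tau_hat = int(np.argmax(C))    # recovered echo position
bucket = float(C[tau_hat])     # C_max(tau), the denoised bucket value
```

Because the correlation sums over every sample of the pulse train, independent background noise averages down while the echo adds coherently, which is exactly why the BSNR in Table 1 improves with this step.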