
Orthonormalization method in ghost imaging

Open Access

Abstract

A ghost imaging system requires a large number of samples to reconstruct the object. Computational ghost imaging can use well-designed pre-modulated orthogonal patterns to reduce the required sampling number and increase the imaging quality, while the rotating ground glass (RGG) scheme cannot. Instead of pre-modulation, we introduce a post-processing method that uses the Gram-Schmidt process to orthonormalize the patterns in an RGG scheme. Ghost images reconstructed after the Gram-Schmidt process (SGI) are evaluated with quality indicators including the Contrast-to-Noise Ratio (CNR), the Peak Signal-to-Noise Ratio (PSNR), the Correlation Coefficient (CC) and the Mean Square Error (MSE). Simulation results show that this method has an obvious advantage in enhancing the efficiency of image acquisition, and the required sampling number drops from several thousand to a few hundred under ideal conditions. However, in an actual system with noise, the image quality from SGI declines at large sampling numbers because noise and errors accumulate in the orthonormalization process. An improved Group SGI method is therefore developed to avoid this error accumulation; it reconstructs the image effectively from experimental data and also performs well at large sampling numbers. Since this method does not change the relationship between the reference patterns and the bucket values, it can easily be combined with most reconstruction algorithms to improve their reconstruction efficiency.

© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Ghost imaging (GI) is a kind of indirect imaging scheme that images an object nonlocally [1,2]. GI schemes have shown advantages in imaging lidar [3], three-dimensional imaging [4,5] and photon-limited imaging [6]. The most notable feature of GI is that the object is reconstructed from a large number of measurements of two quantities: light patterns that never reach the object, and the total intensities of the light patterns that are transmitted through or reflected from the object. The light patterns that contain no information about the object are called the reference patterns, and the light whose intensities are detected by a “bucket” detector with no spatial resolution is the signal. In 1995, Pittman et al. realized the first ghost imaging experiment in a laboratory environment using entangled photon pairs [1]. After that, a variety of experimental forms gradually emerged. Besides entangled photon pairs, the light source can also be thermal light from a hollow-cathode lamp [7], sunlight passed through an atomic optical filter [8], pseudo-thermal light generated by a rotating ground glass (RGG) [9], computational light patterns produced by a Spatial Light Modulator (SLM) or a Digital Micro-mirror Device (DMD) [10–12], and even X-rays [13,14] and atoms [15].

Generally speaking, the reference patterns in GI are of three main types. The first is point-by-point scanning, such as entangled photon pairs. The second is random patterns, such as thermal and pseudo-thermal light. The last is well-designed patterns, mainly used in computational GI. With well-designed patterns, the reconstruction time of the GI system can be much shorter and the imaging quality much higher [16–18]. However, these methods always need an SLM or DMD, which has a low damage threshold and is not suitable for long-distance sensing. Ghost imaging with pseudo-thermal light generated by an RGG shows more potential in practical applications, particularly in remote sensing, because the RGG can modulate a laser of quite high power. Many properties of this scheme have been investigated theoretically and experimentally [19–23]. However, as the reference patterns from an RGG are randomly modulated, there is a high constant background that reduces the reconstruction efficiency and the image quality [9]. Many studies focus on enhancing the visibility of the reconstructed image in GI systems with random patterns from an RGG [24–27]. To our knowledge, however, a method that converts the detected random patterns modulated by an RGG into orthonormal patterns, which could greatly improve the reconstruction efficiency and the image quality, has not yet been proposed.

In this paper, we introduce a data preprocessing method to improve the reconstruction process in a GI system with pseudo-thermal light. We orthonormalize the reference patterns into new orthonormal patterns with the Gram-Schmidt process, and the image is then reconstructed from them. This method is named Schmidt GI (SGI). We run a simulation under ideal conditions to test the feasibility of this method and evaluate its performance under different noise levels. Then we use the SGI method to reconstruct the image from experimental data. The results show that our method effectively reduces the correlation between different measurements, lowers the required sampling number, and improves the quality of the reconstructed image.

2. Theoretical model

This orthonormalization method can be described based on a typical pseudo-thermal light ghost imaging system, as shown in Fig. 1, where Φm is the mth reference pattern matrix and ym is the mth measured value of the bucket detector. Supposing that the two arms are of equal length, the mth measurement of the two detectors satisfies

$$y_m = \alpha \sum_{i=1}^{U} \sum_{j=1}^{V} \phi^{m}_{i,j}\, o_{i,j}, \tag{1}$$
where α is a factor that accounts for any unbalance between the two arms, ym is the bucket value, ϕm is an element of the reference pattern matrix Φm and o is an element of the object transmission matrix O. We assume α = 1 here, and we denote U × V by L, the resolution of the reference pattern. If we take N measurements, the bucket sequence y1, y2, · · · , yN can be represented by a column vector Y, and the reference pattern matrices Φ1, Φ2, · · · , ΦN can be reshaped into row vectors R1, R2, · · · , RN, which can further be collected into a single matrix A with N rows and L columns. This is called the measure matrix, whose row vectors and column vectors represent the reference patterns in different measurements and the light intensities on different pixels, respectively. The matrix O can also be reshaped into a column vector X. Then Eq. (1) can be expressed as the problem of solving a system of linear equations
$$Y = AX, \tag{2}$$
where
$$Y = \begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_N \end{bmatrix}, \quad A = \begin{bmatrix} R_1 \\ R_2 \\ \vdots \\ R_N \end{bmatrix}, \quad X = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ \vdots \\ x_L \end{bmatrix}. \tag{3}$$
Usually, N < L in GI systems, so this is an underdetermined problem. We cannot get the exact solution but only an approximate one. Using the fluctuation correlation method [28], the object can be reconstructed by
$$G = \frac{1}{N} A^{T}\left(Y - \langle Y \rangle\right), \tag{4}$$
where G is the approximate solution of X.
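
As a minimal NumPy sketch of Eq. (4) (the function name and array layout are our own, not taken from the paper), the fluctuation-correlation reconstruction can be written as:

```python
import numpy as np

def gi_reconstruct(A, Y):
    """Fluctuation-correlation GI, Eq. (4): G = (1/N) A^T (Y - <Y>).

    A : (N, L) measure matrix, one reshaped reference pattern per row.
    Y : (N,)  bucket values.
    Returns G : (L,) approximate solution of X.
    """
    A = np.asarray(A, dtype=float)
    Y = np.asarray(Y, dtype=float)
    N = A.shape[0]
    return A.T @ (Y - Y.mean()) / N
```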

Substituting Eq. (2) into Eq. (4), we can find the relationship between G and X, which is

$$G = \frac{1}{N} A^{T} A X - \frac{1}{N} A^{T} \langle Y \rangle. \tag{5}$$
It is clear that the orthogonality of the matrix A determines the smoothness and accuracy of solving for X. The Gram-Schmidt process is an effective way to improve the orthogonality of A. We operate on the row vectors of the matrix A. According to the principle of the Gram-Schmidt process, for a new measurement, all projection components onto the previous patterns should be removed, because these projection components are redundant information. As A contains N row vectors R1, R2, · · · , RN, we define the projection coefficient as
$$c_{mn} = \frac{\tilde{R}_n \cdot R_m}{\tilde{R}_n \cdot \tilde{R}_n}, \tag{6}$$
and a group of orthogonal ones can be calculated by
$$\tilde{R}_1 = R_1,\quad \tilde{R}_2 = R_2 - c_{21}\tilde{R}_1,\quad \tilde{R}_3 = R_3 - c_{31}\tilde{R}_1 - c_{32}\tilde{R}_2,\quad \ldots,\quad \tilde{R}_N = R_N - \sum_{n=1}^{N-1} c_{Nn}\tilde{R}_n. \tag{7}$$

Equations (6) and (7) indicate that when we get a new measurement, we subtract from its pattern all the projection components onto the previous orthogonal patterns. After this operation, the new set R̃1, R̃2, · · · , R̃N is orthogonal. As the number of measurements increases, more and more components of the patterns are subtracted and the intensity of the patterns gets smaller. We assume that the orthogonal patterns provide similar amounts of information about the object, so the patterns should then be normalized. The new orthonormal basis R̃′1, R̃′2, · · · , R̃′N can be obtained by dividing each orthogonal pattern by its own 2-norm, as

$$\tilde{R}'_m = \frac{\tilde{R}_m}{\|\tilde{R}_m\|}, \tag{8}$$
where m = 1, 2,· · · , N.

In order to maintain the relationship between the patterns and the bucket values, we perform similar operations on Y to generate the new Ỹ′, whose elements ỹ′ are calculated by

$$\tilde{y}_m = y_m - \sum_{n=1}^{m-1} c_{mn}\,\tilde{y}_n, \qquad \tilde{y}'_m = \frac{\tilde{y}_m}{\|\tilde{R}_m\|}. \tag{9}$$

According to Eqs. (5)–(9), the new orthonormal patterns R̃′1, R̃′2, ..., R̃′N and new bucket values ỹ′1, ỹ′2, ..., ỹ′N are created. It is easy to verify that they also satisfy the equation

$$\tilde{Y}' = \tilde{A}' X. \tag{10}$$
So the object can also be reconstructed by the new patterns and the new buckets using the intensity fluctuation correlation function
$$G = \frac{1}{N} \tilde{A}'^{\,T}\left(\tilde{Y}' - \langle \tilde{Y}' \rangle\right). \tag{11}$$
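
The whole SGI pipeline of Eqs. (6)–(11) can be sketched as a direct, unoptimized transcription of these formulas; the classical Gram-Schmidt double loop below mirrors the equations and is not the authors' code, with names chosen only for illustration:

```python
import numpy as np

def sgi_reconstruct(A, Y):
    """Schmidt GI (SGI): Gram-Schmidt orthonormalization of the reference
    patterns (rows of A) with the matched transform of the bucket values,
    followed by fluctuation-correlation reconstruction, Eqs. (6)-(11)."""
    A = np.asarray(A, dtype=float)
    Y = np.asarray(Y, dtype=float)
    N, L = A.shape
    R_t = np.zeros((N, L))   # intermediate orthogonal patterns R~_m
    y_t = np.zeros(N)        # intermediate bucket values y~_m
    R_on = np.zeros((N, L))  # orthonormal patterns R~'_m
    y_on = np.zeros(N)       # normalized bucket values y~'_m
    for m in range(N):
        R_t[m] = A[m].copy()
        y_t[m] = Y[m]
        for n in range(m):
            # projection coefficient c_mn of Eq. (6), then Eqs. (7) and (9)
            c = (R_t[n] @ A[m]) / (R_t[n] @ R_t[n])
            R_t[m] -= c * R_t[n]
            y_t[m] -= c * y_t[n]
        norm = np.linalg.norm(R_t[m])
        R_on[m] = R_t[m] / norm      # normalization, Eq. (8)
        y_on[m] = y_t[m] / norm
    # reconstruction with the new patterns and bucket values, Eq. (11)
    return R_on.T @ (y_on - y_on.mean()) / N
```

A production implementation would likely use a modified Gram-Schmidt loop or a QR factorization for numerical stability; the plain version above follows the paper's formulas at O(N²L) cost.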

Fig. 1 Classical pseudo-thermal light ghost imaging and the orthonormalization process.

3. Method and simulation results

To test the feasibility of the SGI method, we run a simulation under ideal conditions. As shown in Fig. 2, we use a CMOS camera (Thorlabs DCC 3240C) to capture the reference patterns Φ1, Φ2, ..., ΦN. To create an ideal condition for the simulation, we use these random patterns and a binary picture mask generated by software to calculate the bucket values y1, y2, ..., yN according to Eq. (1). The object is a binary picture of the two letters “GI” with a size of 140 × 140 pixels, and so are the reference patterns. We reshape the patterns into row vectors R1, R2, ..., RN, each of length 19600. The orthogonalization and normalization processes introduced in Section 2 are carried out on the patterns and bucket values, generating R̃′1, R̃′2, ..., R̃′N and ỹ′1, ỹ′2, ..., ỹ′N.
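
Under ideal conditions, generating the simulated bucket values from the captured patterns amounts to evaluating Eq. (1) with α = 1. A sketch of this setup step follows; the file names and array shapes are illustrative assumptions, not from the paper:

```python
import numpy as np

# Hypothetical input files: camera frames and the binary "GI" mask.
patterns = np.load("reference_patterns.npy")   # assumed shape (N, 140, 140)
obj = np.load("gi_mask.npy").astype(float)     # binary 140 x 140 object

N = patterns.shape[0]
A = patterns.reshape(N, -1)   # row vectors R_1..R_N, each of length 19600
x = obj.ravel()               # object reshaped to the column vector X
Y = A @ x                     # noiseless bucket values, Eq. (1) with alpha = 1
```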

Fig. 2 Simulation setup. The reference patterns are captured in the experimental system, while the bucket values are calculated under ideal conditions by software.

The matrices A and Ã′ can be used to predict the properties of GI [29]. We separately calculate the correlation coefficients (CC) between the row vectors of each matrix and between the column vectors of each. Figure 3 shows the changes in CC between the row vectors of A and those of Ã′. In Fig. 3(a), besides most of the values lying in the range of −0.2 to 0.2, there are still a large number of measurement pairs whose CC falls within the range 0.6 to 1. These measurements are strongly correlated, so most of their contributions are redundant. This phenomenon wastes time in the correlation operations. As shown in Figs. 3(c) and 3(d), which give the distributions of the correlation coefficients, both the variance and the sum of the CC are reduced significantly after the processing. In Fig. 3(d), almost all values appear in the vicinity of zero; that is to say, every new reference pattern provides totally new information. Thus, the image can be reconstructed with a much smaller sampling number.
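
The statistics plotted in Figs. 3 and 4 can be obtained with np.corrcoef; the following helper is our own sketch, not the authors' code:

```python
import numpy as np

def row_cc(A, A_orth):
    """Correlation coefficients between row vectors before/after processing.

    np.corrcoef returns an N x N matrix whose off-diagonal entries are the
    CC between different measurements (Fig. 3).  Passing A.T and A_orth.T
    instead gives the column-vector (pixel) CC of Fig. 4.
    """
    return np.corrcoef(A), np.corrcoef(A_orth)
```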

Fig. 3 Correlation coefficient (CC) distribution of the row vectors in the measure matrix. (a) is the CC between row vectors before processing, (b) is the CC between row vectors after processing, (c) is the statistical histogram of (a), (d) is the statistical histogram of (b).

On the other hand, orthonormalizing the row vectors of the matrix A also affects the CC between the column vectors, although we do not operate on the column vectors directly. Every column vector in A represents the light intensities on a single pixel across the measurements. The spatial coherence of the pseudo-thermal light field means that these column vectors are not completely independent. Instead, the whole pattern is composed of speckle grains with a certain size, which blurs the reconstructed image. Figure 4 shows the changes in the correlation coefficients between column vectors before and after the orthonormalization process. Limited by the computing capability, we only calculate the CC between the first 4200 column vectors rather than all 19600, which is enough to observe the changes. The width of the spatial correlation area of the pixels and the width of the CC distribution between pixels are both narrowed after the processing of the row vectors, i.e., of the reference patterns. The changes in the correlation coefficients between row vectors and between column vectors indicate that the SGI method effectively reduces the correlation noise between different row and column vectors in the measure matrix, which leads to faster reconstruction and better imaging quality.

Fig. 4 CC distribution of the column vectors in the measure matrix. (a) is the CC between the first 4200 column vectors before processing, (b) is the CC between the first 4200 column vectors after processing, (c) is the statistical histogram of (a), (d) is the statistical histogram of (b).

The new R̃′1, R̃′2, ..., R̃′N and ỹ′1, ỹ′2, ..., ỹ′N are used to recover the object information with the traditional intensity fluctuation correlation function according to Eq. (11). In the following analysis, we present the results of SGI, compressive ghost imaging (CGI), and GI, the last of which directly uses R1, R2, ..., RN and y1, y2, ..., yN in the fluctuation correlation. We compare the images from the three methods, and the results (reshaped back to 140 × 140) under different numbers of measurements are shown in Fig. 5.

Fig. 5 Simulation results. The reconstructed results are reshaped back to 140 × 140.

According to Fig. 5, we find that SGI has a high reconstruction efficiency, and the “GI” letters can almost be identified when the measurement number m = 100. This is fairly small compared with most GI systems, which usually need thousands of measurements. When the number of measurements is m = 500, the image by SGI already looks the same as that by CGI with 5000 measurements. When the number of measurements is m = 5000, the two-letter image by SGI has a clearer and sharper outline than those by GI and CGI, and the filling inside the edges is smoother. The object in this simulation is a binary picture, and it is clear that the image reconstructed by SGI is more similar to the object.

In order to evaluate the performance of the different methods more objectively than by intuitive judgment, we adopt four image quality indicators: the Contrast-to-Noise Ratio (CNR) [25,30,31], the Mean Square Error (MSE), the Peak Signal-to-Noise Ratio (PSNR) [18] and the Correlation Coefficient (CC). They are calculated by

$$\mathrm{CNR} = \frac{\langle G(1)\rangle - \langle G(0)\rangle}{\sqrt{\mathrm{Var}[G(1)] + \mathrm{Var}[G(0)]}}, \tag{12}$$
$$\mathrm{MSE} = \frac{1}{L}\sum_{i=1}^{L}(g_i - x_i)^2, \tag{13}$$
$$\mathrm{PSNR} = 10\times\log_{10}\!\left[\frac{(2^k - 1)^2}{\mathrm{MSE}}\right], \tag{14}$$
$$\mathrm{CC} = \frac{\mathrm{Cov}(G,X)}{\sqrt{\mathrm{Var}(G)\,\mathrm{Var}(X)}}, \tag{15}$$
where G represents the reconstructed result and X is the binary picture used to calculate the bucket values. G(1) is the set of points in G where xi = 1 in X, and correspondingly G(0) is the set of the remaining points in G where xi = 0 in X. They represent the signal area and the noise area in the reconstructed image, respectively. The Var function computes the variance of the elements in its argument, and ⟨·⟩ denotes their mean. k is the bit depth of the image, and for a binary picture, k = 1. The Cov function computes the covariance of its two arguments.
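
A compact sketch of the four indicators of Eqs. (12)–(15), assuming G and the binary ground truth X have been flattened to length-L vectors (the helper name is ours):

```python
import numpy as np

def image_metrics(G, X, k=1):
    """CNR, MSE, PSNR and CC of Eqs. (12)-(15) for a reconstruction G
    against a binary ground truth X (both 1-D arrays of length L)."""
    sig, bkg = G[X == 1], G[X == 0]                 # the sets G(1) and G(0)
    cnr = (sig.mean() - bkg.mean()) / np.sqrt(sig.var() + bkg.var())
    mse = np.mean((G - X) ** 2)
    psnr = 10 * np.log10((2 ** k - 1) ** 2 / mse)   # k = 1 for a binary image
    cc = np.corrcoef(G, X)[0, 1]                    # Cov / sqrt(Var*Var)
    return cnr, mse, psnr, cc
```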

The performance of the images reconstructed by the different methods is shown in Fig. 6. Before m = 1000, as the number of measurements increases, the image quality of all methods improves quickly (for the MSE, lower is better), while the growth rate of the SGI method is the fastest. If more measurements are taken, the image quality of GI and CGI tends to saturate. The case for SGI is quite different: all the indicators keep improving. These trends agree with our previous analysis. As the measurement number increases, each new measured pattern contains more and more redundant components that already exist in the previous patterns, and the new component carrying new information becomes smaller and smaller, so each measurement contributes less and less to the image reconstruction. However, the SGI method subtracts all the projection components onto the previous patterns and amplifies the new component to the same level as the previous patterns, so the image quality keeps improving even at a large measurement number. Generally speaking, SGI has a higher rate of growth and a higher convergent value.

Fig. 6 Image qualities versus measurement number for SGI, CGI and GI. (a) is CNR, (b) is MSE, (c) is PSNR, (d) is CC.

4. Noise analysis and experimental results

The previous analysis under ideal conditions does not include practical noise and errors, such as the background noise, the quantization error and the thermal noise of the detectors. Moreover, when the lengths of the two arms are not equal, the light fields illuminating the object and the CMOS camera differ because of diffraction. To find the influence of such noise and errors, we run a simulation under different noise levels.

We simply assume that all the noise and errors are equivalent to additive white Gaussian noise (AWGN) on the bucket values, and the signal-to-noise ratio (SNR) Q can thus be calculated by

$$Q_{\mathrm{dB}} = 10\log_{10}\frac{P_{\mathrm{ref}}}{P_{\mathrm{noise}}}, \tag{16}$$
where Pref is the power of the bucket values when there is no object and all the reference patterns are detected by the bucket detector. This SNR is image independent. For simplicity, we reduce the SNR from the ideal case, where Pnoise = 0, down to Q = 40 dB and simulate the image quality of SGI and GI versus the measurement number under different noise levels. The results, evaluated by CNR, MSE, PSNR and CC, are compared in Fig. 7. It is clear that the image quality still increases faster with the SGI method than with the GI method, but only within a certain range of sampling numbers; beyond that range the image quality degrades. Taking CNR as an example, as shown in Fig. 7(a), when Q = 60 dB there is an extreme point on the curve at about m = 360. When Q = 50 dB, this point moves to a smaller measurement number (about m = 240), and the maximum CNR is also reduced. When Q = 40 dB, this point occurs at about m = 160 and the CNR there is even smaller. This is quite different from the trend of GI, which hardly changes between Q = 40 dB and the ideal case.
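
A minimal sketch of this noise model, assuming P_ref has been measured separately from an object-free acquisition (function and parameter names are ours):

```python
import numpy as np

def add_bucket_noise(Y, P_ref, q_db, rng=None):
    """Add white Gaussian noise to the bucket values at SNR Q (in dB),
    Eq. (16): Q_dB = 10 log10(P_ref / P_noise)."""
    if rng is None:
        rng = np.random.default_rng()
    p_noise = P_ref / 10 ** (q_db / 10)          # noise power implied by the SNR
    return Y + rng.normal(0.0, np.sqrt(p_noise), size=np.shape(Y))
```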

Fig. 7 Performance of different methods at different noise levels. (a) is CNR, (b) is MSE, (c) is PSNR, (d) is CC.

The noise level is not the only factor that affects the best sampling number. We run another simulation using three more images: an anti GI, a cat and a hawk, as shown in Fig. 8(a). The anti GI is the inverse of the original GI image, where “1” and “0” are exchanged. The cat and the hawk both have the same size as the GI, meaning that their counts of “1” pixels are roughly the same; these images each contain about 2500 “1” pixels, which account for 12.8 percent of the total 140 × 140 image. The quality curves of the four images by SGI at the same noise levels are compared, and the optimal sampling numbers versus noise level are shown in Figs. 8(b)–8(d). We find that, although the optimal sampling numbers of GI and anti GI differ at the same noise level, the optimal sampling numbers of the cat and the hawk images, which are quite different in shape but have the same size as the GI, are almost the same. This indicates that the object size also affects the optimal sampling number when the noise level is fixed. This is not a good feature for general imaging, as the object to be imaged can be quite different. But for many imaging applications in which we know the general size of the object, such as character recognition systems, we can use a known object to calibrate the system and find the optimal sampling number, within whose range the image quality increases quickly.

Fig. 8 Optimal sampling numbers at different noise levels for four images. (a) The images. (b) is the optimal sampling numbers of CNR, (c) is those of PSNR, (d) is those of CC.

We also test the SGI method in an experiment, with the setup shown in Fig. 1. The reference patterns are divided into two parts: one is captured by the camera, and the other is transmitted through an object with the two letters “GI” and detected by a bucket detector. The bucket values are no longer calculated by software in this section. We measure 20000 patterns and divide them into 4 groups, each containing 5000 measurements. As the SGI method is noise-sensitive, we take 1000 measurements of the bucket value for each reference pattern and record them all, in order to enhance the detection precision and reduce the influence of random errors. We take the first bucket measurement for each reference pattern, apply the SGI processing and obtain the reconstructed images as the SGI results. We also take the average of the 1000 bucket measurements for each reference pattern, perform the same operation and obtain the SGIavg results. The image qualities, CNR, MSE, PSNR and CC, versus the measurement number for SGI, SGIavg, GI and CGI are shown in Fig. 9. The standard object used here to calculate the qualities is the binarized oversampled CGI result, and the error bars come from the results of the 4 groups.

Fig. 9 Image qualities from experimental results. (a) is CNR, (b) is MSE, (c) is PSNR, (d) is CC.

The trends of the curves obtained from the experimental results are almost the same as those of the above simulations. For CGI and GI, the imaging quality increases with the number of measurements. For SGI and SGIavg, the imaging quality increases faster at small measurement numbers and drops at large ones. Compared with SGI, the SGIavg results are a little better, because the averaging reduces the influence of random errors in the bucket sampling. Both the simulations and the experimental results now make it clear that the SGI method is easily affected by noise.

In the orthogonalization process, each new value is calculated by subtracting contributions from all the previous results, so the error accumulates quickly as the operation goes on. Since the error is not constant but keeps growing, its impact on the system grows nonlinearly. Therefore, when the number of measurements is small, the error can be ignored and the image information can be obtained rapidly by the SGI method. But as the measurement number increases, the error accumulates fast and the quality of the image reconstructed from these data decreases rapidly, so there exist extreme points, to the left of which the performance of SGI is superior to the others. In other words, SGI only has an advantage at small sampling numbers.

5. Group SGI method

To improve the performance of the SGI method at large sampling numbers, we further introduce a Group SGI method. In Fig. 9, the extreme point of SGI in this experiment is at about 200 measurements. As the SGI method shows its advantage at this sampling number, we divide the total measurements into several groups, each containing only 200 measurements, and perform the SGI processing within each group. Because each group is small, the accumulated detection error is not large, and all the measurements can contribute to the image reconstruction.
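
A sketch of Group SGI built on the sgi_reconstruct helper above; the paper does not spell out how the per-group results are combined, so the simple averaging used here is our assumption:

```python
import numpy as np
# sgi_reconstruct is the SGI sketch given in Section 2.

def group_sgi_reconstruct(A, Y, group_size=200):
    """Group SGI: split the measurements into groups, run the SGI
    orthonormalization within each group, and average the per-group
    reconstructions (our assumed combination step)."""
    N = A.shape[0]
    recons = [sgi_reconstruct(A[s:s + group_size], Y[s:s + group_size])
              for s in range(0, N, group_size)]
    return np.mean(recons, axis=0)
```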

Fig. 10 Image qualities from experimental results versus measurement number for Group SGI, CGI and GI. (a) is CNR, (b) is MSE, (c) is PSNR, (d) is CC.

With the same experimental data as in Section 4, we use the modified Group SGI method to compute the ghost image. The corresponding results are shown in Fig. 10. It is clear that the image quality obtained by the Group SGI method no longer drops at large measurement numbers, and the CNR, MSE and PSNR all indicate that the Group SGI method not only reconstructs the image faster but also reaches a better final image quality than the GI and CGI results, as shown in Figs. 10(a)–10(c). However, the indicator CC shows a different result: in Fig. 10(d), the CC value of CGI is a little better than that of Group SGI. A probable reason is that the CGI method itself is quite effective with respect to CC and we use the binarized oversampled CGI result as the standard object, so the CC of Group SGI can hardly exceed that of CGI.

The group size is a key factor that determines the final performance of Group SGI, and it is affected by the noise level and the object size. In an actual GI system, we can collect data before the actual imaging to estimate the system noise level, and we can use a known object to calibrate the system and find the optimal sampling number; based on this, unknown objects of similar size to the calibration object can be well reconstructed by the Group SGI method.

It is worth mentioning that the Group SGI method also requires less computation than SGI. For N measurements, SGI takes t = N(N − 1)/2 operations for subtracting the projection components. However, when we divide these measurements into d groups, it only takes t′ = N(N − d)/(2d) operations. Generally speaking, the computational cost of the Group SGI method is about d times smaller.
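
As a concrete check of these counts with the numbers used in the experiment above, for N = 5000 measurements (one of the experimental data sets) split into groups of 200, i.e. d = 25, the formulas give t = 5000 × 4999/2 = 12,497,500 subtraction operations for plain SGI versus t′ = 5000 × 4975/(2 × 25) = 497,500 for Group SGI, roughly 25 (≈ d) times fewer.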

6. Conclusion

The correlation between the reference patterns greatly restricts the speed and quality of ghost imaging, and the non-orthogonality between different patterns and between different pixels within each pattern is the main reason for this. The Gram-Schmidt process can directly make different patterns orthogonal, which improves the reconstruction efficiency, enhances the independence of the spatial pixels, and hence reduces the required sampling number and improves the image quality. Simulation results show that, in an ideal environment, data processed in this way yield much better performance than CGI and GI, and the required sampling number drops from thousands to only a few hundred.

However, this method is sensitive to noise: the errors induced by noise accumulate at large measurement numbers. Therefore, a balance must be struck between the orthogonality of the light field and the robustness of the system. One effective way is to divide the original data into several groups, perform the orthonormalization within each individual group, and use the data from all the groups to reconstruct the object. This Group SGI method has better robustness and a lower computational cost than the original SGI method, and presents more potential in practical applications. In fact, this method can also be used as a data preprocessing step for most other reconstruction methods to improve their reconstruction efficiency and final performance.

Funding

National Natural Science Foundation of China (61631014, 61801042, 61771067, 61401036, 61531003); National Science Fund for Distinguished Young Scholars of China (61225003); Youth Research and Innovation Program of BUPT (2017RC10, 2015RC12).

References

1. T. B. Pittman, Y. H. Shih, D. V. Strekalov, and A. V. Sergienko, "Optical imaging by means of two-photon quantum entanglement," Phys. Rev. A 52, R3429 (1995).

2. A. Valencia, G. Scarcelli, M. D'Angelo, and Y. Shih, "Two-photon imaging with thermal light," Phys. Rev. Lett. 94, 063601 (2005).

3. C. Zhao, W. Gong, M. Chen, E. Li, H. Wang, W. Xu, and S. Han, "Ghost imaging lidar via sparsity constraints," Appl. Phys. Lett. 101, 141123 (2012).

4. W. Gong, C. Zhao, H. Yu, M. Chen, W. Xu, and S. Han, "Three-dimensional ghost imaging lidar via sparsity constraint," Sci. Rep. 6, 26133 (2016).

5. M. J. Sun, M. P. Edgar, G. M. Gibson, B. Sun, N. Radwell, R. Lamb, and M. J. Padgett, "Single-pixel three-dimensional imaging with time-based depth resolution," Nat. Commun. 7, 12010 (2016).

6. X. Liu, J. Shi, X. Wu, and G. Zeng, "Fast first-photon ghost imaging," Sci. Rep. 8, 5012 (2018).

7. D. Zhang, Y. H. Zhai, L. A. Wu, and X. H. Chen, "Correlated two-photon imaging with true thermal light," Opt. Lett. 30(18), 2354–2356 (2005).

8. X. F. Liu, X. H. Chen, X. R. Yao, W. K. Yu, G. J. Zhai, and L. A. Wu, "Lensless ghost imaging with sunlight," Opt. Lett. 39(8), 2314–2317 (2014).

9. A. Valencia, G. Scarcelli, M. D'Angelo, and Y. Shih, "Two-photon imaging with thermal light," Phys. Rev. Lett. 94, 063601 (2005).

10. J. H. Shapiro, "Computational ghost imaging," Phys. Rev. A 78, 061802 (2008).

11. Y. Bromberg, O. Katz, and Y. Silberberg, "Ghost imaging with a single detector," Phys. Rev. A 79, 053840 (2009).

12. L. Wang and S. Zhao, "Fast reconstructed and high-quality ghost imaging with fast Walsh–Hadamard transform," Photon. Res. 4(6), 240–244 (2016).

13. H. Yu, R. Lu, S. Han, H. Xie, G. Du, T. Xiao, and D. Zhu, "Fourier-transform ghost imaging with hard X rays," Phys. Rev. Lett. 117, 113901 (2016).

14. D. Pelliccia, A. Rack, M. Scheel, V. Cantelli, and D. M. Paganin, "Experimental X-ray ghost imaging," Phys. Rev. Lett. 117, 113902 (2016).

15. R. I. Khakimov, B. M. Henson, D. K. Shin, S. S. Hodgman, R. G. Dall, K. G. H. Baldwin, and A. G. Truscott, "Ghost imaging with atoms," Nature 540, 100–103 (2016).

16. Z. Zhang, X. Ma, and J. Zhong, "Single-pixel imaging by means of Fourier spectrum acquisition," Nat. Commun. 6, 6225 (2015).

17. S. M. M. Khamoushi, Y. Nosrati, and S. H. Tavassoli, "Sinusoidal ghost imaging," Opt. Lett. 40(15), 3452–3455 (2015).

18. X. Xu, E. Li, X. Shen, and S. Han, "Optimization of speckle patterns in ghost imaging via sparse constraints by mutual coherence minimization," Chin. Opt. Lett. 13(7), 071101 (2015).

19. W. Gong and S. Han, "High-resolution far-field ghost imaging via sparsity constraint," Sci. Rep. 5, 9280 (2015).

20. W. Gong, C. Zhao, H. Yu, M. Chen, W. Xu, and S. Han, "Three-dimensional ghost imaging lidar via sparsity constraint," Sci. Rep. 6, 26133 (2016).

21. W. Gong, "High-resolution pseudo-inverse ghost imaging," Photon. Res. 3(5), 234–237 (2015).

22. X. Li, C. Deng, M. Chen, W. Gong, and S. Han, "Ghost imaging for an axially moving target with an unknown constant speed," Photon. Res. 3(4), 153–157 (2015).

23. J. Li, B. Luo, D. Yang, L. Yin, G. Wu, and H. Guo, "Negative exponential behavior of image mutual information for pseudo-thermal light ghost imaging: observation, modeling, and verification," Chinese Sci. Bull. 62, 717–723 (2017).

24. D. Z. Cao, J. Xiong, S. H. Zhang, L. F. Lin, L. Gao, and K. Wang, "Enhancing visibility and resolution in Nth-order intensity correlation of thermal light," Appl. Phys. Lett. 92(20), 201102 (2008).

25. K. W. C. Chan, M. N. O'Sullivan, and R. W. Boyd, "Optimization of thermal ghost imaging: high-order correlations vs. background subtraction," Opt. Express 18(6), 5562–5573 (2010).

26. F. Ferri, D. Magatti, L. A. Lugiato, and A. Gatti, "Differential ghost imaging," Phys. Rev. Lett. 104, 253603 (2010).

27. O. Katz, Y. Bromberg, and Y. Silberberg, "Compressive ghost imaging," Appl. Phys. Lett. 95, 131110 (2009).

28. W. Gong and S. Han, "A method to improve the visibility of ghost images obtained by thermal light," Phys. Lett. A 374, 1005–1008 (2010).

29. C. Wang, W. Gong, X. Shao, and S. Han, "The influence of the property of random coded patterns on fluctuation-correlation ghost imaging," J. Opt. 18, 065703 (2016).

30. P. Zerom, Z. Shi, M. N. O'Sullivan, K. W. C. Chan, M. Krogstad, J. H. Shapiro, and R. W. Boyd, "Thermal ghost imaging with averaged speckle patterns," Phys. Rev. A 86, 063817 (2012).

31. J. Li, D. Yang, B. Luo, G. Wu, L. Yin, and H. Guo, "Image quality recovery in binary ghost imaging by adding random noise," Opt. Lett. 42(8), 1640–1643 (2017).
