Optica Publishing Group

Computational-weighted Fourier single-pixel imaging via binary illumination

Open Access

Abstract

Single-pixel imaging can generate images at nonvisible wavelengths and under low-light conditions and has therefore received increasing attention in recent years. Fourier single-pixel imaging (FSI) utilizes deterministic basis patterns for illumination to greatly improve the quality of image reconstruction. However, the original FSI based on grayscale Fourier basis illumination patterns is limited in imaging speed, as the digital micro-mirror devices (DMDs) used to generate grayscale patterns operate at a low refresh rate. In this paper, a new approach is proposed to increase the imaging speed of DMD-based FSI without reducing the imaging spatial resolution. In this strategy, each grayscale Fourier basis pattern is split into a pair of grayscale patterns according to the sign of its pixel values, and the pair is then decomposed into a cluster of binary basis patterns based on the principle of decimal-to-binary conversion. These binary patterns are used to illuminate the imaged object. The resulting detected light intensities are multiplied by the corresponding weighted decomposition coefficients and summed, and the results are used to generate the Fourier spectrum of the imaged object. Finally, an inverse Fourier transform is applied to the Fourier spectrum to obtain the object image. The proposed technique is verified by computational simulation and laboratory experiments. Both static and dynamic imaging experiments are carried out to demonstrate the proposed strategy. Dynamic scenes of 128 × 128 pixels are captured at a speed of ~9 frames per second under a 22 kHz DMD projection rate. The reported technique accelerates the imaging speed of DMD-based FSI and provides an alternative approach to improving FSI efficiency.

© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Single-pixel imaging (SPI) [1–37], also referred to as ghost imaging (GI) or correlated imaging, is based on sampling an object with a sequence of structured light patterns generated by a programmable spatial light modulator (SLM), while the light intensity is measured by a single-pixel detector. The detected signals, combined with knowledge of the illumination patterns, enable computational reconstruction of an image using various algorithms. The single-pixel detector used in SPI is inexpensive compared with planar array detectors such as charge-coupled devices (CCDs), especially in spectral ranges for which array-based sensors are costly or even unavailable. SPI can image at nonvisible wavelengths and under low-light conditions and thus holds great potential for low-cost and compact imaging. To date, SPI has achieved success in specialized imaging and various applications, such as microscopy [6, 7], fluorescence imaging [8–10] and gas imaging [11]. However, SPI suffers from limitations associated with low imaging quality and efficiency. To push SPI towards practical applications, researchers have made considerable efforts to resolve these issues through various methods, such as differential ghost imaging (DGI) [12], normalized ghost imaging (NGI) [13], high-order ghost imaging (HGI) [14], and compressed sampling ghost imaging (CGI) [15]. In addition, the illumination pattern model has an important influence on imaging quality and efficiency. Random patterns have been widely employed for SPI from the outset; however, such patterns require a large number of measurements and a long data-acquisition time for recording signals [16]. Recently, deterministic orthogonal basis patterns have been proposed to greatly improve imaging quality and efficiency. Hadamard single-pixel imaging (HSI) [11, 17–20] and Fourier single-pixel imaging (FSI) [21–25] are two representative SPI techniques of this kind. HSI uses Hadamard basis patterns for illumination.
It thereby acquires the Hadamard spectrum of the object image and reconstructs the image by applying an inverse Hadamard transform. In contrast, FSI uses grayscale Fourier basis patterns for illumination and acquires the Fourier spectrum of the object image, which is then reconstructed by applying an inverse Fourier transform. Zhang et al. [26] compared the performance of HSI and FSI through theoretical analysis and experiments. Their results show that FSI is more efficient than HSI, while HSI is more noise-robust than FSI.

In SPI, the programmable SLM is a crucial device for generating structured light patterns. Generally, three types of SLMs are available: the transmissive liquid crystal (LC), liquid crystal on silicon (LCoS) and the digital micro-mirror device (DMD). Compared with LC and LCoS, the DMD offers higher reflectivity, higher light throughput and higher frame rates [38]. The latest DMDs can generate ~22,000 binary patterns per second, which is much faster than LC and LCoS devices. Therefore, the DMD is the most popular SLM in SPI systems. Because of the binary nature of the Hadamard basis patterns, HSI is fully compatible with the DMD and is thus perfectly suited to DMD-based SPI systems. However, since a DMD generates grayscale patterns by temporal dithering and operates much more slowly in grayscale mode, a DMD-based FSI system is inherently slow and needs a longer data-acquisition time than HSI. The imaging efficiency of a DMD-based FSI system could therefore be improved if the grayscale patterns were equivalently transformed into corresponding binary patterns: a DMD-based FSI system with binary illumination would achieve high imaging efficiency by exploiting the higher binary modulation rate of the DMD. Following this type of approach, Zhang et al. [27] presented a spatial dithering strategy to increase the speed of FSI by two orders of magnitude. In their strategy, the Fourier basis patterns are binarized through upsampling and error-diffusion dithering, and 256 × 256-pixel dynamic scenes were captured at 10 frames per second under a 20 kHz DMD projection rate. That technique accelerates the image acquisition speed of FSI at the expense of imaging spatial resolution.

Here, we present a novel technique to improve the imaging efficiency of FSI based on computational weighting of the detected signals under binary pattern illumination by a DMD. Such a scheme can be termed signal dithering. Compared with the spatial dithering method in [27], the proposed technique does not sacrifice spatial resolution, and it increases the speed of DMD-based FSI by one order of magnitude compared with the temporal dithering method. Technically, each original grayscale Fourier basis illumination pattern is split into a pair of grayscale patterns, which are then decomposed into a cluster of binary illumination patterns according to the principle of decimal-to-binary conversion [39]. A single-pixel detector records the reflected light intensities arising from the interaction between the imaged object and the binary illumination patterns. The recorded intensities are multiplied by the weighted decomposition coefficients and then summed, and the Fourier spectrum of the imaged object is constructed from these sums. Finally, an inverse Fourier transform is applied to the Fourier spectrum to recover the object image. The remainder of this paper is organized as follows. In Section 2, we introduce the imaging principles and their derivation. In Section 3, quantitative simulations and experiments are carried out to evaluate the proposed method. In Section 4, the conclusions of this work are summarized.

2. Principles and methods

The proposed computational-weighted FSI technique is based on the theory of the Fourier transform. The discrete 2-D Fourier transform indicates that an M × N digital image f(x,y) in the 2-D spatial domain can be expressed as a sum of M × N 2-D sinusoidal structured light patterns weighted by the corresponding Fourier coefficients, which can be written as follows:

f(x,y)=\frac{1}{MN}\sum_{u=0}^{M-1}\sum_{v=0}^{N-1}F(u,v)\left[\cos\left(\frac{2\pi xu}{M}+\frac{2\pi yv}{N}\right)+j\sin\left(\frac{2\pi xu}{M}+\frac{2\pi yv}{N}\right)\right], \quad (1)

where (x,y) represents the 2-D Cartesian coordinate, (u, v) is the spatial frequency, and F(u,v) is the Fourier coefficient. According to the above equation, the object image can be reconstructed if the Fourier coefficient of each corresponding frequency is known. The Fourier coefficient F(u,v) can be obtained by the corresponding transform as follows:

F(u,v)=\sum_{x=0}^{M-1}\sum_{y=0}^{N-1}f(x,y)\left[\cos\left(\frac{2\pi xu}{M}+\frac{2\pi yv}{N}\right)-j\sin\left(\frac{2\pi xu}{M}+\frac{2\pi yv}{N}\right)\right]. \quad (2)

Thus, when the sinusoidal structured illumination patterns (referred to as Fourier basis patterns) are utilized in an SPI system, a single-pixel detector acquires the corresponding light intensities, from which the Fourier coefficients can be constructed by a combined operation. The detected light intensity Iϕ(u,v) can be written as follows:

I_{\phi}(u,v)=\sum_{x,y}P_{\phi}(x,y;u,v)\,f(x,y), \quad (3)

where the parameter Pϕ(x,y;u,v) is the illumination pattern and can be expressed as follows:

P_{\phi}(x,y;u,v)=(2^{R}-1)\cos\left(\frac{2\pi xu}{M}+\frac{2\pi yv}{N}+\phi\right),\quad u=0,1,\dots,M-1;\; v=0,1,\dots,N-1. \quad (4)

Here, the parameter R represents the quantization level, and ϕ is the initial phase of the illumination patterns. When the phase is ϕ = 0, the intensities produced by the grayscale Fourier basis illumination patterns Pϕ=0(x,y;u,v) interacting with the imaged object correspond to the real components of the Fourier spectrum of the imaged object; when the phase is ϕ = π/2, the intensities produced by the patterns Pϕ=π/2(x,y;u,v) interacting with the imaged object correspond to the imaginary components of the Fourier spectrum.
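For illustration, the pattern definition above (Eq. (4)) can be realized in a few lines of Python/NumPy. This is a minimal sketch, not the code used in our experiments; the function name and array layout are our own:

```python
import numpy as np

def fourier_basis_pattern(M, N, u, v, phi, R=8):
    """Grayscale Fourier basis pattern of Eq. (4):
    P_phi(x, y; u, v) = (2^R - 1) * cos(2*pi*x*u/M + 2*pi*y*v/N + phi).
    Values span [-(2^R - 1), +(2^R - 1)]."""
    x = np.arange(M).reshape(-1, 1)  # spatial coordinate x = 0 .. M-1
    y = np.arange(N).reshape(1, -1)  # spatial coordinate y = 0 .. N-1
    return (2**R - 1) * np.cos(2 * np.pi * (x * u / M + y * v / N) + phi)

# A 128 x 128 pattern at spatial frequency (u, v) = (1, 2) with phi = 0
P = fourier_basis_pattern(128, 128, 1, 2, 0.0)
```

With phi = 0 the pattern peaks at (x, y) = (0, 0), where it takes the full amplitude 2^R − 1 (255 for R = 8).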

A state-of-the-art DMD can generate ~22,000 binary patterns per second. However, when the DMD is used to modulate 8-bit grayscale patterns (such as Fourier basis patterns), the modulation rate drops to ~250 Hz. The reason for this decrease is that each grayscale pattern is decomposed into binary patterns, which are displayed for weighted durations. For example, a grayscale pattern with 256 gray levels (an 8-bit image) is decomposed into 8 binary patterns, as shown in Fig. 1, and the i-th binary pattern illuminates for the weighted time 2^(i−1)·T, where T is the basis time for one binary illumination pattern. This illumination process, termed temporal dithering [27], can be expressed as follows:


Fig. 1 Traditional method used for generating grayscale Fourier basis illumination patterns by temporal dithering. B0, B1, ..., B7 are successive binary illumination patterns decomposed from the grayscale Fourier pattern. T is the basis illumination time.


P_{\phi}(x,y;u_m,v_n)=\sum_{i=1}^{R}2^{(i-1)}T\,B_{\phi,i}(x,y;u_m,v_n), \quad (5)

where Bϕ,i(x,y;um,vn) are the binary patterns decomposed from the corresponding grayscale Fourier basis pattern. Using the temporal dithering method to create grayscale patterns requires considerable time to illuminate the imaged target, which results in low imaging efficiency for DMD-based FSI. In [27], Zhang et al. proposed a spatial dithering strategy to transfer grayscale illumination to binary illumination and improve the imaging efficiency of DMD-based FSI; however, that strategy reduces the imaging spatial resolution.
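The bit-plane decomposition underlying Eq. (5) can be sketched as follows (Python/NumPy; an illustrative stand-in, not the authors' implementation):

```python
import numpy as np

def bit_planes(gray, R=8):
    """Decompose a non-negative integer pattern (values 0 .. 2^R - 1) into its
    R binary bit planes B_1 .. B_R, least significant first, so that
    gray = sum_i 2^(i-1) * B_i (the spatial part of Eq. (5))."""
    gray = np.asarray(gray, dtype=np.int64)
    return [(gray >> i) & 1 for i in range(R)]

# Small worked example: recombining the planes recovers the gray levels
g = np.array([[200, 5], [37, 255]])
planes = bit_planes(g)
recombined = sum((2**i) * b for i, b in enumerate(planes))
assert np.array_equal(recombined, g)
```

In temporal dithering the i-th plane is displayed for time 2^(i−1)·T, so the time-averaged illumination reproduces the gray levels.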

When a grayscale Fourier pattern Pϕ(x,y;um,vn) with a given spatial frequency (um, vn) illuminates the imaged object f(x,y), the resulting reflected light Iϕ(um,vn) can be collected by a single-pixel detector. This process can be expressed as follows:

I_{\phi}(u_m,v_n)=\sum_{x,y}P_{\phi}(x,y;u_m,v_n)\,f(x,y). \quad (6)

Substituting Eq. (5) into Eq. (6) yields the following equation:

\begin{aligned}
I_{\phi}(u_m,v_n)&=\sum_{x,y}\left[\sum_{i=1}^{R}2^{(i-1)}T\,B_{\phi,i}(x,y;u_m,v_n)\right]f(x,y)\\
&=\sum_{x,y}\left[2^{0}T\,B_{\phi,1}(x,y;u_m,v_n)+\cdots+2^{(R-1)}T\,B_{\phi,R}(x,y;u_m,v_n)\right]f(x,y)\\
&=2^{0}\sum_{x,y}T\,B_{\phi,1}(x,y;u_m,v_n)f(x,y)+\cdots+2^{(R-1)}\sum_{x,y}T\,B_{\phi,R}(x,y;u_m,v_n)f(x,y)\\
&=2^{0}I_{\phi,1}(u_m,v_n)+\cdots+2^{(R-1)}I_{\phi,R}(u_m,v_n),
\end{aligned} \quad (7)

where the parameters Iϕ,1(um,vn), …, Iϕ,R(um,vn) are the detected intensities corresponding to the binary illumination patterns Bϕ,1(x,y;um,vn), …, Bϕ,R(x,y;um,vn) interacting with the imaged object under the same exposure/illumination time T. Equation (7) shows that the weighted sum of the detected intensities for the binary illumination patterns interacting with the imaged object is equivalent to the intensity for the grayscale Fourier basis illumination pattern interacting with the imaged object. In other words, FSI can be realized indirectly by using binary illumination patterns with a DMD. Owing to the high binary modulation rate of the DMD, this process greatly improves the imaging efficiency of FSI, which is the core idea of this paper.
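Equation (7) can be checked numerically. The sketch below (Python/NumPy, with a random object and a random non-negative pattern as stand-ins) confirms that the weighted sum of the binary-pattern measurements reproduces the grayscale measurement:

```python
import numpy as np

rng = np.random.default_rng(0)
R = 8
f = rng.random((32, 32))                    # imaged object f(x, y)
P = rng.integers(0, 2**R, size=(32, 32))    # a non-negative grayscale pattern
T = 1.0                                     # basis illumination time

# Direct grayscale measurement (Eq. (6), including the time factor T of Eq. (5))
I_gray = T * np.sum(P * f)

# Measurements under the R binary bit planes, each with equal exposure T,
# then the weighted sum of Eq. (7)
I_bits = [T * np.sum(((P >> i) & 1) * f) for i in range(R)]
I_weighted = sum((2**i) * Ii for i, Ii in enumerate(I_bits))

assert np.isclose(I_gray, I_weighted)
```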

Because the negative grayscale values of the original Fourier illumination pattern Pϕ(x,y;u,v) cannot be projected directly by the DMD, we first split the original pattern Pϕ(x,y;u,v) into a pair of grayscale patterns Pϕ+(x,y;u,v) and Pϕ-(x,y;u,v) that fulfill the following relation:

P_{\phi}(x,y;u,v)=P_{\phi}^{+}(x,y;u,v)-P_{\phi}^{-}(x,y;u,v). \quad (8)

For example, Fig. 2 shows a schematic demonstration of a 3 × 3-pixel grayscale pattern being split into a pair of grayscale patterns with R = 6.


Fig. 2 Splitting of original grayscale Fourier patterns into a pair of grayscale patterns.


The splitting strategy is as follows:

For the pattern Pϕ+(x,y;u,v): if the value of the grayscale Fourier pattern Pϕ(x,y;u,v) at spatial coordinate (x, y) is positive, the value of Pϕ+(x,y;u,v) at (x, y) equals that of Pϕ(x,y;u,v); otherwise, the value of Pϕ+(x,y;u,v) at (x, y) is zero.

For the pattern Pϕ-(x,y;u,v): if the value of the grayscale Fourier pattern Pϕ(x,y;u,v) at spatial coordinate (x, y) is negative, the value of Pϕ-(x,y;u,v) at (x, y) is the absolute value of the corresponding entry of Pϕ(x,y;u,v); otherwise, the value of Pϕ-(x,y;u,v) at (x, y) is zero.

The split patterns Pϕ+(x,y;u,v) and Pϕ-(x,y;u,v) are still grayscale and must therefore be further decomposed into binary patterns. This decomposition follows the principle of decimal-to-binary conversion. Figure 3 shows a schematic representation of the grayscale pattern Pϕ+ (shown in Fig. 2) being decomposed into the corresponding binary illumination patterns; in Fig. 3, the number of decomposed binary patterns is 6. The process of decomposing Pϕ- into binary patterns is identical to that for Pϕ+ in Fig. 3.
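The splitting of Eq. (8) and the subsequent decimal-to-binary decomposition can be sketched together in Python/NumPy (a 3 × 3 random pattern serves here as a stand-in for the example of Figs. 2 and 3):

```python
import numpy as np

R = 6
rng = np.random.default_rng(1)
# A 3 x 3 signed grayscale pattern with values in [-(2^R - 1), 2^R - 1]
P = rng.integers(-(2**R - 1), 2**R, size=(3, 3))

# Eq. (8): split into a pair of non-negative patterns, P = P_plus - P_minus
P_plus = np.where(P > 0, P, 0)
P_minus = np.where(P < 0, -P, 0)

# Decimal-to-binary decomposition of each half into R bit planes (Fig. 3)
B_plus = [(P_plus >> i) & 1 for i in range(R)]
B_minus = [(P_minus >> i) & 1 for i in range(R)]

# Weighted recombination recovers the original signed pattern
recon = sum((2**i) * (bp - bm) for i, (bp, bm) in enumerate(zip(B_plus, B_minus)))
assert np.array_equal(recon, P)
```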


Fig. 3 Decomposition of the grayscale pattern into binary illumination patterns.


Figure 4 shows the entire decomposition process. As shown in Fig. 4, Pϕ+(x,y;u,v) and Pϕ-(x,y;u,v) are the pair of grayscale patterns split from the original Fourier pattern Pϕ(x,y;u,v), and Bϕ+,i(x,y;u,v) and Bϕ-,i(x,y;u,v) are the R binary patterns decomposed from Pϕ+(x,y;u,v) and Pϕ-(x,y;u,v), respectively. When the binary patterns Bϕ+,i(x,y;u,v) successively illuminate the imaged object, the corresponding light intensities are detected by a single-pixel detector and then summed with weights given by powers of 2. The sum, termed Iϕ+(um,vn), can be expressed as follows:


Fig. 4 Entire process for decomposing the grayscale Fourier patterns to binary patterns. OGP: Original grayscale pattern; DGP: Decomposed grayscale patterns; and BIP: Binary illumination patterns.


I_{\phi}^{+}(u_m,v_n)=2^{0}I_{\phi+,1}(u_m,v_n)+2^{1}I_{\phi+,2}(u_m,v_n)+\cdots+2^{(R-1)}I_{\phi+,R}(u_m,v_n). \quad (9)

The parameters Iϕ+,1(um,vn), Iϕ+,2(um,vn), …, Iϕ+,R(um,vn) are the reflected light intensities corresponding to the binary illumination patterns Bϕ+,1(x,y;um,vn), Bϕ+,2(x,y;um,vn), …, Bϕ+,R(x,y;um,vn) interacting with the imaged object, respectively. Using the same operations, we obtain the intensity Iϕ-(um,vn):

I_{\phi}^{-}(u_m,v_n)=2^{0}I_{\phi-,1}(u_m,v_n)+2^{1}I_{\phi-,2}(u_m,v_n)+\cdots+2^{(R-1)}I_{\phi-,R}(u_m,v_n). \quad (10)

Similarly, the parameters Iϕ-,1(um,vn), Iϕ-,2(um,vn), …, Iϕ-,R(um,vn) are the corresponding reflected light intensities for the binary illumination patterns Bϕ-,1(x,y;um,vn), Bϕ-,2(x,y;um,vn), …, Bϕ-,R(x,y;um,vn) interacting with the imaged object. For phase ϕ = 0, the real component of the Fourier coefficient of the imaged object at a given spatial frequency (um, vn) can be expressed as:

D_{0}(u_m,v_n)=I_{0}^{+}(u_m,v_n)-I_{0}^{-}(u_m,v_n); \quad (11)

for phase ϕ = π/2, a similar operation yields the imaginary component of the Fourier coefficient at a given spatial frequency (um, vn):

D_{\pi/2}(u_m,v_n)=I_{\pi/2}^{+}(u_m,v_n)-I_{\pi/2}^{-}(u_m,v_n). \quad (12)

Owing to the differential operation, our method can also suppress background light. Once the detected light intensities at all spatial frequencies have been recorded, the entire Fourier spectrum is obtained. Finally, by applying an inverse Fourier transform to the complex-valued Fourier spectrum, the object image can be reconstructed as follows:

f(x,y)=\mathcal{F}^{-1}\left[D_{0}(u_m,v_n)+jD_{\pi/2}(u_m,v_n)\right], \quad (13)

where F−1 denotes the inverse Fourier transform operator. Because the Fourier spectrum of any real-valued image is conjugate symmetric, fully sampling an M × N-pixel image using our method takes 2 × M × N × R measurements and 2 × M × N × R binary illumination patterns.
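The complete measurement-and-reconstruction chain of Eqs. (4)–(13) can be simulated end to end. The sketch below (Python/NumPy; our own illustrative code, using a small 16 × 16 object, full sampling, and no conjugate-symmetry shortcut) recovers the object from purely binary "measurements":

```python
import numpy as np

def acquire_spectrum(f, R=8):
    """Simulate computational-weighted FSI: at every spatial frequency,
    'measure' the object with the 4R binary patterns derived from the
    phi = 0 and phi = pi/2 Fourier basis patterns, and assemble the
    complex spectrum D_0 + j*D_{pi/2} of Eqs. (9)-(12)."""
    M, N = f.shape
    A = 2**R - 1
    x = np.arange(M).reshape(-1, 1)
    y = np.arange(N).reshape(1, -1)
    D = np.zeros((M, N), dtype=complex)
    for u in range(M):
        for v in range(N):
            theta = 2 * np.pi * (x * u / M + y * v / N)
            for phi, unit in ((0.0, 1.0), (np.pi / 2, 1j)):
                # Quantized grayscale pattern of Eq. (4), split as in Eq. (8)
                P = np.rint(A * np.cos(theta + phi)).astype(np.int64)
                P_plus = np.where(P > 0, P, 0)
                P_minus = np.where(P < 0, -P, 0)
                # Weighted sums of the binary measurements, Eqs. (9)-(10)
                I_pos = sum((2**i) * np.sum(((P_plus >> i) & 1) * f) for i in range(R))
                I_neg = sum((2**i) * np.sum(((P_minus >> i) & 1) * f) for i in range(R))
                D[u, v] += unit * (I_pos - I_neg)   # Eqs. (11)-(12)
    return D

rng = np.random.default_rng(2)
f = rng.random((16, 16))
D = acquire_spectrum(f)
# Eq. (13): inverse Fourier transform; divide out the pattern amplitude 2^R - 1
f_rec = np.real(np.fft.ifft2(D)) / (2**8 - 1)
assert np.max(np.abs(f_rec - f)) < 0.05   # small residual from quantizing the patterns
```

Because cos(θ + π/2) = −sin θ, the assembled D equals (2^R − 1) times the discrete Fourier transform of f up to quantization error, so `ifft2` recovers the object directly.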

In our method, the recorded signals are multiplied by the weighted decomposition coefficients and then used to construct the Fourier spectrum of the imaged object; therefore, by analogy with the temporal dithering and spatial dithering methods, our proposed method can be termed signal dithering. If the binary modulation rate of the DMD is 20 kHz, the equivalent modulation rate in our method for an 8-bit grayscale pattern is 2500 Hz, which is 10 times higher than that of temporal dithering. A comparison between our proposed signal dithering method and the two other methods is given in Table 1. The results show that our strategy significantly reduces the illumination time compared with the temporal dithering method and does not sacrifice imaging spatial resolution compared with the spatial dithering method.


Table 1. Comparison among three methods

3. Experiments

3.1 Computational simulations

First, we use four different images (Mandril, Baboon, Cameraman and Peppers) to evaluate the proposed strategy in computational simulations with the quantization level R ranging from 2 to 8. Accordingly, the pixel values of the original grayscale Fourier basis patterns range from [−3, +3] to [−255, +255]. With different quantization levels R, the original grayscale Fourier basis patterns are decomposed into different numbers of binary illumination patterns, which results in different reconstruction errors for the imaging system. For a given spatial frequency (um, vn), the original grayscale Fourier basis patterns are decomposed into 4 binary illumination patterns at R = 2 or 16 binary illumination patterns at R = 8. The simulation results are shown in Fig. 5. The quality of the reconstructed images improves as R increases. The root mean squared error (RMSE) is used to further quantitatively evaluate the reconstruction quality for different R values; it is calculated by Eq. (14):

\mathrm{RMSE}=\sqrt{\frac{\sum_{x=1}^{M}\sum_{y=1}^{N}\left[f_r(x,y)-f_o(x,y)\right]^{2}}{M\times N}}, \quad (14)
where fr(x,y) and fo(x,y) are the values of the (x, y)-th pixel in the reconstructed and original images, respectively, and M and N are the dimensions of the image. All images are normalized to unity. The smaller the RMSE, the better the reconstruction quality. In addition, the illumination time is a function of the quantization level: different quantization levels imply different numbers of binary illumination patterns and hence different measurement times. The RMSEs and illumination times under different quantization levels are shown in Fig. 6, which indicates that the RMSEs differ among the four objects. The illumination time increases linearly with the quantization level. However, the trends of the RMSEs for the four objects are nearly identical: the RMSEs decrease as the quantization level R increases. Moreover, when R < 6, the RMSE declines steeply, whereas when R ≥ 6, the RMSEs are nearly constant and the quality of the reconstructed images is not obviously improved. These simulation results guide our subsequent experiments.
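Eq. (14) translates directly to code. A minimal sketch (Python/NumPy; the normalization-to-unity step reflects the statement above that all images are normalized):

```python
import numpy as np

def rmse(f_r, f_o):
    """Eq. (14): RMSE between reconstructed image f_r and original f_o,
    with both images first normalized to unity."""
    f_r = np.asarray(f_r, dtype=float)
    f_o = np.asarray(f_o, dtype=float)
    f_r = f_r / f_r.max()
    f_o = f_o / f_o.max()
    M, N = f_o.shape
    return np.sqrt(np.sum((f_r - f_o) ** 2) / (M * N))

# Identical images give zero error
a = np.array([[1.0, 0.5], [0.25, 1.0]])
assert rmse(a, a) == 0.0
```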


Fig. 5 Reconstructed Fourier spectra and images for four different objects with the quantization parameter R ranging from 2 to 8. The resolution of the reconstructed images is 128 × 128 pixels. FS: Fourier spectrum (the absolute value of the Fourier spectrum on a logarithmic scale), RI: Reconstructed image.



Fig. 6 RMSEs and illumination time under different quantization levels.


3.2 Laboratory experiments

The experimental setup is shown in Fig. 7. The setup consists of an illumination system, a detection system and the imaged object. The illumination system consists of a Texas Instruments DLP Discovery 4100 development kit, a lens system, and a white LED. The DLP development kit is equipped with a 0.7-inch DMD containing 1024 × 768 micromirrors; the maximum binary modulation rate of the DMD is up to 22.7 kHz, and the size of each micromirror is 13.6 × 13.6 μm². The detection system consists of a photodetector (Thorlabs PMT PMM02, active area of 25 mm²) employed as the single-pixel detector, a collecting lens, a data acquisition board (NI DAQ USB-6211), and a computer.


Fig. 7 Experimental setup.


We present two experiments to verify the proposed strategy. The first is a static imaging experiment. The imaged objects consist of a fabric toy and an enlarged 1951 USAF resolution test pattern printed on a piece of white paper. First, the original Fourier basis patterns with a spatial resolution of 128 × 128 pixels are generated by a MATLAB program. The Fourier basis patterns are then decomposed into a cluster of binary illumination patterns (128 × 128 pixels), which are loaded into the DMD to modulate the white LED light. According to the analysis of the above simulations, the illumination and measurement times increase linearly with the quantization level. Thus, for a fixed DMD modulation rate, increasing the quantization level reduces the imaging speed/efficiency. Here, a quantization level of R = 6 is used as a tradeoff between imaging efficiency and reconstruction quality. For a given spatial frequency (um, vn), the total number of binary illumination patterns is 12. Since the Fourier spectrum of a real-valued signal is conjugate symmetric, half of the coefficients are redundant. Thus, in this case, the number of measurements for 100% coverage of the Fourier spectrum is 2 × 128 × 128 × 6 = 196608. Because the Fourier spectrum is acquired in order from lower to higher frequencies, the quality of the reconstructed images converges rapidly. Figure 8 shows the sampled Fourier spectrum and the corresponding reconstructed images for spectrum coverage from 5% to 100%. As the figures show, the proposed technique produces clear images. Note that we show the absolute value of the Fourier spectrum on a logarithmic scale to increase its visibility. The images are reconstructed using the 2-D inverse discrete Fourier transform (IDFT) algorithm. No postprocessing was applied to the reconstructed images.


Fig. 8 Static imaging with spectrum coverage from 5% to 100% and the corresponding reconstructed images. (A–D): Density distributions of the Fourier spectrum acquired at 5%, 15%, 25%, and 100% coverage, with the corresponding reconstructed images. Scale bar: 1 cm.


The second experiment is a dynamic imaging experiment in which the imaged object is a moving hand. For imaging dynamic scenes, we set the quantization parameter to R = 2 to improve the imaging speed. For a given spatial frequency (um, vn), the total number of binary illumination patterns is 4. The resolution of the image is 128 × 128 pixels. In general, most of the information in a natural scene is concentrated in the low-frequency components. Thus, we acquire only 313 coefficients in the low-frequency range of the Fourier domain to recover the images. Each image takes 2504 measurements, which enables the capture of ~9 images per second at the 22 kHz modulation rate of the DMD. The sampling ratio is ~2% (= 313/(128 × 128)). The results of the dynamic imaging are shown in Fig. 9. We capture 35 images and reconstruct a video of ~4 seconds (see Visualization 1). The video is created from the sequence of reconstructed images, with the playback frame rate set to 9 frames per second. No postprocessing was applied to the reconstructed images or the video. With low-frequency components and a relatively low quantization level, our method can thus also image dynamic scenes.
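The measurement budget quoted above can be checked with a few lines (the coefficient count, quantization level, and 22 kHz rate are the values stated in the text):

```python
# Per Fourier coefficient: 2 phases x 2 split patterns x R bit planes
coeffs = 313                      # low-frequency coefficients acquired per frame
R = 2                             # quantization level for the dynamic experiment
patterns_per_coeff = 2 * 2 * R
measurements = coeffs * patterns_per_coeff
assert measurements == 2504

dmd_rate = 22_000                 # binary patterns per second
fps = dmd_rate / measurements
assert 8.7 < fps < 8.9            # ~9 frames per second

sampling_ratio = coeffs / (128 * 128)
assert 0.018 < sampling_ratio < 0.02   # ~2%
```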


Fig. 9 Dynamic imaging results. (a–h): Eight of the 35 reconstructed images. (See Visualization 1 for the complete video.)


4. Discussion and conclusions

We have presented a novel technique to improve the imaging efficiency of DMD-based FSI using a computational weighting method via binary pattern illumination. Technically, the grayscale Fourier basis illumination patterns are replaced by a cluster of binary illumination patterns for projection onto the imaged objects, which takes advantage of the high binary modulation rate of the DMD. The light intensities produced by the binary illumination patterns interacting with the imaged object are multiplied by the weighted decomposition coefficients and then summed, which is equivalent to the light intensities of the grayscale Fourier basis illumination patterns interacting with the imaged object. This strategy overcomes the low grayscale pattern modulation rate of the DMD-based FSI system. The simulation and experimental results validate the present approach. Our method involves a tradeoff between imaging speed and reconstruction quality through the choice of quantization level. For DMD-based FSI, the proposed method does not limit the imaging spatial resolution, unlike the spatial dithering method, and it overcomes the low illumination efficiency of the temporal dithering method. Furthermore, the use of the three-step phase-shifting algorithm [26, 27] could offer the advantage of 25% fewer measurements, thereby further reducing the imaging time. The proposed computational-weighted DMD-based FSI scheme provides an optional imaging solution for 3-D imaging, live microscopy and many other applications. In short, the reported technique accelerates the imaging speed of DMD-based FSI and provides an alternative approach to improving FSI efficiency.

Funding

National Natural Science Foundation of China (Nos. 11404344, 41505019, and 41475001), the CAS Innovation Fund Project (No. CXJJ-17S029) and the Open Research Fund of Key Laboratory of Optical Engineering, Chinese Academy of Sciences (No. 2017LBC007).

References and links

1. T. B. Pittman, Y. H. Shih, D. V. Strekalov, and A. V. Sergienko, “Optical Imaging by Means of two-Photon Quantum Entanglement,” Phys. Rev. A 52(5), R3429–R3432 (1995). [CrossRef]   [PubMed]  

2. A. Valencia, G. Scarcelli, M. D’Angelo, and Y. Shih, “Two-photon imaging with thermal light,” Phys. Rev. Lett. 94(6), 063601 (2005). [CrossRef]   [PubMed]  

3. A. Gatti, E. Brambilla, M. Bache, and L. A. Lugiato, “Ghost imaging with thermal light: Comparing entanglement and classical correlation,” Phys. Rev. Lett. 93(9), 093602 (2004). [CrossRef]   [PubMed]  

4. J. H. Shapiro, “Computational ghost imaging,” Phys. Rev. A 78(6), 061802 (2008). [CrossRef]  

5. S. M. M. Khamoushi, Y. Nosrati, and S. H. Tavassoli, “Sinusoidal ghost imaging,” Opt. Lett. 40(15), 3452–3455 (2015). [CrossRef]   [PubMed]  

6. R. S. Aspden, N. R. Gemmell, P. A. Morris, D. S. Tasca, L. Mertens, M. G. Tanner, R. A. Kirkwood, A. Ruggeri, A. Tosi, R. W. Boyd, G. S. Buller, R. H. Hadfield, and M. J. Padgett, “Photon-sparse microscopy: visible light imaging using infrared illumination,” Optica 2(12), 1049–1052 (2015). [CrossRef]  

7. N. Radwell, K. J. Mitchell, G. M. Gibson, M. P. Edgar, R. Bowman, and M. J. Padgett, “Single-pixel infrared and visible microscope,” Optica 1(5), 285–289 (2014). [CrossRef]  

8. M. Ploschner, T. Tyc, and T. Cizmar, “Seeing through chaos in multimode fibres,” Nat. Photonics 9(8), 529–535 (2015). [CrossRef]  

9. T. Cižmár and K. Dholakia, “Exploiting multimode waveguides for pure fibre-based imaging,” Nat. Commun. 3(1), 1027 (2012). [CrossRef]   [PubMed]  

10. N. Tian, Q. Guo, A. Wang, D. Xu, and L. Fu, “Fluorescence ghost imaging with pseudothermal light,” Opt. Lett. 36(16), 3302–3304 (2011). [CrossRef]   [PubMed]  

11. G. M. Gibson, B. Sun, M. P. Edgar, D. B. Phillips, N. Hempler, G. T. Maker, G. P. A. Malcolm, and M. J. Padgett, “Real-time imaging of methane gas leaks using a single-pixel camera,” Opt. Express 25(4), 2998–3005 (2017). [CrossRef]   [PubMed]  

12. F. Ferri, D. Magatti, L. A. Lugiato, and A. Gatti, “Differential Ghost Imaging,” Phys. Rev. Lett. 104(25), 253603 (2010). [CrossRef]   [PubMed]  

13. B. Q. Sun, S. S. Welsh, M. P. Edgar, J. H. Shapiro, and M. J. Padgett, “Normalized ghost imaging,” Opt. Express 20(15), 16892–16901 (2012). [CrossRef]  

14. K. W. C. Chan, M. N. O’Sullivan, and R. W. Boyd, “High-order thermal ghost imaging,” Opt. Lett. 34(21), 3343–3345 (2009). [CrossRef]   [PubMed]  

15. O. Katz, Y. Bromberg, and Y. Silberberg, “Compressive ghost imaging,” Appl. Phys. Lett. 95(13), 131110 (2009). [CrossRef]  

16. B. Sun, M. P. Edgar, R. Bowman, L. E. Vittert, S. Welsh, A. Bowman, and M. J. Padgett, “3D Computational Imaging with Single-Pixel Detectors,” Science 340(6134), 844–847 (2013). [CrossRef]   [PubMed]  

17. M. J. Sun, M. P. Edgar, G. M. Gibson, B. Sun, N. Radwell, R. Lamb, and M. J. Padgett, “Single-pixel three-dimensional imaging with time-based depth resolution,” Nat. Commun. 7, 12010 (2016). [CrossRef]   [PubMed]  

18. T. Vasile, V. Damian, D. Coltuc, and M. Petrovici, “Single pixel sensing for THz laser beam profiler based on Hadamard Transform,” Opt. Laser Technol. 79, 173–178 (2016). [CrossRef]  

19. M. J. Sun, M. P. Edgar, D. B. Phillips, G. M. Gibson, and M. J. Padgett, “Improving the signal-to-noise ratio of single-pixel imaging using digital microscanning,” Opt. Express 24(10), 10476–10485 (2016). [CrossRef]   [PubMed]  

20. M. J. Sun, L. T. Meng, M. P. Edgar, M. J. Padgett, and N. Radwell, “A Russian Dolls ordering of the Hadamard basis for compressive single-pixel imaging,” Sci. Rep. 7(1), 3464 (2017). [CrossRef]   [PubMed]  

21. Z. Zhang, S. Liu, J. Peng, M. Yao, G. Zheng, and J. Zhong, “Simultaneous spatial, spectral, and 3D compressive imaging via efficient Fourier single-pixel measurements,” Optica 5(3), 315–319 (2018). [CrossRef]  

22. Z. Zhang, X. Ma, and J. Zhong, “Single-pixel imaging by means of Fourier spectrum acquisition,” Nat. Commun. 6(1), 6225 (2015). [CrossRef]   [PubMed]  

23. B. Xu, H. Jiang, H. Zhao, X. Li, and S. Zhu, “Projector-defocusing rectification for Fourier single-pixel imaging,” Opt. Express 26(4), 5005–5017 (2018). [CrossRef]   [PubMed]  

24. H. Jiang, S. Zhu, H. Zhao, B. Xu, and X. Li, “Adaptive regional single-pixel imaging based on the Fourier slice theorem,” Opt. Express 25(13), 15118–15130 (2017). [CrossRef]   [PubMed]  

25. H. Ren, S. Zhao, and J. Gruska, “Edge detection based on single-pixel imaging,” Opt. Express 26(5), 5501–5511 (2018). [CrossRef]   [PubMed]  

26. Z. Zhang, X. Wang, G. Zheng, and J. Zhong, “Hadamard single-pixel imaging versus Fourier single-pixel imaging,” Opt. Express 25(16), 19619–19639 (2017). [CrossRef]   [PubMed]  

27. Z. Zhang, X. Wang, G. Zheng, and J. Zhong, “Fast Fourier single-pixel imaging via binary illumination,” Sci. Rep. 7(1), 12029 (2017). [CrossRef]   [PubMed]  

28. D. G. Winters and R. A. Bartels, “Two-dimensional single-pixel imaging by cascaded orthogonal line spatial modulation,” Opt. Lett. 40(12), 2774–2777 (2015). [CrossRef]   [PubMed]  

29. F. Devaux, P. A. Moreau, S. Denis, and E. Lantz, “Computational temporal ghost imaging,” Optica 3(7), 698–701 (2016). [CrossRef]  

30. N. Radwell, K. J. Mitchell, G. M. Gibson, M. P. Edgar, R. Bowman, and M. J. Padgett, “Single-pixel infrared and visible microscope,” Optica 1(5), 285–289 (2014). [CrossRef]  

31. M. P. Edgar, G. M. Gibson, R. W. Bowman, B. Sun, N. Radwell, K. J. Mitchell, S. S. Welsh, and M. J. Padgett, “Simultaneous real-time visible and infrared video with single-pixel detectors,” Sci. Rep. 5(1), 10669 (2015). [CrossRef]   [PubMed]  

32. N. Huynh, E. Zhang, M. Betcke, S. Arridge, P. Beard, and B. Cox, “Single-pixel optical camera for video rate ultrasonic imaging,” Optica 3(1), 26–29 (2016). [CrossRef]  

33. B. Lochocki, A. Gambin, S. Manzanera, E. Irles, E. Tajahuerce, J. Lancis, and P. Artal, “Single pixel camera ophthalmoscope,” Optica 3(10), 1056–1059 (2016). [CrossRef]  

34. S. S. Welsh, M. P. Edgar, R. Bowman, P. Jonathan, B. Sun, and M. J. Padgett, “Fast full-color computational imaging with single-pixel detectors,” Opt. Express 21(20), 23068–23074 (2013). [CrossRef]   [PubMed]  

35. J. Huang and D. F. Shi, “Multispectral computational ghost imaging with multiplexed illumination,” J. Opt. 19(7), 075701 (2017). [CrossRef]  

36. D. F. Shi, J. M. Zhang, J. Huang, Y. J. Wang, K. Yuan, K. F. Cao, C. B. Xie, D. Liu, and W. Y. Zhu, “Polarization-multiplexing ghost imaging,” Opt. Lasers Eng. 102, 100–105 (2018). [CrossRef]  

37. D. Shi, S. Hu, and Y. Wang, “Polarimetric ghost imaging,” Opt. Lett. 39(5), 1231–1234 (2014). [CrossRef]   [PubMed]  

38. D. Liu, J. Gu, Y. Hitomi, M. Gupta, T. Mitsunaga, and S. K. Nayar, “Efficient Space-Time Sampling with Pixel-Wise Coded Exposure for High-Speed Imaging,” IEEE Trans. Pattern Anal. Mach. Intell. 36(2), 248–260 (2014). [CrossRef]   [PubMed]  

39. J. Zhu, P. Zhou, X. Su, and Z. You, “Accurate and fast 3D surface measurement with temporal-spatial binary encoding structured illumination,” Opt. Express 24(25), 28549–28560 (2016). [CrossRef]   [PubMed]  

Supplementary Material (2)

Name            Description
Visualization 1 Dynamic imaging of a moving hand



Figures (9)

Fig. 1
Fig. 1 Traditional method for generating grayscale Fourier basis illumination patterns by temporal dithering. B0, B1, ..., B7 are the successive binary illumination patterns decomposed from the grayscale Fourier pattern. T is the basis illumination time.
Fig. 2
Fig. 2 Splitting of original grayscale Fourier patterns into a pair of grayscale patterns.
Fig. 3
Fig. 3 Decomposition of the grayscale pattern into binary illumination patterns.
Fig. 4
Fig. 4 Entire process for decomposing the grayscale Fourier patterns to binary patterns. OGP: Original grayscale pattern; DGP: Decomposed grayscale patterns; and BIP: Binary illumination patterns.
Fig. 5
Fig. 5 Reconstructed Fourier spectra and images for four different objects with the quantization parameter R ranging from 2 to 8. The resolution of the reconstructed images is 128 × 128 pixels. FS: Fourier spectrum (absolute value on a logarithmic scale); RI: reconstructed image.
Fig. 6
Fig. 6 RMSEs and illumination time under different quantization levels.
Fig. 7
Fig. 7 Experimental setup.
Fig. 8
Fig. 8 Static imaging with spectrum coverage from 5% to 100% and the corresponding reconstructed images. (A)–(D): Density distributions of the Fourier spectrum acquired at 5%, 15%, 25%, and 100% coverage, respectively, with the corresponding reconstructed images. Scale bar: 1 cm.
Fig. 9
Fig. 9 Dynamic imaging results. (a)–(h): Eight of the 35 reconstructed images (see Visualization 1 for the complete video).

Tables (1)


Table 1 Comparison among three methods

Equations (14)


$$f(x,y)=\frac{1}{MN}\sum_{u=0}^{M-1}\sum_{v=0}^{N-1}F(u,v)\left[\cos\!\left(\frac{2\pi xu}{M}+\frac{2\pi yv}{N}\right)+j\sin\!\left(\frac{2\pi xu}{M}+\frac{2\pi yv}{N}\right)\right], \tag{1}$$

$$F(u,v)=\sum_{x=0}^{M-1}\sum_{y=0}^{N-1}f(x,y)\left[\cos\!\left(\frac{2\pi xu}{M}+\frac{2\pi yv}{N}\right)-j\sin\!\left(\frac{2\pi xu}{M}+\frac{2\pi yv}{N}\right)\right]. \tag{2}$$

$$I_{\phi}(u,v)=\sum_{x,y}P_{\phi}(x,y;u,v)\,f(x,y), \tag{3}$$

$$P_{\phi}(x,y;u,v)=(2^{R}-1)\cos\!\left(\frac{2\pi xu}{M}+\frac{2\pi yv}{N}+\phi\right),\quad u=0,1,\dots,M-1;\ v=0,1,\dots,N-1. \tag{4}$$

$$P_{\phi}(x,y;u_m,v_n)=\sum_{i=1}^{R}2^{(i-1)}T\,B_{\phi,i}(x,y;u_m,v_n). \tag{5}$$

$$I_{\phi}(u_m,v_n)=\sum_{x,y}P_{\phi}(x,y;u_m,v_n)\,f(x,y). \tag{6}$$

$$\begin{aligned}
I_{\phi}(u_m,v_n)&=\sum_{x,y}\left[\sum_{i=1}^{R}2^{(i-1)}T\,B_{\phi,i}(x,y;u_m,v_n)\right]f(x,y)\\
&=\sum_{x,y}\left[2^{0}T\,B_{\phi,1}(x,y;u_m,v_n)+\cdots+2^{(R-1)}T\,B_{\phi,R}(x,y;u_m,v_n)\right]f(x,y)\\
&=2^{0}\sum_{x,y}T\,B_{\phi,1}(x,y;u_m,v_n)f(x,y)+\cdots+2^{(R-1)}\sum_{x,y}T\,B_{\phi,R}(x,y;u_m,v_n)f(x,y)\\
&=2^{0}I_{\phi,1}(u_m,v_n)+\cdots+2^{(R-1)}I_{\phi,R}(u_m,v_n). \tag{7}
\end{aligned}$$
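Equations (5)–(7) state that the single-pixel measurement taken with a grayscale pattern can be recovered as a bit-weighted sum of measurements taken with its binary bit-planes. A minimal NumPy sketch of this check, using a non-negative, integer-quantized version of the cosine pattern (the sign of Eq. (4) is handled later by the positive/negative split of Eq. (8)) and with the illumination time T normalized to 1; the array size, frequency pair, and random scene are illustrative assumptions, not values from the paper:

```python
import numpy as np

M = N = 16          # pattern resolution (small, for a quick numerical check)
R = 8               # quantization depth: 2^R grayscale levels
u, v = 3, 5         # one spatial-frequency pair
phi = 0.0           # phase shift

x, y = np.meshgrid(np.arange(M), np.arange(N), indexing="ij")
# Non-negative integer grayscale pattern in 0 .. 2^R - 1
cosine = np.cos(2 * np.pi * (x * u / M + y * v / N) + phi)
P = np.round((2**R - 1) * (cosine + 1) / 2).astype(np.int64)

# Bit-plane decomposition: P = sum_i 2^(i-1) * B_i with binary planes B_i (Eq. 5)
B = [(P >> (i - 1)) & 1 for i in range(1, R + 1)]

# Simulated scene and single-pixel intensities (Eqs. 6-7)
rng = np.random.default_rng(0)
f = rng.random((M, N))
I_full = np.sum(P * f)                        # direct grayscale measurement
I_planes = [np.sum(Bi * f) for Bi in B]       # one measurement per binary plane
I_recombined = sum(2**(i - 1) * Ii for i, Ii in enumerate(I_planes, start=1))

assert np.isclose(I_full, I_recombined)       # weighted sum recovers Eq. (6)
```

The assertion holds exactly (up to floating-point rounding) because the bit-plane decomposition of an integer pattern is lossless, which is what lets the DMD display only binary patterns at its full refresh rate.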
$$P_{\phi}(x,y;u,v)=P_{\phi}^{+}(x,y;u,v)-P_{\phi}^{-}(x,y;u,v). \tag{8}$$

$$I_{\phi}^{+}(u_m,v_n)=2^{0}I_{\phi^{+},1}(u_m,v_n)+2^{1}I_{\phi^{+},2}(u_m,v_n)+\cdots+2^{(R-1)}I_{\phi^{+},R}(u_m,v_n). \tag{9}$$

$$I_{\phi}^{-}(u_m,v_n)=2^{0}I_{\phi^{-},1}(u_m,v_n)+2^{1}I_{\phi^{-},2}(u_m,v_n)+\cdots+2^{(R-1)}I_{\phi^{-},R}(u_m,v_n). \tag{10}$$

$$D_{0}(u_m,v_n)=I_{0}^{+}(u_m,v_n)-I_{0}^{-}(u_m,v_n), \tag{11}$$

$$D_{\pi/2}(u_m,v_n)=I_{\pi/2}^{+}(u_m,v_n)-I_{\pi/2}^{-}(u_m,v_n). \tag{12}$$

$$f(x,y)=\mathcal{F}^{-1}\!\left[D_{0}(u_m,v_n)+jD_{\pi/2}(u_m,v_n)\right], \tag{13}$$

$$\mathrm{RMSE}=\sqrt{\frac{\sum_{x=1}^{M}\sum_{y=1}^{N}\left[f_r(x,y)-f_o(x,y)\right]^{2}}{M\times N}}, \tag{14}$$
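Equations (8)–(14) can be simulated end to end: split each Fourier pattern into its positive and negative parts, take the differential measurements at phases 0 and π/2, assemble the spectrum, and invert it. The sketch below skips the bit-plane step of Eq. (5) for brevity and measures directly with the split grayscale patterns; the scene, resolution, and normalization by the known amplitude 2^R − 1 are illustrative assumptions:

```python
import numpy as np

M = N = 16
R = 8

rng = np.random.default_rng(0)
f_obj = rng.random((M, N))                     # simulated object reflectivity
x, y = np.meshgrid(np.arange(M), np.arange(N), indexing="ij")

def measure(u, v, phi):
    """Differential intensity D_phi(u, v) via the positive/negative split (Eqs. 8-12)."""
    P = (2**R - 1) * np.cos(2 * np.pi * (x * u / M + y * v / N) + phi)
    P_pos = np.clip(P, 0, None)                # P+ : positive part of the pattern
    P_neg = np.clip(-P, 0, None)               # P- : negative part of the pattern
    return np.sum(P_pos * f_obj) - np.sum(P_neg * f_obj)

# Acquire the full Fourier spectrum, real part at phase 0, imaginary at pi/2
spectrum = np.zeros((M, N), dtype=complex)
for u in range(M):
    for v in range(N):
        spectrum[u, v] = measure(u, v, 0.0) + 1j * measure(u, v, np.pi / 2)

# Eq. (13), divided by the known pattern amplitude 2^R - 1
f_rec = np.real(np.fft.ifft2(spectrum)) / (2**R - 1)

# Eq. (14): root-mean-square error against the ground truth
rmse = np.sqrt(np.mean((f_rec - f_obj) ** 2))
print(f"RMSE = {rmse:.3e}")
```

With noiseless, full-coverage measurements the RMSE is at the level of floating-point error; partial spectrum coverage, as in Fig. 8, trades reconstruction quality for fewer measurements.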