
Cosinusoidal encoding multiplexed structured illumination multispectral ghost imaging

Open Access

Abstract

Multispectral ghost imaging acquires richer information than single-band ghost imaging. However, existing multispectral ghost imaging systems still suffer from shortcomings such as complex structures or time-consuming reconstruction. Here, an approach of cosinusoidal encoding multiplexed structured illumination multispectral ghost imaging is proposed. It can capture the multispectral image of the target object within one projection cycle with a single-pixel detector while maintaining high imaging efficiency and a short reconstruction time. The core of the proposed approach is a novel encoding strategy that allows the multispectral image to be decoded and reconstructed via the Fourier transform. Specifically, cosinusoidal encoding matrices with specific frequency characteristics are fused with the orthogonal Hadamard basis patterns to form the multiplexed structured illumination patterns. A broadband photomultiplier collects the signals backscattered by the target object under the corresponding structured illumination. The conventional linear algorithm is applied first to recover the mixed grayscale image of the imaging scene. Given the specific frequency distribution of the constructed cosinusoidal encoding matrices, the mixed grayscale image can be converted to the frequency domain for further decoding. The images of the individual spectral components can then be obtained by applying the Fourier transform with a few simple manipulations. A series of numerical simulations and experiments verified the proposed approach. The present cosinusoidal encoding multiplexed structured illumination can also be introduced in many other fields of high-dimensional information acquisition, such as high-resolution imaging and polarization ghost imaging.

© 2022 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Ghost imaging [1–30] is a novel imaging scheme that has been validated in both quantum [2] and classical systems [3,4]. In a ghost imaging system, the light source is split into two beams. One beam irradiates the target object and produces transmitted or scattered signals, which are then received by a single-pixel detector. The other beam does not interact with the target object, and its spatial distribution is recorded by an array detector. The image of the target object can be retrieved by coincidence calculation on the measurements of the two detectors. Later, Shapiro [5] proposed the computational ghost imaging framework, which was experimentally demonstrated by Bromberg et al. [6]. Computational ghost imaging significantly reduces the redundancy of the imaging system and has promoted the development of ghost imaging. To date, ghost imaging has shown unique advantages in the terahertz [7–14], infrared [15–17], X-ray [18,19], and many other domains.

Multispectral or hyperspectral ghost imaging [20–30] combines spectral imaging [31–35] with ghost imaging technology. Compared with single-band ghost imaging, multispectral ghost imaging provides richer information, which may improve accuracy and reliability in object recognition, material analysis, and medical diagnosis. The straightforward strategy for multispectral ghost imaging is to employ multiple detectors [20] or time-divided detection via a filter wheel to capture images of various spectral channels and then fuse them into a color image. These strategies suffer from complex structures and large data volumes. Methods of multiplexed structured illumination were proposed for multispectral ghost imaging with a broadband single-pixel detector in our previous work [21,22]: mutually orthogonal binary encoding matrices (corresponding to the red, green, and blue colored information, respectively) and random patterns or orthogonal Hadamard basis patterns are multiplexed to form the encoding structured illumination patterns, and compressed sensing algorithms are employed in the reconstruction procedure to recover the multispectral images. Liu et al. [23] demonstrated a color computational ghost imaging method that can reconstruct high-fidelity images at a relatively low sampling rate (0.0625) by using a plug-and-play generalized alternating projection algorithm (PnP-GAP). Using training on a simulated dataset, Yu et al. [24] presented a color computational ghost imaging strategy that eliminates the workload of acquiring experimental training datasets and reduces the number of samplings in imaging experiments. These technologies work well under sub-Nyquist measurement, which reduces the online sampling time but increases the cost of the subsequent reconstruction. Olivieri et al. [25] proposed a novel approach to deep-subwavelength single-pixel imaging based on nonlinear pattern generation and time-resolved field measurements with the Walsh-Hadamard encoding scheme; they demonstrated the feasibility of single-pixel hyperspectral imaging, which paves the way for a new methodology in two-dimensional material characterization. Orthogonal Fourier basis patterns were applied in an actively compressive imaging scheme that encodes, condenses, and recovers the spatial, spectral, and 3D information of the object simultaneously through information multiplexing [26]. A 4-step or 3-step phase-shifting strategy is commonly employed in Fourier-basis ghost imaging to acquire each complex-valued Fourier coefficient of the imaged object [27], which means that such computational ghost imaging requires more measurements.

Few measurements and short reconstruction times are persistent goals of multispectral or color ghost imaging. Here, we demonstrate an approach of cosinusoidal encoding multiplexed structured illumination multispectral ghost imaging that makes multispectral ghost imaging more efficient. The orthogonal Hadamard basis patterns and cosinusoidal encoding matrices are multiplexed to generate colored illumination patterns, and a single-pixel detector is adopted to collect the reflected signal of the object. During recovery, the mixed grayscale image of the imaged object is first recovered by the conventional linear algorithm and then converted to Fourier space to recombine the information of each channel. The spectral information of the different channels is then acquired separately and fused to generate a multispectral image. Numerical simulations and experiments show that our approach can acquire multispectral information efficiently and synchronously, and rapidly reconstruct target images. The organization of the paper is as follows. In Section 2, the imaging and reconstruction methods are introduced. Numerical simulations and experiments are presented in Section 3 to verify the proposed approach. Finally, Section 4 concludes the paper.

2. Imaging and reconstruction methods

Cosinusoidal encoding multiplexed structured illumination ghost imaging technology can image colored objects by encoding different spectral channels independently. Here, we introduce imaging in three spectral channels as an example. Three N × N cosinusoidal encoding matrices are produced that correspond to the red, green, and blue spectral channels, denoted by Fred, Fgreen, and Fblue, respectively. Each matrix can be calculated through Eq. (1):

$$\begin{array}{l} {F_{red}} = \cos ({2\pi {f_{{x_1}}}x + 2\pi {f_{{y_1}}}y + {\varphi_0}} )\\ {F_{green}} = \cos ({2\pi {f_{{x_2}}}x + 2\pi {f_{{y_2}}}y + {\varphi_0}} )\\ {F_{blue}} = \cos ({2\pi {f_{{x_3}}}x + 2\pi {f_{{y_3}}}y + {\varphi_0}} ). \end{array}$$
where ${f_{{x_i}}}$ and ${f_{{y_i}}}$ (i = 1,2,3) are the frequencies of the encoding matrices and ${\varphi _0}$ is the initial phase. To distinguish the spectral information of the channels as far as possible and prevent crosstalk, the frequency combinations (${f_{{x_i}}}$, ${f_{{y_i}}}$) of the three encoding matrices are chosen as (0.5, 0), (0, 0.5), and (0.5, 0.5). With these frequencies, the encoding matrices are binary matrices consisting entirely of +1 and −1. The element-wise products of a Hadamard pattern Hi with the cosinusoidal encoding matrices Fred, Fgreen, and Fblue are denoted by Pri, Pgi, and Pbi:
$$\begin{array}{l} {P_{ri}} = {F_{red}} \cdot {H_i}\\ {P_{gi}} = {F_{green}} \cdot {H_i}\\ {P_{bi}} = {F_{blue}} \cdot {H_i}. \end{array}$$
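As a minimal illustration of Eqs. (1) and (2), the following Python sketch (our own construction, not the authors' code; it assumes ${\varphi _0} = 0$, under which the matrices are exactly binary, and uses a small N so that the full Hadamard matrix fits in memory) generates the encoding matrices and one multiplexed pattern triple:

import numpy as np
from scipy.linalg import hadamard

# Eq. (1) with the frequency pairs chosen above; phi0 = 0 is assumed,
# so every entry of the encoding matrices is exactly +1 or -1.
N = 16
y, x = np.mgrid[0:N, 0:N]                  # pixel coordinates

def encoding_matrix(fx, fy, phi0=0.0):
    return np.cos(2 * np.pi * fx * x + 2 * np.pi * fy * y + phi0)

F_red   = encoding_matrix(0.5, 0.0)        # (-1)^x, vertical stripes
F_green = encoding_matrix(0.0, 0.5)        # (-1)^y, horizontal stripes
F_blue  = encoding_matrix(0.5, 0.5)        # (-1)^(x+y), checkerboard
for F in (F_red, F_green, F_blue):
    assert np.allclose(np.abs(F), 1.0)     # binary +1/-1, as stated above

# Eq. (2): element-wise products with one Hadamard basis pattern.
H = hadamard(N * N)                        # rows are the patterns H_i
H_i = H[1].reshape(N, N)
P_ri, P_gi, P_bi = F_red * H_i, F_green * H_i, F_blue * H_i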
Pri, Pgi, and Pbi are loaded into the projection system simultaneously to modulate the broadband light source. The projected light interacts with the imaged object, and the reflected light Di is collected by a single-pixel detector. The process can be expressed as follows:
$${D_i} = \sum\limits_{x,y = 1}^N {({r \cdot {F_{red}} \cdot {H_i} \cdot {m_{red}} + g \cdot {F_{green}} \cdot {H_i} \cdot {m_{green}} + b \cdot {F_{blue}} \cdot {H_i} \cdot {m_{blue}}} )} .$$
where the sum runs over the N × N pixels, and the parameters r, g, and b are the spectral response coefficients of the single-pixel detector for the red, green, and blue illumination light. For a given system, r, g, and b are constants and can be obtained through calibration based on the detected intensity. mred, mgreen, and mblue are the red, green, and blue spectral images of the target object, respectively. Equation (3) can be simplified as follows:
$${D_i} = \sum\limits_{x,y = 1}^N {{H_i} \cdot m} .$$
where m represents the mixed grayscale image of the imaged object. As seen by comparing Eq. (3) and Eq. (4), the mixed grayscale image m can be expressed as follows:
$$m = r \cdot {F_{red}} \cdot {m_{red}} + g \cdot {F_{green}} \cdot {m_{green}} + b \cdot {F_{blue}} \cdot {m_{blue}}.$$
Essentially, Eq. (4) is the basic formula for ghost imaging with the Hadamard illumination patterns Hi, with Di the corresponding recorded light intensity. Thus, the mixed grayscale image m can be recovered through the conventional linear algorithm:
$$m = \frac{1}{{N \times N}}\sum\limits_{i = 1}^{N \times N} {{D_i} \cdot {H_i},i = 1,2, \ldots N \times N.}$$
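The forward model of Eqs. (3)–(5) and the linear recovery of Eq. (6) can be checked numerically. The following toy sketch is our own construction; it assumes unit response coefficients (r = g = b = 1) and a small synthetic object, and it ignores the nonnegativity a physical projector would require (handled in practice, e.g., by differential measurements):

import numpy as np
from scipy.linalg import hadamard

N = 8
y, x = np.mgrid[0:N, 0:N]
F_red, F_green, F_blue = np.cos(np.pi * x), np.cos(np.pi * y), np.cos(np.pi * (x + y))

rng = np.random.default_rng(0)
m_red, m_green, m_blue = (rng.random((N, N)) for _ in range(3))

# Mixed grayscale image, Eq. (5), with r = g = b = 1.
m = F_red * m_red + F_green * m_green + F_blue * m_blue

# Single-pixel measurements, Eq. (4): D_i is the pixel sum of H_i * m.
H = hadamard(N * N)                      # rows are the patterns H_i
D = H @ m.ravel()

# Linear recovery, Eq. (6); exact because the Hadamard rows are orthogonal.
m_rec = (H.T @ D).reshape(N, N) / (N * N)
assert np.allclose(m_rec, m)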
Given the specific frequency characteristics of the encoding matrices, the Fourier transform can be used to decode the three spectral components mred, mgreen, and mblue from m. A Fourier transform is performed on the mixed grayscale image m to convert it to the frequency domain. The Fourier spectrum of the mixed image m is composed of the spectra of the three spectral components, expressed as follows:
$$\begin{array}{ll} {\cal F}\{m \} &= M = {\cal F}\{{r \cdot \cos ({2\pi {f_{{x_1}}}x + 2\pi {f_{{y_1}}}y + {\varphi_0}} )\cdot {m_{red}}} \}\\ &+ {\cal F}\{{g \cdot \cos ({2\pi {f_{{x_2}}}x + 2\pi {f_{{y_2}}}y + {\varphi_0}} )\cdot {m_{green}}} \}\\ &+ {\cal F}\{{b \cdot \cos ({2\pi {f_{{x_3}}}x + 2\pi {f_{{y_3}}}y + {\varphi_0}} )\cdot {m_{blue}}} \}. \end{array}$$
where ${\cal F}$ denotes the Fourier transform operator. Taking the red channel as an example:
$$q(x,y) = r \cdot \cos ({2\pi {f_{{x_1}}}x + 2\pi {f_{{y_1}}}y + {\varphi_0}} )\cdot {m_{red}}.$$
According to Euler's formula $\cos x = \frac{1}{2}( {e^{ix}} + {e^{ - ix}}) $, Eq. (8) can be transformed into:
$$\begin{array}{ll} q(x,y) &= \frac{1}{2} \cdot r \cdot \exp [{i(2\pi {f_{{x_1}}}x + 2\pi {f_{{y_1}}}y + {\varphi_0})} ]\cdot {m_{red}}\\ &+ \frac{1}{2} \cdot r \cdot \exp [{ - i(2\pi {f_{{x_1}}}x + 2\pi {f_{{y_1}}}y + {\varphi_0})} ]\cdot {m_{red}}. \end{array}$$
Then, according to the frequency shift theorem, the spectrum of the red channel can be expressed as follows:
$$\begin{array}{ll} {\cal F}\{{q(x,y)} \}&= {\cal F}\left\{ {\frac{r}{2} \cdot \exp [i(2\pi {f_{{x_1}}}x + 2\pi {f_{{y_1}}}y + {\varphi_0})] \cdot {m_{red}}} \right\}\\ &+ {\cal F}\left\{ {\frac{r}{2} \cdot \exp [ - i(2\pi {f_{{x_1}}}x + 2\pi {f_{{y_1}}}y + {\varphi_0})] \cdot {m_{red}}} \right\}\\ &= \frac{r}{2}[{{M_{red}}({{f_x} + {f_{{x_1}}},{f_y} + {f_{{y_1}}}} )+ {M_{red}}({{f_x} - {f_{{x_1}}},{f_y} - {f_{{y_1}}}} )} ]. \end{array}$$
Thus, the Fourier spectrum M of the mixed grayscale image is as follows:
$$\begin{array}{ll} M &= \frac{r}{2} \cdot [{{M_{red}}({{f_x} + {f_{{x_1}}},{f_y} + {f_{{y_1}}}} )+ {M_{red}}({{f_x} - {f_{{x_1}}},{f_y} - {f_{{y_1}}}} )} ]\\ &+ \frac{g}{2} \cdot [{{M_{green}}({{f_x} + {f_{{x_2}}},{f_y} + {f_{{y_2}}}} )+ {M_{green}}({{f_x} - {f_{{x_2}}},{f_y} - {f_{{y_2}}}} )} ]\\ &+ \frac{b}{2} \cdot [{{M_{blue}}({{f_x} + {f_{{x_3}}},{f_y} + {f_{{y_3}}}} )+ {M_{blue}}({{f_x} - {f_{{x_3}}},{f_y} - {f_{{y_3}}}} )} ]. \end{array}$$
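As a concrete instance (a worked example of our own, assuming ${\varphi _0} = 0$), substituting the frequency pairs chosen for Eq. (1) places the centers of the conjugate spectral copies at
$$\begin{array}{l} {M_{red}}:({{f_x},{f_y}} )= ({ \pm 0.5,0} )\\ {M_{green}}:({{f_x},{f_y}} )= ({0, \pm 0.5} )\\ {M_{blue}}:({{f_x},{f_y}} )={\pm} ({0.5,0.5} ), \end{array}$$
that is, at the horizontal Nyquist edge, the vertical Nyquist edge, and the Nyquist corner of the spectrum, N/2 samples from the DC term along each encoded axis and well separated from one another.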
Owing to the specific frequencies of the encoding matrices, the spectral information of the three channels is frequency shifted to different high-frequency regions of Fourier space. Therefore, three pairs of frequency points appear in the spectrum diagram, corresponding to the most significant coefficients of the spectral information of the three channels. We shift the spectral copy of one channel to the center of Fourier space, leaving the copies of the other channels at the edges of the spectrum, and then use a low-pass filter to selectively extract the applicable coefficients. In this way, the spectra of the three channels are separated. Performing the inverse Fourier transform on each of the three extracted spectra then yields the three spectral components mred, mgreen, and mblue:
$$\begin{array}{l} {m_{red}} = {{\cal F}^{ - 1}}\{{B \cdot {S_{red}}} \}\\ {m_{green}} = {{\cal F}^{ - 1}}\{{B \cdot {S_{green}}} \}\\ {m_{blue}} = {{\cal F}^{ - 1}}\{{B \cdot {S_{blue}}} \}. \end{array}$$
where ${{\cal F}^{ - 1}}$ denotes the inverse Fourier transform operator. Sred, Sgreen, and Sblue refer to the spectra of the corresponding channels after recombination, and B is the low-pass filter that extracts the recombined spectrum. Finally, the three spectral components mred, mgreen, and mblue are fused to generate the final multispectral image.
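In code, the decoding chain of Eqs. (7)–(12) is compact. The sketch below handles one channel and is our own helper, not the authors' implementation; the Butterworth window anticipates Eq. (13) in Section 3, and the normalization reflects that, at these Nyquist-edge frequencies with ${\varphi _0} = 0$, the two conjugate copies in Eq. (10) alias onto the same frequency bin, so the full response coefficient rather than half of it is divided out:

import numpy as np

def decode_channel(m, fx, fy, response, radius, order=4):
    # Extract one spectral component from the mixed image m (Eqs. 7-12).
    # fx, fy: encoding frequencies in cycles/pixel; response: calibrated
    # r, g, or b coefficient; radius/order: Butterworth settings (Eq. 13).
    N = m.shape[0]
    M = np.fft.fftshift(np.fft.fft2(m))
    # Recombination: roll the spectrum so this channel's copy sits at DC.
    S = np.roll(M, (int(round(fy * N)), int(round(fx * N))), axis=(0, 1))
    # Butterworth low-pass window B around the re-centered copy.
    v, u = np.mgrid[-N//2:N//2, -N//2:N//2]
    B = 1.0 / (1.0 + (np.hypot(u, v) / radius) ** (2 * order))
    return np.real(np.fft.ifft2(np.fft.ifftshift(B * S))) / response

# e.g. the red channel, encoded at (fx, fy) = (0.5, 0):
# m_red = decode_channel(m, 0.5, 0.0, r, radius=65)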

In summary, multispectral ghost imaging with cosinusoidal encoding multiplexed structured illumination is illustrated in Fig. 1. The imaging process is divided into three steps. The first is the generation of the modulated illumination patterns: the Hadamard basis patterns and cosinusoidal encoding matrices are multiplexed to generate colored illumination patterns. The second step is similar to traditional ghost imaging: the constructed illumination patterns illuminate the target object, and the conventional linear algorithm then yields the mixed grayscale image. The third step is the reconstruction of the multispectral image. The mixed grayscale image is transformed to the frequency domain by a two-dimensional Fourier transform. Owing to the specific frequency settings of the constructed encoding matrices, the spectral information of the three channels is separated in Fourier space and extracted by a low-pass filter. We then take the two-dimensional inverse Fourier transform of each of the three extracted spectra to obtain the spectral components. Finally, the spectral images of the channels are fused to reconstruct the multispectral image.

Fig. 1. The flowchart of the proposed approach. The process of imaging and restoration is divided into three steps: the first step is to generate the multiplexed structured illumination patterns; the second step is the projection process and the recovery of the mixed grayscale image; the third step is to reconstruct the multispectral image. In the third step, the first column shows the mixed grayscale image, the second column the Fourier spectrum, the third column the spectrum of each channel after recombination, the fourth column the filter, the fifth column the spectrum of each channel after filtering (shown on a logarithmic scale), the sixth column the spectral components, and the last column the multispectral image.

3. Simulations and experiments

3.1 Numerical simulation

Numerical simulations are carried out to evaluate the proposed approach. The object is the “onion” image cropped to a 256 × 256 resolution. For the proposed method, how the spectrum information of each channel is extracted influences the quality of image restoration. Here, a Butterworth low-pass filter is applied to extract the spectrum information of each channel. The Butterworth low-pass filter [36] model is as follows:

$$B({{f_x},{f_y}} )= \frac{1}{{1 + {{({R({f_x},{f_y})/{R_0}} )}^{2n}}}}.$$
where $R({f_x},{f_y})$ is the spatial frequency, and ${R_0}$ and n indicate the radius and order of the filter, respectively. The image restoration quality is greatly influenced by the order and radius of the filter. The peak signal-to-noise ratio (PSNR) and the structural similarity index (SSIM) [37] are employed to evaluate the reconstructed image quality; higher PSNR and SSIM scores indicate better quality of the restored image. These metrics are calculated as follows:
$$PSNR = 10 \cdot {\log _{10}}\left\{ {{{255}^2} \cdot {{\left( {\sum\limits_{x,y = 1}^{h,w} {{{[{c(x,y) - o(x,y)} ]}^2}/Q} } \right)}^{ - 1}}} \right\},$$
$$SSIM = \frac{{({2{\mu_c}{\mu_o} + {Z_1}} )({2{\sigma_{co}} + {Z_2}} )}}{{({\mu_c^2 + \mu_o^2 + {Z_1}} )({\sigma_c^2 + \sigma_o^2 + {Z_2}} )}}.$$
where $c(x,y)$ and $o(x,y)$ are the values of the $(x,y)$th pixel in the reconstructed and original images respectively, h and w are the dimensions of the image, and Q = h × w is the number of pixels. In our simulations, all images are normalized to unity. Here, h and w are equal to 256, and $c(x,y)$ corresponds to the spectral components mred, mgreen, and mblue. ${\mu _c}$ and ${\mu _o}$ are the averages of the images c and o, respectively; ${\sigma _c}$ and ${\sigma _o}$ are their standard deviations; and ${\sigma _{co}}$ is the covariance of c and o. Z1 and Z2 are constants that stabilize the division when the denominator is weak: Z1 = (K1L)2 and Z2 = (K2L)2, with, in general, K1 = 0.01, K2 = 0.03, and L = 255, the dynamic range of the pixel values. It should be noted that Eqs. (14) and (15) are for a single wavelength. In the simulations and experiments, the restoration indexes (PSNR and SSIM) of the spectral components of the three channels are obtained first, and the PSNR and SSIM of the three channels are then averaged to obtain the quality of the reconstructed multispectral image.
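For reference, Eqs. (14) and (15) transcribe directly into code. The sketch below is ours and assumes an 8-bit dynamic range (L = 255) and global, single-window SSIM statistics, rather than the windowed average used in Ref. [37]:

import numpy as np

def psnr(c, o):
    # Eq. (14): 10 * log10(255^2 / MSE).
    mse = np.mean((c.astype(float) - o.astype(float)) ** 2)
    return 10 * np.log10(255.0 ** 2 / mse)

def ssim(c, o, K1=0.01, K2=0.03, L=255.0):
    # Eq. (15) evaluated with global image statistics.
    Z1, Z2 = (K1 * L) ** 2, (K2 * L) ** 2
    mu_c, mu_o = c.mean(), o.mean()
    s_co = np.mean((c - mu_c) * (o - mu_o))
    num = (2 * mu_c * mu_o + Z1) * (2 * s_co + Z2)
    den = (mu_c**2 + mu_o**2 + Z1) * (c.var() + o.var() + Z2)
    return num / den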

Figure 2 shows the PSNR and SSIM scores of the reconstructed image under different filter sizes when 2nd, 3rd, and 4th order Butterworth low-pass filters are selected. The trends of PSNR and SSIM are similar. As the filter radius gradually increases, the captured low-frequency component of each spectral channel also increases, and the quality of the restored image slowly peaks. As the filter size increases further, the spectral information of the channels begins to crosstalk, and the quality of the reconstructed image drops rapidly. Increasing the order of the Butterworth filter delays the appearance of this inflection point: the inflection points of the SSIM for the 2nd, 3rd, and 4th order filters appear at radii of 40, 55, and 65 pixels, respectively. Therefore, to obtain the expected imaging effect, a proper combination of filter size and order should be found through parameter optimization. The optimized image reconstruction quality is achieved by a 3rd or 4th order filter with a radius of 40-70 pixels, yielding a PSNR of the restored image above 30 dB and an SSIM above 0.90.


Fig. 2. (a) The PSNR curve of the decoded images with different Butterworth filter sizes. (b) The SSIM curve of the decoded images with different Butterworth filter sizes.


The reconstruction results using a 4th-order Butterworth low-pass filter with a 65-pixel radius are shown in Fig. 3. The PSNR of the reconstructed image reaches 32.2 dB, and the SSIM surpasses 0.960; the PSNR and SSIM of the multispectral image are averages over the three color spectral channel images. By comparing the brightness of the three spectral components, it can be seen that the red pepper in the middle is bright only in the red channel, the yellow pepper in the lower right corner presents different brightness in the red and green channels, and the white onion in the upper right corner is bright in all three channels. On the whole, the spectral information of the target image is reconstructed well.

Fig. 3. Image reconstruction results using a Butterworth 4th order low-pass filter with a radius of 65 pixels. (a) Target object. (b) The mixed grayscale image. (c) Fourier spectrum. (d) Butterworth 4th order low-pass filter with a radius of 65 pixels. (e-g) The spectrum of each channel after component recombination. (h-j) Spectral components of the red, green, and blue channels. (k) The final reconstructed multispectral image.

We further use the evolutionary compressive sensing technique [38] to study the multispectral imaging quality under different numbers of measurements. Evolutionary compressive sensing arranges the detected signal intensity values in descending order and then selects the corresponding subset of Hadamard patterns to reconstruct the image. Figure 4(a-f) shows the restoration results using 6.25%, 12.5%, 25%, 50%, 75%, and 100% of the complete set of Hadamard patterns. Even when only the most significant 6.25% of the Hadamard patterns were used [Fig. 4(a)], the spectral information of the target image was extracted well. As the fraction of the pattern set increases, so does the quality of the restored multispectral image. The simulation results show that the proposed strategy can reconstruct the multispectral image well; the numerical simulations take 5.45 seconds to restore a 256 × 256 resolution multispectral image from 65536 measurements. As a comparison, we use the random orthogonal encoding multiplexed strategy of Ref. [22] to simulate the same grayscale imaged object, as shown in Fig. 4(g-l). In Ref. [22], mutually orthogonal binary encoding matrices and orthogonal Hadamard basis patterns are multiplexed to form the illumination patterns, and compressed sensing algorithms [39] are employed in the reconstruction procedure to recover the multispectral images. That spectral encoded computational ghost imaging method takes 74.02 minutes to restore a 256 × 256 resolution multispectral image from 65536 measurements. Overall, in terms of recovery quality and time consumption, cosinusoidal encoding multiplexed structured illumination multispectral ghost imaging has advantages over the method of Ref. [22]. The above simulations were performed in MATLAB R2016a on a Windows 10 laptop equipped with an AMD Ryzen 7 5800H CPU (3.20 GHz) and 16 GB of RAM.
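In effect, this subset selection truncates Eq. (6) to its largest terms. Below is a minimal sketch (our own function; D and H as in the sketch after Eq. (6)):

import numpy as np

def partial_recovery(D, H, fraction=0.0625):
    # Keep the patterns whose measured intensities are largest in
    # magnitude, then apply Eq. (6) over that subset only.
    n = D.size                                # n = N * N measurements
    keep = np.argsort(np.abs(D))[::-1][:int(fraction * n)]
    return (H[keep].T @ D[keep]) / n          # sum of D_i * H_i over the subset

# e.g. m_625 = partial_recovery(D, H).reshape(N, N)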


Fig. 4. Image reconstruction with different fractions of a complete set of Hadamard patterns. (a–f) show the simulation results of cosinusoidal encoding multiplexed structured illumination multispectral ghost imaging. (g-l) show the simulation results of spectral encoded computational ghost imaging [22], which uses the compressed sensing (CS) optimization algorithm [39] in restoration. Reconstructed results obtained with the most significant 6.25%, 12.5%, 25%, 50%, 75% and 100% respectively of the complete pattern set.


3.2 Experimental verification

We experimentally study the imaging performance of the proposed approach in the laboratory. As shown in Fig. 5(a), the experimental setup consists of a digital light projector, the imaged object, a compound lens, a single-pixel detector, and a computer system with a built-in digitizer. The same 256 × 256 resolution cosinusoidal encoding multiplexed structured illumination patterns as in the numerical simulations are employed in the experiment. The digital light projector (TOSHIBA TDP-98) projects a pattern onto the object every 0.2 seconds to obtain a high signal-to-noise ratio. The back-reflected light is collected by a compound lens composed of three lenses with the same diameter (50.8 mm) and focal length (100 mm); the equivalent focal length of the compound lens is 36.6 mm and its length is 30 mm. The distance between the object and the first lens of the compound lens is about 1000 mm, and the distance between the last lens and the detector is about 20 mm, which ensures that the light collected by the compound lens is imaged onto the sensitive surface of the detector. Photoelectric conversion is realized by the single-pixel detector (Thorlabs PMM02), and the signal is then discretized by a digitizer (ADLINK PCI-9816H). In our experiment, the sampling rate is set to 40 kHz. The restoration process is the same as described in the methods section. The experimental imaging objects, captured by a camera, are shown in Fig. 5(b) and Fig. 5(c).


Fig. 5. Experimental setup and the imaging objects. (a) Experimental setup. It consists of a DLP projector, objects, a compound lens, a single-pixel detector, a built-in digitizer, and a computer system. (b) and (c) are the imaging objects taken by a camera.


The experimental results for the letter object “USTC” are shown in Fig. 6. The imaging resolution is 256 × 256. We first use the evolutionary compressive sensing technique to restore the mixed grayscale image of the imaged object. The Fourier transform is applied to convert the mixed grayscale image to the frequency domain. We shift the spectral copy of each of the three channels to the center of the Fourier domain in turn, and a 3rd-order Butterworth filter with a radius of 45 pixels is employed to extract the recombined spectra of the three spectral channels. The inverse Fourier transform is then applied to recover the three spectral components from the corresponding recombined spectra, and the final multispectral image is fused from the three spectral components. Figure 6(a-f) shows the reconstructed results obtained with the most significant 6.25%, 12.5%, 25%, 50%, 75%, and 100% of the complete Hadamard pattern set, respectively. We take the image restored at 100% of the measurements as the ground truth of the imaged object to calculate the image restoration indexes (PSNR and SSIM) at different numbers of measurements. The results show that even if only the most significant 6.25% of the complete Hadamard pattern set is used, the color information of the target object can be extracted well; the PSNR and SSIM are 31.33 dB and 0.8738, respectively. As the fraction of the pattern set increases, the quality of the fused multispectral image and of the reconstructed spectral component images also increases.

Fig. 6. Experimental results of the letter “USTC”. (a-f) are the reconstructed results obtained with the most significant 6.25%, 12.5%, 25%, 50%, 75%, and 100%, respectively, of the complete Hadamard pattern set. For the results in (a-f), the first column shows the mixed grayscale image, the Fourier spectrum, and the Butterworth 3rd-order filter with a radius of 45 pixels, respectively. The second column shows the recombined spectra of the red, green, and blue channels, and the third column shows the spectral components of the three spectral channels. The final fused multispectral image is displayed in the fourth column.

The experimental results for the toy car are shown in Fig. 7. The imaging resolution is 256 × 256, and the whole reconstruction process is the same as in the above experiment. As shown in Fig. 7, the current strategy can capture the complex color object well. The minimum PSNR and SSIM are about 30.6 dB and 0.84 with the most significant 6.25% of the complete measurements, and the color and reflectivity information of the imaged object emerges well. As the number of measurements increases, the quality of the reconstructed spectral images also increases, although the improvement in color information is not remarkable. We also found that, owing to hardware limitations and environmental noise, the experimentally reconstructed images contain mosaic stripes that are absent from the simulation results. Overall, the two experimental results imply that the proposed strategy is valid for color computational ghost imaging.

Fig. 7. Experimental results of the toy car. (a-f) are the reconstructed results obtained with the most significant 6.25%, 12.5%, 25%, 50%, 75%, and 100%, respectively, of the complete Hadamard pattern set. For the results in (a-f), the first column shows the mixed grayscale image, the Fourier spectrum, and the Butterworth 3rd-order filter with a radius of 65 pixels, respectively. The second column shows the recombined spectra of the red, green, and blue channels, and the third column shows the spectral components of the three spectral channels. The final fused multispectral image is displayed in the fourth column.

4. Discussions and conclusions

This paper demonstrated an approach of cosinusoidal encoding multiplexed structured illumination multispectral ghost imaging. It can capture the multispectral image of the target object within one projection cycle while maintaining high imaging efficiency and a short reconstruction time. Here, the deterministic orthogonal Hadamard basis patterns and cosinusoidal encoding matrices are multiplexed to generate colored illumination patterns. A broadband single-pixel detector is employed to collect the signals backscattered by the target object under the corresponding structured illumination. The conventional linear algorithm is applied first to recover the mixed grayscale image of the imaging scene. Then, owing to the specific frequency composition of the constructed encoding matrices, the mixed grayscale image can be converted to the frequency domain for further decoding. The images of the individual spectral components can thus be obtained via the Fourier transform with a few simple manipulations, and the final multispectral image can be reconstructed by fusing the spectral images of the channels. A series of numerical simulations and experiments verified the proposed approach.

The present work improves the acquisition efficiency of multispectral imaging. The strategy can flexibly structure the encoding matrices according to the number of spectral channels and achieve high-quality restoration of multispectral images. More importantly, the restoration process for multispectral images is highly efficient: digital signal processing methods, including the Fourier transform, are applied to demodulate and reconstruct the spectral component images, which accelerates the reconstruction. Restoring a 256 × 256 resolution multispectral image from 65536 measurements takes approximately 5.45 seconds, whereas obtaining a 256 × 256 resolution multispectral image of the same imaged object by applying the compressed sensing (CS) algorithm [39] in our previous work [22] takes about 74.02 minutes. In addition, to our knowledge, CS is an optimization algorithm whose runtime grows as the imaging resolution increases; worse, CS (such as TVAL3 [40]) may not work properly when the spatial resolution is too large. In contrast, the proposed approach may become even more advantageous as the imaging resolution increases, owing to the higher energy concentration in the low-frequency region. The present cosinusoidal encoding multiplexed structured illumination can also be introduced in many other fields of high-dimensional information acquisition, such as high-resolution imaging and polarization ghost imaging. Admittedly, the experimental results fall short of the simulated ones: the restored images show mosaic artifacts in the color information, which may be caused by environmental noise, object reflectivity, and hardware performance. Future improvements and research shall consider these factors to achieve higher-quality imaging.

Funding

Open Project of Advanced Laser Technology Laboratory of Anhui Province (NO. AHL2021ZR01); Foundation of Key Laboratory of Science and Technology Innovation of Chinese Academy of Sciences (NO. CXJJ-20S028); National Natural Science Foundation of China (NO. U20A20214); Youth Innovation Promotion Association of the Chinese Academy of Sciences (NO. 2020438).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. D. Strekalov, A. Sergienko, D. Klyshko, and Y. Shih, “Observation of two-photon “ghost” interference and diffraction,” Phys. Rev. Lett. 74(18), 3600–3603 (1995). [CrossRef]  

2. T. B. Pittman, Y. Shih, D. Strekalov, and A. V. Sergienko, “Optical imaging by means of two-photon quantum entanglement,” Phys. Rev. A 52(5), R3429–R3432 (1995). [CrossRef]  

3. R. S. Bennink, S. J. Bentley, and R. W. Boyd, ““Two-photon” coincidence imaging with a classical source,” Phys. Rev. Lett. 89(11), 113601 (2002). [CrossRef]  

4. A. Valencia, G. Scarcelli, M. D’Angelo, and Y. Shih, “Two-photon imaging with thermal light,” Phys. Rev. Lett. 94(6), 063601 (2005). [CrossRef]  

5. J. H. Shapiro, “Computational ghost imaging,” Phys. Rev. A 78(6), 061802 (2008). [CrossRef]  

6. Y. Bromberg, O. Katz, and Y. Silberberg, “Ghost imaging with a single detector,” Phys. Rev. A 79(5), 053840 (2009). [CrossRef]  

7. R. I. Stantchev, B. Sun, S. M. Hornett, P. A. Hobson, G. M. Gibson, M. J. Padgett, and E. Hendry, “Noninvasive, near-field terahertz imaging of hidden objects using a single-pixel detector,” Sci. Adv. 2(6), e1600190 (2016). [CrossRef]  

8. T. Vasile, V. Damian, D. Coltuc, and M. Petrovici, “Single pixel sensing for THz laser beam profiler based on Hadamard Transform,” Opt. Laser Technol. 79, 173–178 (2016). [CrossRef]  

9. L. Olivieri, J. S. T. Gongora, L. Peters, V. Cecconi, A. Cutrona, J. Tunesi, R. Tucker, A. Pasquazi, and M. Peccianti, “Hyperspectral terahertz microscopy via nonlinear ghost imaging,” Optica 7(2), 186–191 (2020). [CrossRef]  

10. L. Olivieri, J. S. Totero Gongora, A. Pasquazi, and M. Peccianti, “Time-resolved nonlinear ghost imaging,” ACS Photonics 5(8), 3379–3388 (2018). [CrossRef]  

11. J. S. Totero Gongora, L. Olivieri, L. Peters, J. Tunesi, V. Cecconi, A. Cutrona, R. Tucker, V. Kumar, A. Pasquazi, and M. Peccianti, “Route to intelligent imaging reconstruction via terahertz nonlinear ghost imaging,” Micromachines 11(5), 521 (2020). [CrossRef]  

12. S.-C. Chen, Z. Feng, J. Li, W. Tan, L.-H. Du, J. Cai, Y. Ma, K. He, H. Ding, Z.-H. Zhai, Z.-R. Li, C.-W. Qiu, X.-C. Zhang, and L.-G. Zhu, “Ghost spintronic THz-emitter-array microscope,” Light: Sci. Appl. 9(1), 99 (2020). [CrossRef]  

13. L. Leibov, A. Ismagilov, V. Zalipaev, B. Nasedkin, Y. Grachev, N. Petrov, and A. Tcypkin, “Speckle patterns formed by broadband terahertz radiation and their applications for ghost imaging,” Sci. Rep. 11(1), 20071 (2021). [CrossRef]  

14. V. Cecconi, V. Kumar, A. Pasquazi, J. S. Totero Gongora, and M. Peccianti, “Nonlinear field-control of terahertz waves in random media for spatiotemporal focusing,” Open Res Europe 2, 32 (2022). [CrossRef]  

15. N. Radwell, K. J. Mitchell, G. M. Gibson, M. P. Edgar, R. Bowman, and M. J. Padgett, “Single-pixel infrared and visible microscope,” Optica 1(5), 285–289 (2014). [CrossRef]  

16. M. Edgar, G. M. Gibson, R. W. Bowman, B. Sun, N. Radwell, K. J. Mitchell, S. S. Welsh, and M. J. Padgett, “Simultaneous real-time visible and infrared video with single-pixel detectors,” Sci. Rep. 5(1), 10669 (2015). [CrossRef]  

17. H. Zhao, P. Li, Y. Ma, S. Jiang, and B. Sun, “3D single-pixel imaging at the near-infrared wave band,” Appl. Opt. 61(13), 3845–3849 (2022). [CrossRef]  

18. H. Yu, R. Lu, S. Han, H. Xie, G. Du, T. Xiao, and D. Zhu, “Fourier-transform ghost imaging with hard X rays,” Phys. Rev. Lett. 117(11), 113901 (2016). [CrossRef]  

19. A.-X. Zhang, Y.-H. He, L.-A. Wu, L.-M. Chen, and B.-B. Wang, “Tabletop x-ray ghost imaging with ultra-low radiation,” Optica 5(4), 374–377 (2018). [CrossRef]  

20. S. S. Welsh, M. P. Edgar, R. Bowman, P. Jonathan, B. Sun, and M. J. Padgett, “Fast full-color computational imaging with single-pixel detectors,” Opt. Express 21(20), 23068–23074 (2013). [CrossRef]  

21. J. Huang and D. Shi, “Multispectral computational ghost imaging with multiplexed illumination,” J. Opt. 19(7), 075701 (2017). [CrossRef]  

22. J. Huang, D. Shi, W. Meng, L. Zha, K. Yuan, S. Hu, and Y. Wang, “Spectral encoded computational ghost imaging,” Opt. Commun. 474, 126105 (2020). [CrossRef]  

23. S. Liu, Q. Li, H. Wu, and X. Meng, “Color computational ghost imaging based on a plug-and-play generalized alternating projection,” Opt. Express 30(11), 18364–18373 (2022). [CrossRef]  

24. Z. Yu, Y. Liu, J. Li, X. Bai, Z. Yang, Y. Ni, and X. Zhou, “Color computational ghost imaging by deep learning based on simulation data training,” Appl. Opt. 61(4), 1022–1029 (2022). [CrossRef]  

25. L. Olivieri, J. S. T. Gongora, V. Cecconi, R. Tucker, A. Pasquazi, and M. Peccianti, “Hyperspectral Single-Pixel Reconstruction at THz Frequencies using Time-Resolved Nonlinear Ghost Imaging,” in 2019 Conference on Lasers and Electro-Optics Europe & European Quantum Electronics Conference (CLEO/Europe-EQEC, 2019), 1-1.

26. Z. Zhang, S. Liu, J. Peng, M. Yao, G. Zheng, and J. Zhong, “Simultaneous spatial, spectral, and 3D compressive imaging via efficient Fourier single-pixel measurements,” Optica 5(3), 315–319 (2018). [CrossRef]  

27. Z. Zhang, X. Wang, G. Zheng, and J. Zhong, “Hadamard single-pixel imaging versus Fourier single-pixel imaging,” Opt. Express 25(16), 19619–19639 (2017). [CrossRef]  

28. B.-L. Liu, Z.-H. Yang, X. Liu, and L.-A. Wu, “Coloured computational imaging with single-pixel detectors based on a 2D discrete cosine transform,” J. Mod. Opt. 64(3), 259–264 (2017). [CrossRef]  

29. P. Wang and R. Menon, “Computational multispectral video imaging,” J. Opt. Soc. Am. A 35(1), 189–199 (2018). [CrossRef]  

30. L. Wang and S. Zhao, “Full color single pixel imaging by using multiple input single output technology,” Opt. Express 29(15), 24486–24499 (2021). [CrossRef]  

31. Y. Garini, I. T. Young, and G. McNamara, “Spectral imaging: principles and applications,” Cytometry, Part A 69A(8), 735–747 (2006). [CrossRef]  

32. L. Bian, J. Suo, G. Situ, Z. Li, J. Fan, F. Chen, and Q. Dai, “Multispectral imaging using a single bucket detector,” Sci. Rep. 6(1), 1–7 (2016). [CrossRef]  

33. C. Hu, S. Yang, M. Chen, and H. Chen, “Quadrature multiplexed structured illumination imaging,” IEEE Photonics J. 12(2), 1–8 (2020). [CrossRef]  

34. K. Dorozynska and E. Kristensson, “Implementation of a multiplexed structured illumination method to achieve snapshot multispectral imaging,” Opt. Express 25(15), 17211–17226 (2017). [CrossRef]  

35. K. Dorozynska, V. Kornienko, M. Aldén, and E. Kristensson, “A versatile, low-cost, snapshot multidimensional imaging approach based on structured light,” Opt. Express 28(7), 9572–9586 (2020). [CrossRef]  

36. S. Butterworth, “On the theory of filter amplifiers,” Wireless Engineer 7, 536–541 (1930).

37. Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. on Image Process. 13(4), 600–612 (2004). [CrossRef]  

38. M. J. Sun, L. T. Meng, M. P. Edgar, M. J. Padgett, and N. Radwell, “A Russian Dolls ordering of the Hadamard basis for compressive single-pixel imaging,” Sci. Rep. 7(1), 1–7 (2017). [CrossRef]  

39. J. Zhang, D. Zhao, and W. Gao, “Group-based sparse representation for image restoration,” IEEE Trans. on Image Process. 23(8), 3336–3351 (2014). [CrossRef]  

40. C. Li, W. Yin, H. Jiang, and Y. Zhang, “An efficient augmented Lagrangian method with applications to total variation minimization,” Comput. Optim. Appl. 56(3), 507–530 (2013). [CrossRef]  
