Abstract

Imaging through scattering media is challenging because the signal to noise ratio (SNR) of the reflection can be heavily reduced by scatterers. Single-pixel detectors (SPDs) with high sensitivities offer compelling advantages for sensing such weak signals. In this paper, we focus on the use of ghost imaging to resolve 2D spatial information using just an SPD. We prototype a polarimetric ghost imaging system that suppresses backscattering from volumetric media and leverages deep learning for fast reconstructions. Ghost imaging is implemented by projecting Hadamard patterns that are optimized for imaging through scattering media. We demonstrate good-quality reconstructions in highly scattering conditions using a 1.6% sampling rate.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Imaging through scattering media has many applications, such as underwater imaging and navigation in foggy environments. However, absorption and scattering [1,2] by scatterers can heavily reduce the signal to noise ratio (SNR) of target signals, making it challenging to use focal plane arrays based on CCD/CMOS technologies. Compared to CCD/CMOS sensors, a single-pixel detector (SPD), such as a photomultiplier tube or a single-photon counter, has much better sensitivity and SNR performance, making it a preferable detector for imaging through scattering media. However, an SPD collects all photons into a single bucket without providing 2D information about objects.

Ghost imaging is an established method for reconstructing 2D images using only a single SPD [3–5]. In projection-based ghost imaging, the illumination is spatially encoded by a spatial light modulator (SLM) with a pre-known pattern displayed on the SLM. The encoded beam then illuminates the object, and the reflection is directed into the SPD. During the measurement, the modulation patterns ($\mathbf {M}$) are updated on the SLM and corresponding measurements ($\mathbf {y}$) are made with the SPD. The task of ghost imaging is to reconstruct the object ($\mathbf {x}$) by solving the inverse problem of $\mathbf {y} = \mathbf {M}\mathbf {x}$. Compressed sensing can be utilized to reconstruct objects with fewer measurements than dictated by the Nyquist sampling theorem [6].
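As a toy numerical illustration of this forward model (the pattern choice, problem sizes, and least-squares inversion below are ours, not the paper's), a fully sampled acquisition can be simulated and inverted directly:

```python
import numpy as np

rng = np.random.default_rng(0)
K = 64                                   # number of pixels in the flattened object x
J = 64                                   # number of patterns / SPD measurements
x = rng.random(K)                        # unknown object reflectance
M = rng.choice([0.0, 1.0], size=(J, K))  # binary SLM modulation patterns
y = M @ x                                # one bucket reading per displayed pattern
x_hat = np.linalg.lstsq(M, y, rcond=None)[0]  # invert y = Mx at full sampling
```

With $J < K$ the system becomes underdetermined, which is where the compressed sensing prior of [6] comes in.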

Han et al. [7] first demonstrated ghost imaging in the presence of scattering media, where the scatterers greatly reduce the SNR. Bina et al. [8] proposed to reconstruct objects immersed in turbid media using differential ghost imaging with a backscattering configuration. Tajahuerce et al. [9] demonstrated computational ghost imaging through dynamic scattering media. Zhao et al. [10] proposed a passive ghost imaging scheme that uses caustic illumination patterns to detect large objects in low-light underwater conditions. Le et al. [11] investigated underwater ghost imaging with different turbidities and illumination angles. Although ghost imaging has been demonstrated to image through scattering media, direct reflections from scatterers can severely degrade reconstruction quality.

To suppress direct reflections from scattering media, polarization sensing has been exploited to improve ghost imaging reconstructions in underwater imaging [12–16]. Polarimetric ghost imaging has also been used to separate reflections from different objects, since the polarization of reflections can differ with surface roughness and material [17–20]. In polarimetric ghost imaging, the illumination is linearly polarized, and a linear polarizer is placed in front of the detector. By rotating the linear polarizer, two measurements recording different polarization components are used to minimize the scatterers' direct reflections [16] (more details in Sec. 2.1). Two SPDs with different polarization states can also be used. We refer to this process as a polarization correction. Although polarization correction improves reconstruction quality, it also blocks a portion of the light incident on the detector, reducing SNR and further complicating the problem of imaging through scattering media. Therefore, a ghost imaging reconstruction algorithm that is robust to noise is needed for polarimetric ghost imaging through scattering media.

Deep learning [21] has recently been used to solve inverse problems [22,23] such as denoising [24] and phase retrieval [25]. Higham et al. [26] use a three-layer convolutional neural network for single-pixel imaging, whose reconstructions are comparable to those of iterative optimization methods. He et al. [27] and Lyu et al. [28] use deep learning to further improve the quality of reconstructions from iterative optimization in ghost imaging. Compared to previous algorithms, deep learning based methods demonstrate faster and more robust reconstructions.

In this paper, we leverage polarimetric ghost imaging and deep learning to improve reconstructions for imaging through scattering media. An experimental prototype is built to verify the proposed method. We summarize our contributions below:

  • We propose an end-to-end learning method for imaging through scattering media with polarimetric ghost imaging. The proposed method can reconstruct the object in less than 1 ms and produces more robust reconstructions compared to iterative methods, especially in high scattering conditions.
  • We analyze the effect of spatially encoded illumination patterns in the scattering media, and suggest an optimal sequence of projected Hadamard patterns for imaging through scattering media which demonstrates robust reconstructions in high scattering conditions.
  • We demonstrate a reconstruction with a very low sampling rate of 1.6%.

2. Methods

In this section, we first review reconstruction with polarimetric ghost imaging through scattering media. We then present an optimal Hadamard sequence for imaging through scattering media. Lastly, we explain the deep learning based reconstruction algorithm.

2.1 Imaging through scattering media with polarimetric ghost imaging

For the ghost imaging scenario considered in this paper, a beam is spatially encoded with a pattern $\mathbf {M}$ displayed on an SLM and then illuminates an object after first propagating through the scattering media. A portion of the photons may be scattered directly back to the detector from the scattering media; the transmitted light continues to propagate and illuminates the object. Broadly, we can classify the reflections reaching the detector $I_m$ into two parts: direct reflections from the scatterers $S$, and reflections from the object $O$, which can be represented as:

$$I_m = \int_{0}^{\Delta t}\int_{0}^{N_y}\int_{0}^{N_x}M_m(x,y)\left (S(x,y,t)+O(x,y,t) \right ) dxdydt+n_m$$
where $\Delta t$ is the exposure time of the detector, $N_x$ and $N_y$ are the numbers of pixels along the x and y axes, and $M_m$ is the m-th spatial modulation pattern encoded by the system. For simplicity, we assume the absorption of the scattering media is uniform. $n_m$ is the measurement noise.

The scatterers and the object may have different polarization properties; therefore, the degrees of polarization of the scatterers' and the object's reflections can be different [13,14,17,29]. We use this observation in the reconstruction scheme. During the measurement, the illumination beam is linearly polarized. We place a linear polarizer in front of the detector, and two measurements ($I^{\parallel , \perp }$) are acquired by rotating the linear polarizer to be parallel ($\parallel$) or perpendicular ($\perp$) to the polarization state of the illumination beam [16].

$$I^{\parallel,\perp} = I_S^{\parallel,\perp}+I_O^{\parallel,\perp}$$
$$I_S^{\parallel,\perp} = \int_{0}^{\Delta t} \int_{0}^{N_y}\int_{0}^{N_x}M(x,y) S^{\parallel, \perp}(x,y,t) dxdydt$$
$$I_O^{\parallel,\perp} = \int_{0}^{\Delta t} \int_{0}^{N_y}\int_{0}^{N_x}M(x,y) O^{\parallel, \perp}(x,y,t) dxdydt$$
where $I_S^{\parallel ,\perp }$ and $I_O^{\parallel ,\perp }$ represent the reflections from the scatterers and the object reaching the detector, respectively.

The goal is to suppress the scatterers' reflections and recover the object's reflection at the detector ($I_O = I_O^{\parallel }+I_O^{\perp }$). We refer to this step as the polarization correction.

$$I_O = \frac{1}{\beta_S-\beta_O}\cdot \left(I^{\perp}\cdot(1+\beta_S)-I^{\parallel}\cdot(1-\beta_S)\right)$$
where $\beta _O = (I_O^{\parallel }-I_O^{\perp })/(I_O^{\parallel }+I_O^{\perp })$ and $\beta _S = (I_S^{\parallel }-I_S^{\perp })/(I_S^{\parallel }+I_S^{\perp })$ are the degrees of polarization of the object's reflection and the scatterers' reflection, respectively. We use $\beta _O$ of around 0.3 and $\beta _S$ of around 0.6 in the experiment. We provide a detailed derivation of Eq. (5) in Appendix 1.

If we look closely at the noise variance after this polarization correction in Eq. (6), we see that the correction amplifies the measurement noise, which complicates the reconstruction [13].

$$\sigma_{I_O}^2 = \left (\frac{1+\beta_S}{\beta_S-\beta_O}\right)^2\cdot \sigma_{I^{\perp}}^2+\left (\frac{1-\beta_S}{\beta_S-\beta_O} \right)^2\cdot \sigma_{I^{\parallel}}^2$$
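Eqs. (5) and (6) can be checked numerically. In this sketch the intensities $I_O$ and $I_S$ are made-up values, while $\beta_O$ and $\beta_S$ are the experimental values quoted above:

```python
beta_O, beta_S = 0.3, 0.6                 # degrees of polarization from the experiment

def polarization_correction(I_par, I_perp, b_S=beta_S, b_O=beta_O):
    # Eq. (5): suppress the scatterers' reflection, keep the object's
    return (I_perp * (1 + b_S) - I_par * (1 - b_S)) / (b_S - b_O)

# synthetic check with made-up total intensities I_O = 5 and I_S = 10
I_O, I_S = 5.0, 10.0
I_par  = (1 + beta_O) / 2 * I_O + (1 + beta_S) / 2 * I_S   # parallel measurement
I_perp = (1 - beta_O) / 2 * I_O + (1 - beta_S) / 2 * I_S   # perpendicular measurement
recovered = polarization_correction(I_par, I_perp)          # -> 5.0, the object term

# Eq. (6): noise-variance amplification factor for unit input variances
gain = ((1 + beta_S) / (beta_S - beta_O)) ** 2 + ((1 - beta_S) / (beta_S - beta_O)) ** 2
```

With these values of $\beta_O$ and $\beta_S$, `gain` is roughly 30, illustrating why the corrected measurement is considerably noisier than the raw readouts.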

Denoting the measurement matrix of the system by $\mathbf {A}$, the detector measurement can be written compactly as $I_S+I_O = \mathbf {A}(\mathbf {S}+\mathbf {O})$. Since the polarization correction removes the scatterers' reflections from the detector readout, the same removal carries over to the reconstruction. We therefore reconstruct the object using the measurement $I_O$ (Eq. (5)), which contains only the object's reflection.

$$\mathbf{I_O} = \textbf{A}\mathbf{O}$$
where $\mathbf {I_O} \in \mathbb {R}^{J\times 1}$, $\mathbf {A} \in \mathbb {R}^{J\times K}$ ($J \leq K$), and $\mathbf {O} \in \mathbb {R}^{K\times 1}$. $J$ is the number of measurements with different spatially encoding patterns on the SLM, and $K$ is the size of the vector form of the reconstruction. The reconstruction is an ill-posed problem; we therefore formulate it as an optimization problem with a regularization penalty.
$$\mathbf{O} = \arg\!\min||\mathbf{I_O}-\textbf{A}\mathbf{O}||_2^2 + \lambda||\Phi (\mathbf{O})||_1$$
where $\Phi (\mathbf {O})$ is a total-variation (TV) regularizer and $\lambda$ is its weight. The TwIST algorithm [30] is used for the reconstruction.
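For illustration, a proximal-gradient solver with the same gradient-then-shrinkage structure can be sketched as below; we substitute an $\ell_1$ penalty on the object itself for the TV term to keep the sketch short (the paper itself uses TwIST [30] with TV):

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, y, lam=0.01, iters=300):
    # minimize 0.5 * ||y - A x||_2^2 + lam * ||x||_1 by proximal gradient descent
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the data gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - y)           # gradient of the data-fidelity term
        x = soft_threshold(x - grad / L, lam / L)
    return x
```

TwIST follows the same pattern but adds a two-step (momentum-like) update and uses the TV proximal operator in place of the soft threshold.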

2.2 Spatial encoded patterns for the illumination

In this work, we use Hadamard patterns to encode the illumination. The sequence of Hadamard patterns is generated using the Walsh-Hadamard transform. The dimension of the reconstruction is 128 $\times$ 128. If we define the sampling rate as $J/(128\times 128)$, a sequence of 16384 Hadamard patterns is needed for a sampling rate of 1 (one Hadamard pattern corresponds to one SPD measurement).
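A scaled-down sketch of generating sequency-ordered Hadamard patterns (a 16 × 16 stand-in for the 128 × 128 case, which would require the full 16384 × 16384 Hadamard matrix; ordering rows by sign changes and then reshaping is our simplification):

```python
import numpy as np
from scipy.linalg import hadamard

def walsh_patterns(n):
    """Return the n*n Hadamard patterns of size n x n in Walsh (sequency) order."""
    H = hadamard(n * n)                                   # natural order, entries +-1
    sign_changes = np.count_nonzero(np.diff(H, axis=1) != 0, axis=1)
    return H[np.argsort(sign_changes)].reshape(-1, n, n)  # low sequency first

patterns = walsh_patterns(16)   # scaled-down stand-in for the 128 x 128 case
```

The first pattern in this ordering is the all-ones (DC) pattern; later patterns carry progressively higher spatial frequencies.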

Dense scattering can blur the projected illumination patterns. Blurring artifacts in the illumination restrict the spatial frequencies reaching the object surface and therefore the maximum spatial frequencies of the object that can be recovered. The point spread function (PSF) of the illumination can be approximated as consisting of a ballistic PSF and a Gaussian scattered PSF. In weak scattering conditions, the ballistic PSF provides a significant contribution. Previous work has established that high-frequency components of the illumination patterns still reach the object [12,13,31,32], and this work operates in the same regime.
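This two-component PSF model can be sketched as follows; the ballistic weight and Gaussian width here are illustrative values, not measured ones:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def blur_illumination(pattern, ballistic_weight=0.3, sigma=2.0):
    # PSF approximated as a ballistic (delta) part plus a Gaussian scattered part;
    # both parameters are illustrative, not calibrated to the experiment
    scattered = gaussian_filter(pattern.astype(float), sigma)
    return ballistic_weight * pattern + (1 - ballistic_weight) * scattered
```

Even after heavy blurring of the scattered component, the ballistic term preserves a fraction of the pattern's high spatial frequencies, which is what makes structured illumination usable in this regime.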

As discussed above, one of the goals of this paper is to reconstruct the object reflectance using very low sampling rates. Since these low sampling rates exploit only a fraction of the possible Hadamard patterns, and the polarization correction further reduces the SNR of the reflection, it is desirable to determine an 'optimal sequence' of Hadamard patterns that is robust to noise in high scattering conditions and delivers the best possible result.

We simulate SPD measurements with different Hadamard patterns averaged over 200 natural images from the STL-10 dataset [33]. We reorder the Hadamard patterns by delivered energy (displayed as absolute values) from high to low, marked with the red line in Fig. 1(a); this strategy is similar to previous work [26]. We display a pattern carrying relatively high energy in Fig. 1(b), which exhibits lower spatial frequencies compared to the patterns carrying low energies (Fig. 1(c)). It becomes clear that patterns with relatively low spatial frequencies are a good choice, since they preserve information in the presence of strong scattering and are relatively robust against decreased SNRs.
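The energy-based reordering can be sketched at reduced scale; here random non-negative images stand in for the 200 natural images:

```python
import numpy as np
from scipy.linalg import hadamard

rng = np.random.default_rng(0)
n = 16                                    # scaled down from 128 for the sketch
patterns = hadamard(n * n).reshape(-1, n, n)
images = rng.random((200, n, n))          # stand-in for 200 natural images

# simulated SPD measurement of every image under every pattern
y = np.einsum('kij,mij->km', patterns, images)   # (num_patterns, num_images)
energy = np.abs(y).mean(axis=1)                  # mean |measurement| per pattern
order = np.argsort(energy)[::-1]                 # high-energy patterns first
```

For natural (mostly low-frequency) scenes, the top of `order` is dominated by low-sequency patterns, which is exactly the behavior shown in Fig. 1.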

Fig. 1. (a) We simulate SPD measurements with the first 3000 Hadamard frames (delivering energy from high to low), averaged over 200 natural images. The red line marks the average intensity and the black line represents the standard deviation. The x-axis indexes the Hadamard patterns and the y-axis represents amplitude. We display examples of Hadamard patterns with high energies in (b) and low energies in (c).

In both simulation and experiments, we pick the first 255 and 450 Hadamard patterns shown in Fig. 1(a), with sampling rates of 1.6% and 2.7%, and compare their results with those from the first 1500 and 3000 Hadamard patterns (sampling rates of 9.5% and 18.3%) in Fig. 1(a).

2.3 Reconstruction with deep learning

We developed a convolutional neural network to provide high-quality 2D reconstructions from small numbers of noisy ghost imaging measurements. The training is performed with the STL-10 dataset [33], from which 90000 images are used for training. Since the size of the reconstruction is 128$\times$128 in both simulations and experiments, we resize all images in the dataset to 128$\times$128. For each training image (or scene), we simulate its SPD measurements with different Hadamard patterns (more details in Section 3). No extra noise is added to the simulated SPD measurements for training.

As shown in Fig. 2, the input to the neural network is the vector of SPD measurements. If we train for a sampling rate of 1.6% (corresponding to 255 different Hadamard patterns), the input to the neural network is a 1$\times$255 vector; with a training batch size of 30, the input for each batch is 30$\times$255. First, a fully connected (FC) layer is applied, followed by a rectified linear unit (ReLU). As discussed in Section 2.1, every element in $\mathbf {X}$ contributes to the single pixel detector measurement ($\mathbf {y}$). The FC layer directly maps the measurement to the scene and can be treated as an inverse transform from the measurement ($\mathbf {y}$) to an estimate of the unknown object ($\mathbf {X}$) under the linear measurement matrix ($\mathbf {A}$). Note that the weights of the FC layer can be fixed during training, which reduces the number of trainable parameters with the best linear completion scheme [34]; for simplicity, we use a plain FC layer and optimize all weights during training. The output of the FC layer is 30$\times$1$\times$128$\times$128 and is fed into a convolutional layer with batch normalization and a ReLU activation. The kernel of this convolution is 9 $\times$ 9 with a padding of 4, and its output has 64 channels (30$\times$64$\times$128$\times$128). An eighteen-layer residual network [35] then processes the features while preserving their size (30$\times$64$\times$128$\times$128); the kernel size used in the residual blocks is 3 $\times$ 3. The output of the residual network passes through another convolutional layer (kernel size 3$\times$3) and is then concatenated with the input to the residual network. The concatenation goes through a final convolutional layer (kernel size 9$\times$9 with a padding of 4) with one output channel, yielding the final reconstruction with a size of 128$\times$128.
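A minimal PyTorch sketch of this architecture follows; the internal layout of each residual block and the exact placement of the normalizations are our assumptions, and the constructor arguments allow running at reduced scale:

```python
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    # assumed two-conv residual block with 3x3 kernels, in the style of [35]
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch))
    def forward(self, x):
        return torch.relu(x + self.body(x))

class GhostNet(nn.Module):
    def __init__(self, n_meas=255, size=128, ch=64, n_res=18):
        super().__init__()
        self.size = size
        self.fc = nn.Linear(n_meas, size * size)        # inverse transform y -> scene
        self.head = nn.Sequential(
            nn.Conv2d(1, ch, 9, padding=4), nn.BatchNorm2d(ch), nn.ReLU())
        self.res = nn.Sequential(*[ResBlock(ch) for _ in range(n_res)])
        self.post = nn.Conv2d(ch, ch, 3, padding=1)
        self.tail = nn.Conv2d(2 * ch, 1, 9, padding=4)  # after the concatenation
    def forward(self, y):
        x = torch.relu(self.fc(y)).view(-1, 1, self.size, self.size)
        f = self.head(x)                                # 64-channel features
        g = self.post(self.res(f))                      # residual network + 3x3 conv
        return self.tail(torch.cat([f, g], dim=1))      # concat, then final 9x9 conv
```

At the paper's full scale the call would be `GhostNet(n_meas=255, size=128, ch=64, n_res=18)`; smaller arguments give a quick shape check.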

Fig. 2. Proposed convolutional neural network: The input to the neural network is the SPD measurements after the polarization correction. The first layer is a fully connected (FC) layer with a rectified linear unit (ReLU). Following the FC layer, there are multiple convolutional layers (Conv2d) and an eighteen-layer residual network (ResNet). A final convolutional layer generates the output. BN: batch normalization.

In training, ADAM optimization [36] is used with a learning rate of 0.001 and coefficients for computing running averages of the gradient and its square of 0.9 and 0.999. Twenty epochs are used for the training process. The neural network is implemented in the PyTorch framework, and the training is performed on a computer with an NVIDIA TITAN X GPU [37]. We use the mean squared error between the output and the ground truth as the loss function:

$$\mathcal{L} = \frac{1}{n}\sum_{i=1}^{n}(f(\mathbf{I_O}^{i},\mathbf{\Theta})-\mathbf{O}^{i})^2$$
where $f(\cdot )$ represents the neural network and $\mathbf {\Theta }$ are its weights. $\mathbf {I_O}$ is the single pixel detector measurements, and $\mathbf {O}$ is the object for reconstruction.
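A self-contained sketch of one training step with these optimizer settings; a single FC layer stands in here for the full network of Fig. 2:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
# stand-in model (one FC layer); the full network of Fig. 2 would replace it
model = nn.Linear(255, 128 * 128)
# Adam with the paper's settings: lr = 0.001, betas = (0.9, 0.999)
opt = torch.optim.Adam(model.parameters(), lr=1e-3, betas=(0.9, 0.999))
loss_fn = nn.MSELoss()

def train_step(y, target):
    """One optimization step on a batch of measurements y and ground-truth scenes."""
    opt.zero_grad()
    loss = loss_fn(model(y), target)   # Eq. (9): mean squared error
    loss.backward()
    opt.step()
    return loss.item()
```

Repeated over 20 epochs of the 90000-image dataset, this is the whole training loop up to data loading.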

We retrain the neural network for the other sampling rates of 18.3%, 9.5%, and 2.7%; the training time varies from 9 to 12 hours depending on the sampling rate.

3. Simulation

In simulations, we assume the polarization correction completely removes back reflections from the scattering medium and simply reduces the SNR of measurements. Increasing the density of the scatterers results in stronger absorption and scattering which further reduces SNR. Therefore, we apply different SNRs to simulate measurements under different scattering conditions.

For each Hadamard pattern displayed on the SLM, we simulate its SPD measurement by multiplying the scene with the Hadamard pattern and summing all the pixels into one readout. White Gaussian noise at SNRs of 10, 15, 20, and 25 dB is added to simulate different scattering conditions.
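This measurement simulation can be sketched as follows (the scene and patterns passed in are placeholders; the SNR is specified in dB as in the paper):

```python
import numpy as np

def simulate_spd(scene, patterns, snr_db, rng):
    """Simulate one SPD readout per pattern, with white Gaussian noise at snr_db."""
    y = (patterns * scene).sum(axis=(1, 2))               # multiply, then sum to one bucket
    noise_power = np.mean(y ** 2) / 10 ** (snr_db / 10)   # target noise power from SNR (dB)
    return y + rng.normal(0.0, np.sqrt(noise_power), y.shape)
```

Lower `snr_db` values model denser scattering, which is how the four conditions above are generated.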

We simulate the SPD measurements for ‘Lena’ with different Hadamard patterns in low and high scattering conditions (or, equivalently, SNRs). For each sampling rate, we reconstruct ‘Lena’ with independently trained neural networks and compare against rule-based compressed sensing reconstruction [30]. The results are shown in Fig. 3: reconstructions from deep learning (DL) in Figs. 3(a) and (e) demonstrate better visual results than those from compressed sensing (CS) in Figs. 3(c) and (g), especially at low sampling rates, as shown in Figs. 3(b), (d), (f), and (h). From the quantitative comparison shown in the table, DL can reconstruct the object in less than 1 ms. In high scattering conditions (SNR = 10 dB), DL produces more robust reconstructions than CS in terms of peak signal to noise ratio (PSNR) and structural similarity index (SSIM).

Fig. 3. Simulated reconstructions using deep learning and compressed sensing in low and high scattering conditions: In a low scattering condition (SNR = 25 dB), we compare the reconstructions with DL and CS for sampling rates of 18.3% (a, c) and 2.7% (b, d). In a high scattering condition (SNR = 10 dB), we also compare the reconstructions with DL and CS for sampling rates of 18.3% (e, g) and 2.7% (f, h). A quantitative comparison is shown in the table.

We further reconstruct ‘Cameraman’ using DL with the different sequences of Hadamard patterns (or sampling rates) we select, and compare the performance in different scattering conditions as shown in Fig. 4(a). A high sampling rate (18.3%) helps visualize more details of the object in low scattering conditions. However, if we compare the reconstructions with sampling rates of 9.5% and 2.7% in high scattering conditions (SNR = 10 or 15 dB), the higher sampling rate does not improve reconstruction quality, as shown in Figs. 4(b) and (c). Moreover, reconstructions with sampling rates of 2.7% and 1.6% demonstrate robustness to different scattering conditions, which suggests that these Hadamard sequences are preferable for imaging through strongly scattering media.

Fig. 4. Simulated reconstructions using DL: We reconstruct ‘Cameraman’ with DL for different sampling rates (18.3%, 9.5%, 2.7%, and 1.6%) under different scattering conditions (SNR = 25, 20, 15, and 10 dB) as shown in (a). We also quantitatively compare the performance in terms of SSIM (b) and PSNR (c).

4. Experiment

4.1 Experimental setup

The experimental setup is shown in Fig. 5(a). The light source is a customized LED whose emission is collimated and projected onto a digital micromirror device (DMD, Texas Instruments Discovery 4100). The beam is modulated with the Hadamard pattern displayed on the DMD, and is then relayed through a linear polarizer (Daheng Optics, GCL-05). The polarized beam then goes through a lens (Nikon AF-S 24-120mm f/4G ED VR) and illuminates the object through the scattering media.

Fig. 5. Experimental setup: (a) The illumination beam is spatially encoded with a Hadamard pattern displayed on the SLM, and then polarized with a linear polarizer. The reflection is collected with an SPD with another linear polarizer in front. For each Hadamard pattern, we acquire two measurements by rotating the polarizer in front of the SPD. The object is a toy skull with a rough surface (b). We demonstrate the CS reconstruction (c) using a sampling rate of 1 in clear water.

The scattering media in the tank is a mix of water and milk. Different scattering conditions (7 FTU, 20 FTU, 32 FTU, and 40 FTU; FTU: formazin turbidity unit [16,38]) are created by adding different numbers of milk drops into the water. A toy skull with a rough surface is placed behind the water tank as the target. The distance between the light source and the object is about one meter. The reflection from the object passes through the scattering media and is then collected by an SPD (Hamamatsu H10493-012). We use another linear polarizer in front of the SPD and measure twice by rotating its polarization axis to be parallel or perpendicular to that of the illumination path. In practice, we could also use two SPDs with different linear polarizers to capture both components simultaneously.

The optical system in the illumination path is optimized so that Hadamard patterns with high spatial frequencies are not cut off. We demonstrate the CS reconstruction using a sampling rate of 1 in clear water, as shown in Fig. 5(c), where details of the object can be visualized.

In experiments, different sampling rates of 18.3%, 9.5%, 2.7%, and 1.6% (the same as in simulation) are tested. To create a measurement with a (1, −1) Hadamard pattern, we sequentially encode the (1,0) pattern and the (0,1) pattern on the DMD and then subtract the measurement of the (0,1) pattern from that of the (1,0) pattern [39,40].
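This differential scheme can be verified numerically: the positive (1,0) pattern minus its (0,1) complement reproduces the ±1 Hadamard measurement exactly (the scene and pattern index below are arbitrary):

```python
import numpy as np
from scipy.linalg import hadamard

rng = np.random.default_rng(0)
n = 8
scene = rng.random((n, n))
H = hadamard(n * n)[5].reshape(n, n)   # one +-1 Hadamard pattern (index arbitrary)
P = (H + 1) / 2                        # the (1, 0) pattern shown on the DMD
N = (1 - H) / 2                        # the complementary (0, 1) pattern
y_diff = (P * scene).sum() - (N * scene).sum()   # two readouts, subtracted
```

Since P − N = H pixel-for-pixel, `y_diff` equals the bucket reading the ±1 pattern would give, at the cost of two DMD frames per measurement.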

4.2 Reconstructions with deep learning and compressed sensing

We preprocess the SPD measurements by applying the polarization correction described by Eq. (5). We then perform DL and CS reconstructions with sampling rates of 18.3% (Fig. 6(a)) and 2.7% (Fig. 6(b)) under different scattering conditions.

Fig. 6. DL and CS reconstructions: The scattering level ranges over 7 FTU, 20 FTU, 32 FTU, and 40 FTU. We perform the reconstructions with sampling rates of 18.3% (a) and 2.7% (b). Upper rows are DL reconstructions, and bottom rows are CS reconstructions. Zoom in for better visualization.

The reconstruction with DL is orders of magnitude faster than iterative retrieval algorithms. Moreover, DL reconstructions (Figs. 6(a) and (b), upper rows) demonstrate better visual results than CS reconstructions (Figs. 6(a) and (b), bottom rows), as the closeups illustrate. When the density of the scatterers increases, we observe that the CS reconstruction degrades dramatically. For example, it is very hard to separate the reconstruction of the skull from background noise when the density of scatterers is high (right columns in Figs. 6(a) and (b)). On the other hand, deep learning can still reconstruct the object with reasonable quality, such as the reconstruction with the sampling rate of 2.7% under 40 FTU. This demonstrates the robustness of our deep learning algorithm in imaging through scattering media.

4.3 Hadamard sequences for deep learning reconstructions

In low scattering (7 FTU), high sampling rates result in higher-quality reconstructions, as expected. For example, we can observe the nose and eyes of the skull with a high level of detail in Fig. 7(a), while these details are not resolved in the low-sampling-rate measurement shown in Fig. 7(d). This suggests that Hadamard sequences with higher spatial frequencies result in better reconstruction quality in low scattering. Does this expectation still hold in high scattering?

Fig. 7. Reconstructions with different Hadamard sequences (or sampling rates) in low and high scattering conditions: We compare the DL reconstructions with different sampling rates of 18.3%, 9.5%, 2.7%, and 1.6% in scattering conditions of 7 FTU (a–d) and 40 FTU (e–h).

In high scattering (40 FTU), reconstructions with sampling rates of 18.3% and 9.5% degrade significantly compared to Figs. 7(a) and (b). On the other hand, reconstructions with a low sampling rate of 2.7% (Fig. 7(g)) or 1.6% (Fig. 7(h)) preserve their quality in high scattering conditions.

We further analyze the SPD measurements with different Hadamard frames in the high scattering condition in Fig. 8. We observe the same phenomenon as in the simulation (Fig. 1(a)), where certain Hadamard patterns deliver higher energies (displayed as absolute values). As mentioned earlier, the input to the neural network is the SPD ‘output’ ($I$) after the polarization correction from $I^{\parallel }$ (Fig. 8(a)) and $I^ {\perp }$ (Fig. 8(b)). The noise is amplified by the polarization correction, as shown in Fig. 8(c), which further reduces the SNR. Therefore, it is preferable to select the Hadamard sequences delivering high energies for sensing and reconstruction, which are more robust to noise and still provide valuable information. In other words, the sequence of Hadamard patterns delivering high energies should be chosen for high scattering conditions. At the same time, the selected sequence of Hadamard patterns has a lower sampling rate (2.7% or 1.6%), which reduces the acquisition time.

Fig. 8. Experimental SPD measurements with the first 3000 Hadamard frames: We display the SPD measurements of $I^\parallel$ (a) and $I^\perp$ (b). We perform the polarization correction on the two measurements and obtain the final SPD measurement (c) for post-processing. In each plot, the x-axis is the index of the Hadamard pattern and the y-axis represents the amplitude.

5. Discussion

In high scattering conditions, reconstructions from ghost imaging with high sampling rates (e.g., 18.3%) degrade significantly. However, high sampling rates are needed to reconstruct more details of the object. Can we introduce prior information about the scattering media, such as the SNR or scattering properties, into the training procedure? If so, the neural network may learn optimized reconstructions for high sampling rates in high scattering conditions.

To evaluate this hypothesis, we apply extra white Gaussian noise at SNRs of 25 dB and 10 dB to the previous 90000 STL-10 images, yielding a total of 270000 images for training. We retrain the neural network for the sampling rate of 18.3% using the same parameters, and test on ‘Lena’ and the experimental measurement.

For ‘Lena’, compared to the previous reconstructions in the upper row of Fig. 9(a), reconstructions using a network trained with noisy images improve greatly in high scattering conditions (low SNRs), as shown in the bottom row of Fig. 9(a). Moreover, the SSIM (Fig. 9(c)) and PSNR (Fig. 9(d)) of reconstructions using a network trained with noisy images (red lines) improve greatly over those of the previous reconstructions (dark lines).

Fig. 9. Reconstructions with previous and new training: Reconstructions for ‘Lena’ (a) and the experimental measurement (b) with the sampling rate of 18.3%: The upper row shows reconstructions with the previous training; the bottom row shows reconstructions with the new training. We also show SSIM (c) and PSNR (d) under different SNRs (scattering conditions) for ‘Lena’.

We also test on the experimental measurements. At the sampling rate of 18.3%, training with noisy images helps improve the performance in high scattering conditions, as shown in Fig. 9(b) (bottom row) compared to the previous results (Fig. 9(b), upper row).

In both simulation and experiments, training with noisy data increases reconstruction quality in high scattering conditions, while decreasing reconstruction quality in lower scattering conditions. This indicates that a good characterization of the measurement noise can lead to significant improvements in performance for deep learning based reconstructions [41]. In this work, we simulate SPD measurements under different scattering conditions by adding Gaussian noise, which may not represent the scattering properties very well. A physically based rendering could be used to generate a dataset that accounts for the scatterers' properties and power reductions, better representing the detector response when imaging through scattering media. With such a dataset, we could potentially further improve reconstruction quality at high sampling rates in high scattering conditions. Such a dataset may be valuable for imaging through scattering media, especially when using deep learning for reconstructions.

6. Conclusion

In this paper, we demonstrate that a deep neural network can improve reconstructions for imaging through scattering media with polarimetric ghost imaging, in both simulation and real experiments. Deep learning demonstrates faster and more robust reconstructions in high scattering conditions compared to iterative reconstructions. We believe the proposed method can serve as a fast and robust tool for imaging through scattering media, such as underwater imaging and imaging through tissue.

Appendix 1: Derivation for Eq. (5)

As mentioned previously, we have the degrees of polarization of the scatterers ($\beta _S$) and the object ($\beta _O$) and the two measurements ($I^{\perp }$ and $I^{\parallel }$):

$$\beta_O = (I_O^{\parallel}-I_O^{\perp})/(I_O^{\parallel}+I_O^{\perp})$$
$$\beta_S = (I_S^{\parallel}-I_S^{\perp})/(I_S^{\parallel}+I_S^{\perp})$$
$$I^{\perp} = I_O^{\perp}+I_S^{\perp}$$
$$I^{\parallel} = I_O^{\parallel}+I_S^{\parallel}$$

From Eq. (10) and Eq. (11), we obtain:

$$I_O^{\parallel} = (1+\beta_O)/(1-\beta_O)I_O^{\perp}$$
$$I_S^{\parallel} = (1+\beta_S)/(1-\beta_S)I_S^{\perp}$$

Substituting these into Eq. (13) gives:

$$I^{\parallel} = (1+\beta_O)/(1-\beta_O)I_O^{\perp}+(1+\beta_S)/(1-\beta_S)I_S^{\perp}$$

Combining Eq. (12) and Eq. (16), we obtain:

$$I_O^{\perp} = \frac{(1+\beta_S)(1-\beta_O)I^{\perp}-(1-\beta_S)(1-\beta_O)I^{\parallel}}{2(\beta_S-\beta_O)}$$

Combining Eq. (14) and Eq. (17), we obtain:

$$\begin{aligned}I_O &= I_O^{\parallel}+I_O^{\perp} = \frac{1+\beta_O}{1-\beta_O}I_O^{\perp}+I_O^{\perp} = \frac{2}{1-\beta_O}I_O^{\perp}= \frac{(1+\beta_S)I^{\perp}-(1-\beta_S)I^{\parallel}}{\beta_S-\beta_O} \\ &= \frac{1}{\beta_S-\beta_O}\cdot \left(I^{\perp}\cdot(1+\beta_S)-I^{\parallel}\cdot(1-\beta_S)\right) \end{aligned}$$
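The algebra above can also be verified symbolically, for example with SymPy:

```python
import sympy as sp

IOp, ISp, bO, bS = sp.symbols('I_O_perp I_S_perp beta_O beta_S', positive=True)
IO_par = (1 + bO) / (1 - bO) * IOp     # Eq. (14)
IS_par = (1 + bS) / (1 - bS) * ISp     # Eq. (15)
I_perp = IOp + ISp                     # Eq. (12)
I_par = IO_par + IS_par                # Eq. (13)
# right-hand side of Eq. (18) versus the definition I_O = I_O_par + I_O_perp
recovered = (I_perp * (1 + bS) - I_par * (1 - bS)) / (bS - bO)
difference = sp.simplify(recovered - (IO_par + IOp))   # -> 0
```

The difference simplifies to zero, confirming Eq. (18) for any $\beta_S \neq \beta_O$.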

Funding

Defense Advanced Research Projects Agency (REVEAL Program (HR0011-16-C-0028)); National Science Foundation CAREER Award (IIS-1453192); National Natural Science Foundation of China (61501077); Fundamental Research Funds for Central Universities of the Central South University (3132018186, 3132020202).

Disclosures

The authors declare no conflicts of interest.

References

1. A. Ishimaru, “Wave propagation and scattering in random media and rough surfaces,” Proc. IEEE 79(10), 1359–1366 (1991). [CrossRef]  

2. J. Bertolotti, E. G. Van Putten, C. Blum, A. Lagendijk, W. L. Vos, and A. P. Mosk, “Non-invasive imaging through opaque scattering layers,” Nature 491(7423), 232–234 (2012). [CrossRef]  

3. J. H. Shapiro, “Computational ghost imaging,” Phys. Rev. A 78(6), 061802 (2008). [CrossRef]  

4. Y. Bromberg, O. Katz, and Y. Silberberg, “Ghost imaging with a single detector,” Phys. Rev. A 79(5), 053840 (2009). [CrossRef]  

5. B. Sun, M. P. Edgar, R. Bowman, L. E. Vittert, S. Welsh, A. Bowman, and M. Padgett, “3D computational imaging with single-pixel detectors,” Science 340(6134), 844–847 (2013). [CrossRef]  

6. M. F. Duarte, M. A. Davenport, D. Takhar, J. N. Laska, T. Sun, K. F. Kelly, and R. G. Baraniuk, “Single-pixel imaging via compressive sampling,” IEEE Signal Process. Mag. 25(2), 83–91 (2008). [CrossRef]  

7. W. Gong and S. Han, “Correlated imaging in scattering media,” Opt. Lett. 36(3), 394–396 (2011). [CrossRef]  

8. M. Bina, D. Magatti, M. Molteni, A. Gatti, L. Lugiato, and F. Ferri, “Backscattering differential ghost imaging in turbid media,” Phys. Rev. Lett. 110(8), 083901 (2013). [CrossRef]  

9. E. Tajahuerce, V. Durán, P. Clemente, E. Irles, F. Soldevila, P. Andrés, and J. Lancis, “Image transmission through dynamic scattering media by single-pixel photodetection,” Opt. Express 22(14), 16945–16955 (2014). [CrossRef]  

10. M. Zhao, J. Uhlmann, M. Lanzagorta, J. Kanugo, A. Parashar, O. Jitrik, and S. E. Venegas-Andraca, “Passive ghost imaging using caustics modeling,” in Radar Sensor Technology XXI, vol. 10188 (International Society for Optics and Photonics, 2017), p. 101880H.

11. M. Le, G. Wang, H. Zheng, J. Liu, Y. Zhou, and Z. Xu, “Underwater computational ghost imaging,” Opt. Express 25(19), 22859–22868 (2017). [CrossRef]  

12. Y. Y. Schechner and N. Karpel, “Clear underwater vision,” in Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2004. CVPR 2004., vol. 1 (IEEE, 2004), pp. I.

13. F. Liu, P. Han, Y. Wei, K. Yang, S. Huang, X. Li, G. Zhang, L. Bai, and X. Shao, “Deeply seeing through highly turbid water by active polarization imaging,” Opt. Lett. 43(20), 4903–4906 (2018). [CrossRef]  

14. T. Treibitz and Y. Y. Schechner, “Active polarization descattering,” IEEE Trans. Pattern Anal. Mach. Intell. 31(3), 385–399 (2009). [CrossRef]  

15. S. Kartazayeva, X. Ni, and R. Alfano, “Backscattering target detection in a turbid medium by use of circularly and linearly polarized light,” Opt. Lett. 30(10), 1168–1170 (2005). [CrossRef]  

16. H. Wu, M. Zhao, F. Li, Z. Tian, and M. Zhao, “Underwater polarization-based single pixel imaging,” J. Soc. Inf. Disp. 28(2), 157–163 (2020). [CrossRef]  

17. D. Shi, S. Hu, and Y. Wang, “Polarimetric ghost imaging,” Opt. Lett. 39(5), 1231–1234 (2014). [CrossRef]  

18. Y. Zhu, J. Shi, Y. Yang, and G. Zeng, “Polarization difference ghost imaging,” Appl. Opt. 54(6), 1279–1284 (2015). [CrossRef]  

19. J. S. Tyo, D. L. Goldstein, D. B. Chenault, and J. A. Shaw, “Review of passive imaging polarimetry for remote sensing applications,” Appl. Opt. 45(22), 5453–5469 (2006). [CrossRef]  

20. J. Guan and J. Zhu, “Target detection in turbid medium using polarization-based range-gated technology,” Opt. Express 21(12), 14152–14158 (2013). [CrossRef]  

21. Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” Nature 521(7553), 436–444 (2015). [CrossRef]  

22. A. Lucas, M. Iliadis, R. Molina, and A. K. Katsaggelos, “Using deep neural networks for inverse problems in imaging: beyond analytical methods,” IEEE Signal Process. Mag. 35(1), 20–36 (2018). [CrossRef]  

23. Y. Li, Y. Xue, and L. Tian, “Deep speckle correlation: a deep learning approach toward scalable imaging through scattering media,” Optica 5(10), 1181–1190 (2018). [CrossRef]  

24. J. Xie, L. Xu, and E. Chen, “Image denoising and inpainting with deep neural networks,” in Advances in neural information processing systems, (2012), pp. 341–349.

25. C. A. Metzler, P. Schniter, A. Veeraraghavan, and R. G. Baraniuk, “prDeep: robust phase retrieval with a flexible deep network,” arXiv preprint arXiv:1803.00212 (2018).

26. C. F. Higham, R. Murray-Smith, M. J. Padgett, and M. P. Edgar, “Deep learning for real-time single-pixel video,” Sci. Rep. 8(1), 2369 (2018). [CrossRef]  

27. Y. He, G. Wang, G. Dong, S. Zhu, H. Chen, A. Zhang, and Z. Xu, “Ghost imaging based on deep learning,” Sci. Rep. 8(1), 6469 (2018). [CrossRef]  

28. M. Lyu, W. Wang, H. Wang, H. Wang, G. Li, N. Chen, and G. Situ, “Deep-learning-based ghost imaging,” Sci. Rep. 7(1), 17865 (2017). [CrossRef]  

29. L. Mullen, B. Cochenour, W. Rabinovich, R. Mahon, and J. Muth, “Backscatter suppression for underwater modulating retroreflector links using polarization discrimination,” Appl. Opt. 48(2), 328–337 (2009). [CrossRef]  

30. J. M. Bioucas-Dias and M. A. Figueiredo, “A new TwIST: two-step iterative shrinkage/thresholding algorithms for image restoration,” IEEE Trans. on Image Process. 16(12), 2992–3004 (2007). [CrossRef]  

31. M. E. Hanafy, M. C. Roggemann, and D. O. Guney, “Detailed effects of scattering and absorption by haze and aerosols in the atmosphere on the average point spread function of an imaging system,” J. Opt. Soc. Am. A 31(6), 1312–1319 (2014). [CrossRef]  

32. B. B. Dor, A. Devir, G. Shaviv, P. Bruscaglioni, P. Donelli, and A. Ismaelli, “Atmospheric scattering effect on spatial resolution of imaging systems,” J. Opt. Soc. Am. A 14(6), 1329–1337 (1997). [CrossRef]  

33. A. Coates, A. Ng, and H. Lee, “An analysis of single-layer networks in unsupervised feature learning,” in Proceedings of the fourteenth international conference on artificial intelligence and statistics, (2011), pp. 215–223.

34. N. Ducros, A. L. Mur, and F. Peyrin, “A completion network for reconstruction from compressed acquisition,” in 2020 IEEE 17th International Symposium on Biomedical Imaging (ISBI2020), (2020).

35. K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proceedings of the IEEE conference on computer vision and pattern recognition (2016), pp. 770–778.

36. D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” arXiv preprint arXiv:1412.6980 (2014).

37. NVIDIA, “TITAN X,” https://www.nvidia.com/en-us/geforce/products/10series/titan-x-pascal/ (2019).

38. M. Sadar, “Turbidity instrumentation–an overview of today’s available technology,” in Turbidity and Other Sediment Surrogates Workshop, (Federal Interagency Subcommittee on Sedimentation, Reno, NV, 2002).

39. M. Zhao, J. Liu, S. Chen, C. Kang, and W. Xu, “Single-pixel imaging with deterministic complex-valued sensing matrices,” J Eur. Opt. Soc-Rapid. 10, 15041 (2015). [CrossRef]  

40. F. Li, H. Chen, A. Pediredla, C. Yeh, K. He, A. Veeraraghavan, and O. Cossairt, “CS-ToF: high-resolution compressive time-of-flight imaging,” Opt. Express 25(25), 31096–31110 (2017). [CrossRef]  

41. H. Gupta, K. H. Jin, H. Q. Nguyen, M. T. McCann, and M. Unser, “CNN-based projected gradient descent for consistent CT image reconstruction,” IEEE Trans. Med. Imag. 37(6), 1440–1453 (2018). [CrossRef]  

[Crossref]

Tian, L.

Tian, Z.

H. Wu, M. Zhao, F. Li, Z. Tian, and M. Zhao, “Underwater polarization-based single pixel imaging,” J. Soc. Inf. Disp. 28(2), 157–163 (2020).
[Crossref]

Treibitz, T.

T. Treibitz and Y. Y. Schechner, “Active polarization descattering,” IEEE Trans. Pattern Anal. Mach. Intell. 31(3), 385–399 (2009).
[Crossref]

Tyo, J. S.

Uhlmann, J.

M. Zhao, J. Uhlmann, M. Lanzagorta, J. Kanugo, A. Parashar, O. Jitrik, and S. E. Venegas-Andraca, “Passive ghost imaging using caustics modeling,” in Radar Sensor Technology XXI, vol. 10188 (International Society for Optics and Photonics, 2017), p. 101880H.

Unser, M.

H. Gupta, K. H. Jin, H. Q. Nguyen, M. T. McCann, and M. Unser, “Cnn-based projected gradient descent for consistent ct image reconstruction,” IEEE Trans. Med. Imag. 37(6), 1440–1453 (2018).
[Crossref]

Van Putten, E. G.

J. Bertolotti, E. G. Van Putten, C. Blum, A. Lagendijk, W. L. Vos, and A. P. Mosk, “Non-invasive imaging through opaque scattering layers,” Nature 491(7423), 232–234 (2012).
[Crossref]

Veeraraghavan, A.

F. Li, H. Chen, A. Pediredla, C. Yeh, K. He, A. Veeraraghavan, and O. Cossairt, “Cs-tof: High-resolution compressive time-of-flight imaging,” Opt. Express 25(25), 31096–31110 (2017).
[Crossref]

C. A. Metzler, P. Schniter, A. Veeraraghavan, and R. G. Baraniuk, “prdeep: Robust phase retrieval with a flexible deep network,” arXiv preprint arXiv:1803.00212 (2018).

Venegas-Andraca, S. E.

M. Zhao, J. Uhlmann, M. Lanzagorta, J. Kanugo, A. Parashar, O. Jitrik, and S. E. Venegas-Andraca, “Passive ghost imaging using caustics modeling,” in Radar Sensor Technology XXI, vol. 10188 (International Society for Optics and Photonics, 2017), p. 101880H.

Vittert, L. E.

B. Sun, M. P. Edgar, R. Bowman, L. E. Vittert, S. Welsh, A. Bowman, and M. Padgett, “3d computational imaging with single-pixel detectors,” Science 340(6134), 844–847 (2013).
[Crossref]

Vos, W. L.

J. Bertolotti, E. G. Van Putten, C. Blum, A. Lagendijk, W. L. Vos, and A. P. Mosk, “Non-invasive imaging through opaque scattering layers,” Nature 491(7423), 232–234 (2012).
[Crossref]

Wang, G.

Y. He, G. Wang, G. Dong, S. Zhu, H. Chen, A. Zhang, and Z. Xu, “Ghost imaging based on deep learning,” Sci. Rep. 8(1), 6469 (2018).
[Crossref]

M. Le, G. Wang, H. Zheng, J. Liu, Y. Zhou, and Z. Xu, “Underwater computational ghost imaging,” Opt. Express 25(19), 22859–22868 (2017).
[Crossref]

Wang, H.

M. Lyu, W. Wang, H. Wang, H. Wang, G. Li, N. Chen, and G. Situ, “Deep-learning-based ghost imaging,” Sci. Rep. 7(1), 17865 (2017).
[Crossref]

M. Lyu, W. Wang, H. Wang, H. Wang, G. Li, N. Chen, and G. Situ, “Deep-learning-based ghost imaging,” Sci. Rep. 7(1), 17865 (2017).
[Crossref]

Wang, W.

M. Lyu, W. Wang, H. Wang, H. Wang, G. Li, N. Chen, and G. Situ, “Deep-learning-based ghost imaging,” Sci. Rep. 7(1), 17865 (2017).
[Crossref]

Wang, Y.

Wei, Y.

Welsh, S.

B. Sun, M. P. Edgar, R. Bowman, L. E. Vittert, S. Welsh, A. Bowman, and M. Padgett, “3d computational imaging with single-pixel detectors,” Science 340(6134), 844–847 (2013).
[Crossref]

Wu, H.

H. Wu, M. Zhao, F. Li, Z. Tian, and M. Zhao, “Underwater polarization-based single pixel imaging,” J. Soc. Inf. Disp. 28(2), 157–163 (2020).
[Crossref]

Xie, J.

J. Xie, L. Xu, and E. Chen, “Image denoising and inpainting with deep neural networks,” in Advances in neural information processing systems, (2012), pp. 341–349.

Xu, L.

J. Xie, L. Xu, and E. Chen, “Image denoising and inpainting with deep neural networks,” in Advances in neural information processing systems, (2012), pp. 341–349.

Xu, W.

M. Zhao, J. Liu, S. Chen, C. Kang, and W. Xu, “Single-pixel imaging with deterministic complex-valued sensing matrices,” J Eur. Opt. Soc-Rapid. 10, 15041 (2015).
[Crossref]

Xu, Z.

Y. He, G. Wang, G. Dong, S. Zhu, H. Chen, A. Zhang, and Z. Xu, “Ghost imaging based on deep learning,” Sci. Rep. 8(1), 6469 (2018).
[Crossref]

M. Le, G. Wang, H. Zheng, J. Liu, Y. Zhou, and Z. Xu, “Underwater computational ghost imaging,” Opt. Express 25(19), 22859–22868 (2017).
[Crossref]

Xue, Y.

Yang, K.

Yang, Y.

Yeh, C.

Zeng, G.

Zhang, A.

Y. He, G. Wang, G. Dong, S. Zhu, H. Chen, A. Zhang, and Z. Xu, “Ghost imaging based on deep learning,” Sci. Rep. 8(1), 6469 (2018).
[Crossref]

Zhang, G.

Zhang, X.

K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proceedings of the IEEE conference on computer vision and pattern recognition (2016), pp. 770–778.

Zhao, M.

H. Wu, M. Zhao, F. Li, Z. Tian, and M. Zhao, “Underwater polarization-based single pixel imaging,” J. Soc. Inf. Disp. 28(2), 157–163 (2020).
[Crossref]

H. Wu, M. Zhao, F. Li, Z. Tian, and M. Zhao, “Underwater polarization-based single pixel imaging,” J. Soc. Inf. Disp. 28(2), 157–163 (2020).
[Crossref]

M. Zhao, J. Liu, S. Chen, C. Kang, and W. Xu, “Single-pixel imaging with deterministic complex-valued sensing matrices,” J Eur. Opt. Soc-Rapid. 10, 15041 (2015).
[Crossref]

M. Zhao, J. Uhlmann, M. Lanzagorta, J. Kanugo, A. Parashar, O. Jitrik, and S. E. Venegas-Andraca, “Passive ghost imaging using caustics modeling,” in Radar Sensor Technology XXI, vol. 10188 (International Society for Optics and Photonics, 2017), p. 101880H.

Zheng, H.

Zhou, Y.

Zhu, J.

Zhu, S.

Y. He, G. Wang, G. Dong, S. Zhu, H. Chen, A. Zhang, and Z. Xu, “Ghost imaging based on deep learning,” Sci. Rep. 8(1), 6469 (2018).
[Crossref]

Zhu, Y.

Appl. Opt. (3)

IEEE Signal Process. Mag. (2)

A. Lucas, M. Iliadis, R. Molina, and A. K. Katsaggelos, “Using deep neural networks for inverse problems in imaging: beyond analytical methods,” IEEE Signal Process. Mag. 35(1), 20–36 (2018).
[Crossref]

M. F. Duarte, M. A. Davenport, D. Takhar, J. N. Laska, T. Sun, K. F. Kelly, and R. G. Baraniuk, “Single-pixel imaging via compressive sampling,” IEEE Signal Process. Mag. 25(2), 83–91 (2008).
[Crossref]

IEEE Trans. Med. Imag. (1)

H. Gupta, K. H. Jin, H. Q. Nguyen, M. T. McCann, and M. Unser, “Cnn-based projected gradient descent for consistent ct image reconstruction,” IEEE Trans. Med. Imag. 37(6), 1440–1453 (2018).
[Crossref]

IEEE Trans. on Image Process. (1)

J. M. Bioucas-Dias and M. A. Figueiredo, “A new twist: Two-step iterative shrinkage/thresholding algorithms for image restoration,” IEEE Trans. on Image Process. 16(12), 2992–3004 (2007).
[Crossref]

IEEE Trans. Pattern Anal. Mach. Intell. (1)

T. Treibitz and Y. Y. Schechner, “Active polarization descattering,” IEEE Trans. Pattern Anal. Mach. Intell. 31(3), 385–399 (2009).
[Crossref]

J Eur. Opt. Soc-Rapid. (1)

M. Zhao, J. Liu, S. Chen, C. Kang, and W. Xu, “Single-pixel imaging with deterministic complex-valued sensing matrices,” J Eur. Opt. Soc-Rapid. 10, 15041 (2015).
[Crossref]

J. Opt. Soc. Am. A (2)

J. Soc. Inf. Disp. (1)

H. Wu, M. Zhao, F. Li, Z. Tian, and M. Zhao, “Underwater polarization-based single pixel imaging,” J. Soc. Inf. Disp. 28(2), 157–163 (2020).
[Crossref]

Nature (2)

J. Bertolotti, E. G. Van Putten, C. Blum, A. Lagendijk, W. L. Vos, and A. P. Mosk, “Non-invasive imaging through opaque scattering layers,” Nature 491(7423), 232–234 (2012).
[Crossref]

Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” Nature 521(7553), 436–444 (2015).
[Crossref]

Opt. Express (4)

Opt. Lett. (4)

Optica (1)

Phys. Rev. A (2)

J. H. Shapiro, “Computational ghost imaging,” Phys. Rev. A 78(6), 061802 (2008).
[Crossref]

Y. Bromberg, O. Katz, and Y. Silberberg, “Ghost imaging with a single detector,” Phys. Rev. A 79(5), 053840 (2009).
[Crossref]

Phys. Rev. Lett. (1)

M. Bina, D. Magatti, M. Molteni, A. Gatti, L. Lugiato, and F. Ferri, “Backscattering differential ghost imaging in turbid media,” Phys. Rev. Lett. 110(8), 083901 (2013).
[Crossref]

Proc. IEEE (1)

A. Ishimaru, “Wave propagation and scattering in random media and rough surfaces,” Proc. IEEE 79(10), 1359–1366 (1991).
[Crossref]

Sci. Rep. (3)

C. F. Higham, R. Murray-Smith, M. J. Padgett, and M. P. Edgar, “Deep learning for real-time single-pixel video,” Sci. Rep. 8(1), 2369 (2018).
[Crossref]

Y. He, G. Wang, G. Dong, S. Zhu, H. Chen, A. Zhang, and Z. Xu, “Ghost imaging based on deep learning,” Sci. Rep. 8(1), 6469 (2018).
[Crossref]

M. Lyu, W. Wang, H. Wang, H. Wang, G. Li, N. Chen, and G. Situ, “Deep-learning-based ghost imaging,” Sci. Rep. 7(1), 17865 (2017).
[Crossref]

Science (1)

B. Sun, M. P. Edgar, R. Bowman, L. E. Vittert, S. Welsh, A. Bowman, and M. Padgett, “3d computational imaging with single-pixel detectors,” Science 340(6134), 844–847 (2013).
[Crossref]

Other (10)

M. Zhao, J. Uhlmann, M. Lanzagorta, J. Kanugo, A. Parashar, O. Jitrik, and S. E. Venegas-Andraca, “Passive ghost imaging using caustics modeling,” in Radar Sensor Technology XXI, vol. 10188 (International Society for Optics and Photonics, 2017), p. 101880H.

Y. Y. Schechner and N. Karpel, “Clear underwater vision,” in Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2004. CVPR 2004., vol. 1 (IEEE, 2004), pp. I.

J. Xie, L. Xu, and E. Chen, “Image denoising and inpainting with deep neural networks,” in Advances in neural information processing systems, (2012), pp. 341–349.

C. A. Metzler, P. Schniter, A. Veeraraghavan, and R. G. Baraniuk, “prdeep: Robust phase retrieval with a flexible deep network,” arXiv preprint arXiv:1803.00212 (2018).

A. Coates, A. Ng, and H. Lee, “An analysis of single-layer networks in unsupervised feature learning,” in Proceedings of the fourteenth international conference on artificial intelligence and statistics, (2011), pp. 215–223.

N. Ducros, A. L. Mur, and F. Peyrin, “A completion network for reconstruction from compressed acquisition,” in 2020 IEEE 17th International Symposium on Biomedical Imaging (ISBI2020), (2020).

K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proceedings of the IEEE conference on computer vision and pattern recognition (2016), pp. 770–778.

D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” arXiv preprint arXiv:1412.6980 (2014).

NVIDIA, “Titanx,” https://www.nvidia.com/en-us/geforce/products/10series/titan-x-pascal/ (2019).

M. Sadar, “Turbidity instrumentation–an overview of today’s available technology,” in Turbidity and Other Sediment Surrogates Workshop, (Federal Interagency Subcommittee on Sedimentation, Reno, NV, 2002).



Figures (9)

Fig. 1. (a) We simulate SPD measurements with the first 3000 Hadamard frames (ordered by delivered energy, from high to low), averaged over 200 natural images. The red line marks the average intensity and the black line represents the standard deviation. The x-axis indexes the Hadamard patterns and the y-axis represents amplitude. We display examples of Hadamard patterns with high energy in (b) and low energy in (c).
Fig. 2. Proposed convolutional neural network: The input to the network is the SPD measurements after polarization correction. The first layer is a fully connected (FC) layer followed by a rectified linear unit (ReLU). After the FC layer, there are multiple convolutional layers (Conv2d) and an eighteen-layer residual network (ResNet). A final convolutional layer generates the output. BN: batch normalization.
Fig. 3. Simulated reconstructions using deep learning and compressed sensing in low and high scattering conditions: In a low scattering condition (SNR = 25 dB), we compare DL and CS reconstructions at sampling rates of 18.3% (a, c) and 2.7% (b, d). In a high scattering condition (SNR = 10 dB), we likewise compare DL and CS reconstructions at sampling rates of 18.3% (e, g) and 2.7% (f, h). A quantitative comparison is shown in the table.
Fig. 4. Simulated reconstructions using DL: We reconstruct ‘Cameraman’ with DL at different sampling rates (18.3%, 9.5%, 2.7%, and 1.6%) under different scattering conditions (SNR = 25, 20, 15, and 10 dB), as shown in (a). We also quantitatively compare performance in terms of SSIM (b) and PSNR (c).
Fig. 5. Experimental setup: (a) The illumination beam is spatially encoded with a Hadamard pattern displayed on the SLM and then polarized with a linear polarizer. The reflection is collected by an SPD with another linear polarizer in front. For each Hadamard pattern, we acquire two measurements by rotating the polarizer in front of the SPD. The object is a toy skull with a rough surface (b). We demonstrate the CS reconstruction (c) at a sampling rate of 1 (full sampling) in clear water.
Fig. 6. DL and CS reconstructions: The scattering levels are 7 FTU, 20 FTU, 32 FTU, and 40 FTU. We perform reconstructions at sampling rates of 18.3% (a) and 2.7% (b). Upper rows are DL reconstructions; bottom rows are CS reconstructions. Zoom in for better visualization.
Fig. 7. Reconstructions with different Hadamard sequences (i.e., sampling rates) in low and high scattering conditions: We compare DL reconstructions at sampling rates of 18.3%, 9.5%, 2.7%, and 1.6% in scattering conditions of 7 FTU (a–d) and 40 FTU (e–h).
Fig. 8. Experimental SPD measurements with the first 3000 Hadamard frames: We display the SPD measurements of $I^\parallel$ (a) and $I^\perp$ (b). We perform the polarization correction on the two measurements to obtain the final SPD measurement (c) used for post-processing. In each plot, the x-axis is the index of the Hadamard pattern and the y-axis represents amplitude.
Fig. 9. Reconstructions with previous and new training: Reconstructions of ‘Lena’ (a) and of the experimental measurement (b) at a sampling rate of 18.3%. The upper row shows reconstructions with the previous training; the bottom row shows reconstructions with the new training. We also show SSIM (c) and PSNR (d) under different SNRs (scattering conditions) for ‘Lena’.
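The measurement process described in the Fig. 1 caption can be sketched numerically. The sketch below is illustrative only: it assumes a Sylvester-ordered Walsh–Hadamard basis and substitutes random nonnegative test images for the 200 natural images used in the paper.

```python
import numpy as np

def hadamard(n):
    """Sylvester-construction Hadamard matrix of order n (n must be a power of two)."""
    H = np.array([[1]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

# 32x32 scenes -> 1024 Hadamard patterns; each row of H reshapes to one 2D pattern
N = 32
H = hadamard(N * N)

# Stand-in for the natural-image set: 200 random nonnegative "scenes"
rng = np.random.default_rng(0)
images = rng.random((200, N * N))

# Each SPD (bucket) measurement is the inner product of one pattern with the scene
measurements = images @ H.T            # shape (200, 1024)

# Per-pattern statistics, analogous to the curves plotted in Fig. 1
# (red line: mean intensity, black line: standard deviation)
mean_per_pattern = measurements.mean(axis=0)
std_per_pattern = measurements.std(axis=0)

# The all-ones (DC) pattern collects the most energy from nonnegative scenes,
# so an energy-based ordering of patterns would place it first
assert np.argmax(np.abs(mean_per_pattern)) == 0
```

A small 32×32 basis keeps the sketch fast; the point is that sorting patterns by delivered energy concentrates most of the signal in the first frames, which is what motivates truncating the Hadamard sequence at low sampling rates.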

Equations (18)


$$I_m = \int_0^{\Delta t}\int_0^{N_y}\int_0^{N_x} M_m(x,y)\,\big(S(x,y,t)+O(x,y,t)\big)\,dx\,dy\,dt + n_m$$

$$I^{\parallel,\perp} = I_S^{\parallel,\perp} + I_O^{\parallel,\perp}$$

$$I_S^{\parallel,\perp} = \int_0^{\Delta t}\int_0^{N_y}\int_0^{N_x} M(x,y)\,S^{\parallel,\perp}(x,y,t)\,dx\,dy\,dt$$

$$I_O^{\parallel,\perp} = \int_0^{\Delta t}\int_0^{N_y}\int_0^{N_x} M(x,y)\,O^{\parallel,\perp}(x,y,t)\,dx\,dy\,dt$$

$$I_O = \frac{1}{\beta_S-\beta_O}\big(I^{\perp}(1+\beta_S) - I^{\parallel}(1-\beta_S)\big)$$

$$\sigma_{I_O}^2 = \left(\frac{1+\beta_S}{\beta_S-\beta_O}\right)^2\sigma_{I^{\perp}}^2 + \left(\frac{1-\beta_S}{\beta_S-\beta_O}\right)^2\sigma_{I^{\parallel}}^2$$

$$\mathbf{I}_O = \mathbf{A}\,\mathbf{O}$$

$$\hat{\mathbf{O}} = \arg\min_{\mathbf{O}} \|\mathbf{I}_O - \mathbf{A}\,\mathbf{O}\|_2^2 + \lambda\|\Phi(\mathbf{O})\|_1$$

$$L = \frac{1}{n}\sum_{i=1}^{n}\big(f(\mathbf{I}_O^i,\Theta) - \mathbf{O}^i\big)^2$$

$$\beta_O = (I_O^{\parallel} - I_O^{\perp})/(I_O^{\parallel} + I_O^{\perp})$$

$$\beta_S = (I_S^{\parallel} - I_S^{\perp})/(I_S^{\parallel} + I_S^{\perp})$$

$$I^{\parallel} = I_O^{\parallel} + I_S^{\parallel}$$

$$I^{\perp} = I_O^{\perp} + I_S^{\perp}$$

$$I_O^{\parallel} = \frac{1+\beta_O}{1-\beta_O}\,I_O^{\perp}$$

$$I_S^{\parallel} = \frac{1+\beta_S}{1-\beta_S}\,I_S^{\perp}$$

$$I^{\parallel} = \frac{1+\beta_O}{1-\beta_O}\,I_O^{\perp} + \frac{1+\beta_S}{1-\beta_S}\,I_S^{\perp}$$

$$I_O^{\perp} = \frac{(1+\beta_S)(1-\beta_O)\,I^{\perp} - (1-\beta_S)(1-\beta_O)\,I^{\parallel}}{2(\beta_S-\beta_O)}$$

$$I_O = I_O^{\parallel} + I_O^{\perp} = \frac{1+\beta_O}{1-\beta_O}\,I_O^{\perp} + I_O^{\perp} = \frac{2}{1-\beta_O}\,I_O^{\perp} = \frac{(1+\beta_S)\,I^{\perp} - (1-\beta_S)\,I^{\parallel}}{\beta_S-\beta_O} = \frac{1}{\beta_S-\beta_O}\big(I^{\perp}(1+\beta_S) - I^{\parallel}(1-\beta_S)\big)$$
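The polarization correction above can be sanity-checked numerically: given the degrees of polarization $\beta_S$ and $\beta_O$ and the two polarizer measurements $I^\parallel$ and $I^\perp$, the object term $I_O$ is recovered exactly in the noise-free case. A minimal sketch, with made-up values for every quantity:

```python
import numpy as np

rng = np.random.default_rng(1)

# Ground-truth object and backscatter totals (sum over both polarization channels)
I_O_true = rng.random(1000) * 5.0     # object reflections for 1000 pattern measurements
I_S_true = np.full(1000, 8.0)         # roughly constant backscatter

beta_O = 0.1   # object degree of polarization (weak, rough surface)
beta_S = 0.8   # backscatter degree of polarization (strong)

# Split each total into parallel/perpendicular components:
#   I^par = (1 + beta)/2 * I,   I^perp = (1 - beta)/2 * I
I_par  = (1 + beta_O) / 2 * I_O_true + (1 + beta_S) / 2 * I_S_true
I_perp = (1 - beta_O) / 2 * I_O_true + (1 - beta_S) / 2 * I_S_true

# Polarization correction:
#   I_O = ((1 + beta_S) * I^perp - (1 - beta_S) * I^par) / (beta_S - beta_O)
I_O_est = ((1 + beta_S) * I_perp - (1 - beta_S) * I_par) / (beta_S - beta_O)

assert np.allclose(I_O_est, I_O_true)  # backscatter term cancels exactly
```

With measurement noise present, the variance expression above shows the trade-off: the recovered $I_O$ becomes noisier as $\beta_S - \beta_O$ shrinks, i.e., when the backscatter and object polarizations are similar.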
