Optica Publishing Group

Compressive phase object classification using single-pixel digital holography

Open Access

Abstract

A single-pixel camera (SPC) is a computational imaging system that obtains compressed signals of a target scene using a single-pixel detector. The compressed signals can be directly used for image classification, thereby bypassing image reconstruction, which is computationally intensive and requires a high measurement rate. Here, we extend this direct inference to phase object classification using single-pixel digital holography (SPDH). Our method obtains compressed measurements of target complex amplitudes using SPDH and trains a classifier using those measurements for phase object classification. Furthermore, we present a joint optimization of the sampling patterns used in SPDH and a classifier to improve classification accuracy. The proposed method successfully classified phase object images of handwritten digits from the MNIST database, which is challenging for SPCs that can only capture intensity images.

© 2022 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

A single-pixel camera (SPC) is a computational imaging system that obtains images using a single-pixel detector [1–3]. SPCs are particularly useful when image sensors are unfeasible owing to their cost, light sensitivity, or response timing. Their applications include terahertz imaging [4–6], short-wave infrared imaging [7–10], microscopy [11–13], imaging through scattering media [14–18], and three-dimensional sensing [19–21]. SPCs have a spatially multiplexed camera architecture that optically acquires linear projections of a target scene. Consequently, highly compressed signals of the target scene can be obtained, which enables image acquisition and reconstruction at sub-Nyquist rates.

Direct inference using measurements of SPCs, which eliminates the need for image reconstruction, has recently attracted significant attention. In several imaging applications, solving the inference tasks on images takes priority over high-quality image reconstruction. Moreover, direct inference from SPC measurements can bypass image reconstruction, which is computationally intensive and requires a high measurement rate. This type of inference in the compressed-signal domain has been investigated in the field of compressed sensing [22–28] and is sometimes called compressive classification [22], compressed learning [24–26], or compressive signal processing [23]. Recent studies have demonstrated image classification [29–35], face recognition [36–38], object tracking [39–41], and cell classification [42] using SPC measurements without image reconstruction.

However, most previous studies on SPC-based direct inference classified target objects based on their optical intensity rather than their phase distribution. Phase distributions reflect the optical path length of a target scene, which is essential for analyzing optically transparent objects, such as biological cells or gases, and the three-dimensional structure of targets [43]. Several SPC-based phase imaging techniques have been proposed, such as those using Mach-Zehnder [44–48] or Michelson interferometers [49,50], common-path interferometers [13,51–54], coherent diffractive imaging with phase retrieval [55,56], the transport-of-intensity equation [57], wavefront sampling [58], or machine learning [59]. However, to the best of our knowledge, these techniques have rarely been used for compressive classification.

Here, we extend SPC-based direct inference to phase object classification using single-pixel digital holography (SPDH) [44–46,49,50]. Phase objects primarily induce phase shifts and minimally affect light intensity; their classification is therefore difficult using SPCs that only capture intensity images. Instead, we use SPDH, which is a complex-amplitude imaging technique that combines an SPC architecture with phase-shifting digital holography. Our method obtains compressed measurements of target complex amplitudes using SPDH and trains a classifier on those measurements for phase object classification. Furthermore, we present a joint optimization of the sampling patterns and classifier used in SPDH-based phase object classification to improve classification accuracy. We test our method, both in simulation and experimentally, on a phase object image dataset constructed from handwritten digits in the MNIST database [60]. The results show that our method can classify phase objects using a simple neural network. Furthermore, they confirm that the proposed joint optimization improves classification accuracy at low measurement rates and increases noise robustness.

2. Methods

2.1 Single-pixel digital holography

SPDH is a complex-amplitude imaging method that combines a single-pixel camera and digital holography. Figure 1 shows a schematic of an SPDH setup. The laser beam is divided into two beams. A spatial light modulator (SLM) modulates one beam in the object arm to form a set of structured sampling patterns on the object plane. The light transmitted through the object interferes with the other beam in the reference arm, thereby generating an interference pattern. The single-pixel detector measures the spatial sum of the interference pattern through a lens. Suppose that the object, sampling patterns, and reference beam are pixelated into $D$ pixels. Let $\boldsymbol {x} \in \mathbb {C}^{D}$, $e^{j\phi }\boldsymbol {h}_k \in \mathbb {C}^{D} \,(k=1,\ldots,K)$, and $\boldsymbol {r} \in \mathbb {C}^{D}$ be the complex amplitude of the object, $k$th sampling pattern with $\phi$ phase shift on the object plane, and complex amplitude of the reference beam at the same distance from the single-pixel detector as the object, respectively. The intensity measured with the $k$th sampling pattern is expressed as:

$$y^{\phi}_{k} = \sum_{d=1}^{D} |e^{j\phi} h_{kd} x_d + r_d|^{2},$$
where $h_{kd}$, $x_d$, and $r_d$ are the $d$th elements of $\boldsymbol {h}_k$, $\boldsymbol {x}$, and $\boldsymbol {r}$, respectively. The vector form of the measurements $\boldsymbol {y}^{\phi } = [y^{\phi }_1, \ldots, y^{\phi }_K]^{\top }$ can be expressed as:
$$\boldsymbol{y}^{\phi} = \|\boldsymbol{r}\|_2^{2} + |\boldsymbol{H}|^{2} |\boldsymbol{x}|^{2} + 2\operatorname{Re}(\boldsymbol{r}^{*} \circ e^{j\phi}\boldsymbol{Hx}) = \boldsymbol{i}_0 + 2\operatorname{Re}(\boldsymbol{r}^{*} \circ e^{j\phi}\boldsymbol{Hx}),$$
where $\boldsymbol{H} = [\boldsymbol{h}_1, \ldots, \boldsymbol{h}_K]^{\top}$, $|\cdot|^{2}$ is the element-wise squared absolute value, $\circ$ is the Hadamard product, and $\operatorname{Re}(\cdot)$ takes the real part of its argument. The first term $\boldsymbol{i}_0$ is the zero-order term, and the second term contains the phase information of the object. We retrieve the complex measurements $\boldsymbol{z} = \boldsymbol{Hx}$ using a phase-shifting technique. If we use a 4-step phase-shifting technique with phase shifts $\phi=0$, $\pi/2$, $\pi$, and $3\pi/2$, the complex measurements are retrieved as follows:
$$\boldsymbol{z} = \boldsymbol{Hx} = \frac{1}{4} \left[(\boldsymbol{y}^{0} - \boldsymbol{y}^{\pi}) + j (\boldsymbol{y}^{3\pi/2} - \boldsymbol{y}^{\pi/2})\right],$$
where we assume $\boldsymbol {r}=\boldsymbol {1}$ for simplicity. The complex amplitude $\boldsymbol {x}$ can be computationally reconstructed from the complex measurements $\boldsymbol {z}$ by solving the linear system of equations. When Hadamard or Fourier patterns are used for the sensing matrix $\boldsymbol {H}$, the image reconstruction can be performed using the fast inverse transform [61]. We can also use optimization methods, such as $\ell _1$ regularization methods, for reconstruction when the target object is sparse or compressible.
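As a sanity check, the measurement model of Eq. (1) and the 4-step retrieval of Eq. (3) can be simulated in a few lines of NumPy. This is a sketch with an illustrative random phase-only object and random phase patterns, assuming $\boldsymbol{r}=\boldsymbol{1}$ as above:

```python
import numpy as np

rng = np.random.default_rng(0)
D, K = 64, 16                                           # object pixels and number of patterns

x = np.exp(1j * rng.uniform(0, 2 * np.pi, D))           # illustrative phase-only object
H = np.exp(1j * rng.uniform(0, 2 * np.pi, (K, D)))      # illustrative phase-only patterns

def intensity(phi):
    """Single-pixel intensities y_k^phi = sum_d |e^{j phi} h_kd x_d + r_d|^2 with r = 1 (Eq. 1)."""
    return np.sum(np.abs(np.exp(1j * phi) * (H * x) + 1.0) ** 2, axis=1)

# 4-step phase shifting (Eq. 3): z = [(y^0 - y^pi) + j (y^{3pi/2} - y^{pi/2})] / 4
y0, y1, y2, y3 = (intensity(p) for p in (0.0, np.pi / 2, np.pi, 3 * np.pi / 2))
z = ((y0 - y2) + 1j * (y3 - y1)) / 4

assert np.allclose(z, H @ x)                            # retrieved measurements equal Hx
```

The zero-order term and the conjugate term cancel in the phase-shifted differences, leaving exactly $\boldsymbol{z}=\boldsymbol{Hx}$.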


Fig. 1. Schematic of single-pixel digital holography. The laser beam is divided into two beams by a beam splitter (BS). A spatial light modulator (SLM) modulates the beam in the object arm and forms sampling patterns on the object plane. A single-pixel detector measures the spatial sum of the interference pattern between the object and reference beams.


2.2 Compressive phase object classification

We introduce phase object classification using compressive SPDH measurements to eliminate the need for image reconstruction. SPC measurements are a linear projection of a target scene and can be directly used for image classification or object detection without image reconstruction. We extend this direct inference to phase object classification using SPDH. SPDH obtains compressed measurements of target complex amplitudes and is sensitive to the phase distribution; thus, it can be used for phase object classification, which is impossible with SPCs that capture only the intensity distribution of an object.

Figure 2 shows a schematic of compressive phase object classification using SPDH. Phase objects are measured using SPDH, and the measurements are directly input to the classifier, which outputs a class label. The classifier should be trained on data comprising SPDH measurements so that it outputs the correct labels. This training set can be acquired using an SPDH optical setup, or it may be synthesized via simulations using phase object images and the SPDH measurement model (Eq. 2). In this study, we trained the classifier $\mathcal {N}_W$ with learnable parameters $W$ as follows:

$$\hat{W} = \mathop{\arg\,\min}\limits_{W} \frac{1}{N} \sum_{i=1}^{N} \mathcal{L}(\mathcal{N}_{W}(\operatorname{Re}(\boldsymbol{z}_i)), \boldsymbol{d}_i),$$
where $\boldsymbol {z}_i$ and $\boldsymbol {d}_i$ are complex measurements of SPDH and their corresponding labels in the dataset, and $\mathcal {L}$ is a loss function. Although the input to the classifier can be the complex measurements $\boldsymbol {z}$, here, we used $\operatorname {Re}(\boldsymbol {z}) = \frac {1}{4}(\boldsymbol {y}^{0} - \boldsymbol {y}^{\pi })$, which is the real part of the complex measurements retrieved using two-step phase-shifting to reduce the number of measurements and measurement time. We conducted preliminary experiments and confirmed that classification using two-step phase-shifting provided sufficient accuracy both in the simulation and our optical setup. Note that we can also use unprocessed intensity measurements $\boldsymbol {y}^{\phi }$ as the classifier input; however, the classification accuracy was low in our setup owing to the zero-order term.
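The two-step variant used here can be sketched similarly: only the $\phi=0$ and $\phi=\pi$ intensities are measured, and their scaled difference recovers $\operatorname{Re}(\boldsymbol{z})$. A NumPy sketch with illustrative random data, again assuming $\boldsymbol{r}=\boldsymbol{1}$:

```python
import numpy as np

rng = np.random.default_rng(1)
D, K = 64, 16
x = np.exp(1j * rng.uniform(0, 2 * np.pi, D))           # illustrative phase object
H = np.exp(1j * rng.uniform(0, 2 * np.pi, (K, D)))      # illustrative sampling patterns

def intensity(phi):
    # y_k^phi with r = 1, as in Eq. (1)
    return np.sum(np.abs(np.exp(1j * phi) * (H * x) + 1.0) ** 2, axis=1)

# Two-step phase shifting: only phi = 0 and phi = pi are measured
re_z = (intensity(0.0) - intensity(np.pi)) / 4

assert np.allclose(re_z, np.real(H @ x))                # classifier input Re(z) = Re(Hx)
```

This halves the number of intensity measurements relative to the 4-step scheme, at the cost of discarding $\operatorname{Im}(\boldsymbol{z})$.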


Fig. 2. Schematic of compressive phase object classification using SPDH. The classifier outputs a class label from the SPDH measurements. To train the classifier, we can use data acquired using the optical setup or synthesized via simulations using phase object images and the SPDH measurement model. Training with simulated data can jointly optimize the sampling patterns with the classifier. The optimized patterns can be used in optical-setup measurements.


To improve the classification accuracy, we propose a joint optimization of the sampling patterns used in SPDH and a classifier. The proposed method follows the same scheme as recent studies on SPCs [32–34,62] and compressed sensing [26]. The SPDH measurement process performs dimensionality reduction: the complex measurements are a linear projection onto the subspace whose basis is the set of sampling patterns. Thus, optimizing the sampling patterns realizes efficient dimensionality reduction and improves the classification accuracy, even when the number of measurements is small. The proposed method trains the patterns $\boldsymbol{H}$ and classifier $\mathcal{N}_{W}$ using a phase object image dataset as follows:

$$\{\hat{\boldsymbol{H}}, \hat{W}\} = \mathop{\arg\,\min}\limits_{\boldsymbol{H} \in \Omega, W} \frac{1}{N} \sum_{i=1}^{N} \mathcal{L}(\mathcal{N}_{W}(\operatorname{Re}\{\boldsymbol{H}\boldsymbol{x}_i\}), \boldsymbol{d}_i),$$
where $\boldsymbol{x}_i$ and $\boldsymbol{d}_i$ are a phase object image and its corresponding label in the dataset, and $\Omega$ is a constraint on the patterns $\boldsymbol{H}$. Note that this optimization requires only simulated measurement data; the optimized patterns can then be used for measurements with the optical setup. In this study, we optimized only the phase distributions of the patterns instead of their full complex amplitudes; this is achieved by adding only the phase values to the parameter list for the optimization. Furthermore, we quantized the phase values to four levels (0, $\pi/2$, $\pi$, and $3\pi/2$) using a straight-through estimator [63] in the backward pass so that they can be reproduced accurately in our optical setup using binary off-axis holograms, as described in Section 3.2.
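The four-level quantization can be sketched as below. This is a minimal NumPy illustration of the forward quantizer and of the straight-through estimator's identity backward pass; a real implementation would wrap both in an autograd framework (e.g., a custom differentiable function), which is only indicated here:

```python
import numpy as np

LEVELS = np.array([0.0, np.pi / 2, np.pi, 3 * np.pi / 2])

def quantize_phase(theta):
    """Forward pass: snap each phase to the nearest of the four levels (circularly)."""
    idx = np.rint(np.mod(theta, 2 * np.pi) / (np.pi / 2)).astype(int) % 4
    return LEVELS[idx]

def ste_backward(upstream_grad):
    """Straight-through estimator: the quantizer's gradient is treated as identity,
    so the continuous phase parameters still receive useful updates."""
    return upstream_grad

theta = np.array([0.1, 0.9 * np.pi, 1.9 * np.pi])       # illustrative continuous phases
q = quantize_phase(theta)
assert np.allclose(q, [0.0, np.pi, 0.0])                # 1.9*pi wraps around to level 0
```

The modulo-4 index makes the quantizer circular, so phases just below $2\pi$ correctly snap to the 0 level.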

3. Results

3.1 Simulation

We evaluated compressive phase object classification using SPDH in simulation. Our training and test sets were created from the MNIST handwritten digit dataset [60] and the Fashion-MNIST dataset [64], both of which comprise 60,000 training images and 10,000 test images from 10 categories. We generated the phase object image datasets by converting images $\boldsymbol{g}_i \, (i=1, \ldots, N)$ from the datasets to phase object images $\boldsymbol{x}_i = \exp[j (\pi \boldsymbol{g}_i + \psi_i \boldsymbol{1})]$, where $\exp(\cdot)$ is the element-wise exponential function, and $\psi_i \in [0, 2\pi)$ is a global random phase shift that differs for each image. These phase object images have unit amplitude; therefore, they cannot be classified using SPCs that can only capture the target intensity. For evaluation, we used three types of sampling patterns: Hadamard patterns, random patterns, and patterns obtained by the proposed joint optimization. The resolution of the sampling patterns was set to 32 $\times$ 32 pixels. We used a simple classifier comprising two fully-connected (FC) layers with a rectified linear unit (ReLU) between them (Fig. 2). The first FC layer outputs 100 features, and the second outputs the score of each class; the class label with the highest score is then selected. Cross-entropy loss was used as the loss function, and the model was trained for 50 epochs using adaptive moment estimation (Adam). The classification accuracy was evaluated for $K=16$, 64, and 256 patterns.
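The dataset construction and the classifier architecture above can be sketched in NumPy. The layer sizes follow the text; the random input standing in for the measurements, the untrained weights, and the random stand-in image `g` are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

# Phase object generation: x = exp[j (pi * g + psi * 1)] for a normalized image g in [0, 1]
g = rng.uniform(0, 1, 32 * 32)                 # stand-in for a normalized MNIST image
psi = rng.uniform(0, 2 * np.pi)                # global random phase shift
x = np.exp(1j * (np.pi * g + psi))
assert np.allclose(np.abs(x), 1.0)             # unit amplitude: invisible to intensity-only SPCs

# Classifier: two fully-connected layers with a ReLU in between (untrained, illustrative weights)
K, HIDDEN, CLASSES = 256, 100, 10
W1, b1 = rng.normal(0, 0.1, (HIDDEN, K)), np.zeros(HIDDEN)
W2, b2 = rng.normal(0, 0.1, (CLASSES, HIDDEN)), np.zeros(CLASSES)

def classify(re_z):
    h = np.maximum(W1 @ re_z + b1, 0.0)        # FC + ReLU -> 100 features
    return W2 @ h + b2                         # FC -> per-class scores

def cross_entropy(scores, label):
    scores = scores - scores.max()             # numerically stable log-softmax
    return -(scores[label] - np.log(np.sum(np.exp(scores))))

scores = classify(rng.normal(size=K))          # stand-in for Re(Hx) measurements
pred = int(np.argmax(scores))                  # predicted class label
assert scores.shape == (CLASSES,) and cross_entropy(scores, pred) >= 0
```

Training would minimize the cross-entropy over the dataset with Adam, as stated in the text; the optimizer itself is omitted here for brevity.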

Figure 3 shows phase distributions of the Hadamard patterns, random patterns, and patterns optimized for the phase MNIST and Fashion-MNIST datasets with $K=16$. Although the optimized patterns appear random, they are more structured than random patterns.


Fig. 3. Phase distributions of Hadamard patterns (left), random patterns (center left), and patterns optimized for the phase MNIST (center right) and Fashion-MNIST (right) using joint optimization with $K=16$. The resolution of these patterns is 32 $\times$ 32 pixels.


Table 1 shows the classification accuracy based on the simulated data. To evaluate the effect of image reconstruction, we also performed classification using images reconstructed from SPDH measurements with Hadamard patterns using the inverse Hadamard transform (IHT). The classification accuracies with and without image reconstruction are similar, indicating that compressive classification can bypass image reconstruction without sacrificing accuracy. All the patterns successfully classified the phase object images with high accuracy, and the optimized patterns achieved the highest accuracy. Furthermore, the difference in classification accuracy between the patterns was most pronounced when the number of measurements was low.


Table 1. Classification accuracy of simulated compressive phase object classification using SPDH on the phase MNIST and Fashion-MNIST datasets. $K$ indicates the number of patterns. We evaluated the effect of image reconstruction via classification using images reconstructed from SPDH measurements with Hadamard patterns using the inverse Hadamard transform (IHT).

3.2 Experimental setup

Figure 4 shows the SPDH optical setup. The light source was a He-Ne laser with a wavelength of 632.8 nm. We formed sampling patterns using a DMD (digital micromirror device) module (V-7001, ViALUX, Germany) with 1,024 $\times$ 768 pixels, a 13.7 $\mathrm {\mu }$m pixel pitch, and a 22 kHz switching rate for the 1-bit binary patterns. The binary off-axis holograms were displayed using this setup, which is similar to that used in some previous studies [65,66]. The binary hologram encoding the slowly varying complex amplitude $A(x,y)\exp [j\phi (x,y)]$ was computed as follows:

$$\begin{aligned}H(x,y) &= \frac{1}{2} + \frac{1}{2} \operatorname{sgn}\left( \cos\left[2\pi \frac{x}{x_0} + \pi p(x,y) \right] - \cos[\pi w(x,y)] \right), \end{aligned}$$
$$\begin{aligned}w(x,y) &= \frac{1}{\pi} \sin^{{-}1}[A(x,y)], \quad p(x,y) = \frac{1}{\pi} \phi(x,y), \end{aligned}$$
where $\operatorname {sgn}$ is the sign function, and $x_0$ is the carrier periodicity. The first-order diffracted beam reproduced the encoded complex amplitude. We used a 4-$f$ system comprising two achromatic lenses with focal lengths of 100 mm and an aperture with a diameter of 0.8 mm that transmits only the first-order beam. We used a Si amplified photodetector (PDA100A2, Thorlabs, USA) to measure the intensities of the interference patterns, and we sampled its output voltages using a USB oscilloscope (Analog Discovery 2, Digilent Inc., USA). The target object was a transmissive liquid crystal SLM (LC-SLM, LC 2012, Holoeye Photonics AG, Germany) with 1,024 $\times$ 768 pixels and a 36 $\mathrm {\mu }$m pixel pitch that provided phase modulation using a polarizer and analyzer. We used the LC-SLM in phase modulation mode and displayed phase images on it.


Fig. 4. Experimental SPDH setup. The target object is a transmissive LC-SLM in phase modulation mode. The DMD displays binary amplitude holograms and forms complex amplitude patterns on the target plane. M1-2: Mirror, L1-4: Lens, BS1-2: Beam splitter, SF: Spatial filter, and PD: Photodetector.


We measured a phase target displaying an MNIST image using the SPDH setup to demonstrate its phase imaging capability. For comparison, we measured the same phase target while blocking the reference beam; because this configuration captures only an intensity image, not a phase image, it serves as the baseline for comparison. We refer to it as the intensity-only SPC configuration. We used 16 $\times$ 16 Hadamard patterns ($K=256$) and reconstructed the images using IHT. Figure 5 shows the intensity image obtained with the intensity-only SPC configuration and the amplitude and phase images obtained with the SPDH setup. The LC-SLM primarily modulates the phase of the incoming light; therefore, identifying the image content in the amplitude and intensity images is difficult (Fig. 5, left and center). By contrast, the digit 8 can be easily recognized in the phase image (Fig. 5, right). This result indicates that classifying target images in our setup without phase information is challenging.


Fig. 5. Reconstructed images of the phase target using the SPDH setup with 16 $\times$ 16 Hadamard patterns. From left to right: the intensity image obtained with the intensity-only SPC configuration and the amplitude and phase images obtained with SPDH.


3.3 Experimental results

We experimentally demonstrated compressive phase object classification using SPDH measurements. We used the SPDH optical setup shown in Fig. 4 and displayed the same MNIST phase images as those used in Section 3.2 on the LC-SLM to collect the real dataset. The same patterns (Hadamard, random, and optimized patterns) that were used for the simulation were also used for real data collection. The optimized patterns were trained using the simulated training set. To demonstrate the effectiveness of using phase information, we collected real data using the intensity-only SPC configuration with Hadamard patterns, as described in Section 3.2, which cannot capture the target phase information. The real dataset comprised 8,334 training images and 1,666 test images for each pattern. We trained the simulation classifier on the real training set and evaluated its classification accuracy on the real test set.

Table 2 shows the classification accuracy of the classifier on the real dataset. Because the LC-SLM marginally modulated the amplitude of the incoming light, the intensity-only SPC configuration could classify target images, albeit with low accuracy; the accuracy of SPDH was distinctly higher. As in the simulation, there was no significant difference in classification accuracy with and without image reconstruction via IHT, and classification using the optimized patterns achieved the highest accuracy. However, the accuracy of the random patterns was lower than that of the Hadamard patterns, which can be attributed to the inability of our setup to accurately replay random patterns. In addition, the classification accuracy on the real dataset was lower than that on the simulated dataset, which can be attributed to the severe noise corrupting the real dataset.


Table 2. Classification accuracy of compressive phase object classification experiment using SPDH on the phase MNIST dataset. $K$ is the number of patterns. Intensity-only refers to the intensity-only SPC configuration with Hadamard patterns. We evaluated the effect of image reconstruction via classification using images reconstructed from SPDH measurements with Hadamard patterns through the inverse Hadamard transform (IHT).

Figure 6 shows several phase images of class 8 reconstructed from SPDH measurements using the 16 $\times$ 16 Hadamard patterns. The background phase distributions in the phase images fluctuate owing to the instability of our setup, which degrades the classification accuracy on the real dataset. This degradation can be partially addressed by increasing the size of the real training set. In conclusion, the results indicate that the proposed joint optimization is effective even when measurements are acquired in a severely noisy environment.


Fig. 6. Several phase images of class 8 reconstructed from SPDH measurements using the 16 $\times$ 16 Hadamard patterns. The phase images are severely noisy because the background phase distributions fluctuate owing to the instability of the experimental setup.


4. Conclusion

We demonstrated phase object classification using compressed measurements acquired via SPDH. This method enables phase object classification while bypassing image reconstruction, which incurs a high computational cost and requires a high measurement rate. The simulation and experimental results showed that phase object images constructed from the MNIST dataset can be successfully classified using SPDH measurements. Furthermore, we proposed a joint optimization of the sampling patterns and a classifier to improve the classification accuracy. The results indicate that the proposed optimization is particularly effective when the number of measurements is small and the measurement environment is noisy. Phase distributions provide several advantages over light intensity, particularly for transparent objects such as biological cells or gases. Therefore, the proposed compressive classification provides a new analysis technique for the life sciences and remote sensing.

Although this study proves that phase object images can be successfully classified using compressed SPDH measurements, the current form of the proposed method is limited. The classification accuracy of the experiment was lower than that of the simulation owing to the instability of our experimental setup. Therefore, further investigation is required to develop a stable SPDH setup and to improve the classification accuracy on real datasets, which will be the focus of our future work.

Funding

Japan Society for the Promotion of Science (JP22K17908); The Mazda Foundation (19KK-271); Kayamori Foundation of Informational Science Advancement (K31-XXIV-543); Hokuriku Bank Research Grant for Young Scientists.

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. M. F. Duarte, M. A. Davenport, D. Takhar, J. N. Laska, T. Sun, K. F. Kelly, and R. G. Baraniuk, “Single-pixel imaging via compressive sampling,” IEEE Signal Process. Mag. 25(2), 83–91 (2008). [CrossRef]  

2. M. P. Edgar, G. M. Gibson, and M. J. Padgett, “Principles and prospects for single-pixel imaging,” Nat. Photonics 13(1), 13–20 (2019). [CrossRef]  

3. G. M. Gibson, S. D. Johnson, and M. J. Padgett, “Single-pixel imaging 12 years on: A review,” Opt. Express 28(19), 28190–28208 (2020). [CrossRef]  

4. C. M. Watts, D. Shrekenhamer, J. Montoya, G. Lipworth, J. Hunt, T. Sleasman, S. Krishna, D. R. Smith, and W. J. Padilla, “Terahertz compressive imaging with metamaterial spatial light modulators,” Nat. Photonics 8(8), 605–609 (2014). [CrossRef]  

5. R. I. Stantchev, B. Sun, S. M. Hornett, P. A. Hobson, G. M. Gibson, M. J. Padgett, and E. Hendry, “Noninvasive, near-field terahertz imaging of hidden objects using a single-pixel detector,” Sci. Adv. 2(6), e1600190 (2016). [CrossRef]  

6. R. I. Stantchev, X. Yu, T. Blu, and E. Pickwell-MacPherson, “Real-time terahertz imaging with a single-pixel detector,” Nat. Commun. 11(1), 2535 (2020). [CrossRef]  

7. N. Radwell, K. J. Mitchell, G. M. Gibson, M. P. Edgar, R. Bowman, and M. J. Padgett, “Single-pixel infrared and visible microscope,” Optica 1(5), 285–289 (2014). [CrossRef]  

8. M. P. Edgar, G. M. Gibson, R. W. Bowman, B. Sun, N. Radwell, K. J. Mitchell, S. S. Welsh, and M. J. Padgett, “Simultaneous real-time visible and infrared video with single-pixel detectors,” Sci. Rep. 5(1), 10669 (2015). [CrossRef]  

9. G. M. Gibson, B. Sun, M. P. Edgar, D. B. Phillips, N. Hempler, G. T. Maker, G. P. A. Malcolm, and M. J. Padgett, “Real-time imaging of methane gas leaks using a single-pixel camera,” Opt. Express 25(4), 2998–3005 (2017). [CrossRef]  

10. S. D. Johnson, D. B. Phillips, Z. Ma, S. Ramachandran, and M. J. Padgett, “A light-in-flight single-pixel camera for use in the visible and short-wave infrared,” Opt. Express 27(7), 9829–9837 (2019). [CrossRef]  

11. V. Studer, J. Bobin, M. Chahid, H. S. Mousavi, E. Candes, and M. Dahan, “Compressive fluorescence microscopy for biological and hyperspectral imaging,” Proc. Natl. Acad. Sci. 109(26), E1679–E1687 (2012). [CrossRef]  

12. K. Guo, S. Jiang, and G. Zheng, “Multilayer fluorescence imaging on a single-pixel detector,” Biomed. Opt. Express 7(7), 2425–2431 (2016). [CrossRef]  

13. Y. Liu, J. Suo, Y. Zhang, and Q. Dai, “Single-pixel phase and fluorescence microscope,” Opt. Express 26(25), 32451–32462 (2018). [CrossRef]  

14. W. Gong and S. Han, “Correlated imaging in scattering media,” Opt. Lett. 36(3), 394–396 (2011). [CrossRef]  

15. E. Tajahuerce, V. Durán, P. Clemente, E. Irles, F. Soldevila, P. Andrés, and J. Lancis, “Image transmission through dynamic scattering media by single-pixel photodetection,” Opt. Express 22(14), 16945–16955 (2014). [CrossRef]  

16. V. Durán, F. Soldevila, E. Irles, P. Clemente, E. Tajahuerce, P. Andrés, and J. Lancis, “Compressive imaging in scattering media,” Opt. Express 23(11), 14424–14433 (2015). [CrossRef]  

17. R. Dutta, S. Manzanera, A. Gambín-Regadera, E. Irles, E. Tajahuerce, J. Lancis, and P. Artal, “Single-pixel imaging of the retina through scattering media,” Biomed. Opt. Express 10(8), 4159–4167 (2019). [CrossRef]  

18. K. Soltanlou and H. Latifi, “Three-dimensional imaging through scattering media using a single pixel detector,” Appl. Opt. 58(28), 7716–7726 (2019). [CrossRef]  

19. G. A. Howland, P. B. Dixon, and J. C. Howell, “Photon-counting compressive sensing laser radar for 3D imaging,” Appl. Opt. 50(31), 5917–5920 (2011). [CrossRef]  

20. B. Sun, M. P. Edgar, R. Bowman, L. E. Vittert, S. Welsh, A. Bowman, and M. J. Padgett, “3D Computational Imaging with Single-Pixel Detectors,” Science 340(6134), 844–847 (2013). [CrossRef]  

21. N. Radwell, S. D. Johnson, M. P. Edgar, C. F. Higham, R. Murray-Smith, and M. J. Padgett, “Deep learning optimized single-pixel LiDAR,” Appl. Phys. Lett. 115(23), 231101 (2019). [CrossRef]  

22. M. A. Davenport, M. F. Duarte, M. B. Wakin, J. N. Laska, D. Takhar, K. F. Kelly, and R. G. Baraniuk, “The smashed filter for compressive classification and target recognition,” in Electronic Imaging 2007, C. A. Bouman, E. L. Miller, and I. Pollak, eds. (San Jose, CA, USA, 2007), p. 64980H.

23. M. Davenport, P. Boufounos, M. Wakin, and R. Baraniuk, “Signal Processing With Compressive Measurements,” IEEE J. Sel. Top. Signal Process. 4(2), 445–460 (2010). [CrossRef]  

24. R. Calderbank, S. Jafarpour, and R. E. Schapire, “Compressed learning: Universal sparse dimensionality reduction and learning in the measurement domain,” Technical Report (2009).

25. R. Calderbank and S. Jafarpour, “Finding needles in compressed haystacks,” in Compressed Sensing, Y. C. Eldar and G. Kutyniok, eds. (Cambridge University Press, Cambridge, 2012), pp. 439–484.

26. E. Zisselman, A. Adler, and M. Elad, “Compressed Learning for Image Classification: A Deep Neural Network Approach,” in Handbook of Numerical Analysis, vol. 19 (Elsevier, 2018), pp. 3–17.

27. O. Maillard and R. Munos, “Compressed Least-Squares Regression,” Adv. Neural Inf. Process. Syst. 22 (2009).

28. S. Lohit, K. Kulkarni, and P. Turaga, “Direct inference on compressive measurements using convolutional neural networks,” in 2016 IEEE International Conference on Image Processing (ICIP), (IEEE, Phoenix, AZ, USA, 2016), pp. 1913–1917.

29. Y. Li, C. Hegde, A. C. Sankaranarayanan, R. Baraniuk, and K. F. Kelly, “Compressive image acquisition and classification via secant projections,” J. Opt. 17(6), 065701 (2015). [CrossRef]  

30. P. Latorre-Carmona, V. J. Traver, J. S. Sánchez, and E. Tajahuerce, “Online reconstruction-free single-pixel image classification,” Image Vis. Comput. 86, 28–37 (2019). [CrossRef]  

31. S. Jiao, J. Feng, Y. Gao, T. Lei, Z. Xie, and X. Yuan, “Optical machine learning with incoherent light and a single-pixel detector,” Opt. Lett. 44(21), 5186–5189 (2019). [CrossRef]  

32. Z. Zhang, X. Li, S. Zheng, M. Yao, G. Zheng, and J. Zhong, “Image-free classification of fast-moving objects using “learned” structured illumination and single-pixel detection,” Opt. Express 28(9), 13269–13278 (2020). [CrossRef]  

33. H. Fu, L. Bian, and J. Zhang, “Single-pixel sensing with optimal binarized modulation,” Opt. Lett. 45(11), 3111–3114 (2020). [CrossRef]  

34. J. Bacca, L. Galvis, and H. Arguello, “Coupled deep learning coded aperture design for compressive image classification,” Opt. Express 28(6), 8528–8540 (2020). [CrossRef]  

35. T. Bu, S. Kumar, H. Zhang, I. Huang, and Y.-P. Huang, “Single-pixel pattern recognition with coherent nonlinear optics,” Opt. Lett. 45(24), 6771–6774 (2020). [CrossRef]  

36. P. K. Baheti and M. A. Neifeld, “Adaptive feature-specific imaging: A face recognition example,” Appl. Opt. 47(10), B21–B31 (2008). [CrossRef]  

37. S. Lohit, K. Kulkarni, P. Turaga, J. Wang, and A. C. Sankaranarayanan, “Reconstruction-free inference on compressive measurements,” in 2015 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), (IEEE, Boston, MA, USA, 2015), pp. 16–24.

38. L.-C. Huang, M. A. Neifeld, and A. Ashok, “Face recognition with non-greedy information-optimal adaptive compressive imaging,” Appl. Opt. 55(34), 9744–9755 (2016). [CrossRef]  

39. Z. Zhang, J. Ye, Q. Deng, and J. Zhong, “Image-free real-time detection and tracking of fast moving object using a single-pixel detector,” Opt. Express 27(24), 35394–35401 (2019). [CrossRef]  

40. D. Shi, K. Yin, J. Huang, K. Yuan, W. Zhu, C. Xie, D. Liu, and Y. Wang, “Fast tracking of moving objects using single-pixel imaging,” Opt. Commun. 440, 155–162 (2019). [CrossRef]  

41. Q. Deng, Z. Zhang, and J. Zhong, “Image-free real-time 3-D tracking of a fast-moving object using dual-pixel detection,” Opt. Lett. 45(17), 4734–4737 (2020). [CrossRef]  

42. S. Ota, R. Horisaki, Y. Kawamura, M. Ugawa, I. Sato, K. Hashimoto, R. Kamesawa, K. Setoyama, S. Yamaguchi, K. Fujiu, K. Waki, and H. Noji, “Ghost cytometry,” Science 360(6394), 1246–1251 (2018). [CrossRef]  

43. M. Mir, B. Bhaduri, R. Wang, R. Zhu, and G. Popescu, “Quantitative phase imaging,” in Progress in Optics, vol. 57 (Elsevier, 2012), pp. 133–217.

44. P. Clemente, V. Durán, E. Tajahuerce, V. Torres-Company, and J. Lancis, “Single-pixel digital ghost holography,” Phys. Rev. A 86(4), 041803 (2012). [CrossRef]  

45. P. Clemente, V. Durán, E. Tajahuerce, P. Andrés, V. Climent, and J. Lancis, “Compressive holography with a single-pixel detector,” Opt. Lett. 38(14), 2524–2527 (2013). [CrossRef]  

46. H. González, L. Martínez-León, F. Soldevila, M. Araiza-Esquivel, J. Lancis, and E. Tajahuerce, “High sampling rate single-pixel digital holography system employing a DMD and phase-encoded patterns,” Opt. Express 26(16), 20342–20350 (2018). [CrossRef]  

47. X. Hu, H. Zhang, Q. Zhao, P. Yu, Y. Li, and L. Gong, “Single-pixel phase imaging by Fourier spectrum sampling,” Appl. Phys. Lett. 114(5), 051102 (2019). [CrossRef]  

48. D. Wu, J. Luo, G. Huang, Y. Feng, X. Feng, R. Zhang, Y. Shen, and Z. Li, “Imaging biological tissue with high-throughput single-pixel compressive holography,” Nat. Commun. 12(1), 4712 (2021). [CrossRef]  

49. L. Martínez-León, P. Clemente, Y. Mori, V. Climent, J. Lancis, and E. Tajahuerce, “Single-pixel digital holography with phase-encoded illumination,” Opt. Express 25(5), 4975–4984 (2017). [CrossRef]  

50. Y. Endo, T. Tahara, and R. Okamoto, “Color single-pixel digital holography with a phase-encoded reference wave,” Appl. Opt. 58(34), G149–G154 (2019). [CrossRef]  

51. S. Shin, K. Lee, Y. Baek, and Y. Park, “Reference-free single-point holographic imaging and realization of an optical bidirectional transducer,” Phys. Rev. Appl. 9(4), 044042 (2018). [CrossRef]  

52. S. Shin, K. Lee, Z. Yaqoob, P. T. C. So, and Y. Park, “Reference-free polarization-sensitive quantitative phase imaging using single-point optical phase conjugation,” Opt. Express 26(21), 26858–26865 (2018). [CrossRef]  

53. K. Ota and Y. Hayasaki, “Complex-amplitude single-pixel imaging,” Opt. Lett. 43(15), 3682–3685 (2018). [CrossRef]  

54. R. Liu, S. Zhao, P. Zhang, H. Gao, and F. Li, “Complex wavefront reconstruction with single-pixel detector,” Appl. Phys. Lett. 114(16), 161901 (2019). [CrossRef]  

55. R. Horisaki, H. Matsui, R. Egami, and J. Tanida, “Single-pixel compressive diffractive imaging,” Appl. Opt. 56(5), 1353–1357 (2017). [CrossRef]  

56. R. Horisaki, H. Matsui, and J. Tanida, “Single-pixel compressive diffractive imaging with structured illumination,” Appl. Opt. 56(14), 4085–4089 (2017). [CrossRef]  

57. K. Komuro, Y. Yamazaki, and T. Nomura, “Transport-of-intensity computational ghost imaging,” Appl. Opt. 57(16), 4451–4456 (2018). [CrossRef]  

58. F. Soldevila, V. Durán, P. Clemente, J. Lancis, and E. Tajahuerce, “Phase imaging by spatial wavefront sampling,” Optica 5(2), 164–174 (2018). [CrossRef]  

59. K. Komuro, T. Nomura, and G. Barbastathis, “Deep ghost phase imaging,” Appl. Opt. 59(11), 3376–3382 (2020). [CrossRef]  

60. Y. Lecun, L. Bottou, Y. Bengio, and P. Haffner, “Gradient-based learning applied to document recognition,” Proc. IEEE 86(11), 2278–2324 (1998). [CrossRef]  

61. Z. Zhang, X. Wang, G. Zheng, and J. Zhong, “Hadamard single-pixel imaging versus Fourier single-pixel imaging,” Opt. Express 25(16), 19619–19639 (2017). [CrossRef]  

62. C. F. Higham, R. Murray-Smith, M. J. Padgett, and M. P. Edgar, “Deep learning for real-time single-pixel video,” Sci. Rep. 8(1), 2369 (2018). [CrossRef]  

63. Y. Bengio, N. Léonard, and A. Courville, “Estimating or propagating gradients through stochastic neurons for conditional computation,” http://arxiv.org/abs/1308.3432 (2013).

64. H. Xiao, K. Rasul, and R. Vollgraf, “Fashion-MNIST: a novel image dataset for benchmarking machine learning algorithms,” http://arxiv.org/abs/1708.07747 (2017).

65. M. Mirhosseini, O. S. Magaña-Loaiza, C. Chen, B. Rodenburg, M. Malik, and R. W. Boyd, “Rapid generation of light beams carrying orbital angular momentum,” Opt. Express 21(25), 30196–30211 (2013). [CrossRef]  

66. Y.-X. Ren, R.-D. Lu, and L. Gong, “Tailoring light with a digital micromirror device,” Ann. Phys. 527(7-8), 447–470 (2015). [CrossRef]  

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.


Figures (6)

Fig. 1. Schematic of single-pixel digital holography. The laser beam is divided into two beams by a beam splitter (BS). A spatial light modulator (SLM) modulates the beam in the object arm and forms sampling patterns on the object plane. A single-pixel detector measures the spatial sum of the interference pattern between the object and reference beams.
Fig. 2. Schematic of compressive phase object classification using SPDH. The classifier outputs a class label from the SPDH measurements. To train the classifier, we can use data acquired using the optical setup or synthesized via simulations using phase object images and the SPDH measurement model. Training with simulated data can jointly optimize the sampling patterns with the classifier. The optimized patterns can be used in optical-setup measurements.
Fig. 3. Phase distributions of Hadamard patterns (left), random patterns (center left), and patterns optimized for the phase MNIST (center right) and Fashion-MNIST (right) using joint optimization with $K=16$. The resolution of these patterns is 32 $\times$ 32 pixels.
Fig. 4. Experimental SPDH setup. The target object is a transmissive LC-SLM operating in phase-modulation mode. The DMD displays binary amplitude holograms and forms complex-amplitude patterns on the target plane. M1-M2: mirrors, L1-L4: lenses, BS1-BS2: beam splitters, SF: spatial filter, PD: photodetector.
Fig. 5. Reconstructed images of the phase target using the SPDH setup with 16 $\times$ 16 Hadamard patterns. From left to right: the intensity image obtained with the intensity-only SPC configuration and the amplitude and phase images obtained with SPDH.
Fig. 6. Several phase images of class 8 reconstructed from SPDH measurements using the 16 $\times$ 16 Hadamard patterns. The phase images are severely noisy because the background phase distributions fluctuate owing to the instability of the experimental setup.

Tables (2)


Table 1. Classification accuracy of simulated compressive phase object classification using SPDH on the phase MNIST and Fashion-MNIST datasets. K indicates the number of patterns. We evaluated the effect of image reconstruction via classification using images reconstructed from SPDH measurements with Hadamard patterns using the inverse Hadamard transform (IHT).


Table 2. Classification accuracy of compressive phase object classification experiment using SPDH on the phase MNIST dataset. K is the number of patterns. Intensity-only refers to the intensity-only SPC configuration with Hadamard patterns. We evaluated the effect of image reconstruction via classification using images reconstructed from SPDH measurements with Hadamard patterns through the inverse Hadamard transform (IHT).

Equations (7)


$$y_k^{\phi} = \sum_{d=1}^{D} \left| e^{j\phi} h_{kd} x_d + r_d \right|^2,$$

$$y^{\phi} = \| r \|_2^2 + |H|^2 |x|^2 + 2\,\mathrm{Re}\!\left( r\, e^{j\phi} H x \right) = i_0 + 2\,\mathrm{Re}\!\left( r\, e^{j\phi} H x \right),$$

$$z = H x = \frac{1}{4} \left[ \left( y^{0} - y^{\pi} \right) + j \left( y^{3\pi/2} - y^{\pi/2} \right) \right],$$

$$\hat{W} = \mathop{\mathrm{arg\,min}}_{W} \frac{1}{N} \sum_{i=1}^{N} L\!\left( \mathcal{N}_{W}\!\left( \mathrm{Re}(z_i) \right), d_i \right),$$

$$\{ \hat{H}, \hat{W} \} = \mathop{\mathrm{arg\,min}}_{H \in \Omega,\, W} \frac{1}{N} \sum_{i=1}^{N} L\!\left( \mathcal{N}_{W}\!\left( \mathrm{Re}\{ H x_i \} \right), d_i \right),$$
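As a sanity check, the four-step phase-shifting recovery above can be simulated numerically. The NumPy sketch below uses illustrative choices, not the paper's actual configuration: a unit-amplitude plane reference, random ±1 sampling patterns H, and a random phase-only object x. Each simulated measurement is the spatial sum of the interference intensity, and the four phase shifts recover the complex projection z = Hx.

```python
import numpy as np

rng = np.random.default_rng(0)
D, K = 32 * 32, 16                        # pixels per pattern, number of patterns

x = np.exp(1j * rng.uniform(0, np.pi, D))  # hypothetical phase-only object
H = rng.choice([-1.0, 1.0], size=(K, D))   # hypothetical +/-1 sampling patterns
r = np.ones(D)                             # unit-amplitude plane reference wave

def measure(phase_shift):
    # y_k = sum_d |e^{j*phi} h_kd x_d + r_d|^2 : spatial sum of the
    # interference intensity seen by the single-pixel detector
    field = np.exp(1j * phase_shift) * (H * x) + r
    return np.sum(np.abs(field) ** 2, axis=1)

y0, y_half, y_pi, y_3half = (measure(p) for p in
                             (0, np.pi / 2, np.pi, 3 * np.pi / 2))

# Four-step phase shifting: z = (1/4)[(y^0 - y^pi) + j(y^{3pi/2} - y^{pi/2})]
z = 0.25 * ((y0 - y_pi) + 1j * (y_3half - y_half))
print(np.allclose(z, H @ x))  # → True
```

Because the background term $i_0$ is common to all four phase shifts, it cancels in the differences, leaving only the complex projection $Hx$.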
$$H(x, y) = \frac{1}{2} + \frac{1}{2}\, \mathrm{sgn}\!\left( \cos\!\left[ 2\pi \frac{x}{x_0} + \pi p(x, y) \right] - \cos\!\left[ \pi w(x, y) \right] \right),$$

$$w(x, y) = \frac{1}{\pi} \sin^{-1}\!\left[ A(x, y) \right], \qquad p(x, y) = \frac{1}{\pi}\, \phi(x, y).$$
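The last two equations describe the binary amplitude hologram displayed on the DMD to synthesize a complex-amplitude pattern. A minimal sketch, assuming a hypothetical carrier period `x0` and uniform target amplitude and phase maps (the actual pattern sizes and carrier settings in the experiment may differ):

```python
import numpy as np

def lee_hologram(A, phi, x0=4):
    """Binary hologram H(x, y) encoding amplitude A in [0, 1] and phase phi."""
    ny, nx = A.shape
    X = np.broadcast_to(np.arange(nx), (ny, nx))  # horizontal pixel coordinate
    w = np.arcsin(A) / np.pi                      # w(x, y) = (1/pi) asin(A)
    p = phi / np.pi                               # p(x, y) = (1/pi) phi
    return 0.5 + 0.5 * np.sign(np.cos(2 * np.pi * X / x0 + np.pi * p)
                               - np.cos(np.pi * w))

A = np.full((8, 8), 0.7)    # illustrative uniform amplitude
phi = np.zeros((8, 8))      # illustrative flat phase
h = lee_hologram(A, phi)
print(np.unique(h))         # binary pattern: values drawn from {0., 1.}
```

Thresholding the phase-modulated carrier against $\cos[\pi w]$ sets the local duty cycle of the binary fringes, so the first diffraction order carries the desired amplitude $A$ and phase $\phi$.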