## Abstract

Multimodal microscopes either use multiple cameras or a single camera that multiplexes different modes spatially. The former requires expertise-demanding alignment and the latter suffers from limited spatial resolution. Here, we report an alignment-free, full-resolution, simultaneous fluorescence and phase imaging approach using single-pixel detectors. By combining reference-free interferometry with the single-pixel imaging scheme, we employ structured illumination to encode the phase and fluorescence of the sample into two single-pixel detection arms, and then conduct the reconstruction computationally from the illumination patterns and the recorded correlated measurements. The recovered fluorescence and phase images are inherently aligned thanks to the single-pixel imaging scheme. To validate the proposed method, we built a proof-of-concept setup, first imaging the phase of an etched glass with a given etching depth and then imaging the phase and fluorescence of a quantum dot sample. This method holds great potential for multispectral fluorescence microscopy with additional single-pixel detectors or a spectrometer. Besides, this cost-efficient multimodal system might find broad applications in biomedical science and material science.

© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

## 1. Introduction

Fluorescence microscopy serves as a powerful tool for biomedical observation and diagnosis, especially when combined with confocal and two-photon techniques [1]. By probing proteins with fluorophores, one can monitor the physiological state of specimens, including live cells, which provides functional information for biomedical imaging. However, fluorescence microscopy generally lacks the structural/morphological information of the specimens, since most thin specimens do not absorb or scatter light significantly, i.e., they are transparent or translucent [2]. Complementarily, quantitative phase imaging is an emerging technique for the quantitative structural characterization of transparent specimens. The combination of fluorescence and phase imaging would provide both the functional and structural features for more accurate and flexible observation and diagnosis in a wide range of fields, from basic science to clinical applications.

Conventional methods for simultaneous fluorescence and phase imaging either use multiple well-aligned cameras or a single camera with spatial multiplexing [3, 4]. The former requires expertise-demanding alignment and the latter trades off spatial resolution for multiple capture modes [5]. However, single-pixel imaging offers an alternative solution for alignment-free, full-resolution multimodal imaging [6, 7]. By exploiting sequential structured illuminations and correlated measurements with the sample, one can recover the original two-dimensional (2D) information of the sample computationally and compressively [8]. Point-scanning imaging methods, such as confocal laser scanning microscopy [1] and optical trapping/focusing [9, 10], fall into the general form of single-pixel imaging, where the illumination is a single point focus and scanning is used to acquire an image. We use single-pixel imaging here in a specific sense, i.e., imaging with wide-field structured illuminations and single-pixel detection, where mechanical scanning is not required.

Here, we report an alignment-free, full-resolution, simultaneous fluorescence and quantitative phase imaging approach using two single-pixel detectors. Single-pixel imaging has been applied to dual- or multiple-modality tasks thanks to the high sensitivity, wide spectral range, and low cost of single-pixel detectors. Recent advances include four-angle illuminations for three-dimensional imaging [11], dual wavelengths for infrared and visible imaging [12, 13], multiple-wavelength illumination/detection for color or multispectral imaging [6, 14–18], and spatial-frequency multiplexing for multispectral 3D imaging [7]. However, to the best of our knowledge, simultaneous fluorescence and quantitative phase imaging using single-pixel detectors without mechanical scanning has not been reported.

Previous single-pixel phase imaging methods generally require phase-mode spatial light modulation and two-beam interferometry [19–21]. However, phase-mode spatial light modulation is relatively slow, with a typical refreshing rate of 60 Hz, which stretches the acquisition time to a few minutes. Recently, González et al. used a high-speed binary amplitude modulator, i.e., the digital micromirror device (DMD), for phase modulation [22] and achieved single-pixel digital holography within a few seconds [23]. However, these two-beam interferometry methods suffer from phase stability issues [21], since single-pixel imaging requires thousands of sequential measurements and corrupted phases would damage the reconstruction. Recently, Stockton et al. combined spatial frequency projections with single-point detection to obtain the phase of a line, which requires a rotating disk to generate a line focus with varying spatial frequencies and a line scan to obtain the phase of a 2D sample [24]. Mechanical implementations like rotation and line scanning, though compatible with imaging in spectral ranges where spatial light modulators are not available, are undesirable because they can introduce issues such as mechanical vibrations, non-uniformity along the focal-line scanning dimension, and increased electrical complexity for synchronization in high-speed acquisition. A similar approach using the DMD for complex amplitude modulation [25–27] and single-point detection/projection has been reported to form an optical transducer [28], but its multimodality has not been demonstrated owing to its passive imaging scheme. Therefore, a multimodal single-pixel imaging method (including a phase imaging modality) for stable and direct 2D acquisition without mechanical scanning is highly desirable.

By combining reference-free interferometry with single-pixel detection, we encode the phase and fluorescence of the sample in two detection arms at the same time. We employ active imaging, i.e., complex structured illuminations, and add a bucket detector for fluorescence imaging to achieve simultaneous fluorescence and phase imaging. It is worth noting that these two modes are conducted in a parallel manner and share the same structured illuminations, which precludes the expertise-demanding alignment of multiple cameras. Moreover, this method holds great potential for multispectral fluorescence microscopy with additional single-pixel detectors or a spectrometer [6, 14–17]. We further envision that this cost-efficient multimodal system might be extended for imaging in spectral ranges where array detectors are expensive or even unavailable [12, 29–32], which might find broad applications in biomedical science and material science.

## 2. Single-pixel phase and fluorescence imaging

Single-pixel imaging, or the single-pixel camera [8], encodes the sample’s transmission or reflection information into intensity measurements correlated with sequential structured illuminations. The measurements can be recorded with a single-pixel detector and then used to reconstruct the sample information computationally, together with the illumination patterns.

In this section, we introduce the proposed single-pixel phase and fluorescence imaging method. First, we briefly summarize the forward model and the reconstruction method of single-pixel imaging. Then, we introduce the single-pixel imaging method using reference-free phase-shifting interferometry and complex amplitude modulation. Finally, we show that the phase-shifting patterns can be used for fluorescence imaging simultaneously.

#### 2.1. Single-pixel imaging

For coherent illumination of a 2D sample with complex field $S(\overrightarrow{r})$ by a spatially modulated pattern ${P}_{k}(\overrightarrow{r})$, where *k* is the index of the pattern (*k* = 1, ⋯, *M*) and $\overrightarrow{r}$ is the spatial coordinate vector, the measured intensity can be expressed as

$$I_k = \iint_{\Omega} |P_k(\overrightarrow{r})|^2 \, |S(\overrightarrow{r})|^2 \, \mathrm{d}^2\overrightarrow{r}. \tag{1}$$

In discrete form,

$$I_k = \sum_{(i,j)} P_k(i,j)\, S(i,j), \tag{2}$$

where $P_k(i,j)$ and $S(i,j)$ are the discrete fields of $|{P}_{k}(\overrightarrow{r}){|}^{2}$ and $|S(\overrightarrow{r}){|}^{2}$ ($i=1,\cdots ,m$, $j=1,\cdots ,n$, $N=m\cdot n$), respectively, and $\sum_{(i,j)}$ denotes the summation over the whole 2D matrix with subindex $(i,j)$. After vectorizing the sample field and illumination field, the forward model of single-pixel imaging can be expressed as a linear equation

$$\mathbf{I} = \mathbf{P}\,\mathbf{S}, \tag{3}$$

where $\mathbf{S} \in \mathbb{C}^{N}$ is the vectorized sample field $S(i,j)$, $\mathbf{P} \in \mathbb{C}^{M\times N}$ is the row-wise rearrangement of the vectorized illumination fields $P_k(i,j)$ for each *k*, and $\mathbf{I} \in \mathbb{C}^{M}$ is the column-wise arrangement of the measurements $I_k$. If the measurement matrix (or sensing matrix) $\mathbf{P}$ consists of full orthogonal bases, such as the Fourier transform bases [33] or Walsh-Hadamard transform bases [34, 35], the sample field can be reconstructed via the corresponding inverse transform

$$\mathbf{S} = \mathbf{P}^{-1}\,\mathbf{I}. \tag{4}$$

The reconstruction can be conducted from either over-sampled/full-sampled or under-sampled measurements. For the former setting, the Moore-Penrose inverse (pseudoinverse) can be applied for reconstruction. For the latter case, one can use compressive sensing methods for reliable recovery, based on the statistics that natural scenes are inherently sparse in specific domains [36, 37]. In this work, we use Walsh-Hadamard transform bases instead of Fourier transform bases as the sensing matrix to avoid introducing three- or four-step spatial shifts of the sinusoidal patterns [33] and slow grayscale patterning. Besides, we adopt under-sampled measurements to obtain only a proportion of the Walsh-Hadamard transform spectrum with high coefficients, and apply the inverse transform for reconstruction [6, 12, 18, 35].
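To make the forward model and the inverse-transform reconstruction concrete, here is a minimal numerical sketch. It assumes, for illustration only, a hypothetical 16 × 16 intensity image and a full Sylvester-type Hadamard sensing matrix from `scipy.linalg.hadamard`; it is not the paper's optical implementation.

```python
import numpy as np
from scipy.linalg import hadamard

# Minimal sketch of the single-pixel forward model and its inverse:
# one bucket measurement per Hadamard pattern, then recovery by the
# inverse Walsh-Hadamard transform.
n = 16                               # image is n x n, so N = n*n pixels
N = n * n
rng = np.random.default_rng(0)
sample = rng.random((n, n))          # hypothetical intensity image S(i, j)

H = hadamard(N)                      # sensing matrix of +1/-1 entries
measurements = H @ sample.ravel()    # one measurement per pattern

# Hadamard matrices satisfy H @ H.T = N * I, so the inverse is H.T / N
recovered = (H.T @ measurements / N).reshape(n, n)
print(np.allclose(recovered, sample))   # → True
```

Under-sampling would simply drop rows of `H` (and the matching measurements) before applying the inverse transform.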

#### 2.2. Single-pixel phase imaging

We use the reference-free method for single-pixel phase imaging by measuring the intensity of the center point at the Fourier plane, which was first reported by Stockton et al. [24] for line-scan phase imaging and then extended by Shin et al. [28] as an optical transducer. We adapt the passive imaging scheme in [28] to an active imaging mode to obtain the fluorescence information of the sample at the same time. We refer to this method as the single-point detection method for phase imaging, following [24, 28].

The forward model is different from conventional single-pixel imaging, since the original total-intensity measurements would wash out the phase information induced both by the sample and the structured illuminations. For single-pixel phase imaging, we measure the single-point intensity at the center of the Fourier plane of the correlated field between the sample and the illuminations [24, 28], which can be expressed as

$$I_{k,\phi} = \left| \iint_{\Omega} P_{k,\phi}(\overrightarrow{r})\, S(\overrightarrow{r})\, \mathrm{d}^2\overrightarrow{r} \right|^2. \tag{5}$$

For ${P}_{k,\phi} = (e^{j\phi} \cdot H_k + 1)/2$, with $H_k$ being one basis of the Walsh-Hadamard transform consisting of {+1, −1} elements, the single-point intensity can be rewritten in phase-shifting interferometry form

$$I_{k,\phi} = \frac{1}{4}\left| e^{j\phi}\, s_k + r \right|^2. \tag{6}$$

Here ${s}_{k}={\displaystyle {\iint}_{\Omega}{H}_{k}(\overrightarrow{r})S(\overrightarrow{r}){\mathrm{d}}^{2}\overrightarrow{r}}$ is the integration of the complex field of the sample coded by the illumination, and $r={\displaystyle {\iint}_{\Omega}S(\overrightarrow{r}){\mathrm{d}}^{2}\overrightarrow{r}}$ is a constant, which serves as the reference beam for the interferometry. With three-step phase shifting, that is *ϕ* = 0, *π*/2, *π*, the complex integration can be recovered by

$$s_k\, r^{*} = (I_{k,0} - I_{k,\pi}) + j\,(I_{k,0} + I_{k,\pi} - 2 I_{k,\pi/2}), \tag{7}$$

where $r^{*}$ denotes the complex conjugate of *r*. So far, the complex amplitude of the complex integration is recovered (up to the constant factor $r^{*}$), and the inverse Walsh-Hadamard transform can be applied to further reconstruct the complex amplitude of the sample field as Eq. (4) shows. Thus, combining complex spatial modulation with single-point detection resolves the phase information.
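As a sanity check, the three-step phase-shifting recovery can be verified numerically on scalar values; the specific values of $s_k$ and $r$ below are arbitrary illustrations.

```python
import numpy as np

# Numerical check of the three-step phase-shifting recovery:
# s_k is the Hadamard-coded integral of the sample field, r the constant
# reference term; both values are arbitrary for illustration.
s_k = 0.7 * np.exp(1j * 1.2)
r = 1.3 * np.exp(-1j * 0.4)

def intensity(phi):
    # single-point Fourier-plane intensity for P_{k,phi} = (e^{j phi} H_k + 1)/2
    return np.abs(np.exp(1j * phi) * s_k + r) ** 2 / 4

I0, I_half, I_pi = intensity(0), intensity(np.pi / 2), intensity(np.pi)
s_r_conj = (I0 - I_pi) + 1j * (I0 + I_pi - 2 * I_half)
print(np.isclose(s_r_conj, s_k * np.conj(r)))   # → True
```

Since $r$ is the same constant for every pattern index $k$, the unknown factor $r^{*}$ only scales the whole reconstruction by a global complex constant.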

#### 2.3. Single-pixel fluorescence imaging

For fluorescence detection, the total intensity of the fluorescence field is measured. Because fluorescence light is incoherent, the forward model follows Eq. (1),

$$I^{f}_{k,\phi} = \iint_{\Omega} |P_{k,\phi}(\overrightarrow{r})|^2\, F(\overrightarrow{r})\, {\mathrm{d}}^{2}\overrightarrow{r}, \tag{8}$$

where $F(\overrightarrow{r})$ denotes the fluorescence intensity distribution of the sample. Fluorescence detection can be conducted in parallel with phase detection by sharing the Walsh-Hadamard pattern illuminations. For ${P}_{k,\phi} = (e^{j\phi} \cdot H_k + 1)/2$ and *ϕ* = 0 or *π*, all elements of ${P}_{k,\phi}$ are either 0 or 1, so we have

$$P_{k,0} = \frac{H_k + 1}{2}, \qquad P_{k,\pi} = \frac{1 - H_k}{2}. \tag{9}$$

Therefore, the fluorescence integration can be recovered with two phase shifts

$$f_k = I^{f}_{k,0} - I^{f}_{k,\pi} = \iint_{\Omega} H_k(\overrightarrow{r})\, F(\overrightarrow{r})\, {\mathrm{d}}^{2}\overrightarrow{r}. \tag{10}$$

Similarly, the fluorescence intensity can be reconstructed via the inverse Walsh-Hadamard transform as Eq. (4) shows. Note that Eq. (7) and Eq. (10) look similar, but are different in principle: Eq. (7) is based on phase-shifting interferometry, while Eq. (10) contains the subtraction of two phase shifts because each element of the Walsh-Hadamard matrix is either +1 or −1 and negative intensity cannot be displayed on the binary spatial light modulator.
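The two-shift fluorescence recovery can be sketched numerically as well, again with a hypothetical 16 × 16 fluorescence map: the 0/1 complementary patterns come from the *ϕ* = 0 and *ϕ* = *π* cases, and their difference restores the ±1 Hadamard measurement.

```python
import numpy as np
from scipy.linalg import hadamard

# Sketch of the two-shift fluorescence recovery: the phi = 0 and phi = pi
# patterns are the 0/1 complements (H_k + 1)/2 and (1 - H_k)/2, and their
# difference restores the +1/-1 Hadamard measurement.
n = 16
N = n * n
rng = np.random.default_rng(1)
fluor = rng.random((n, n))           # hypothetical fluorescence image

H = hadamard(N)                      # +1/-1 Walsh-Hadamard bases
P0 = (H + 1) / 2                     # phi = 0 patterns, entries 0 or 1
Ppi = (1 - H) / 2                    # phi = pi patterns, complements

m0 = P0 @ fluor.ravel()              # total-intensity (bucket) measurements
mpi = Ppi @ fluor.ravel()
f = m0 - mpi                         # equals H @ fluor.ravel()

recovered = (H.T @ f / N).reshape(n, n)
print(np.allclose(recovered, fluor))   # → True
```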

In summary, both the phase and fluorescence of the sample can be reconstructed simultaneously. Since the fluorescence detection arm uses only two of the three phase shifts, the total number of sequential measurements remains the same as in the phase detection arm, and the fluorescence information is acquired via total-intensity fluorescence detection.

## 3. Experimental setup

The scheme of the experimental setup is shown in Fig. 1. We use a single-frequency 488 nm laser (Sapphire 488SF-100mW, Coherent). The incident light is expanded and collimated by a spatial filter, consisting of an objective lens, a pinhole, and a collimating lens. The diameter of the output beam is around 20 mm to match the size of the DMD (V-7001, ViALUX GmbH). The pixel resolution of the DMD is 1024 × 768 with a pixel size of 13.68 µm, and the DMD has a maximum refreshing rate of 22.7 kHz for binary patterns. We use a total internal reflection (TIR) prism to compensate for the 12° rotation angle of the micromirrors and rotate the DMD by 45° to ensure that the light reflected by the DMD is parallel to the optical axis of the incident light, a configuration widely used in digital light projection with the DMD. A 4f system is built to enable complex amplitude modulation on the DMD using the Lee hologram method [22, 26]. The 4f system consists of two lenses (L_{1} and L_{2}) and a diaphragm at the Fourier plane with a diameter of ∼1 mm, which directly blocks all orders except the +1st order, as shown in the inset of Fig. 1. The focal lengths of L_{1} and L_{2} are both 150 mm. The three patterns used in the experiment are shown in the top left of Fig. 1, where 0, *π*/2, *π* represent the three phase shifts (in terms of light path length, instead of spatial shifts of sinusoidal patterns) of the Walsh-Hadamard pattern, and the inset is the cross part of the *ϕ* = *π*/2 pattern, containing the phase difference between the left part and the right part. The desired complex amplitude field then interacts with the sample field at the sample plane, which is conjugate with the DMD plane, as indicated by the dashed line in Fig. 1. The tube lens (TTL180-A for Olympus, Thorlabs) has an effective focal length of 180 mm. The objective lens (Plan N 10x/0.25, Olympus) has a magnification of 10x and a numerical aperture (NA) of 0.25.
For a larger field-of-view in the quantum dot experiment, we do not use the tube lens and objective lens, and place the sample directly at the conjugate plane of the DMD in the L_{1}−L_{2} 4f system.

The interaction field is detected by two arms. The first is the phase imaging arm, consisting of a lens L_{3}, a single-mode fiber (with a core diameter of ∼4 µm), and a Si avalanche photodetector (APD120A2, Thorlabs) that measures the intensity of the light exiting the single-mode fiber; this detector is not shown in Fig. 1. The second is the fluorescence arm, consisting of a dichroic mirror (DM, MDF-GFP, Thorlabs), a fluorescence filter (FELH0500, Thorlabs), a lens (L_{4}), and a photomultiplier tube (PMT1001/M, Thorlabs). The signals of the two photodetectors are digitized in parallel by a four-channel 14-bit analog-to-digital converter (PCI8514, ART Technology). Finally, the raw data are post-processed to retrieve the phase and fluorescence of the sample. Because the DMD for spatially-resolved encoding in single-pixel imaging and the pixelated detector for direct spatially-resolved capture in conventional photography are reciprocal [38], the images of different modalities in single-pixel imaging are automatically aligned, since they share the same spatially-resolved element [11]. In this way, the recovered phase and fluorescence images are inherently aligned thanks to single-pixel detection.

## 4. Results

We experimentally demonstrate the phase imaging ability on an etched glass sample. The pixel resolution of the result is 128 × 128, and the total number of projected patterns for each sample is 49,152. The data acquisition for each sample takes 2.5 seconds, with the refreshing rate of the DMD set to 20 kHz. We use 6 × 6 pixel binning of the DMD to increase the illumination intensity at the sample plane, i.e., 768 × 768 of the DMD pixels are used in the experiment. Note that 2 × 2 pixel binning is also feasible (pixel binning of at least 2 × 2 is usually used in the Lee hologram method to obtain complex-amplitude modulation with high fidelity [22, 26, 27]); we use large pixel binning considering the modulation efficiency and detection sensitivity, because the Lee hologram method has a complex-field modulation efficiency of ∼5% and the intensities of single-point detection and fluorescence detection are relatively low. We then show simultaneous phase and fluorescence imaging of the quantum dot sample. The pixel resolution is 256 × 256 and the pixel binning is 3 × 3. The total acquisition time for full sampling is ∼10 seconds, not including the loading time of the binary patterns and the switching time among four pattern groups due to the limited number of pre-loadable patterns (87,380 < 256 × 256 × 3). For under-sampling with a sampling ratio of 30%, the acquisition time is 3 seconds.

#### 4.1. Phase imaging

The sample used in the phase imaging experiment is an etched glass including three letters “100” at a 400 nm etching depth. The phase reconstruction result is shown in Fig. 2, and the physical sample surface is estimated from the phase difference of the sample

$$h(\overrightarrow{r}) = \frac{\lambda\, \phi (\overrightarrow{r})}{2\pi\, \Delta n}.$$

Here $\phi (\overrightarrow{r})$ denotes the phase difference introduced by the sample, *λ* is the wavelength of the incident light (488 nm in this experiment), and Δ*n* is the difference of the refractive index between the glass and air (0.4630 in the experiment).
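As a quick worked check of this height-from-phase relation with the stated values (*λ* = 488 nm, Δ*n* = 0.4630), a phase difference of about 2.38 rad corresponds to the 400 nm etching depth:

```python
import math

# Height-from-phase relation h = lambda * phi / (2*pi*dn), using the
# values given in the text (488 nm illumination, dn = 0.4630).
wavelength_nm = 488.0
dn = 0.4630
phi = 2 * math.pi * dn * 400.0 / wavelength_nm   # phase of a 400 nm step
height_nm = wavelength_nm * phi / (2 * math.pi * dn)
print(round(phi, 2))       # → 2.38 rad
print(round(height_nm))    # → 400 nm
```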

As shown in Figs. 2(a) and 2(b), the smooth part of the raw sample phase matches well with the background phase. The estimated height is about 400 nm for the three characters “100”, which agrees well with the fabrication and with the result of diffraction phase microscopy (DPM) [39], as shown in Fig. 2(d). Note that the background phase is fixed and needs to be calibrated only once. All the phases here are unwrapped using the Goldstein phase unwrapping method [40]. The artifacts in the subtracted phase are due to slightly misaligned center-frequency points at the Fourier plane when measuring the raw phase and the background phase. The scale bar in Fig. 2 is 500 µm.

#### 4.2. Simultaneous phase and fluorescence imaging

In this experiment, we use the quantum dot sample to demonstrate simultaneous phase and fluorescence imaging. The quantum dot is a 20% solution of methylammonium lead bromide (CH_{3}NH_{3}PbBr_{3}). We drop the solution on a piece of pre-cleaned slide glass to form dots or characters, and the results of simultaneous phase and fluorescence imaging are shown in Fig. 3. The sample includes five letters (“FM” and “QPM”) written with the quantum dot solution on the slide glass. The letters “FM” denote fluorescence microscopy and are written with the quantum dot emitting green fluorescence. The letters “QPM” denote quantitative phase microscopy and do not emit fluorescence. Figure 3 shows that phase and fluorescence imaging offer complementary information. Phase imaging contains the structural information, that is, the height of the written letters shown in Fig. 3(b). Fluorescence imaging contains the functional information, that is, the periphery of the “FM” letters is fluorescent, as shown in Fig. 3(a). Both phase and fluorescence images are full-sampled with Walsh-Hadamard bases. It is worth noting that the contours of the fluorescence and phase images match well without manual alignment, as shown in Fig. 3(c), which is attributed to the single-pixel imaging mechanism: the sample position in the image is determined merely by the relative position between the sample and the structured illuminations. Here we do not subtract the background phase, to avoid background misalignment artifacts between the two imaging arms. The scale bar in Fig. 3 is 2 mm.

#### 4.3. Compressive phase and fluorescence imaging

We further show that we can recover the phase and fluorescence compressively from under-sampled measurements. We sample a proportion of the Walsh-Hadamard spectrum with high coefficients (assuming the coefficients of natural images concentrate at low spatial frequencies), as stated in [6, 12, 18], and apply the inverse Walsh-Hadamard transform to reconstruct the phase and fluorescence images. We use the “zigzag” strategy to sample the 2D coefficients of the Walsh-Hadamard transform space, in order to include as many significant spatial frequencies as possible, as suggested in [35]. The sample is the same quantum dot sample used in Sec. 4.2.
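One plausible realization of this zigzag under-sampling is to order the 2D coefficient indices by anti-diagonals (low indices first) and keep only the leading fraction given by the sampling ratio; the ordering within each anti-diagonal below is our choice for illustration, and the paper itself follows [35].

```python
import numpy as np

# "Zigzag" selection of 2D Walsh-Hadamard coefficients: order indices by
# anti-diagonals (u + v ascending) and keep the leading fraction.
def zigzag_mask(n, ratio):
    order = sorted(((u, v) for u in range(n) for v in range(n)),
                   key=lambda uv: (uv[0] + uv[1], uv[0]))
    keep = order[:int(ratio * n * n)]
    mask = np.zeros((n, n), dtype=bool)
    for u, v in keep:
        mask[u, v] = True
    return mask

mask = zigzag_mask(8, 0.3)
print(mask.sum())     # → 19 of 64 coefficients kept at a 30% ratio
print(mask[0, 0])     # → True; the lowest coefficient is always sampled
```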

The reconstructed phase and fluorescence images, and the peak signal-to-noise ratio (PSNR) of the under-sampled fluorescence and phase images, are shown in Figs. 4(a), 4(b), and 4(c), respectively. The PSNR of an image $\mathbf{I}$ with respect to a reference image $\mathbf{R}$ is calculated as $\text{PSNR}=20\cdot {\mathrm{log}}_{10}({\text{MAX}}_{\mathbf{I}}/\sqrt{\text{MSE}})$, where ${\text{MAX}}_{\mathbf{I}}$ is the maximum possible pixel value of the image $\mathbf{I}$, and MSE is the mean square error of the image $\mathbf{I}$ with respect to the reference $\mathbf{R}$, i.e., $\text{MSE}=\frac{1}{mn}{\displaystyle {\sum}_{i=1}^{m}{\sum}_{j=1}^{n}{[\mathbf{I}(i,j)-\mathbf{R}(i,j)]}^{2}}$. We use the full-sampled result, that is, a sampling ratio of 100%, as the ground truth for quantitative evaluation, since the real ground truth is nearly inaccessible. It can be seen that a sampling ratio of 30% already gives a good estimation (over 30 dB in terms of PSNR) of the scene, for both fluorescence and phase imaging, as shown in Fig. 4(a). Besides, the quantitative indices support the observation. The reconstructed PSNR of both fluorescence and phase images is over 30 dB (a PSNR over 30 dB is generally acceptable in most scenarios), as shown in Figs. 4(b) and 4(c), respectively.

Biological observations and diagnosis could benefit from simultaneous phase and fluorescence modalities and compressive imaging, especially for long-term imaging of slow phenomena [2]. Simultaneous phase and fluorescence modalities could provide both structural/morphological and functional information of the sample [3–5], offering a comprehensive tool for the study of embryo development in model organisms such as *Danio rerio*. Besides, a single-pixel detector can be very sensitive, so the power of the laser source can be limited to a very low, safe, and bio-compatible level. In this way, it would impose little impact on the development of the organisms, which is beneficial for long-term imaging. The reduced number of measurements could further reduce the total amount of light energy deposited on the organisms and contribute to faster acquisition of a single frame.

## 5. Discussion and conclusion

We report a simultaneous phase and fluorescence imaging approach using single-pixel detectors. By combining reference-free interferometry with single-pixel detection, we encode the phase and fluorescence of the sample in two detection arms at the same time. To the best of our knowledge, this is the first attempt to explore these microscopic multi-modalities via single-pixel imaging without mechanical scanning. This approach is capable of multimodal imaging without the expertise-demanding alignment of multiple cameras or the reduced spatial resolution due to spatial multiplexing. We tested the phase imaging accuracy with an etched glass, and demonstrated simultaneous phase and fluorescence imaging on a quantum dot sample, where the fluorescence and phase information match well with each other. Besides, we experimentally show that a good estimation can be obtained with a low sampling ratio, which is promising for high-speed imaging and detection tasks [41, 42]. This method can be extended to other wavelengths where array detectors are expensive or even unavailable, such as infrared imaging [12, 13, 28], Terahertz imaging [29, 30, 43], and X-ray imaging [31, 32].

Two main challenges remain for further pursuit. The first is the trade-off among the acquisition speed, pixel resolution, and quality of the reconstructed image, since single-pixel imaging (SPI) is a sequential imaging modality. The limitation lies in the refreshing speed of the spatial light modulator (SLM), denoted as *f*_{SLM}. The acquisition speed *f*_{SPI} is proportional to the refreshing speed of the SLM, and inversely proportional to the total number of pixels *N* and the sampling ratio *η*, that is, *f*_{SPI} = *f*_{SLM}/(*η* · *N*). One could increase the speed either by breaking the refreshing limit of the SLM, like spatial sweeping [41], or by decreasing the sampling ratio while maintaining acceptable reconstruction quality, like compressive [6, 18, 20] and adaptive [44–46] sampling strategies. To this end, compressive sensing for complex-valued 2D signals is highly desirable, since the phase information lies in the complex-amplitude signal and, to the best of our knowledge, limited work has been devoted to that problem. The second challenge is the light efficiency of this method, especially for wide-field fluorescence microscopy. As discussed in Section 4, the efficiency of complex field modulation using the Lee hologram is ∼5% because of the diffraction effect of the DMD and the use of only the +1st order beam. Though single-pixel detectors with increased sensitivity, like single-photon counting modules, or increased laser power could compensate for this, either would add to the cost of the system, which might limit its usage in budget-constrained applications. This issue could be tackled by binary complex-amplitude modulation methods with higher efficiency or by using a fast spatial phase modulator (though slower than the DMD), like the fast liquid crystal on silicon (LCoS) [47] or the ferroelectric LCoS [48].
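The frame-rate trade-off *f*_{SPI} = *f*_{SLM}/(*η* · *N*) can be checked against the acquisition figures reported in Section 4; the factor of three below accounts for the three phase-shifted patterns per Hadamard basis used in this work.

```python
# Frame-rate trade-off f_SPI = f_SLM / (eta * N), with the paper's
# parameters: a 20 kHz DMD, 128 x 128 pixels, full sampling (eta = 1),
# and three phase-shifted patterns per Hadamard basis.
f_slm = 20_000            # DMD refresh rate, patterns per second
N = 128 * 128             # pixel count
eta = 1.0                 # sampling ratio (full sampling)
patterns = int(3 * eta * N)        # total patterns per frame
print(patterns)                    # → 49152, matching Section 4
print(round(patterns / f_slm, 2))  # → 2.46 s per frame, ~2.5 s as reported
```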

Moreover, with additional single-pixel detectors or a spectrometer, this system could be extended for multispectral fluorescence microscopy [6, 18]. Furthermore, we envision that this cost-efficient multimodal system might find broad applications in biomedical science and material science.

## Funding

National Natural Science Foundation of China (NSFC) (61327902, 61722110, 61631009, 61671265, 61627804).

## Acknowledgments

The authors thank You Zhou and Liheng Bian for inspiring discussions. The authors would like to share special thanks to Prof. Xing Sheng and Zhao Shi from the Department of Electronic Engineering, Tsinghua University for preparing the etched glass sample, and Prof. Haizheng Zhong and Linghai Meng from the School of Materials Science and Engineering, Beijing Institute of Technology for preparing the quantum dot sample. We are grateful to the anonymous reviewers for constructive suggestions and criticism.

## References

**1. **J. Pawley, *Handbook of Biological Confocal Microscopy* (Springer US, 2006).

**2. **G. Popescu, *Quantitative phase imaging of cells and tissues* (McGraw-Hill, 2011).

**3. **Y. Park, G. Popescu, K. Badizadegan, R. R. Dasari, and M. S. Feld, “Diffraction phase and fluorescence microscopy,” Opt. Express **14**, 8263–8268 (2006). [CrossRef] [PubMed]

**4. **S. Chowdhury, W. J. Eldridge, A. Wax, and J. A. Izatt, “Spatial frequency-domain multiplexed microscopy for simultaneous, single-camera, one-shot, fluorescent, and quantitative-phase imaging,” Opt Lett **40**, 4839–4842 (2015). [CrossRef] [PubMed]

**5. **S. Chowdhury, W. J. Eldridge, A. Wax, and J. A. Izatt, “Structured illumination multimodal 3D-resolved quantitative phase and fluorescence sub-diffraction microscopy,” Biomed. Opt. Express **8**, 2496–2518 (2017). [CrossRef] [PubMed]

**6. **V. Studer, J. Bobin, M. Chahid, H. S. Mousavi, E. Candes, and M. Dahan, “Compressive fluorescence microscopy for biological and hyperspectral imaging,” Proc Natl Acad Sci U S A **109**, E1679–E1687 (2012). [CrossRef] [PubMed]

**7. **Z. Zhang, S. Liu, J. Peng, M. Yao, G. Zheng, and J. Zhong, “Simultaneous spatial, spectral, and 3D compressive imaging via efficient Fourier single-pixel measurements,” Optica **5**, 315–319 (2018). [CrossRef]

**8. **M. F. Duarte, M. A. Davenport, D. Takhar, J. N. Laska, S. Ting, K. F. Kelly, and R. G. Baraniuk, “Single-pixel imaging via compressive sampling,” IEEE Signal Process. Mag. **25**, 83–91 (2008). [CrossRef]

**9. **K. C. Neuman and S. M. Block, “Optical trapping,” Rev Sci Instrum **75**, 2787–2809 (2004). [CrossRef]

**10. **T. Čižmár, M. Mazilu, and K. Dholakia, “In situ wavefront correction and its application to micromanipulation,” Nat. Photonics **4**, 388–394 (2010). [CrossRef]

**11. **B. Sun, M. P. Edgar, R. Bowman, L. E. Vittert, S. Welsh, A. Bowman, and M. J. Padgett, “3D computational imaging with single-pixel detectors,” Science **340**, 844–847 (2013). [CrossRef] [PubMed]

**12. **N. Radwell, K. J. Mitchell, G. M. Gibson, M. P. Edgar, R. Bowman, and M. J. Padgett, “Single-pixel infrared and visible microscope,” Optica **1**, 285–289 (2014). [CrossRef]

**13. **M. P. Edgar, G. M. Gibson, R. W. Bowman, B. Sun, N. Radwell, K. J. Mitchell, S. S. Welsh, and M. J. Padgett, “Simultaneous real-time visible and infrared video with single-pixel detectors,” Sci Rep **5**, 10669 (2015). [CrossRef] [PubMed]

**14. **S. S. Welsh, M. P. Edgar, R. Bowman, P. Jonathan, B. Sun, and M. J. Padgett, “Fast full-color computational imaging with single-pixel detectors,” Opt Express **21**, 23068–23074 (2013). [CrossRef] [PubMed]

**15. **L. Bian, J. Suo, G. Situ, Z. Li, J. Fan, F. Chen, and Q. Dai, “Multispectral imaging using a single bucket detector,” Sci Rep **6**, 24752 (2016). [CrossRef] [PubMed]

**16. **Y. Wang, J. Suo, J. Fan, and Q. Dai, “Hyperspectral computational ghost imaging via temporal multiplexing,” IEEE Photonics Technol. Lett. **28**, 288–291 (2016). [CrossRef]

**17. **Z. Li, J. Suo, X. Hu, C. Deng, J. Fan, and Q. Dai, “Efficient single-pixel multispectral imaging via non-mechanical spatio-spectral modulation,” Sci Rep **7**, 41435 (2017). [CrossRef] [PubMed]

**18. **Q. Pian, R. Yao, N. Sinsuebphon, and X. Intes, “Compressive hyperspectral time-resolved wide-field fluorescence lifetime imaging,” Nat. Photonics **11**, 411–414 (2017). [CrossRef] [PubMed]

**19. **P. Clemente, V. Durán, E. Tajahuerce, V. Torres-Company, and J. Lancis, “Single-pixel digital ghost holography,” Phys. Rev. A **86**, 041803 (2012). [CrossRef]

**20. **P. Clemente, V. Durán, E. Tajahuerce, P. Andres, V. Climent, and J. Lancis, “Compressive holography with a single-pixel detector,” Opt Lett **38**, 2524–2527 (2013). [CrossRef] [PubMed]

**21. **L. Martínez-León, P. Clemente, Y. Mori, V. Climent, J. Lancis, and E. Tajahuerce, “Single-pixel digital holography with phase-encoded illumination,” Opt. Express **25**, 4975–4984 (2017). [CrossRef] [PubMed]

**22. **W.-H. Lee, “III computer-generated holograms: Techniques and applications,” Prog. Opt. **16**, 119–232 (1978). [CrossRef]

**23. **H. González, L. Martínez-León, F. Soldevila, M. Araiza-Esquivel, E. Tajahuerce, and J. Lancis, “High-speed single-pixel digital holography,” in *SPIE Optical Metrology*, vol. 10333, P. Ferraro, S. Grilli, M. Ritsch-Marte, and C. K. Hitzenberger, eds. (SPIE, 2017), p. 103330G.

**24. **P. A. Stockton, J. J. Field, and R. A. Bartels, “Single pixel quantitative phase imaging with spatial frequency projections,” Methods **136**, 24–34 (2018). [CrossRef]

**25. **B. R. Brown and A. W. Lohmann, “Complex spatial filtering with binary masks,” Appl. Opt. **5**, 967–969 (1966). [CrossRef] [PubMed]

**26. **W.-H. Lee, “Binary synthetic holograms,” Appl. Opt. **13**, 1677–1682 (1974). [CrossRef] [PubMed]

**27. **S. A. Goorden, J. Bertolotti, and A. P. Mosk, “Superpixel-based spatial amplitude and phase modulation using a digital micromirror device,” Opt. Express **22**, 17999–18009 (2014). [CrossRef] [PubMed]

**28. **S. Shin, K. Lee, Y. Baek, and Y. Park, “Reference-free single-point holographic imaging and realization of an optical bidirectional transducer,” Phys. Rev. Appl. **9**, 044042 (2018). [CrossRef]

**29. **C. M. Watts, D. Shrekenhamer, J. Montoya, G. Lipworth, J. Hunt, T. Sleasman, S. Krishna, D. R. Smith, and W. J. Padilla, “Terahertz compressive imaging with metamaterial spatial light modulators,” Nat. Photonics **8**, 605–609 (2014). [CrossRef]

**30. **R. I. Stantchev, B. Sun, S. M. Hornett, P. A. Hobson, G. M. Gibson, M. J. Padgett, and E. Hendry, “Noninvasive, near-field terahertz imaging of hidden objects using a single-pixel detector,” Sci. Adv. **2**, e1600190 (2016). [CrossRef] [PubMed]

**31. **H. Yu, R. Lu, S. Han, H. Xie, G. Du, T. Xiao, and D. Zhu, “Fourier-transform ghost imaging with hard X rays,” Phys. Rev. Lett. **117**, 113901 (2016). [CrossRef] [PubMed]

**32. **D. Pelliccia, A. Rack, M. Scheel, V. Cantelli, and D. M. Paganin, “Experimental X-ray ghost imaging,” Phys. Rev. Lett. **117**, 113902 (2016). [CrossRef] [PubMed]

**33. **Z. Zhang, X. Ma, and J. Zhong, “Single-pixel imaging by means of Fourier spectrum acquisition,” Nat. Commun. **6**, 6225 (2015). [CrossRef] [PubMed]

**34. **M. Harwit and N. J. A. Sloane, *Hadamard Transform Optics* (Elsevier, 1979).

**35. **Z. Zhang, X. Wang, G. Zheng, and J. Zhong, “Hadamard single-pixel imaging versus Fourier single-pixel imaging,” Opt. Express **25**, 19619–19639 (2017). [CrossRef] [PubMed]

**36. **E. J. Candès and T. Tao, “Near-optimal signal recovery from random projections: Universal encoding strategies?” IEEE Trans. Inf. Theory **52**, 5406–5425 (2006). [CrossRef]

**37. **E. J. Candès, J. Romberg, and T. Tao, “Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information,” IEEE Trans. Inf. Theory **52**, 489–509 (2006). [CrossRef]

**38. **P. Sen, B. Chen, G. Garg, S. R. Marschner, M. Horowitz, M. Levoy, and H. P. A. Lensch, “Dual photography,” ACM Trans. Graph. **24**, 745–755 (2005). [CrossRef]

**39. **G. Popescu, T. Ikeda, R. R. Dasari, and M. S. Feld, “Diffraction phase microscopy for quantifying cell structure and dynamics,” Opt. Lett. **31**, 775–777 (2006). [CrossRef] [PubMed]

**40. **R. M. Goldstein, H. A. Zebker, and C. L. Werner, “Satellite radar interferometry: Two-dimensional phase unwrapping,” Radio Sci. **23**, 713–720 (1988). [CrossRef]

**41. **Y. Wang, Y. Liu, J. Suo, G. Situ, C. Qiao, and Q. Dai, “High speed computational ghost imaging via spatial sweeping,” Sci. Rep. **7**, 45325 (2017). [CrossRef] [PubMed]

**42. **S. Ota, R. Horisaki, Y. Kawamura, M. Ugawa, I. Sato, K. Hashimoto, R. Kamesawa, K. Setoyama, S. Yamaguchi, K. Fujiu, K. Waki, and H. Noji, “Ghost cytometry,” Science **360**, 1246–1251 (2018). [CrossRef] [PubMed]

**43. **R. I. Stantchev, D. B. Phillips, P. Hobson, S. M. Hornett, M. J. Padgett, and E. Hendry, “Compressed sensing with near-field THz radiation,” Optica **4**, 989–992 (2017). [CrossRef]

**44. **M. Aßmann and M. Bayer, “Compressive adaptive computational ghost imaging,” Sci. Rep. **3**, 1545 (2013). [CrossRef] [PubMed]

**45. **Z. Li, J. Suo, X. Hu, and Q. Dai, “Content-adaptive ghost imaging of dynamic scenes,” Opt. Express **24**, 7328–7336 (2016). [CrossRef] [PubMed]

**46. **D. B. Phillips, M.-J. Sun, J. M. Taylor, M. P. Edgar, S. M. Barnett, G. M. Gibson, and M. J. Padgett, “Adaptive foveated single-pixel imaging with dynamic supersampling,” Sci. Adv. **3**, e1601782 (2017). [CrossRef] [PubMed]

**47. **Meadowlark Optics, “1920 x 1152 spatial light modulator,” https://www.meadowlark.com/1920-1152-spatial-light-modulator-p-119. Online; accessed: 04-September-2018.

**48. **Forth Dimension Displays, “Ferroelectric Liquid Crystal on Silicon (FLCoS),” https://www.forthdd.com/products/spatial-light-modulators/. Online; accessed: 04-September-2018.