## Abstract

We demonstrate a single-pixel imaging (SPI) method that can achieve pixel resolution beyond the physical limitation of the spatial light modulator (SLM), by adopting sinusoidal amplitude modulation and frequency filtering. Through light field analysis, we observe that the induced intensity with a squared value of the amplitude contains higher frequency components. By filtering out the zero frequency of the sinusoidal amplitude in the Fourier domain, we can separate out the higher frequency components, which enables SPI with higher resolving ability and thus beyond the limitation of the SLM. Further, to address the speed issue in grayscale spatial light modulation, we propose a fast implementation scheme with tens-of-kilohertz refresh rate. Specifically, we use a digital micromirror device (DMD) working at the full frame rate to conduct binarized sinusoidal patterning in the spatial domain and pinhole filtering eliminating the binarization error in the Fourier domain. For experimental validation, we build a single-pixel microscope to retrieve 1200 × 1200-pixel images via a sub-megapixel DMD, and the setup achieves comparable performance to array sensor microscopy and provides additional sectioning ability.

© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

## 1. Introduction

Single-pixel imaging (SPI) is an emerging imaging technique that reconstructs the target 2D or 3D visual information from a number of 1D encoded measurements [1–3]. It uses a low-cost single-pixel detector to capture light, and then retrieves the scene’s image computationally. The simplification of the sensing device broadens the imaging system’s working spectrum [4–6], which enables SPI to operate in wavelength bands where array sensors are costly or unavailable, such as infrared and terahertz [7–9]. Besides, SPI attracts much attention in imaging through scattering media [10–13] because of its high tolerance to distortion between samples and detectors, and in photon-counting imaging because of its high signal-to-noise ratio [14].

In SPI, a programmable spatial light modulator (SLM) is generally used to encode the scene’s spatial information [15, 16]. The SLM’s pixel count is the major determinant of the retrieved image’s pixel count. However, in the non-visible spectrum, the pixel count of SLMs is usually insufficient. For example, commercially available infrared SLMs offer no more than 1000 × 1000 pixels. At longer wavelengths such as terahertz, SLMs are far less developed, and high pixel-count fabrication remains difficult [17]. Therefore, it is important to bypass the pixel count limit of SLMs so that SPI can acquire more information.

In conventional imaging, some works apply sub-pixel shifts at the sensor side, which yields resolution enhancement after post-processing [18]. For SPI, however, the sub-pixel shift of the camera corresponds to a sub-pixel shift of the pattern due to Helmholtz reciprocity [19, 20]. To achieve resolution beyond the SLM’s pixel count limitation via such sub-pixel shifting, a high-precision motorized stage is necessary for precise micro-scanning [21], which adds cost and implementation complexity. Besides, extra time is needed for the SLM’s mechanical shifting, which reduces the imaging speed.

In this paper, we introduce a frequency-extended SPI approach that brings a 4-fold increase in the SLM’s equivalent pixel count without micro-scanning. As analyzed and observed in the following, the generated intensity is the square of the SLM’s sinusoidal modulation function and consists of mixed fundamental and doubled frequencies. Conventional recovery methods exploit only the resolving ability of the fundamental frequency component, while the higher frequency components are canceled and the corresponding resolving ability goes unused. To fully exploit the system’s resolving ability, we propose a Fourier-domain filtering approach that separates the different frequency components and retrieves the scene information encoded by both the lower and higher frequency components. Further, to reach the high modulation speed of binary patterns, we combine binarized sinusoidal patterning with pinhole filtering in the Fourier domain to generate the continuous sinusoidal field. In our implementation, both the sinusoidal modulation and the Fourier-domain filtering (either frequency separation or binary-to-grayscale transform) are multiplexed on a single DMD. Using this method, we can retrieve megapixel images with a sub-megapixel modulator. We apply this technique to build a single-pixel microscope and achieve promising results on biological samples. Compared with the images produced by a conventional microscope, our system provides better sectioning ability. The proposed method relaxes the physical limitations of high pixel-count SPI and helps establish SPI as an effective imaging technique for practical applications.

## 2. Theoretical model

The general scheme of SPI is shown in Fig. 1(a). The spatial information of the sample is carried by the modulated light field and then collected by a single-pixel detector. As mentioned above, the pixel count of the retrieved image is determined by that of the SLM. For grayscale sinusoidal modulation, the pixel count is proportional to the highest frequency of the sinusoidal intensity generated by the projector [19]. On the other hand, for grayscale amplitude modulation, the intensity, being the square of the amplitude, displays a different distribution containing both fundamental and higher frequency components [22, 23]. Here we conduct grayscale sinusoidal amplitude modulation and propose an optical Fourier-domain filtering method that separates the different frequency components and enables retrieving more scene details from the higher frequency components. Our method thus bypasses the pixel count (i.e., frequency) limitation of the SLM.

In the theoretical model, we use an *M* × *N*-pixel grayscale spatial light modulator (SLM) to modulate the amplitude of the incident plane-wave field with a sinusoidal transmissivity. The amplitude and intensity of the outgoing field are described as

$$\begin{array}{l}{U}_{\text{gray}}({x}_{0},{y}_{0},\varphi )=1+\text{cos}(2\pi {f}_{x}{x}_{0}+2\pi {f}_{y}{y}_{0}+\varphi )\\ {I}_{\text{gray}}({x}_{0},{y}_{0},\varphi )={U}_{\text{gray}}{}^{2}=\frac{3}{2}+2\text{cos}(2\pi {f}_{x}{x}_{0}+2\pi {f}_{y}{y}_{0}+\varphi )+\frac{1}{2}\text{cos}(4\pi {f}_{x}{x}_{0}+4\pi {f}_{y}{y}_{0}+2\varphi ),\end{array}$$

with (*x*_{0}, *y*_{0}) being the 2D coordinate of the field, *ϕ* the initial phase, and (*f*_{x}, *f*_{y}) ∈ [−*M*/2, *M*/2] × [−*N*/2, *N*/2] denoting the spatial frequency of the fringe along the *x*-axis and *y*-axis.
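As a quick numeric sanity check, the harmonic decomposition above can be verified in a few lines. The sketch below assumes the normalization *U* = 1 + cos(·) from Eq. (1), so the intensity should decompose into a DC term of 3/2, a fundamental of amplitude 2, and a doubled-frequency term of amplitude 1/2; the grid size and fringe frequency are arbitrary illustration values.

```python
import numpy as np

# Numeric check that squaring a sinusoidal amplitude produces a
# doubled-frequency intensity component, under the assumed normalization
# U = 1 + cos(2*pi*fx*x), so U^2 = 3/2 + 2*cos(...) + (1/2)*cos(2*...).
# N and fx are illustration values (fx must satisfy 2*fx < N/2).
N, fx = 512, 25
x = np.arange(N) / N
U = 1 + np.cos(2 * np.pi * fx * x)   # 1-D sinusoidal amplitude
I = U ** 2                           # detected intensity

F = np.fft.rfft(I) / N               # normalized spectrum of the intensity
print(round(F[0].real, 6))           # DC term            -> 1.5
print(round(2 * abs(F[fx]), 6))      # fundamental fx     -> 2.0
print(round(2 * abs(F[2 * fx]), 6))  # doubled freq 2*fx  -> 0.5
```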

The field interacts with the target object, which has reflectivity or
transmissivity *S*(*x*_{0}, *y*_{0}), and is then collected by a single-pixel
detector with the integrated intensity being

$$T({f}_{x},{f}_{y},\varphi )={\displaystyle \underset{\Omega}{\iint}}{I}_{\text{gray}}({x}_{0},{y}_{0},\varphi )\,S{({x}_{0},{y}_{0})}^{2}\,\text{d}{x}_{0}\,\text{d}{y}_{0}+{N}_{\text{env}},$$

where *N*_{env} is the environmental noise and Ω denotes the illuminated area.

Note that *I*_{gray} contains the fundamental frequency
(the lowest non-zero frequency of the periodic signal)
(*f*_{x}, *f*_{y}) and the doubled frequency (2*f*_{x}, 2*f*_{y}),
which indicates that *I*_{gray} can encode scene information beyond
(*f*_{x}, *f*_{y}). Now, the problem turns into separating these two components.
_{y}**Separation of component**(*f*,_{x}*f*). We could separate the fundamental frequency term (_{y}*f*,_{x}*f*) by canceling the second harmonic frequency (2_{y}*f*, 2_{x}*f*) term using phase shift [19]. By assigning different values to_{y}*ϕ*with an increasing step*π*/2 (i.e., 0,*π*/2,*π*, 3*π*/2), we can cancel the (2*f*, 2_{x}*f*) term in the_{y}*I*_{gray}as well as noise*N*_{env}and extract the sample’s frequency component as follows$$\begin{array}{l}T({f}_{x},{f}_{y},0)-T({f}_{x},{f}_{y},\pi )+j\left(T\left({f}_{x},{f}_{y},\frac{\pi}{2}\right)-T\left({f}_{x},{f}_{y},\frac{3\pi}{2}\right)\right)\\ =4\mathcal{F}\left[S{\left({x}_{0},{y}_{0}\right)}^{2},{f}_{x},{f}_{y}\right].\end{array}$$Here*ℱ*[*g*(*x*,*y*),*f*,_{x}*f*] represents the Fourier transform of function_{y}*g*(*x*,*y*) at (*f*,_{x}*f*) and_{y}*j*is the imaginary unit. By forming SLM modulation with (*f*,_{x}*f*) traversing all the frequencies available in the SLM, we can retrieve_{y}*S*(*x*_{0},*y*_{0}) by taking the inverse Fourier transform. The retrieved spatial spectrum and intensity by above method are shown in Fig. 2(b). We can see that limited by low (*f*,_{x}*f*), the retrieved result suffers from insufficient resolution and blurry structures._{y}**Separation of component**(2*f*, 2_{x}*f*). It is infeasible to separating the (2_{y}*f*, 2_{x}*f*) term by phase shifting because we could not cancel the fundamental frequency (_{y}*f*,_{x}*f*) term. To address this issue, we go back to Eq. (1). The field with sinusoidal amplitude has three major frequencies (+1, 0, −1) in the Fourier domain. 
If we block the zero-frequency and transform the field from the Fourier domain back to the spatial domain, we can get a field with amplitude and intensity as_{y}$$\begin{array}{l}{U}_{\text{double}}({x}_{0},{y}_{0},\varphi )=\text{cos}(2\pi {f}_{x}{x}_{0}+2\pi {f}_{y}{y}_{0}+\varphi )\\ {I}_{\text{double}}({x}_{0},{y}_{0},\varphi )=\frac{1}{2}+\frac{1}{2}\text{cos}(2\pi 2{f}_{x}{x}_{0}+2\pi 2{f}_{y}{y}_{0}+2\varphi ).\end{array}$$This procedure is depicted in Fig. 1(b). The field interacts with the target and is then measured as*T′*(*f*,_{x}*f*,_{y}*ϕ*). We still use four-step phase shift but with a step of*π*/4, and get a similar expression to Eq. (3)$$\begin{array}{l}{T}^{\prime}({f}_{x},{f}_{y},0)-{T}^{\prime}\left({f}_{x},{f}_{y},\frac{\pi}{2}\right)+j\left({T}^{\prime}\left({f}_{x},{f}_{y},\frac{\pi}{4}\right)-{T}^{\prime}\left({f}_{x},{f}_{y},\frac{3\pi}{4}\right)\right)\\ =\mathcal{F}\left[S{\left({x}_{0},{y}_{0}\right)}^{2},2{f}_{x},2{f}_{y}\right].\end{array}$$It tells that we can retrieve the (2*f*, 2_{x}*f*) frequency component from 4 sinusoidal modulations with (_{y}*f*,_{x}*f*) frequency, thus the method is called “frequency extension”._{y}
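The four-step retrieval of Eq. (3) can be simulated end to end. The sketch below is a minimal noiseless model, assuming a toy 32 × 32 scene, intensity patterns of the form (1 + cos)², and an ideal bucket detector; each complex Fourier coefficient of *S*² is assembled from four single-pixel readings and the image is recovered by an inverse FFT.

```python
import numpy as np

# Minimal noiseless simulation of four-step phase-shift Fourier SPI (Eq. (3)).
# Assumptions: toy 32x32 scene, sinusoidal intensity (1 + cos)^2, ideal bucket
# detector; the four-phase combination then equals 4 * F[S^2].
N = 32
yy, xx = np.mgrid[0:N, 0:N] / N
S = np.exp(-((xx - 0.5) ** 2 + (yy - 0.35) ** 2) / 0.02)  # hypothetical sample

def bucket(fx, fy, phi):
    """One single-pixel measurement under one sinusoidal fringe."""
    I = (1 + np.cos(2 * np.pi * (fx * xx + fy * yy) + phi)) ** 2
    return (I * S ** 2).sum()

spec = np.zeros((N, N), dtype=complex)
for v in range(N):            # fringe frequencies in FFT ordering
    for u in range(N):
        spec[v, u] = (bucket(u, v, 0) - bucket(u, v, np.pi)
                      + 1j * (bucket(u, v, np.pi / 2) - bucket(u, v, 3 * np.pi / 2)))

rec = np.fft.ifft2(spec / 4).real   # divide by the factor 4 of Eq. (3)
print(np.allclose(rec, S ** 2))     # True: S^2 is recovered
```

Each of the *N*² frequencies costs four measurements, matching the 4-pattern-per-coefficient budget described in the text.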

After extracting both the (*f*_{x}, *f*_{y}) and (2*f*_{x}, 2*f*_{y}) components, for an *M* × *N*-pixel modulator we can retrieve up to 2*M* × 2*N* pixels for the target scene, while conventional SPI is limited to *M* × *N*, as shown in Fig. 2(a). The captured result of SPI after applying frequency extension is shown in Fig. 2(c). From the comparison between the results with and without frequency extension, one can see that the reconstructed spatial spectrum with frequency extension is largely extended, as illustrated by the white square. The comparison also shows that the frequency extension result has higher resolution, much richer details, and sharper microstructures.
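The frequency-doubling step of Eq. (4) can also be illustrated numerically: removing the zero-frequency component of the field and squaring the result yields a fringe with twice the spatial frequency. The 1-D grid size and fringe frequency below are illustrative choices.

```python
import numpy as np

# 1-D illustration of Eq. (4): blocking the DC component of the sinusoidal
# amplitude leaves only the +-1 orders, and the resulting intensity fringe
# has doubled frequency. N and u are illustration values.
N, u = 256, 8
x = np.arange(N) / N
U = 1 + np.cos(2 * np.pi * u * x)       # sinusoidal amplitude, Eq. (1) form

spec = np.fft.fft(U)
spec[0] = 0                             # pinhole filtering: block the DC order
U_double = np.fft.ifft(spec).real       # = cos(2*pi*u*x)
I_double = U_double ** 2                # = 1/2 + 1/2*cos(2*pi*(2u)*x)

F = np.abs(np.fft.fft(I_double))
peak = int(np.argmax(F[1:N // 2])) + 1  # strongest non-DC intensity frequency
print(peak)                             # 16, i.e. 2*u: the fringe frequency doubled
```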

## 3. Experimental setup

Grayscale SLMs have low frame rates, which largely limits the speed of SPI. To deal with this problem, we propose a fast sinusoidal amplitude modulation approach that exploits the DMD’s high binary amplitude modulation speed and the properties of the light field with sinusoidal amplitude [24–26]. Specifically, we decompose the sinusoidal modulation into two successive steps: binary amplitude modulation in the spatial domain and pinhole filtering in the Fourier domain [24–26]. In the spatial domain, we use the Floyd-Steinberg dithering method [27, 28] to generate a sinusoidal-like binary pattern on a DMD that approximates the grayscale sinusoidal modulation in Eq. (1) as

$${U}_{\text{binary}}({x}_{0},{y}_{0},\varphi )=FD\left\{{U}_{\text{gray}}({x}_{0},{y}_{0},\varphi )\right\},$$

where *FD*{·} denotes the Floyd-Steinberg dithering operator, which generates a binary representation of a grayscale image by diffusing the residual quantization error of each pixel onto its neighboring pixels. Such a binarized amplitude is shown in Fig. 5(a). We use a lens with focal length *f*_{1} to form the spatial spectrum of *U*_{binary},

$${\tilde{U}}_{\text{binary}}(\xi ,\eta )=\delta (\xi ,\eta )+\frac{1}{2}{e}^{j\varphi}\delta (\xi -\lambda {f}_{1}{f}_{x},\eta -\lambda {f}_{1}{f}_{y})+\frac{1}{2}{e}^{-j\varphi}\delta (\xi +\lambda {f}_{1}{f}_{x},\eta +\lambda {f}_{1}{f}_{y})+{N}_{\text{bin}}(\xi ,\eta ),$$

where *δ*(*ξ*, *η*) is the point-source function, (*ξ*, *η*) is the 2D coordinate of the Fourier plane, *λ* is the wavelength, and *N*_{bin}(*ξ*, *η*) is the noise induced by binarization. As shown in Eq. (7), the three point sources, named the (+1, 0, −1) frequency components, coincide with the frequencies of the dither-free sinusoidal amplitude *U*_{gray}.
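For concreteness, a reference implementation of the dithering step is sketched below; it binarizes a sinusoidal fringe with the classic Floyd-Steinberg error-diffusion weights and checks that most of the spectral energy stays in the (0, +1, −1) bins, as Eq. (7) predicts. The pattern is normalized to [0, 1] for thresholding, and the pattern size and frequency are illustrative.

```python
import numpy as np

def floyd_steinberg(img):
    """Binarize a grayscale image in [0, 1], diffusing each pixel's
    quantization error with the classic 7/16, 3/16, 5/16, 1/16 weights."""
    a = img.astype(float).copy()
    h, w = a.shape
    out = np.zeros_like(a)
    for y in range(h):
        for x in range(w):
            new = 1.0 if a[y, x] >= 0.5 else 0.0
            err = a[y, x] - new
            out[y, x] = new
            if x + 1 < w:
                a[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    a[y + 1, x - 1] += err * 3 / 16
                a[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    a[y + 1, x + 1] += err * 1 / 16
    return out

# Dither a sinusoidal fringe (normalized to [0, 1]) and check that the
# (0, +1, -1) frequency bins still dominate the spectrum, as in Eq. (7).
N, fx, fy = 128, 10, 10                     # illustrative size and frequency
yy, xx = np.mgrid[0:N, 0:N] / N
B = floyd_steinberg(0.5 + 0.5 * np.cos(2 * np.pi * (fx * xx + fy * yy)))

P = np.abs(np.fft.fft2(B)) ** 2             # power spectrum of the binary pattern
dominant = P[0, 0] + P[fy, fx] + P[N - fy, N - fx]
print(dominant / P.sum() > 0.5)             # True: most energy in the 3 bins
```

The residual energy outside the three dominant bins is exactly the binarization noise that the pinhole mask removes.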

In the Fourier domain, we use a second DMD to select (+1, 0, −1) frequency
components and block noise
*N*_{bin}(*ξ*, *η*)
by forming three pinholes; thus the sinusoidal amplitude in Eq. (1) is
generated after the field is transformed back to the spatial domain [29].
In this way, we achieve sinusoidal amplitude modulation at the refresh
rate of binary amplitude modulation. For frequency extension, only two
pinholes are used to select the (+1, −1) frequency components, so the
zero frequency is filtered out as well.

To ensure synchronization between dithering modulation and pinhole filtering, we use a single DMD to modulate the light field in the spatial and Fourier domains simultaneously. One part of the DMD displays the dithered pattern, and another part displays the pinhole mask. Every dithered pattern on the left region of the DMD has a corresponding pinhole mask on the right, as shown in Fig. 4(b). In other words, together with a pair of lenses for the Fourier and inverse Fourier transforms, the proposed approach needs only a binary SLM to generate sinusoidal amplitudes. In our experimental setup, the DMD can work at up to 22.7 kHz for binary modulation, which means we can achieve 22.7 kHz sinusoidal modulation. The comparison between the theoretical model and the implementation is shown in Fig. 3.

To prove the concept, we build a single-pixel microscope, as diagrammed in
Fig. 4(a). Note that this setup
uses active illumination; however, our method is also applicable to the
passive configuration [30, 31]. We use a laser diode (Thorlabs DPSS
Laser Diodes, 532 nm) as the light source. The laser beam passes a spatial
filter system and is expanded into a 3 cm diameter collimated beam.
Because the margin of the beam has much lower intensity than the center,
we set the beam size larger than the DMD size (1.4 cm ×1.0 cm) and block
the outer region for better illumination uniformity. The beam arrives at
the left region of the DMD (Texas Instruments DLP Discovery 4100, 0.7 XGA
DMD), which modulates the incoming beam with dithered sinusoidal
amplitude, as shown in Fig. 4(a).
To eliminate the approximation error and achieve high-quality sinusoidal
amplitude, the light beam is then collected by an
*f*_{1} = 100 mm concave mirror (CM) and reaches
the right region of the DMD. Here the distance from the CM to the DMD’s
right region is set to exactly *f*_{1}, so the
spatial spectrum of the input light field with dithered amplitude is
formed at the DMD plane. The distance between the (+1, 0, −1) frequency
components is proportional to ${f}_{1}\cdot \sqrt{{f}_{x}^{2}+{f}_{y}^{2}}$, so
*f*_{1} should be chosen small enough to keep all
three dominant frequencies within the active area of the DMD. Then, we use
pinhole masks to discard high order frequency components caused by
dithering approximation and system noise. The pinhole masks are set as
follows: to capture the scene frequencies within the limit of the DMD
region that plays the dithered patterns in Fig. 4(b), we use a three-pinhole mask to let all the dominant
frequencies pass through; for the acquisition of frequency components
beyond the limitation (i.e., frequency extension), only +1 and −1
components are selected by a two-pinhole mask. By calibration, the radius
of the laser spot focused on the right side of the DMD is about 200 μm,
corresponding to 15 micromirrors in the DMD. We set the pinhole size
coincident with the spot size, as shown in Fig. 4(b). Thereafter, the beam leaving the DMD reforms a
sinusoidal amplitude after passing a lens *f*_{2} =
200 mm (L1). A telescope consisting of TL1 (*f*_{3}
= 200 mm) and (Obj1, 4×, NA = 0.13, Nikon MRH00041H) relays the sinusoidal
light field onto the sample plane. The power incident to the DMD, reaching
the sample under three-pinhole masks and under two-pinhole masks are
around 25 mW, 17 mW and 5 mW, respectively. Scene information is then
relayed by the second telescope (TL2 and Obj2), and finally collected by a
lens *f* = 75 mm (L2) and a single-pixel detector (Thorlabs
PDA100A-EC Silicon photodiode).

## 4. Megapixel imaging

First, we use the setup to validate the proposed model. In this experiment,
we apply different pinhole masks in Figs.
5(b)–5(d) to the dithered pattern with frequency
*f*_{x} = *f*_{y} = 25 in Fig. 5(a), and capture the intensity of the spatial spectrum on the DMD’s right region and the final generated fringes on the sample plane to visualize the modulation procedure. The captured results show that the distribution of the spatial spectrum matches well with the predictions in Sec. 2 and Sec. 3. Besides, a closer observation supports the key points of the proposed approach and the advantages of our compact implementation: (i) The dithering algorithm achieves fast modulation at the expense of noise in both the spatial and Fourier domains, but allocates most energy to the (0, +1, −1) frequency components and thus keeps the modulation energy-efficient, as shown in Fig. 5(b), which agrees with Eq. (7). (ii) After pinhole filtering, the spatial spectrum becomes much cleaner and produces grayscale fringes in the spatial domain, as shown in Figs. 5(c) and 5(d). (iii) After blocking the zero-frequency component of the spatial spectrum, we can generate a denser fringe, as shown in Fig. 5(d). The theoretical predictions of the fringes on the sample plane by Eqs. (1), (4) and (7) are provided on the bottom for comparison, and the high coincidence validates the correctness and effectiveness of the theoretical analysis.

Next, we conduct an experiment to validate the accuracy of the sinusoidal modulation by the DMD combined with pinhole filtering. Specifically, we quantitatively compare the final imaging results under three settings: direct grayscale sinusoidal modulation (2.5 kHz), dithering approximation without pinhole filtering (20 kHz), and dithering approximation with pinhole filtering (20 kHz). In this experiment, we make the comparison without frequency extension (i.e., three pinholes are used for the pinhole-filtered dithered patterns). For direct grayscale modulation, we use the gray mode of our 0.7 DMD to project 8-bit grayscale sinusoidal patterns. The image retrieved using the 8-bit grayscale sinusoidal patterns serves as the benchmark. All the reconstructed images are converted to 8-bit images for fair comparison. The reconstructions in the spatial and Fourier domains by these three approaches with the same pixel resolution (384 × 384 pixels) are shown in Fig. 6. Compared with the benchmark result in Fig. 6(c), the result from dithered patterns in Fig. 6(a) suffers from high-frequency noise in the Fourier domain and noise in the spatial domain. On the contrary, the result using pinhole-filtered dithered patterns in Fig. 6(b) shows only slight noise in the spatial spectrum and high-quality visual results in the spatial domain. To further evaluate the accuracy of our approach, we quantitatively compare the reconstructed images of the three methods in terms of the structural similarity (SSIM) score [32]. Using the benchmark reconstruction as the reference, the SSIM scores of the results by dithered patterns with and without pinhole filtering are 0.8577 and 0.7166, respectively. From both the visual and quantitative comparisons, we can see that our high-speed filtered dithering modulation is of high fidelity to direct grayscale sinusoidal modulation.

Next, we demonstrate the effectiveness of our approach for megapixel SPI. Using the proposed frequency extension method, we reconstruct 1200 × 1200-pixel images using 640 × 640 micromirrors of the DMD for dithering modulation. The data acquisition takes about 150 seconds at the 20 kHz refresh rate of the DMD.

In order to characterize the quality of our frequency extension method, we
use a part of the 1951 USAF Resolution Test Target as the target scene.
The result is shown in Fig. 7(a),
with a field of view of 4.7 mm × 4.7 mm. In terms of resolvability, we can
see that the fringes remain resolved up to Group 6, Element 1, of which
the line width is 7.81 μm. The intensity fluctuations along the marked
profiles are shown in Fig. 7(b), demonstrating
high contrast. To verify the frequency extension method quantitatively,
we calculate the theoretical resolution of our system. For the sinusoidal
SPI with 1200 × 1200 pixels, the highest frequency is
(*f*_{x} = 600, *f*_{y} = 600), which means there are 600 fringes across the 4.7 mm × 4.7 mm area. The theoretical resolution, i.e., the spatial period of the fringe, is 4.7 mm/600 ≈ 7.83 μm. This shows that the experimental and theoretical resolutions coincide, which validates the effectiveness of the proposed method.
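The two resolution figures quoted above can be cross-checked with the standard USAF-1951 formula (line pairs per mm = 2^(group + (element − 1)/6)); this is a quick verification of the quoted numbers, not part of the measurement.

```python
# Cross-check of the quoted resolution numbers.
# USAF-1951: resolution in line pairs per mm = 2 ** (group + (element - 1) / 6).
group, element = 6, 1
lp_per_mm = 2 ** (group + (element - 1) / 6)   # 64 lp/mm for Group 6, Element 1
line_width_um = 1000 / lp_per_mm / 2           # one line = half a line pair
fringe_period_um = 4.7e3 / 600                 # 600 fringes across 4.7 mm
print(round(line_width_um, 2), round(fringe_period_um, 2))  # 7.81 7.83
```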

We also demonstrate the performance of our approach in biological imaging.
The retrieved microstructures of a thin slice of a dog’s taste buds are
shown in Fig. 8(a). The sample is about 6 μm thick along the
*z*-axis. For a better
understanding, we compare the SPI results with different pixel counts, as
well as results captured by a wide-field microscope with a commercial CMOS
(JAI GO-5000C) and the same objective, as displayed in Figs. 8(b) and 8(c). We find that the imaging
quality is largely improved as the pixel count increases. The 1200 ×
1200-pixel reconstructed images can achieve performance comparable to
results captured directly by CMOS.

We further use our system to observe a thicker sample, about 14 μm thick
along the *z*-axis. In this experiment, we use two 20×
objectives (Obj1 and Obj2, both are Nikon MRH00201, NA = 0.5) to observe
structures of rabbit’s vein section. The imaging result of our method is
displayed in Fig. 9(a). Compared
with the result by wide-field microscope in Fig. 9(c), we can see that our SPI technique can
capture the axial slice without out-of-focus signals, while the in-focus
signals and out-of-focus signals are mixed together with a wide-field
microscope. To validate our system’s ability of eliminating out-of-focus
signals, we also compare the result from a confocal microscope (Zeiss
LSM700 with a 20×, NA = 0.8 objective) in Fig. 9(b). The confocal image is captured at depth ∼ 8 μm from the
coverslip. The coincidence with the result of the confocal microscope
validates our system’s sectioning ability, which largely facilitates the
observation of biological samples.

Here we provide a mathematical analysis of the sectioning ability. Assuming that only the in-focus plane is modulated by the sinusoidal field [33], the total intensity collected by the single-pixel detector is

$$T({f}_{x},{f}_{y},\varphi )={\displaystyle \underset{\Omega}{\iint}}\left[{I}_{\text{gray}}({x}_{0},{y}_{0},\varphi ){S}_{\text{in}}{({x}_{0},{y}_{0})}^{2}+{S}_{\text{out}}({x}_{0},{y}_{0})\right]\text{d}{x}_{0}\,\text{d}{y}_{0}+{N}_{\text{env}},$$

where *S*_{out} and *S*_{in} represent the out-of-focus and in-focus parts respectively, and *N*_{env} denotes noise. The out-of-focus part in Eq. (8) does not change with the phase shift of the fringe, so using the four-phase shift we get

$$T({f}_{x},{f}_{y},0)-T({f}_{x},{f}_{y},\pi )+j\left(T\left({f}_{x},{f}_{y},\frac{\pi}{2}\right)-T\left({f}_{x},{f}_{y},\frac{3\pi}{2}\right)\right)=4\mathcal{F}\left[{S}_{\text{in}}{\left({x}_{0},{y}_{0}\right)}^{2},{f}_{x},{f}_{y}\right],$$

in which the phase-independent out-of-focus term and the noise are canceled, so only the in-focus signal is retrieved.
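The cancellation underlying the sectioning ability can be checked with a toy 1-D example: a phase-independent background term, standing in for the out-of-focus light, drops out of the four-step combination. All sizes and values below are illustrative.

```python
import numpy as np

# Toy check of the sectioning analysis: an unmodulated (out-of-focus)
# background cancels in the four-step combination, leaving 4 * F[S_in^2]
# only. Values are illustrative.
N, u = 64, 5
x = np.arange(N) / N
S_in = np.exp(-(x - 0.4) ** 2 / 0.01)    # in-focus part, modulated
background = 3.7                         # out-of-focus light, phase-independent

def T(phi):
    I = (1 + np.cos(2 * np.pi * u * x + phi)) ** 2
    return (I * S_in ** 2).sum() + background

combo = T(0) - T(np.pi) + 1j * (T(np.pi / 2) - T(3 * np.pi / 2))
print(np.isclose(combo, 4 * np.fft.fft(S_in ** 2)[u]))  # True: background rejected
```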

## 5. Discussion

The pixel counts are increased by 1.88 × 1.88 folds in Fig. 2(c) and Sec. 4. There are several reasons why the
increase does not reach the theoretical upper bound (i.e., 4-fold).
Firstly, when (*f*_{x}, *f*_{y}) becomes higher, one gets denser fringes with more very bright or very dark pixels, and the Floyd-Steinberg dithering method causes more artifacts [34]. Secondly, when working in the frequency extension mode, the fringe is shifted by *π*/4 instead of *π*/2, and a smaller shift step causes larger quantization errors.

In our configuration, some pixels of the DMD have to be used for pinhole
filtering, thus the highest frequency of the generated sinusoidal fringe
will be reduced. However, this reduction can be eliminated by careful
configuration. As derived in Sec. 3, the filtering mask in the DMD
consists of two or three discrete pinholes. The distance between these
pinholes depends on the focal length *f*_{1}, so
the mask size can be much smaller than the area used for modulating
dithered patterns in the DMD by carefully choosing
*f*_{1}. Besides, the frequency components are
discrete in the Fourier domain, thus a small number of micromirrors are
sufficient for the mask generation. Therefore, we can use a small DMD
region for the pinhole mask, which slightly reduces the final pixel
count.

Results from our single-pixel microscope show speckles and interference-induced artifacts because of the coherent laser source. Such issues can be mitigated by using a light source with lower coherence or by using random laser illumination [35].

In our experiment, the outgoing light from the sample is safely within the response range of the photodiode and is digitized by a 12-bit acquisition card. The dynamic range of the results could be further improved by using an acquisition card with higher bit depth.

## 6. Conclusion

In this paper, we proposed a frequency extension method that can increase the upper limit of conventional SPI’s pixel count by four times. This increase stems from the adopted sinusoidal amplitude modulation, which generates intensity with frequencies higher than the SLM’s limit, and from the frequency filtering, which successfully unmixes the lower and higher frequencies. Fundamentally, our method transforms the challenge of high pixel-count SPI from one coupled with fabrication into one solvable through modulation. With our compact setup, we successfully retrieve megapixel images using a sub-megapixel SLM and demonstrate its performance in biological microscopy. The similarity between the imaging results of our system and those of a confocal microscope shows the sectioning ability of our system. Our method enables SPI with a higher pixel count and makes SPI more effective in biological imaging.

The proposed method is highly useful for SPI in spectral bands where high pixel-count SLMs are not commercially available, such as far-infrared and terahertz. For example, in terahertz imaging, one could achieve 128 × 128-pixel imaging using a 64 × 64-pixel metamaterial modulator for dithered modulation and another 15 × 15-pixel modulator for pinhole filtering [9]. The ratio between the pixel count of the reconstructed image and that of the modulators is 128 × 128/(64 × 64 + 15 × 15) ≈ 3.8, which shows that we largely exceed the pixel count limit of spatial light modulators.

Besides, as a general modulation approach, the proposed frequency extension SPI can be applied to various scenarios, such as imaging under extremely low illumination [36], capturing weak signals such as fluorescence [37], and imaging through scattering media. Utilizing the wide spectral response and high SNR of a single-pixel detector [38], our current setup also has potential for multicolor fluorescence imaging.

## Funding

National Natural Science Foundation of China (NSFC) (No. 61327902, 6172200426 and 61631009).

## Acknowledgments

We thank Xu Zhang for help with using the confocal microscope. We thank Hao Xie and Lingjie Kong for critical feedback on the manuscript.

## References and links

**1. **M. F. Duarte, M. A. Davenport, D. Takhar, J. N. Laska, T. Sun, K. F. Kelly, and R. G. Baraniuk, “Single-pixel imaging via
compressive sampling,” IEEE Signal Process.
Mag. **25**, 83–91
(2008). [CrossRef]

**2. **B. Sun, M. P. Edgar, R. Bowman, L. E. Vittert, S. Welsh, A. Bowman, and M. Padgett, “3D computational imaging with
single-pixel detectors,” Science **340**, 844–847
(2013). [CrossRef] [PubMed]

**3. **M. J. Sun, M. P. Edgar, G. M. Gibson, B. Sun, N. Radwell, R. Lamb, and M. J. Padgett, “Single-pixel
three-dimensional imaging with time-based depth
resolution,” Nat. Commun. **7**, 12010 (2016). [CrossRef]

**4. **N. Huynh, E. Zhang, M. Betcke, S. Arridge, P. Beard, and B. Cox, “Single-pixel optical camera
for video rate ultrasonic imaging,”
Optica **3**,
26–29 (2016). [CrossRef]

**5. **B. I. Erkmen and J. H. Shapiro, “Ghost imaging: from quantum
to classical to computational,” Adv. Opt.
Photonics **2**,
405–450 (2010). [CrossRef]

**6. **J. Greenberg, K. Krishnamurthy, and D. Brady, “Compressive single-pixel
snapshot x-ray diffraction imaging,” Opt.
Lett. **39**,
111–114 (2014). [CrossRef]

**7. **M. P. Edgar, G. M. Gibson, R. W. Bowman, B. Sun, N. Radwell, K. J. Mitchell, S. S. Welsh, and M. J. Padgett, “Simultaneous real-time
visible and infrared video with single-pixel
detectors,” Sci. Rep. **5**, 10669 (2015). [CrossRef]

**8. **N. Radwell, K. J. Mitchell, G. M. Gibson, M. P. Edgar, R. Bowman, and M. J. Padgett, “Single-pixel infrared and
visible microscope,” Optica **1**, 285–289
(2014). [CrossRef]

**9. **C. M. Watts, D. Shrekenhamer, J. Montoya, G. Lipworth, J. Hunt, T. Sleasman, S. Krishna, D. R. Smith, and W. J. Padilla, “Terahertz compressive imaging
with metamaterial spatial light modulators,”
Nat. Photonics **8**,
605–609 (2014). [CrossRef]

**10. **E. Tajahuerce, V. Durán, P. Clemente, E. Irles, F. Soldevila, P. Andrés, and J. Lancis, “Image transmission through
dynamic scattering media by single-pixel
photodetection,” Opt. Express **22**, 16945–16955
(2014). [CrossRef] [PubMed]

**11. **V. Durán, F. Soldevila, E. Irles, P. Clemente, E. Tajahuerce, P. Andrés, and J. Lancis, “Compressive imaging in
scattering media,” Opt. Express **23**, 14424–14433
(2015). [CrossRef] [PubMed]

**12. **L. Martínez-León, P. Clemente, Y. Mori, V. Climent, J. Lancis, and E. Tajahuerce, “Single-pixel digital
holography with phase-encoded illumination,”
Opt. Express **25**,
4975–4984 (2017). [CrossRef] [PubMed]

**13. **G. M. Gibson, B. Sun, M. P. Edgar, D. B. Phillips, N. Hempler, G. T. Maker, G. P. Malcolm, and M. J. Padgett, “Real-time imaging of methane
gas leaks using a single-pixel camera,” Opt.
Express **25**,
2998–3005 (2017). [CrossRef] [PubMed]

**14. **G. A. Howland, P. B. Dixon, and J. C. Howell, “Photon-counting compressive
sensing laser radar for 3D imaging,” Appl.
Opt. **50**,
5917–5920 (2011). [CrossRef] [PubMed]

**15. **O. Katz, Y. Bromberg, and Y. Silberberg, “Compressive ghost
imaging,” Appl. Phys. Lett. **95**, 131110 (2009). [CrossRef]

**16. **F. Soldevila, E. Salvador-Balaguer, P. Clemente, E. Tajahuerce, and J. Lancis, “High-resolution adaptive
imaging with a single photodiode,” Sci.
Rep. **5**, 14300
(2015). [CrossRef] [PubMed]

**17. **C. M. Watts, C. C. Nadell, J. Montoya, S. Krishna, and W. J. Padilla,
“Frequency-division-multiplexed single-pixel imaging
with metamaterials,” Optica **3**, 133–138
(2016). [CrossRef]

**18. **R. Szeliski, “Image alignment and
stitching: A tutorial,” Found. Trends. Comput.
Graph. Vis. **2**,
1–104 (2006). [CrossRef]

**19. **Z. Zhang, X. Ma, and J. Zhong, “Single-pixel imaging by means
of fourier spectrum acquisition,” Nat.
Commun. **6**, 6225
(2015). [CrossRef] [PubMed]

**20. **M. J. Sun, M. P. Edgar, D. B. Phillips, G. M. Gibson, and M. J. Padgett, “Improving the signal-to-noise
ratio of single-pixel imaging using digital
microscanning,” Opt. Express **24**, 10476–10485
(2016). [CrossRef] [PubMed]

**21. **S. Tetsuno, K. Shibuya, and T. Iwata, “Subpixel-shift
cyclic-hadamard microscopic imaging using a pseudo-inverse-matrix
procedure,” Opt. Express **25**, 3420–3432
(2017). [CrossRef] [PubMed]

**22. **R. D. Juday, “Correlation with a spatial
light modulator having phase and amplitude cross
coupling,” Appl. Opt. **28**, 4865–4869
(1989). [CrossRef] [PubMed]

**23. **D. Engström, G. Milewski, J. Bengtsson, and S. Galt, “Diffraction-based
determination of the phase modulation for general spatial light
modulators,” Appl. Opt. **45**, 7195–7204
(2006). [CrossRef] [PubMed]

**24. **S. Shin, K. Kim, J. Yoon, and Y. Park, “Active illumination using a
digital micromirror device for quantitative phase
imaging,” Opt. Lett. **40**, 5407–5410
(2015). [CrossRef] [PubMed]

**25. **D. B. Conkey, A. M. Caravaca-Aguirre, and R. Piestun, “High-speed scattering medium
characterization with application to focusing light through turbid
media,” Opt. Express **20**, 1733–1740
(2012). [CrossRef] [PubMed]

**26. **A. Drémeau, A. Liutkus, D. Martina, O. Katz, C. Schülke, F. Krzakala, S. Gigan, and L. Daudet, “Reference-less measurement of
the transmission matrix of a highly scattering material using a dmd
and phase retrieval techniques,” Opt.
Express **23**,
11898–11911 (2015). [CrossRef] [PubMed]

**27. **R. Floyd and L. Steinberg, “An adaptive algorithm for
spatial greyscale,” Proc. Soc. Inf.
Disp. **17**,
75–77
(1976).

**28. **Z. Zhang, X. Wang, G. Zheng, and J. Zhong, “Fast fourier single-pixel
imaging via binary illumination,” Sci.
Rep. **7**, 12029
(2017). [CrossRef] [PubMed]

**29. **J. W. Goodman, *Introduction to Fourier
optics* (McGraw-Hill,
2008).

**30. **C. Li, “An efficient algorithm for total
variation regularization with applications to the single pixel camera
and compressive sensing,” Master’s thesis,
Rice University
(2010).

**31. **F. Soldevila, P. Clemente, E. Tajahuerce, N. Uribe-Patarroyo, P. Andrés, and J. Lancis, “Computational imaging with a
balanced detector,” Sci. Rep. **6**, 29181 (2016). [CrossRef] [PubMed]

**32. **Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment:
from error visibility to structural similarity,”
IEEE Trans. Image Process **13**,
600–612 (2004). [CrossRef] [PubMed]

**33. **S. Santos, K. K. Chu, D. Lim, N. Bozinovic, T. N. Ford, C. Hourtoule, A. C. Bartoo, S. K. Singh, and J. Mertz, “Optically sectioned
fluorescence endomicroscopy with hybrid-illumination imaging through a
flexible fiber bundle,” J. Biomed.
Opt. **14**, 030502
(2009). [CrossRef] [PubMed]

**34. **R. A. Ulichney, “Dithering with blue
noise,” Proc. IEEE **76**, 56–79
(1988). [CrossRef]

**35. **B. Redding, M. A. Choma, and H. Cao, “Speckle-free laser imaging
using random laser illumination,” Nat.
Photonics **6**,
355–359 (2012). [CrossRef] [PubMed]

**36. **P. A. Morris, R. S. Aspden, J. E. Bell, R. W. Boyd, and M. J. Padgett, “Imaging with a small number
of photons,” Nat. Commun. **6**, 5913 (2015). [CrossRef] [PubMed]

**37. **Q. Pian, R. Yao, N. Sinsuebphon, and X. Intes, “Compressive hyperspectral
time-resolved wide-field fluorescence lifetime
imaging,” Nat. Photonics **11**, 411–414
(2017). [CrossRef] [PubMed]

**38. **M. Strupler, E. D. Montigny, D. Morneau, and C. Boudoux, “Rapid spectrally encoded
fluorescence imaging using a wavelength-swept source,”
Opt. Lett. **35**,
1737–1739 (2010). [CrossRef] [PubMed]