Abstract

The speed and quality of single-pixel imaging (SPI) are fundamentally limited by the image modulation frequency and by the levels of optical noise and compression noise. To come close to these limits, we introduce an inherently differential SPI technique that comprises a novel way of measuring the zeroth spatial frequency of images and makes use of varied thresholding of the sampling patterns. With the proposed sampling, the entropy of the detection signal is increased in comparison to standard SPI protocols. Image reconstruction is obtained with a single matrix-vector product, so the cost of the reconstruction scales proportionally with the number of measured samples. A differential operator is included in the reconstruction, and the method is based on finding the generalized inverse of the modified measurement matrix with regularization in the Fourier domain. We demonstrate 256 × 256 SPI at up to 17 Hz in the visible and near-infrared wavelength ranges using 2 polarization or spectral channels. A low bit-resolution data acquisition device with alternating-current coupling can be used in the measurement, indicating that the proposed method combines improved noise robustness with differential removal of the direct-current component of the signal.

© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Since their introduction over a decade ago [1,2], indirect image measurement techniques known as single-pixel imaging (SPI) have led to a considerable number of novel ideas about image measurement at various wavelength ranges, spectral imaging, imaging through scattering media, 3D imaging, etc. [3,4].

The speed of single-pixel imaging is limited by the frequency of spatial modulation and by the time needed for digital image reconstruction. Most real-time reconstruction approaches rely on a single-step image reconstruction using a fast transform, e.g., the Fourier (FFT), Walsh-Hadamard (FWHT), or discrete cosine (DCT) transforms, or on the evaluation of a matrix-vector product [5,6]. There is also growing interest in using neural networks for image reconstruction or for removing artifacts caused by compression [7,8].

Modulation speeds obtained with modern digital micromirror devices (DMD), such as the one used by us, are on the order of tens of kilohertz. This is not a lot, as many thousands of exposures are needed per single image measurement. Far higher frame rates are achievable with structured illumination from LED arrays, although such set-ups for ghost imaging and SPI have been demonstrated only at low resolutions [9–12]. Modulation with rotary elements carrying fixed patterns is a high-speed, cost-efficient alternative to dynamic modulators in the THz range [13,14].

Block compressive imaging also effectively increases the sampling frequency per pixel with the use of multiple detectors or a focal-plane array [15–17].

Differential photodetection techniques make it possible to eliminate some of the optical noise and to effectively increase the SNR of the detection signal. Differential imaging and normalized imaging were used in computational ghost imaging prior to SPI [18,19]. In SPI, a reference detector may be used either to measure the unmodulated reference signal and normalize the detection signal in the presence of source intensity fluctuations, or to measure a differently modulated signal in parallel. The latter possibility is often based on the fact that DMD micromirrors may take exactly 1 of 2 positions (let us call them "0" and "1"). The intensities of the 2 reflected beams are proportional to the result of image sampling with complementary (binary-negated) patterns [20,21]. Their difference provides a signal with improved SNR and with the bias removed. This is possible when the collimated incident beam strikes the DMD perpendicularly, for instance in a telescopic set-up [20]. Otherwise the set-up must be carefully calibrated to retain a similar level of signal from both detectors, which becomes challenging for moving and rotating 3D objects. Still, this kind of balanced photodetection has been used by many groups [5,22,23].

A more straightforward way to obtain differential detection is to subtract the results of 2 subsequent measurements [24]. This not only removes a slowly changing bias from the signal but also encodes both positive and negative values of real-valued sampling functions [25–28]. A more elaborate encoding makes it possible to represent complex-valued sampling functions with 3 [29] instead of 4 [30] real-valued non-negative patterns. This approach has its origins in interferometry and also removes a constant bias at the same time. Encoding a complex pattern with a larger number of non-negative intensity signals has also been proposed [31]. A simplex encoding may also be used to represent an arbitrary set of real-valued sampling functions using non-negative patterns whose number is increased by 1. Once again the constant bias of the measurement is removed, and by varying the simplex dimension one may select an optimal trade-off between optical noise and compression noise, depending on the experimental conditions [32]. The same method may be easily combined with complementary detection to improve the SNR even further [32]. Encoding real- or complex-valued functions onto non-negative intensity patterns usually does not yet produce binary patterns, and a further binarization step is necessary. For instance, error diffusion is commonly used with success thanks to the difference between the DMD resolution and the actual imaging resolution.

In this paper we introduce a differential method for measuring the DC component of images (the zeroth spatial frequency), applicable to various SPI architectures with binary modulation. In the classical approach, the DC component is measured using a single pattern with all pixels in state "1", possibly followed by another pattern with all pixels in state "0" [20,21]. The proposed measurement method instead uses a small number of binary patterns with approximately (but not exactly) half of the DMD pixels in the "1" and "0" states to encode the DC component. This limits the required operating range of the detector and improves the noise robustness of the measurement. Unlike in [31,32], the sampling is readily binary. At the same time, it retains the advantages of other differential methods, such as the immunity of the measurement to a constant or slowly varying bias of the detection signal. On top of this, we propose a complete fast differential SPI framework with binary sampling and single-step image reconstruction. The set of sampling patterns includes several patterns required to capture the DC component of the image, and other patterns taken from some standard basis but binarized with variable thresholding. The purpose of varying the binarization level is to make the best use of the available detector measurement range. Image reconstruction is based on the Fourier-domain regularized inversion (FDRI) [33] with a minor modification that includes the differential treatment of the measurement signal. We will use the short name D-FDRI for the modified method. In a simple comparison made under experimental conditions, the proposed reconstruction method gives a similar image quality as optimization with a compressive sensing method [34]. Finally, we validate D-FDRI experimentally.
Concurrent imaging in the infrared and visible spectral bands, as well as polarisation imaging, are fields where SPI is an interesting alternative to classical optical approaches [35–43]. Making use of rather standard single-pixel camera set-ups, we demonstrate the capability of conducting real-time continuous measurement and reconstruction at a resolution of $256\times 256$ in 2 independent channels, using moderate computational resources.

2. Differential SPI with Fourier-domain regularized inversion

Compressive measurement of an image $\mathbf {x}$ (with pixel values arranged in a vector) in the presence of additive signal noise $\mathbf {n_s}$ and detector noise $\mathbf {n_d}$ may be expressed with a matrix-vector multiplication

$$\mathbf{y}=\textbf{M}\cdot (\mathbf{x+n_s})+\mathbf{n_d} ,$$
where the rows of the measurement matrix $\mathbf {M}$ contain the patterns displayed on the DMD during the measurement, and $\cdot$ denotes the matrix-vector product. In Eq. (1) the noise has been decomposed into a signal-independent part $\mathbf {n_d}$, primarily attributed to the detector dark current, and a signal-dependent part $\mathbf {n_s}$ from sources such as background illumination [44]. The measurement is compressive when the dimension of $\mathbf {y}$, denoted as $k$, is smaller than the number of pixels $n$ in the image $\mathbf {x}$. In our case $n=256^2$ and the compression ratio $\textrm {CR}=k/n\geq 2\%$.
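As a minimal numpy sketch of Eq. (1), with toy dimensions and noise levels that are illustrative assumptions (not the experimental values), the compressive measurement is a single matrix-vector product:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 16 * 16, 64                 # toy image size and number of patterns (illustrative)
x = rng.random(n)                  # vectorized image
M = rng.integers(0, 2, (k, n)).astype(float)   # binary sampling patterns as rows

n_s = 0.01 * rng.standard_normal(n)   # signal-dependent (optical) noise term
n_d = 0.01 * rng.standard_normal(k)   # signal-independent detector noise term

# Eq. (1): y = M (x + n_s) + n_d; here CR = k/n = 25% (the paper uses CR >= 2%).
y = M @ (x + n_s) + n_d
```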

Consider applying a discrete difference operator of order $p$ to $\mathbf {y}$,

$$\mathbf{y'}=\mathbf{D}^p_k\cdot \mathbf{y}= (\mathbf{D}^p_k\cdot \mathbf{M})\cdot (\mathbf{x+n_s}) +\mathbf{D}^p_k\cdot\mathbf{n_d}.$$
Matrices $\mathbf {D}^p_k$ have size $(k-p)\times k$ and are defined as $[\mathbf {D}_k^1]_{i,j}=\delta _{i,j-1}-\delta _{i,j}$ (discrete gradient operator) and $[\mathbf {D}_k^2]_{i,j}=\delta _{i,j-1}-0.5\delta _{i,j-2}-0.5\delta _{i,j}$ (discrete Laplace operator), where $\delta$ is the Kronecker delta.

Applying the gradient or Laplace operator to $\mathbf {y}$ makes $\mathbf {y'}$ invariant to the constant bias present in the detection signal, represented in our model by the mean value $\mathbf {\bar {n}_d}$. In the case of the Laplace operator, a bias changing linearly with time is also removed. In practice, using $\mathbf {y'}$ for image reconstruction makes it possible to operate the data acquisition device (DAQ) with alternating-current (AC) coupling at a reduced voltage range, with the bit-resolution better adapted to cover the signal range.
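Under the definitions above, the bias-removal properties of $\mathbf {D}^p_k$ can be checked numerically; the helper `diff_op` below is an illustrative construction of the operators, not code from the actual system:

```python
import numpy as np

def diff_op(k: int, p: int) -> np.ndarray:
    """Discrete difference operator D^p_k of size (k-p) x k, p in {1, 2}."""
    D = np.zeros((k - p, k))
    for i in range(k - p):
        if p == 1:   # gradient: [D]_{i,j} = delta_{i,j-1} - delta_{i,j}
            D[i, i] = -1.0
            D[i, i + 1] = 1.0
        else:        # Laplace: [D]_{i,j} = delta_{i,j-1} - 0.5 delta_{i,j-2} - 0.5 delta_{i,j}
            D[i, i] = -0.5
            D[i, i + 1] = 1.0
            D[i, i + 2] = -0.5
    return D

rng = np.random.default_rng(1)
k = 32
y = rng.random(k)      # a toy detection signal
t = np.arange(k)       # sample index, i.e. "time"

D1, D2 = diff_op(k, 1), diff_op(k, 2)

# A constant bias c is removed by both operators ...
c = 5.0
assert np.allclose(D1 @ (y + c), D1 @ y)
# ... while a bias changing linearly with time is additionally removed by D^2:
assert np.allclose(D2 @ (y + c + 0.3 * t), D2 @ y)
```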

The generalized inverse of the matrix product $\mathbf {D}^p_k\cdot \mathbf {M}$ multiplied by $\mathbf {y'}$ gives an approximation for image $\mathbf {x}$,

$$\tilde{\mathbf{x}} = {(\mathbf{D}^p_k\cdot \mathbf{M})^{g} }\cdot(\mathbf{D}^p_k\cdot \mathbf{M})\cdot \mathbf{x}=\mathbf{P_g}\cdot \mathbf{y},$$
where $\mathbf {P_g}=(\mathbf {D}^p_k\cdot \mathbf {M})^g \cdot \mathbf {D}^p_k$ and $~^g$ denotes the generalized matrix inverse, which is not unique. The most commonly used generalized matrix inverse is the Moore-Penrose (pseudo)inverse. The pseudoinverse is known to minimize the mean square error (MSE) in linear regression and is sometimes used in single-pixel imaging. However, in a compressive measurement, it may be more advantageous to use a different kind of generalized matrix inverse. For instance, based on Fourier-domain regularized inversion (FDRI) [5], the solution to Eq. (1) may be expressed as $\mathbf {x_0}=\mathbf {P}\cdot \mathbf {y}$ with
$$\mathbf{P}=\mathbf{F^*}\cdot\mathbf{\hat\Gamma}\cdot\mathbf{F}\cdot(\mathbf{M}\cdot\mathbf{F^*}\cdot\mathbf{\hat\Gamma}\cdot\mathbf{F})^{+},$$
where $^+$ denotes the pseudoinverse, and $\mathbf {F}$ is the 2D Fourier transform. $\mathbf {\hat \Gamma }$ is a diagonal matrix
$$[\mathbf{\hat\Gamma}]_{i,j}=\frac{\delta_{i,j}}{\sqrt{(1-\mu)^2(\sin^2(\omega_x)+\sin^2(\omega_y))+\mu^2\frac{\omega_x^2+\omega_y^2}{2\pi^2}+\epsilon }},$$
where $\mu$ and $\epsilon$ are used to tune the properties of the regularization [5] and $\omega _{x,y}$ are the spatial frequencies. Taking into account the Laplace or gradient operator from Eq. (2), which effectively modifies the measurement matrix, the following matrix may be used to reconstruct the image in a differential measurement:
$$\mathbf{P_g}=\mathbf{F^*}\cdot\mathbf{\hat\Gamma}\cdot\mathbf{F}\cdot(\mathbf{D}^p_k\cdot\mathbf{M}\cdot\mathbf{F^*}\cdot\mathbf{\hat\Gamma}\cdot\mathbf{F})^{+}\cdot\mathbf{D}^p_k.$$
Calculation of the matrix $\mathbf {P_g}$ is computationally costly and in practice possible only at a small compression ratio $\textrm {CR}=k/n$. On the other hand, the measurement matrix $\mathbf {M}$ as well as $\mathbf {P_g}$ can be precalculated before the measurement. We use the singular-value decomposition (SVD) to calculate the pseudoinverse in Eq. (6), and the FFT algorithm to calculate the 2D Fourier transforms. Then the image reconstruction requires evaluating only a single matrix-vector product $\tilde {\mathbf {x}}=\mathbf {P_g}\cdot \mathbf {y}$. Additionally, any negative values in the reconstructed image are replaced with zeros. The cost of the matrix-vector multiplication is $O(k\cdot n)$. For $n=256\times 256$-pixel images sampled at a compression ratio of $\textrm {CR}=2\%$, real-time ($17$ Hz) reconstruction does not require GPU-accelerated computation, even when 2 independent channels are present. In fact, we have developed a Python program based on integer and single-precision floating-point arithmetic, capable of receiving, interpreting, and averaging data from the DAQ and reconstructing images in 2 bands in real time on an Intel i5-4590 CPU. Also at a higher compression ratio, e.g., $\textrm {CR}=6\%$ with a lower level of compression noise, the operating speed is limited by the DMD frame rate and not by digital processing.
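The precompute-then-reconstruct pipeline of Eqs. (5) and (6) can be sketched on a toy problem. The sizes, the random binary measurement matrix, and the seed below are illustrative assumptions (the experiment uses binarized DCT patterns and $n=256^2$); only the structure of the computation follows the text:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 16                      # toy image side; n = N*N pixels (the paper uses N = 256)
n = N * N
k, p = 96, 1                # number of measurements and difference order (illustrative)

# 2D DFT matrix F acting on vectorized images (Kronecker product of 1D DFT matrices).
F1 = np.fft.fft(np.eye(N)) / np.sqrt(N)
F = np.kron(F1, F1)

# Diagonal Fourier-domain filter \hat\Gamma from Eq. (5).
w = 2 * np.pi * np.fft.fftfreq(N)          # spatial frequencies
WX, WY = np.meshgrid(w, w)
mu, eps = 0.5, 1e-7
g = 1.0 / np.sqrt((1 - mu) ** 2 * (np.sin(WX) ** 2 + np.sin(WY) ** 2)
                  + mu ** 2 * (WX ** 2 + WY ** 2) / (2 * np.pi ** 2) + eps)
Gamma = np.diag(g.ravel())

# Binary measurement matrix and first-order difference operator D^1_k.
M = rng.integers(0, 2, (k, n)).astype(float)
D = np.eye(k - p, k, 1) - np.eye(k - p, k)     # [D]_{i,i} = -1, [D]_{i,i+1} = 1

# Eq. (6): P_g = F* Gamma F (D M F* Gamma F)^+ D, precomputed once before measurement.
FGF = F.conj().T @ Gamma @ F
Pg = FGF @ np.linalg.pinv(D @ M @ FGF) @ D

x = rng.random(n)                      # toy scene
y = M @ x + 3.7                        # measurement with an unknown constant bias
x_rec = np.maximum((Pg @ y).real, 0)   # a single matrix-vector product, negatives clipped
```

Since $\mathbf {P_g}$ ends with the difference operator, the constant bias added to `y` has no effect on `x_rec`, which is the invariance exploited with AC-coupled acquisition.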

In comparison to reconstructions with inverse transforms, for which there exist fast algorithms such as the FFT, DCT, or FWHT, the proposed D-FDRI method is slower and more memory-demanding. Its advantage is that it works well with arbitrary sampling matrices $\mathbf {M}$, including nonorthogonal and binary patterns. The computational cost of the FFT is $O(n\log_2 n)$, whereas that of the matrix-vector multiplication is $O(k\cdot n)$ or $O(\textrm{CR}\cdot n^2)$. Therefore, in practice the method is applicable at small compression ratios CR. The reconstruction properties, including noise robustness, may be further optimized through the choice of $\mu$ and $\epsilon$, and the reconstructed image is invariant to a constant or linearly time-varying bias in the detection signal.

2.1 Differential measurement of the DC-component with binary sampling functions

The usual method of measuring the zeroth spatial frequency (the DC component) of the image is to display a pattern with all pixels equal to "1" on the DMD, possibly followed by a pattern with all pixels equal to "0". We have replaced this direct measurement with a measurement distributed over several sampling functions. We wanted them to be binary, with approximately but not exactly half of the pixels in the "0" and "1" states. Additionally, information about the DC component of the image should be preserved after applying the finite difference operator to the measurements taken with these patterns.

To this end, we group the DMD pixels into an odd number $m$ of classes. A class number is randomly assigned to every pixel. To encode the measurement of the zeroth spatial frequency of the image, we define $m+p$ binary sampling patterns. Each of these patterns has either $(m+1)/2$ or $(m-1)/2$ pixel classes in state "1". We note that such patterns contain approximately half of the pixels in the "0" and "1" states. To find a possible mapping between the pixel classes and the patterns, we define a small binary matrix $\mathbf {A}_m^p$ of size $(m+p)\times m$. Each row of the matrix defines a single sampling pattern, and each column represents one class of pixels. We performed a numerical search for such binary matrices under the conditions that every row of the matrix contains $(m\pm 1)/2$ equal values (0s or 1s) and that $\textrm {rank}(\mathbf {D}^p_{m+p}\cdot \mathbf {A}^p_m)=m$. This assures that a row consisting of ones is linearly dependent on the rows of $\mathbf {D}^p_{m+p}\cdot \mathbf {A}^p_m$, implying that information about the DC component is retained in the measurement even after applying the Laplace or gradient operator $\mathbf {D}^p_{m+p}$. Among many possible solutions, we have chosen matrices $\mathbf {A}^p_m$ that minimize the dispersion of the coefficients needed to reconstruct the DC component from the measurement with $\mathbf {D}^p_{m+p}\cdot \mathbf {A}_m^p$. As an example, the following matrices satisfy these conditions for $m=3$ and $m=7$ and for the finite difference operator of order $p=1$ and $p=2$,

$$\mathbf{A}_3^1 = \begin{pmatrix} 0 & 1 & 1 \\ 1 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix}, \mathbf{A}_3^2 = \begin{pmatrix} 0 & 1 & 1 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 0 \\ 1 & 1 & 0 \end{pmatrix},$$
$$\mathbf{A}_{7}^1 = \begin{pmatrix} 0 & 1 & 0 & 0 & 1 & 1 & 1\\ 1 & 1 & 0 & 0 & 0 & 0 & 1\\ 0 & 0 & 1 & 1 & 1 & 0 & 0\\ 1 & 0 & 1 & 1 & 0 & 0 & 1\\ 0 & 1 & 1 & 0 & 0 & 1 & 0\\ 0 & 1 & 0 & 1 & 1 & 0 & 1\\ 1 & 1 & 1 & 0 & 0 & 0 & 1\\ 0 & 0 & 1 & 0 & 1 & 1 & 1 \end{pmatrix}, \mathbf{A}_{7}^2 = \begin{pmatrix} 0 & 0 & 0 & 0 & 1 & 1 & 1\\ 1 & 0 & 0 & 1 & 1 & 0 & 0\\ 1 & 1 & 0 & 0 & 1 & 0 & 0\\ 1 & 1 & 1 & 1 & 0 & 0 & 0\\ 1 & 0 & 0 & 1 & 0 & 1 & 0\\ 0 & 1 & 0 & 1 & 1 & 0 & 1\\ 0 & 0 & 1 & 0 & 1 & 1 & 1\\ 0 & 0 & 1 & 0 & 0 & 1 & 1\\ 0 & 1 & 1 & 1 & 0 & 0 & 0 \end{pmatrix}.$$
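The stated conditions can be verified for the example matrix $\mathbf {A}_3^1$ with a short numpy check (`diff_op` below is an illustrative construction of $\mathbf {D}^p_k$ from its definition after Eq. (2)):

```python
import numpy as np

def diff_op(k, p):
    """D^p_k: gradient (p=1) or Laplace (p=2) difference operator, size (k-p) x k."""
    if p == 1:
        return np.eye(k - 1, k, 1) - np.eye(k - 1, k)
    return np.eye(k - 2, k, 1) - 0.5 * np.eye(k - 2, k, 2) - 0.5 * np.eye(k - 2, k)

A31 = np.array([[0, 1, 1],
                [1, 1, 0],
                [1, 0, 0],
                [0, 1, 0]], dtype=float)
m, p = 3, 1

# Every row holds (m +/- 1)/2 ones, i.e. roughly half of the pixel classes in state "1":
assert set(A31.sum(axis=1)) <= {(m - 1) / 2, (m + 1) / 2}

# rank(D^p_{m+p} . A^p_m) = m, so the DC information survives the difference operator:
assert np.linalg.matrix_rank(diff_op(m + p, p) @ A31) == m
```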
After finding $\mathbf {A}^p_m$ and mapping its columns to groups of DMD pixels, $m+p$ binary sampling patterns are created. Figure 1 graphically illustrates how the binary sampling patterns are constructed with the help of matrix $\mathbf {A}^p_m$. In Fig. 1, three colors indicate the three pixel classes. Every DMD pixel is randomly assigned to exactly one of the classes, and pixels within each class take the same value, equal to the corresponding element of matrix $\mathbf {A}^p_m$. The patterns used in the measurement of the image DC component come first in the measurement matrix $\mathbf {M}$. The remaining patterns are chosen from some selected basis. We use a subset of the binarized DCT basis corresponding to the lowest spatial frequencies. These components are selected in the way shown in Fig. 2(a). The binarization level is selected randomly in the range $[(m-1)/2m, (m+1)/2m]$. By the binarization level we understand the percentage of pixels in state "0" after binarization. A sample DCT pattern binarized at various levels is shown in Fig. 3, while Fig. 4 illustrates the effect of randomizing the binarization level on the number of pixels in state "1" for the proposed sampling, in comparison with a classical sampling protocol. An SPI measurement of a uniform image would produce a detection signal $\mathbf {y}$ with the same distribution as that shown in Fig. 4. An image dominated by low-frequency contents is likely to give a similar signal as well, and the binarization range gives a rough indication of the operating range of the detector.
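The variable thresholding can be sketched as follows; the image size and the DCT frequency pair below are illustrative assumptions, and the threshold is placed with a quantile so that the requested fraction of pixels ends up in state "0":

```python
import numpy as np

rng = np.random.default_rng(3)
N, m = 64, 7

# A sample 2D DCT basis function with spatial frequency pair (u, v), built directly.
u, v = 3, 5
grid = (np.arange(N) + 0.5) * np.pi / N
pattern = np.outer(np.cos(u * grid), np.cos(v * grid))

# Binarization level drawn randomly from [(m-1)/2m, (m+1)/2m]; the level is the
# fraction of DMD pixels in state "0" after binarization.
level = rng.uniform((m - 1) / (2 * m), (m + 1) / (2 * m))
threshold = np.quantile(pattern, level)
binary = (pattern > threshold).astype(np.uint8)

# Roughly (1 - level) of the pixels are in state "1".
assert abs(binary.mean() - (1 - level)) < 0.05
```

For $m=7$ the drawn level stays within [43%, 57%], matching the range quoted later in the text.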


Fig. 1. This figure explains how the binary sampling patterns are created from matrix $\mathbf {A}^p_m$. We divide the DMD into $m$ classes of pixels. Every class contains approximately $n/m$ randomly chosen DMD pixels (shown with distinct colors). The rows of $\mathbf {A}^p_m$ encode $m+p$ binary patterns that enable recovery of the DC component of the image in a differential measurement. Each element of a row governs the state of one class of pixels within a single pattern.



Fig. 2. (a) Selected subset of low-frequency DCT patterns (shown in yellow), (b) Power spectral density (PSD) of the real-valued DCT sampling functions, (c) PSD of the binarized DCT sampling functions, (d) and (e) PSD of the binarized DCT sampling functions to which the first (d) or second (e) order difference operator was applied.



Fig. 3. Binarization of a real-valued basis pattern at various levels. (a) sample DCT real-valued pattern; (b)-(d) the same pattern binarized at the levels of 50% (b), 60% (c), and 40% (d). The binarization level determines the percentage of DMD pixels in the state "0" for the binarized pattern.



Fig. 4. The fraction of DMD pixels in state "1" in the first 50 sampling patterns. Patterns used to measure the DC component come first (and are plotted in magenta); the remaining patterns, e.g., binarized DCT patterns, follow (and are plotted in blue). (a) Classical sampling with the DC image component measured with a single pattern with all pixels in state "1", and with half of the pixels in each state for the rest of the patterns; (b) classical differential sampling with the DC image component measured with 2 patterns with all pixels in the "1" and "0" states; (c)-(f) the proposed differential sampling protocols for selected values of $p$ and $m$. The DC of the image is measured using $p+m$ binary patterns, and DCT patterns are binarized at a random level in the range $[(m-1)/2m, (m+1)/2m]$.


Figure 2(c) shows that binarization of the real-valued sampling patterns enhances the high-spatial-frequency contents of their power spectral density (PSD). On the other hand, Figs. 2(d) and 2(e) indicate that the PSD is little affected by the finite difference operators. The effective measurement matrix $\mathbf {D}^p_k\cdot \mathbf {M}$ consists of linear combinations of patterns in $\mathbf {M}$, and in terms of PSD, the differential sampling is nearly equivalent to the original one consisting of binarized DCT patterns.

2.2 Tuning the method

The properties of the D-FDRI framework presented in this section depend on the choice of several parameters, namely $p,m,\epsilon$ and $\mu$.

The value of $m$ determines the number of additional samples that probe the DC component of an image and the thresholding range used in the binarization of the sampling functions. We used $m=7$ most of the time, which implies that the binarization level of the sampling functions is selected randomly in the range [43%, 57%], although values of $m$ between 3 and 11 also work well. As a rule of thumb, the thresholding level should be close to 50%, but varying it increases the entropy of the measured signal and improves the noise robustness of the method.

The order of the differential operator $p\in \lbrace 1,2\rbrace$ is not as important as we expected. We did not observe any practical advantage of using $p=2$ over $p=1$ in the presence of noise under experimental conditions. When $p=2$, the calculation of the pseudoinverse in Eq. (6) usually produced some very small singular values in the SVD. Still, in practice we reached a similar imaging quality with both operators. As a remark, Eq. (6) should be evaluated using double-precision floating-point arithmetic, even if $\mathbf {P_g}$ is later stored with a lower precision for the reconstruction stage.

In Fig. 5 we demonstrate how the parameters $m$ and $p$ influence the performance of the D-FDRI method without noise and in the presence of measurement noise. To measure the quality of the reconstructed images, we use the peak signal-to-noise ratio (PSNR) metric defined as:

$${\textrm{PSNR}}( {\tilde {\textbf{x}}}, \textbf{x}) = 10 \log_{10} \frac{\max(\textbf{x})^2}{\frac{1}{n} \sum_{i=1}^{n} (x_i - {\tilde x}_i)^2},$$
where $\max (\textbf {x})$ is the maximal pixel brightness of the original image. PSNR is affected both by the measurement noise and by the compression noise. Without measurement noise, PSNR primarily depends on the compression ratio CR. However, at a certain level of experimental noise, PSNR can no longer be increased by increasing CR. This situation is illustrated in Fig. 5, which shows the drop in PSNR due to the measurement noise for various values of $p$ and $m$. Noise robustness improves with increasing $m+p$. In fact, PSNR explicitly depends on the mean value of the image, which is found from the measurement with $m+p$ patterns. The more patterns are used to determine the DC component, the lower is its measurement uncertainty.
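A direct transcription of Eq. (9), assuming vectorized images (the sample image below is illustrative):

```python
import numpy as np

def psnr(x_rec, x):
    """PSNR from Eq. (9); max(x) is the peak brightness of the original image."""
    mse = np.mean((x - x_rec) ** 2)
    return 10 * np.log10(x.max() ** 2 / mse)

# A uniform error of 0.01 on a unit-range image gives MSE = 1e-4, i.e. 40 dB:
x = np.linspace(0, 1, 100)
assert abs(psnr(x + 0.01, x) - 40.0) < 1e-6
```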


Fig. 5. Average PSNR obtained with the proposed differential sampling technique without noise vs. average PSNR obtained in the presence of additive white Gaussian noise with $\sigma =0.004$, where $\sigma$ is the standard deviation of the noise divided by the range of the measured SPI signal. The plot compares results obtained for compression ratios of $2\%, 3\%, 6\%, 10\%$, for parameters $m=3,5,7$ and $p=1,2$, with $\epsilon =10^{-7}$ and $\mu =0.5$. PSNR values are averaged over results obtained for 5120 images from the Caltech 256 image database.


The significance of the parameters $\epsilon$ and $\mu$ in Eq. (5) has been discussed in [5]. The 2 parameters require fine-tuning. While $\mu \approx 0.5$ works reasonably well in most situations [5], the value of $\epsilon$ may significantly affect the noise robustness of the method. It is reasonable to take $\epsilon \ll 1$ and then to test by trial and error the value of $\epsilon$ and the respective magnitude of the elements of matrix $\mathbf {P_g}$. If $\mathbf {P_g}$ contains elements with a large magnitude, image reconstruction with the formula $\tilde {\mathbf {x}}=\mathbf {P_g}\cdot \mathbf {y}$ will amplify the measurement noise.

2.3 Complementary sampling

SPI optical set-ups with complementary sampling make use of the 2 beams reflected from the DMD. A single measurement gives 2 measurement vectors, $\textbf{y}_1=\textbf{M}\cdot \textbf{x}$ and $\textbf{y}_2=(\mathbf{1}-\textbf{M})\cdot \textbf{x}$ (where we have neglected the noise terms, and $\mathbf{1}$ stands for a $k\times n$ matrix filled with ones). The usual approach is to take the difference $\textbf{y}=({{\textbf y}_1}-{{\textbf y}_2})/2$ as the result of a differential measurement. This increases the SNR and removes a bias signal that is equal on the 2 detectors.

Since in D-FDRI the differential operator is included in the definition of $\textbf{P}_g$ in Eq. (6), the reconstructed image does not change when a constant value $c$ is added to every element of $\textbf{y}$:

$${\textbf{P}}_g\cdot\textbf{y}=\textbf{P}_g\cdot({\textbf{y}}+c).$$
This makes it possible to acquire data from the detectors with AC coupling. It also allows 2 independent data channels to be constructed from the 2 measurements $\textbf{y}_1$ and $-\textbf{y}_2$. The same reconstruction matrix $\textbf{P}_g$ may be used in the 2 channels. The final formula for image reconstruction in the 2 channels is therefore
$$\tilde{\mathbf{x}_l}=\theta\left(\mathbf{P_g}\cdot \mathbf{y}_l\cdot ({-}1)^{l-1}\right),$$
where $l=1,2$ is the channel number, and $\theta (x)=x\textrm { if }x>0,\textrm { and }0\textrm { otherwise}$. In the present work, images reconstructed in the 2 channels are represented with different colors.
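A sketch of the two-channel reconstruction of Eq. (11); here `Pg` is a random stand-in that only reproduces the structural property of Eq. (6) (it ends with $\mathbf {D}^1_k$), which is what grants the bias invariance of Eq. (10), and is not an actual FDRI inverse:

```python
import numpy as np

rng = np.random.default_rng(4)
n, k = 64, 16
x = rng.random(n)
M = rng.integers(0, 2, (k, n)).astype(float)

# Complementary detection: the 2 beams reflected from the DMD give 2 signals.
y1 = M @ x
y2 = (1 - M) @ x

# Stand-in for the precomputed D-FDRI matrix: like Eq. (6) it ends with D^1_k.
D = np.eye(k - 1, k, 1) - np.eye(k - 1, k)
Pg = rng.standard_normal((n, k - 1)) @ D

# Eq. (10): an unknown constant offset c (e.g. from AC coupling) has no effect.
assert np.allclose(Pg @ (y1 + 2.5), Pg @ y1)

theta = lambda v: np.maximum(v, 0)   # theta(x) = x if x > 0, else 0

# Eq. (11): the same Pg serves both channels, with the sign flipped for l = 2.
x1 = theta(Pg @ y1)
x2 = theta(Pg @ y2 * (-1))
```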

3. Imaging quality, noise robustness, and signal entropy

The entropy of the signals measured by the SPI system is calculated as

$$H ={-} \sum_{i=1}^{2^b} P(v_i) \log_2 P(v_i),$$
where $P(v_i)$ is the probability of an outcome of a single measurement $y_j$ ($j=1,2,\ldots ,k$) taking value $v_i$. The total number of possible outcomes of a single measurement equals $2^b$, where $b$ is the number of bits used in the analogue-to-digital (A/D) conversion of the signal.

In information theory, entropy is a common measure of the average information carried by the possible outcomes of a stochastic variable. The less probable a specific outcome is, the more information is conveyed by its occurrence. While entropy is bounded by the number of possible states, $H \leq b$, its maximal value is attained for a uniform probability distribution $P(v_i)$. Intuitively, the outcome of an experiment is most uncertain, and therefore most informative, when all possible results are equally probable. Entropy is also an essential measure for data compression, as it represents the absolute limit on how well data may be compressed without loss.
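This dependence of the entropy of Eq. (12) on the spread of the signal histogram can be illustrated with two synthetic distributions (illustrative, not measured data): a signal spread uniformly over the DAQ range versus one concentrated near mid-range.

```python
import numpy as np

def entropy_bits(levels, b):
    """Eq. (12): empirical entropy of a signal discretized to 2^b levels."""
    counts = np.bincount(levels, minlength=2 ** b)
    P = counts / counts.sum()
    P = P[P > 0]                       # 0 log 0 is taken as 0
    return -np.sum(P * np.log2(P))

rng = np.random.default_rng(5)
b = 8

# A flat histogram over all 2^b levels carries close to the maximal b bits ...
flat = rng.integers(0, 2 ** b, 100_000)
H_flat = entropy_bits(flat, b)

# ... while values concentrated around the center of the range carry far less.
peaked = np.clip(np.round(rng.normal(128, 4, 100_000)), 0, 255).astype(int)
H_peaked = entropy_bits(peaked, b)

assert H_flat > 0.98 * b
assert H_peaked < 0.7 * b
```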

In Fig. 6(a) we compare the proposed differential SPI sampling protocol with the standard non-differential one in terms of the average entropy per measurement $H/k$. The respective probability distributions $P(v_i)$ for both sampling methods are presented in Fig. 6(b). We estimate the probability distributions $P(v_i)$ from histograms of simulated measurements calculated for over 180 thousand images. The images are taken from the Caltech 256 Image Dataset [45], which contains 30 thousand images organized into 256 distinct categories. Additional data augmentation (rescaling, random cropping, flipping, rotation) was used to further increase the amount of data. For each of the images, we calculate the SPI signal ${\bf y}$ obtained by sampling the image with the patterns stored in matrix ${\bf M}$, according to Eq. (1) with the noise excluded. Then, each signal ${\bf y}$ is normalized. For the proposed sampling, we assume that the mean value of the measurements is $\bar{\bf y}=0$ (i.e., only the AC component of the signal is measured) and that the signal fits in the range ${-}1 \leq y_j \leq 1$ for $j=1,2,\ldots ,k$. For the classical sampling, all measurements are non-negative and, after normalization, they fit into the range $0 \leq y_j \leq 1$, where the maximal value of 1 is in practice obtained only for the measurement with the pattern that has all pixels in the "1" state. Finally, the signals are discretised using between 8 and 16 bits for $b$, which corresponds to the range of bit-resolutions of the DAQ. The estimates of $P(v_i)$ are calculated for each value of $b$ separately, as the average probability of measuring value $v_i$ over all of the sampling patterns and the whole image dataset.


Fig. 6. (a) Average entropy per measurement pattern $H/k$ of the measured SPI signals as a function of the signal bit-resolution $b$. (b) Experimental probability distributions of the occurrence of specific voltage values $v_i$ in the measured SPI signals for $b=8$. Results obtained from simulated measurements with over 180 thousand images.


While the standard SPI protocol results in the concentration of the measured values $y_j$ around the center of the DAQ range, the proposed differential method yields a much broader spread of the measured signal. This broadening is achieved thanks to the following properties of the proposed method: (1) the elimination of the constant bias from the measurements by introducing the differential measurement protocol, (2) the decomposition of the DC measurement into several patterns with transparency close to (but not equal to) 50%, and (3) the application of randomized thresholding to the DCT functions. The resulting entropy of the proposed sampling is close to its upper limit, with an average value of $H/k \approx 0.96\,b$, while for the standard sampling protocol $H/k \approx 0.4\,b$ on average.

For the purpose of numerical evaluation of the proposed method, in the present and the following simulations we have taken $m=7, p=2, \mu =0.5$ and $\epsilon =10^{-8}$. The size of the scene is $n=256^2$, and the total number of sampling functions is $k=1974+m+p$, which corresponds to the compression ratio CR $\approx$ 3%. DCT basis functions corresponding to the lowest spatial frequencies are selected, and they are thresholded randomly to contain between 43% and 57% of pixels in the "1" and "0" states. As a reference, we take the non-differential FDRI [5] with the same basis functions binarized at $50\%$, and with $\mu =0.5$. This way, the comparison is focused on the significance of including the differential operator and of the varied thresholding in D-FDRI. In Figs. 7 and 8 we analyze the robustness of the proposed method to noise coming from 2 mutually independent sources: the statistical detector noise ${\bf n_d}$, modeled as additive white Gaussian noise, and the discretization noise applied to the signal at the stage of A/D conversion. For this purpose, we have performed simulated measurements and reconstructions with a set of 5120 images selected randomly from the Caltech 256 database. For evaluation of the quality of the reconstructed images, we use 2 metrics, namely the PSNR defined in Eq. (9) and the mean structural similarity index (SSIM).


Fig. 7. Sensitivity of the proposed sampling method (red) to the experimental noise $\mathbf {n}_d$. Direct DCT sampling (blue) for comparison. Average PSNR and SSIM values obtained from simulated SPI measurements and reconstructions with 5120 images. The simulation includes the presence of white additive Gaussian noise with $\sigma$ denoting the standard deviation of the noise divided by the range of the measured SPI signal. This noise model corresponds to the inherent statistical noise of the SPI camera and it is independent of the A/D discretization noise. Respective subplots are obtained for 8-bit, 10-bit, 12-bit, and 16-bit discretization. Vertical dotted lines mark the level of noise present in our experimental SPI set-up.



Fig. 8. Sensitivity of the proposed sampling method (red) to the A/D discretization. Standard non-differential SPI sampling with the same set of DCT patterns (blue) for comparison. Average PSNR (a) and SSIM (b) values obtained from simulated SPI measurements and reconstructions with 5120 images. Before the image reconstruction, the SPI signals are converted into signals with $b$ significant bits, i.e., only $2^b$ distinct values are measured by the detector. We use $b$ between 8 and 16, which corresponds to the range of bit-resolutions of our experimental equipment. We assume no other types of experimental noise are present.


For calculating SSIM, we use the built-in Matlab function ssim with default parameters, which implements the mean SSIM metric from [46].

In contrast to the standard SPI protocol, D-FDRI is significantly less sensitive to both types of noise present in SPI cameras. In particular, its robustness to discretization noise makes it possible to lower the cost of the SPI set-up by using a cheaper, lower-resolution DAQ with almost no impact on the quality of the recorded images. We have found that with an 8-bit DAQ, the average decrease of PSNR due to discretization is $\sim$0.6 dB for the proposed method, compared to almost 2.5 dB for the standard non-differential sampling.
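
The discretization model used in this study amounts to an ideal uniform $b$-bit quantizer. A minimal sketch, assuming the quantizer spans the full signal range (the exact model in the simulations may differ):

```python
import numpy as np

def quantize(y, bits):
    """Ideal uniform b-bit quantizer spanning the signal range
    (a sketch; the exact DAQ model may differ)."""
    levels = 2 ** bits
    lo, hi = y.min(), y.max()
    step = (hi - lo) / (levels - 1)
    return lo + np.round((y - lo) / step) * step

rng = np.random.default_rng(2)
y = rng.uniform(-1.0, 1.0, 10_000)
for b in (8, 10, 12, 16):
    err = np.abs(quantize(y, b) - y).max()
    # Quantization error never exceeds half a quantization step.
    assert err <= 0.5 * (y.max() - y.min()) / (2 ** b - 1) + 1e-12
```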

4. Experimental polarization imaging and VIS-IR imaging

In this section we validate the proposed D-FDRI SPI framework experimentally. The schematics of our optical SPI set-ups for polarization imaging and for imaging in the visible and near-infrared wavelength bands are shown in Figs. 9(a) and 9(b), respectively. The modulator splits the incoming optical beam into 2 complementarily modulated beams that are measured independently with 2 bucket detectors. In the case of polarization imaging, orthogonally oriented linear polarizers are placed in front of the 2 photodiodes, and after image reconstruction, the polarization information is indicated with pseudocoloring. In the presented situation, the scene is illuminated with unpolarized light which passes through a linear polarizer and a moving birefringent object. In the case of VIS-IR imaging, 2 different detectors are used to measure the intensities of the beams reflected from the DMD. A similar set-up could allow for a complementary differential measurement; here, however, the 2 channels are used to measure 2 polarization states or 2 wavelength bands independently, while the sampling and reconstruction method ensures the differential properties of the measurement.


Fig. 9. Schematics of the SPI configurations with 2 channels. (a) Polarization imaging with complementary channels measuring orthogonal linear polarizations. (b) Concurrent measurement of visible and near-infrared bands.


The DMD (Vialux V-7001 XGA with a DLP7000 chip) is operated at 22.7 kHz and at a spatial resolution of $256\times 256$ (with $3\times 4$ pixel blocks assigned equal values). The infrared channel makes use of reflective optics. The detector (Vigo System, PVI-4TE-1x1mm) is a four-stage thermoelectrically cooled IR photovoltaic detector based on HgCdTe heterostructures optimised for the 1-5.5 $\mu$m range. The detector is equipped with an integrated transimpedance amplifier (DC, 100k V/A). For the visible range, we use Thorlabs PDA100A2 amplified silicon photodiodes. The signals are digitized with a Picoscope 5000 DAQ at a 256 ns sampling interval and streamed through a USB bus for real-time processing. A multiprocess Python program reads, synchronizes, and averages the data, and reconstructs images from the 2 independent channels in real time. The flowchart in Fig. 10 shows the data processing stages in the 2 channels and the integration of images into a single pseudocolor frame. For polarization imaging, the 2 polarization channels provide the red and blue components of the red-green-blue color representation. For visible-infrared imaging, the visible channel is represented as a gray-scale image, which is replaced with the infrared channel, represented in orange, at those locations where the infrared signal exceeds a threshold value corresponding to a temperature of approximately $100^{\circ }$C.
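
The per-pattern averaging stage can be sketched as follows. With the DAQ sampling at 256 ns and the DMD at 22.7 kHz, roughly 170 raw samples fall within one pattern exposure; the numbers and helper below are illustrative and omit the trigger synchronization performed in the real pipeline:

```python
import numpy as np

dt = 256e-9          # DAQ sampling interval quoted in the text
f_dmd = 22.7e3       # DMD pattern rate quoted in the text
samples_per_pattern = int(1.0 / (f_dmd * dt))   # ~172 raw samples

def average_per_pattern(stream, spp):
    """Fold a continuous detector stream into one value y_i per
    displayed pattern by averaging the samples within each exposure
    (sketch; the real pipeline synchronizes to DMD triggers)."""
    k = stream.size // spp
    return stream[:k * spp].reshape(k, spp).mean(axis=1)

rng = np.random.default_rng(3)
k = 100
truth = rng.uniform(0.0, 1.0, k)                 # ideal per-pattern signal
stream = np.repeat(truth, samples_per_pattern)
stream = stream + rng.normal(0.0, 0.05, stream.size)  # detector noise
y = average_per_pattern(stream, samples_per_pattern)
# Averaging ~172 samples reduces the noise std by ~sqrt(172), i.e. ~13x.
assert (y - truth).std() < 0.005
```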


Fig. 10. Flow-chart explaining the processing of acquired data. Data is acquired continuously at a sampling of 256 ns. The data gathered during the exposure of a single sampling pattern (marked in color) are selected and averaged to obtain a single measurement $y_i$. Averaging is performed independently in each of the 2 channels. Then, monochromatic images from the 2 channels are reconstructed independently and integrated by color multiplexing to produce a single pseudocolor frame.


Sample image reconstructions are shown in Fig. 11. The following parameters were used: $m=7$, $p=1$, $\mu =0.5$, $\textrm {CR}=6\%$. The respective Visualizations include sample movies with a birefringent ruler moved in front of a linear polarizer (Visualization 1 and Visualization 2) as well as with a hot soldering iron tip moved in front of a resolution test image (Visualization 3).


Fig. 11. Sample pseudocolor frames obtained with the real-time 2-channel SPI. (a) Polarization imaging of a birefringent object (ruler) placed in front of a linear polarizer, (b) VIS-IR imaging of a hot object (soldering iron tip at 400$^{\circ }$C). The respective visualizations are Visualization 1, showing polarization imaging of a rotated linear polarizer, Visualization 2, with a moving birefringent object, and Visualization 3, with the moving soldering iron tip.


Compression ratios between 2% and 6% allow for imaging at frame rates between $17$ Hz and $6$ Hz. For still objects, image quality naturally increases with the compression ratio $k/n$, which is inversely proportional to the frame rate. However, for non-stationary objects the value of PSNR reflects a trade-off between compression ratio and acquisition time. This trade-off is characterized in Fig. 12 showing a rotating Lego robot. Figures 12(a) and 12(b) show the SPI measurements of the still robot at compression ratios equal to 100%, and to 6%, 3%, and 2%, respectively. Rotating wheels give strong artifacts, shown in Fig. 12(c), and the PSNR drops most rapidly with angular velocity when the compression ratio is largest [see Fig. 12(d)]. These results are also shown in Visualization 4, Visualization 5, and Visualization 6, made for varying angular speeds of the robot wheels.
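
The quoted frame rates follow directly from the DMD pattern rate and the number of patterns per frame, $k = \textrm{CR}\cdot n$ (illustrative arithmetic, ignoring the few auxiliary patterns):

```python
f_dmd = 22_727            # DMD pattern rate in Hz (approx. 22.7 kHz)
n = 256 * 256             # number of image pixels

def frame_rate(cr):
    """Frames per second at compression ratio cr, ignoring the m + p
    auxiliary patterns (illustrative estimate)."""
    return f_dmd / round(cr * n)

# CR = 2% -> ~17 Hz, CR = 6% -> ~6 Hz, as quoted in the text.
print(f"{frame_rate(0.02):.1f} Hz, {frame_rate(0.06):.1f} Hz")
# prints: 17.3 Hz, 5.8 Hz
```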


Fig. 12. Examples of video frames obtained with the real-time single-channel visible SPI configuration using the proposed sampling with different compression ratios $\textrm {CR}=k/n$. (a) Reference image measured with $\textrm {CR} = 100\%$ (robotic toy constructed with Lego bricks). (b) Reconstructed images with $\textrm {CR} = 2, 3,$ or $6\%$ for still scene. (c) Reconstructed video frames with $\textrm {CR} = 2, 3,$ or $6\%$ for moving scene. More detailed video reconstructions with varying movement speeds are presented in Visualization 4, Visualization 5, and Visualization 6. (d) Average PSNR of the reconstructions obtained for each sampling ratio as a function of the movement speed of the scene. The reference image (a) is used as the ground truth for calculating PSNR, and the calculation is performed only for these frames, in which the position of the rotating wheels matches the one from the reference image.


Finally, in Table 1 we compare the proposed D-FDRI reconstruction method with the classical compressive sensing (CS) approach in terms of both image quality and reconstruction time. In both cases, the same differential binarized DCT measurement matrices are used with $m=7$ and $p=1$. For the CS reconstruction, we solve the basis pursuit denoise problem with total variation regularization using the NESTA solver [34]. As the differential measurement matrix $\textbf{D}^p_k\cdot \textbf{M}$ is not semi-orthogonal (i.e., $(\textbf{D}^p_k\cdot \textbf{M})\cdot (\textbf{D}^p_k\cdot \textbf{M})^T \neq \textbf{I}$, where $\textbf{I}$ is an identity matrix), we reformulate the CS optimization problem into the following equivalent form:

$${{\tilde{\textbf x}}} \ = \underset{\textbf{x}}{\textrm{argmin}} \ \lambda \,\|{\textbf{x}}\|_{TV} + \frac{1}{2} \,\| {\boldsymbol{\Sigma}}^{{-}1} \cdot {\textbf{U}}^T \cdot {\textbf{D}}^p_k \cdot {\textbf{y}} - {\textbf{V}}^T \cdot {\textbf{x}} \|_2^2,$$
where $\textbf{D}^p_k\cdot \textbf{M}=\textbf{U} \cdot \boldsymbol{\Sigma } \cdot \textbf{V}^T$ is the SVD decomposition of the matrix $\textbf{D}^p_k\cdot \textbf{M}$. The parameter $\lambda$ in Eq. (13) controls the trade-off between sparsity and fidelity of the reconstruction. The notations $\|\textbf{v}\|_{2}$ and $\|\textbf{v}\|_{TV}$ stand for the $\ell _2$-norm and the total-variation norm of a vector $\textbf{v}$, respectively. The reconstruction times presented in Table 1 include neither the time needed to compute the SVD for the CS reconstruction nor the time needed to compute the matrix ${{\textbf P}_g}$ for D-FDRI, as both may be computed only once for each measurement matrix and stored. The reconstruction times have been obtained on an Intel i5-4590 CPU without any GPU acceleration. For the given sampling, D-FDRI is faster than TV-norm minimization by over 2 orders of magnitude. In fact, being a single-step reconstruction method based on a single matrix-vector product per image, D-FDRI is fast enough to reconstruct $256 \times 256$ video in real time on a desktop CPU while a DMD operated at over 20 kHz samples the images. At the same time, both methods produce reconstructions of comparable quality, with D-FDRI yielding on average a 0.23 dB higher PSNR owing to its better robustness to noise. This result does not prove that the proposed method always gives a higher PSNR than compressive sensing methods, but it indicates that both approaches typically provide comparable results.
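
The reformulation in Eq. (13) relies on the economy-size SVD making the transformed measurement operator $\textbf{V}^T$ semi-orthogonal while leaving the data term unchanged up to the invertible factor $\boldsymbol{\Sigma}^{-1}\textbf{U}^T$. This can be checked numerically on a toy example, with a random full-row-rank matrix standing in for $\textbf{D}^p_k\cdot \textbf{M}$:

```python
import numpy as np

rng = np.random.default_rng(4)
k, n = 50, 200                      # toy sizes; in the paper k x 256^2
A = rng.standard_normal((k, n))     # stands in for D_k^p . M

# Economy-size SVD: A = U @ diag(s) @ Vt.
U, s, Vt = np.linalg.svd(A, full_matrices=False)

# Vt is semi-orthogonal (Vt @ Vt.T = I), as required by the solver.
assert np.allclose(Vt @ Vt.T, np.eye(k), atol=1e-10)

# The transformed residual matches the whitened original one: for any x,
# inv(Sigma) @ U.T @ y - Vt @ x == inv(Sigma) @ U.T @ (y - A @ x).
x = rng.standard_normal(n)
y = A @ x + 0.01 * rng.standard_normal(k)
lhs = (U.T @ y) / s - Vt @ x
rhs = (U.T @ (y - A @ x)) / s
assert np.allclose(lhs, rhs, atol=1e-8)
```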


Table 1. Performance of the D-FDRI reconstruction method in comparison with compressive sensing (CS) in terms of the average reconstruction time and PSNR obtained for compression ratios $\textrm {CR} = 2\%, 3\%,$ or $6\%$. In both cases, the same measurement matrices have been used for each value of CR, and the measurements have been obtained using a single-channel visible SPI for the still robotic-toy scene presented in Fig. 12(a). CS reconstructions have been calculated by solving the basis pursuit denoise problem with total variation regularization using the NESTA solver [34].

5. Conclusions

We propose a fast, differential, single-pixel imaging framework and demonstrate real-time SPI at a resolution of $256\times 256$ at frame rates between 6 and 17 Hz in 2 independent channels assigned to different spectral bands or polarizations. The D-FDRI reconstruction method is based on a matrix-vector multiplication (one multiplication per channel per frame) and resembles the solution proposed in [5], but it is modified to account for the differential treatment of the measured data with either a discrete Laplace or gradient operator. It gives similar or even better image quality than can be obtained with compressive sensing, but is significantly faster than iterative optimization techniques. The measurement of the zeroth spatial frequency is decomposed into several measurements with binary patterns with the help of an auxiliary, numerically optimized binary matrix. D-FDRI could use various bases of patterns; here we focus on a binarized DCT basis with varied thresholding. The entropy of the detection signal is increased in comparison to standard SPI protocols, which makes it possible to reduce the bit-resolution requirements of the DAQ or otherwise improve the noise robustness of the proposed SPI method. A trade-off between imaging quality and compression ratio is demonstrated as a function of the frame rate.

Funding

Narodowe Centrum Nauki (UMO-2017/27/B/ST7/00885, UMO-2019/35/D/ST7/03781).

Disclosures

The authors declare no conflicts of interest.

Data availability

Source data will be provided by the authors upon reasonable request. Source code for calculating measurement and reconstruction matrices will be made public at [47] after article publication. The FDRI part of the code is based on [48]. We have used the Caltech 256 image database [45] in this work.

References

1. M. F. Duarte, M. A. Davenport, D. Takhar, J. N. Laska, T. Sun, K. F. Kelly, and R. G. Baraniuk, “Single-pixel imaging via compressive sampling,” IEEE Sig. Proc. Mag. 25(2), 83–91 (2008). [CrossRef]  

2. E. J. Candès, J. K. Romberg, and T. Tao, “Stable signal recovery from incomplete and inaccurate measurements,” Commun. Pure and Appl. Math. 59(8), 1207–1223 (2006). [CrossRef]  

3. G. M. Gibson, S. D. Johnson, and M. J. Padgett, “Single-pixel imaging 12 years on: a review,” Opt. Express 28(19), 28190–28208 (2020). [CrossRef]  

4. M. P. Edgar, G. M. Gibson, and M. J. Padgett, “Principles and prospects for single-pixel imaging,” Nat. Photonics 13(1), 13–20 (2019). [CrossRef]  

5. K. M. Czajkowski, A. Pastuszczak, and R. Kotyński, “Real-time single-pixel video imaging with Fourier domain regularization,” Opt. Express 26(16), 20009–20022 (2018). [CrossRef]  

6. R. Stantchev, X. Yu, T. Blu, and E. Pickwell-MacPherson, “Real-time terahertz imaging with a single-pixel detector,” Nat. Commun. 11(1), 2535 (2020). [CrossRef]  

7. C. Higham, R. Murray-Smith, M. Padgett, and M. Edgar, “Deep learning for real-time single-pixel video,” Sci. Rep. 8(1), 2369 (2018). [CrossRef]  

8. S. Rizvi, J. Cao, K. Zhang, and Q. Hao, “DeepGhost: real-time computational ghost imaging via deep learning,” Sci. Rep. 10(1), 11400 (2020). [CrossRef]  

9. E. Salvador-Balaguer, P. Latorre-Carmona, C. Chabert, F. Pla, J. Lancis, and E. Tajahuerce, “Low-cost single-pixel 3D imaging by using an LED array,” Opt. Express 26(12), 15623–15631 (2018). [CrossRef]  

10. Z. Xu, W. Chen, J. Penuelas, M. Padgett, and M. Sun, “1000 fps computational ghost imaging using LED-based structured illumination,” Opt. Express 26(3), 2427–2434 (2018). [CrossRef]  

11. M. Wang, M.-J. Sun, and C. Huang, “Single-pixel 3D reconstruction via a high-speed LED array,” JPhys Photonics 2(2), 025006 (2020). [CrossRef]  

12. W. Zhao, H. Chen, Y. Yuan, H. Zheng, J. Liu, Z. Xu, and Y. Zhou, “Ultrahigh-Speed Color Imaging with Single-Pixel Detectors at Low Light Level,” Phys. Rev. Appl. 12(3), 034049 (2019). [CrossRef]  

13. H. Guerboukha, K. Nallappan, and M. Skorobogatiy, “Toward real-time terahertz imaging,” Adv. Opt. Photonics 10(4), 843–938 (2018). [CrossRef]  

14. S. Chen, L. Du, K. Meng, J. Li, Z. Zhai, Q. Shi, Z. Li, and L. Zhu, “Terahertz wave near-field compressive imaging with a spatial resolution of over λ/100,” Opt. Lett. 44(1), 21–24 (2019). [CrossRef]  

15. J. Ke and E. Y. Lam, “Object reconstruction in block-based compressive imaging,” Opt. Express 20(20), 22102–22117 (2012). [CrossRef]  

16. A. Mahalanobis, R. Shilling, R. Murphy, and R. Muise, “Recent results of medium wave infrared compressive sensing,” Appl. Opt. 53(34), 8060–8070 (2014). [CrossRef]  

17. Z. Wu and X. Wang, “Focal plane array-based compressive imaging in medium wave infrared: modeling implementation, and challenges,” Appl. Opt. 58(31), 8433–8441 (2019). [CrossRef]  

18. F. Ferri, D. Magatti, L. A. Lugiato, and A. Gatti, “Differential Ghost Imaging,” Phys. Rev. Lett. 104(25), 253603 (2010). [CrossRef]  

19. B. Sun, S. S. Welsh, M. P. Edgar, J. H. Shapiro, and M. J. Padgett, “Normalized ghost imaging,” Opt. Express 20(15), 16892–16901 (2012). [CrossRef]  

20. W. K. Yu, X. F. Liu, X. R. Yao, C. Wang, Y. Zhai, and G. J. Zhai, “Complementary compressive imaging for the telescopic system,” Sci. Rep. 4(1), 5834 (2015). [CrossRef]  

21. N. Radwell, K. J. Mitchell, G. M. Gibson, M. P. Edgar, R. Bowman, and M. J. Padgett, “Single-pixel infrared and visible microscope,” Optica 1(5), 285–289 (2014). [CrossRef]  

22. F. Soldevila, P. Clemente, E. Tajahuerce, N. Uribe-Patarroyo, P. Andres, and J. Lancis, “Computational imaging with a balanced detector,” Sci. Rep. 6(1), 29181 (2016). [CrossRef]  

23. O. Denk, A. Musiienko, and K. Žídek, “Differential single-pixel camera enabling low-cost microscopy in near-infrared spectral region,” Opt. Express 27(4), 4562–4571 (2019). [CrossRef]  

24. S. Welsh, M. Edgar, R. Bowman, P. Jonathan, B. Sun, and M. Padgett, “Fast full-color computational imaging with single-pixel detectors,” Opt. Express 21(20), 23068–23074 (2013). [CrossRef]  

25. M. Zhao, J. Liu, S. Chen, C. Kang, and W. Xu, “Single-pixel imaging with deterministic complex-valued sensing matrices,” JEOS RP 10, 15041 (2015).

26. R.-M. Lan, X.-F. Liu, X.-R. Yao, W.-K. Yu, and G.-J. Zhai, “Single-pixel complementary compressive sampling spectrometer,” Opt. Commun. 366, 349–353 (2016). [CrossRef]  

27. A. Pastuszczak, B. Szczygiel, M. Mikołajczyk, and R. Kotyński, “Efficient adaptation of complex-valued noiselet sensing matrices for compressed single-pixel imaging,” Appl. Opt. 55(19), 5141–5148 (2016). [CrossRef]  

28. W.-K. Yu, X.-R. Yao, X.-F. Liu, R.-M. Lan, L.-A. Wu, G.-J. Zhai, and Q. Zhao, “Compressive microscopic imaging with “positive–negative” light modulation,” Opt. Commun. 371, 105–111 (2016). [CrossRef]  

29. Z. Zhang, X. Wang, G. Zheng, and J. Zhong, “Fast Fourier single-pixel imaging via binary illumination,” Sci. Rep. 7(1), 12029 (2017). [CrossRef]  

30. Z. Zhang, X. Ma, and J. Zhong, “Single-pixel imaging by means of Fourier spectrum acquisition,” Nat. Commun. 6(1), 6225 (2015). [CrossRef]  

31. E. de Tommasi, L. Lavanga, S. Watson, and M. Mazilu, “Encoding complex valued fields using intensity,” Opt. Express 24(20), 23186–23197 (2016). [CrossRef]  

32. K. M. Czajkowski, A. Pastuszczak, and R. Kotyński, “Single-pixel imaging with sampling distributed over simplex vertices,” Opt. Lett. 44(5), 1241–1244 (2019). [CrossRef]  

33. K. M. Czajkowski, A. Pastuszczak, and R. Kotyński, “Single-pixel imaging with Morlet wavelet correlated random patterns,” Sci. Rep. 8(1), 466 (2018). [CrossRef]  

34. S. Becker, J. Bobin, and E. Candes, “NESTA: A Fast and Accurate First-Order Method for Sparse Recovery,” SIAM J. Imaging Sciences 4(1), 1–39 (2011). [CrossRef]  

35. M. P. Edgar, G. M. Gibson, R. W. Bowman, B. Sun, N. Radwell, K. J. Mitchell, S. S. Welsh, and M. J. Padgett, “Simultaneous real-time visible and infrared video with single-pixel detectors,” Sci. Rep. 5(1), 10669 (2015). [CrossRef]  

36. M. E. Gehm and D. J. Brady, “Compressive sensing in the eo/ir,” Appl. Opt. 54(8), C14–C22 (2015). [CrossRef]  

37. G. M. Gibson, B. Q. Sun, M. P. Edgar, D. B. Phillips, N. Hempler, G. T. Maker, G. P. A. Malcolm, and M. J. Padgett, “Real-time imaging of methane gas leaks using a single-pixel camera,” Opt. Express 25(4), 2998–3005 (2017). [CrossRef]  

38. J. Li, Y. Fu, G. N. Li, and Z. L. Liu, “Remote sensing image compression in visible/near-infrared range using heterogeneous compressive sensing,” IEEE J. Sel. Top. Appl. Earth Obs. Remote. Sens. 11(12), 4932–4938 (2018). [CrossRef]  

39. X. R. Yao, R. M. Lan, X. F. Liu, G. Zhu, F. Zheng, W. K. Yu, and G. J. Zhai, “High throughput dual-wavelength temperature distribution imaging via compressive imaging,” Opt. Commun. 410, 287–291 (2018). [CrossRef]  

40. V. Duran, P. Clemente, M. Fernandez-Alonso, E. Tajahuerce, and J. Lancis, “Single-pixel polarimetric imaging,” Opt. Lett. 37(5), 824–826 (2012). [CrossRef]  

41. F. Soldevila, E. Irles, V. Duran, P. Clemente, M. Fernandez-Alonso, E. Tajahuerce, and J. Lancis, “Single-pixel polarimetric imaging spectrometer by compressive sensing,” Appl. Phys. B 113(4), 551–558 (2013). [CrossRef]  

42. S. S. Welsh, M. P. Edgar, R. Bowman, B. Q. Sun, and M. J. Padgett, “Near video-rate linear stokes imaging with single-pixel detectors,” J. Opt. 17(2), 025705 (2015). [CrossRef]  

43. K. L. C. Seow, P. Torok, and M. R. Foreman, “Single pixel polarimetric imaging through scattering media,” Opt. Lett. 45(20), 5740–5743 (2020). [CrossRef]  

44. M.-J. Sun, Z.-H. Xu, and L.-A. Wu, “Collective noise model for focal plane modulated single-pixel imaging,” Opt. Lasers Eng. 100, 18–22 (2018). [CrossRef]  

45. G. Griffin, A. Holub, and P. Perona, “The Caltech 256. Caltech Technical Report,” Caltech, 2006, http://www.vision.caltech.edu/Image_Datasets/Caltech256/.

46. Z. Wang, A. Bovik, H. Sheikh, and E. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. on Image Process. 13(4), 600–612 (2004). [CrossRef]  

47. A. Pastuszczak, R. Stojek, P. Wrobel, and R. Kotynski, “Differential Fourier Domain Regularized Inversion (D-FDRI source code),” Github, 2021, http://www.github.com/rkotynski/D_FDRI.

48. K. M. Czajkowski, A. Pastuszczak, and R. Kotyński, “Fourier Domain Regularized Inversion (FDRI source code),” Github, 2019, https://github.com/KMCzajkowski/FDRI-single-pixel-imaging.

[Crossref]

2015 (6)

M. Zhao, J. Liu, S. Chen, C. Kang, and W. Xu, “Single-pixel imaging with deterministic complex-valued sensing matrices,” JEOS RP 10, 15041 (2015).

W. K. Yu, X. F. Liu, X. R. Yao, C. Wang, Y. Zhai, and G. J. Zhai, “Complementary compressive imaging for the telescopic system,” Sci. Rep. 4(1), 5834 (2015).
[Crossref]

M. P. Edgar, G. M. Gibson, R. W. Bowman, B. Sun, N. Radwell, K. J. Mitchell, S. S. Welsh, and M. J. Padgett, “Simultaneous real-time visible and infrared video with single-pixel detectors,” Sci. Rep. 5(1), 10669 (2015).
[Crossref]

M. E. Gehm and D. J. Brady, “Compressive sensing in the eo/ir,” Appl. Opt. 54(8), C14–C22 (2015).
[Crossref]

Z. Zhang, X. Ma, and J. Zhong, “Single-pixel imaging by means of Fourier spectrum acquisition,” Nat. Commun. 6(1), 6225 (2015).
[Crossref]

S. S. Welsh, M. P. Edgar, R. Bowman, B. Q. Sun, and M. J. Padgett, “Near video-rate linear stokes imaging with single-pixel detectors,” J. Opt. 17(2), 025705 (2015).
[Crossref]

2014 (2)

2013 (2)

S. Welsh, M. Edgar, R. Bowman, P. Jonathan, B. Sun, and M. Padgett, “Fast full-color computational imaging with single-pixel detectors,” Opt. Express 21(20), 23068–23074 (2013).
[Crossref]

F. Soldevila, E. Irles, V. Duran, P. Clemente, M. Fernandez-Alonso, E. Tajahuerce, and J. Lancis, “Single-pixel polarimetric imaging spectrometer by compressive sensing,” Appl. Phys. B 113(4), 551–558 (2013).
[Crossref]

2012 (3)

2011 (1)

S. Becker, J. Bobin, and E. Candes, “NESTA: A Fast and Accurate First-Order Method for Sparse Recovery,” SIAM J. Imaging Sciences 4(1), 1–39 (2011).
[Crossref]

2010 (1)

F. Ferri, D. Magatti, L. A. Lugiato, and A. Gatti, “Differential Ghost Imaging,” Phys. Rev. Lett. 104(25), 253603 (2010).
[Crossref]

2008 (1)

M. F. Duarte, M. A. Davenport, D. Takhar, J. N. Laska, T. Sun, K. F. Kelly, and R. G. Baraniuk, “Single-pixel imaging via compressive sampling,” IEEE Sig. Proc. Mag. 25(2), 83–91 (2008).
[Crossref]

2006 (1)

E. J. Candès, J. K. Romberg, and T. Tao, “Stable signal recovery from incomplete and inaccurate measurements,” Commun. Pure and Appl. Math. 59(8), 1207–1223 (2006).
[Crossref]

2004 (1)

Z. Wang, A. Bovik, H. Sheikh, and E. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. on Image Process. 13(4), 600–612 (2004).
[Crossref]

Andres, P.

F. Soldevila, P. Clemente, E. Tajahuerce, N. Uribe-Patarroyo, P. Andres, and J. Lancis, “Computational imaging with a balanced detector,” Sci. Rep. 6(1), 29181 (2016).
[Crossref]

Baraniuk, R. G.

M. F. Duarte, M. A. Davenport, D. Takhar, J. N. Laska, T. Sun, K. F. Kelly, and R. G. Baraniuk, “Single-pixel imaging via compressive sampling,” IEEE Sig. Proc. Mag. 25(2), 83–91 (2008).
[Crossref]

Becker, S.

S. Becker, J. Bobin, and E. Candes, “NESTA: A Fast and Accurate First-Order Method for Sparse Recovery,” SIAM J. Imaging Sciences 4(1), 1–39 (2011).
[Crossref]

Blu, T.

R. Stantchev, X. Yu, T. Blu, and E. Pickwell-MacPherson, “Real-time terahertz imaging with a single-pixel detector,” Nat. Commun. 11(1), 2535 (2020).
[Crossref]

Bobin, J.

S. Becker, J. Bobin, and E. Candes, “NESTA: A Fast and Accurate First-Order Method for Sparse Recovery,” SIAM J. Imaging Sciences 4(1), 1–39 (2011).
[Crossref]

Bovik, A.

Z. Wang, A. Bovik, H. Sheikh, and E. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. on Image Process. 13(4), 600–612 (2004).
[Crossref]

Bowman, R.

Bowman, R. W.

M. P. Edgar, G. M. Gibson, R. W. Bowman, B. Sun, N. Radwell, K. J. Mitchell, S. S. Welsh, and M. J. Padgett, “Simultaneous real-time visible and infrared video with single-pixel detectors,” Sci. Rep. 5(1), 10669 (2015).
[Crossref]

Brady, D. J.

Candes, E.

S. Becker, J. Bobin, and E. Candes, “NESTA: A Fast and Accurate First-Order Method for Sparse Recovery,” SIAM J. Imaging Sciences 4(1), 1–39 (2011).
[Crossref]

Candès, E. J.

E. J. Candès, J. K. Romberg, and T. Tao, “Stable signal recovery from incomplete and inaccurate measurements,” Commun. Pure and Appl. Math. 59(8), 1207–1223 (2006).
[Crossref]

Cao, J.

S. Rizvi, J. Cao, K. Zhang, and Q. Hao, “DeepGhost: real-time computational ghost imaging via deep learning,” Sci. Rep. 10(1), 11400 (2020).
[Crossref]

Chabert, C.

Chen, H.

W. Zhao, H. Chen, Y. Yuan, H. Zheng, J. Liu, Z. Xu, and Y. Zhou, “Ultrahigh-Speed Color Imaging with Single-Pixel Detectors at Low Light Level,” Phys. Rev. Appl. 12(3), 034049 (2019).
[Crossref]

Chen, S.

S. Chen, L. Du, K. Meng, J. Li, Z. Zhai, Q. Shi, Z. Li, and L. Zhu, “Terahertz wave near-field compressive imaging with a spatial resolution of over λ/100,” Opt. Lett. 44(1), 21–24 (2019).
[Crossref]

M. Zhao, J. Liu, S. Chen, C. Kang, and W. Xu, “Single-pixel imaging with deterministic complex-valued sensing matrices,” JEOS RP 10, 15041 (2015).

Chen, W.

Clemente, P.

F. Soldevila, P. Clemente, E. Tajahuerce, N. Uribe-Patarroyo, P. Andres, and J. Lancis, “Computational imaging with a balanced detector,” Sci. Rep. 6(1), 29181 (2016).
[Crossref]

F. Soldevila, E. Irles, V. Duran, P. Clemente, M. Fernandez-Alonso, E. Tajahuerce, and J. Lancis, “Single-pixel polarimetric imaging spectrometer by compressive sensing,” Appl. Phys. B 113(4), 551–558 (2013).
[Crossref]

V. Duran, P. Clemente, M. Fernandez-Alonso, E. Tajahuerce, and J. Lancis, “Single-pixel polarimetric imaging,” Opt. Lett. 37(5), 824–826 (2012).
[Crossref]

Czajkowski, K. M.

Davenport, M. A.

M. F. Duarte, M. A. Davenport, D. Takhar, J. N. Laska, T. Sun, K. F. Kelly, and R. G. Baraniuk, “Single-pixel imaging via compressive sampling,” IEEE Sig. Proc. Mag. 25(2), 83–91 (2008).
[Crossref]

de Tommasi, E.

Denk, O.

Du, L.

Duarte, M. F.

M. F. Duarte, M. A. Davenport, D. Takhar, J. N. Laska, T. Sun, K. F. Kelly, and R. G. Baraniuk, “Single-pixel imaging via compressive sampling,” IEEE Sig. Proc. Mag. 25(2), 83–91 (2008).
[Crossref]

Duran, V.

F. Soldevila, E. Irles, V. Duran, P. Clemente, M. Fernandez-Alonso, E. Tajahuerce, and J. Lancis, “Single-pixel polarimetric imaging spectrometer by compressive sensing,” Appl. Phys. B 113(4), 551–558 (2013).
[Crossref]

V. Duran, P. Clemente, M. Fernandez-Alonso, E. Tajahuerce, and J. Lancis, “Single-pixel polarimetric imaging,” Opt. Lett. 37(5), 824–826 (2012).
[Crossref]

Edgar, M.

C. Higham, R. Murray-Smith, M. Padgett, and M. Edgar, “Deep learning for real-time single-pixel video,” Sci. Rep. 8(1), 2369 (2018).
[Crossref]

S. Welsh, M. Edgar, R. Bowman, P. Jonathan, B. Sun, and M. Padgett, “Fast full-color computational imaging with single-pixel detectors,” Opt. Express 21(20), 23068–23074 (2013).
[Crossref]

Edgar, M. P.

M. P. Edgar, G. M. Gibson, and M. J. Padgett, “Principles and prospects for single-pixel imaging,” Nat. Photonics 13(1), 13–20 (2019).
[Crossref]

G. M. Gibson, B. Q. Sun, M. P. Edgar, D. B. Phillips, N. Hempler, G. T. Maker, G. P. A. Malcolm, and M. J. Padgett, “Real-time imaging of methane gas leaks using a single-pixel camera,” Opt. Express 25(4), 2998–3005 (2017).
[Crossref]

S. S. Welsh, M. P. Edgar, R. Bowman, B. Q. Sun, and M. J. Padgett, “Near video-rate linear stokes imaging with single-pixel detectors,” J. Opt. 17(2), 025705 (2015).
[Crossref]

M. P. Edgar, G. M. Gibson, R. W. Bowman, B. Sun, N. Radwell, K. J. Mitchell, S. S. Welsh, and M. J. Padgett, “Simultaneous real-time visible and infrared video with single-pixel detectors,” Sci. Rep. 5(1), 10669 (2015).
[Crossref]

N. Radwell, K. J. Mitchell, G. M. Gibson, M. P. Edgar, R. Bowman, and M. J. Padgett, “Single-pixel infrared and visible microscope,” Optica 1(5), 285–289 (2014).
[Crossref]

B. Sun, S. S. Welsh, M. P. Edgar, J. H. Shapiro, and M. J. Padgett, “Normalized ghost imaging,” Opt. Express 20(15), 16892–16901 (2012).
[Crossref]

Fernandez-Alonso, M.

F. Soldevila, E. Irles, V. Duran, P. Clemente, M. Fernandez-Alonso, E. Tajahuerce, and J. Lancis, “Single-pixel polarimetric imaging spectrometer by compressive sensing,” Appl. Phys. B 113(4), 551–558 (2013).
[Crossref]

V. Duran, P. Clemente, M. Fernandez-Alonso, E. Tajahuerce, and J. Lancis, “Single-pixel polarimetric imaging,” Opt. Lett. 37(5), 824–826 (2012).
[Crossref]

Ferri, F.

F. Ferri, D. Magatti, L. A. Lugiato, and A. Gatti, “Differential Ghost Imaging,” Phys. Rev. Lett. 104(25), 253603 (2010).
[Crossref]

Foreman, M. R.

Fu, Y.

J. Li, Y. Fu, G. N. Li, and Z. L. Liu, “Remote sensing image compression in visible/near-infrared range using heterogeneous compressive sensing,” IEEE J. Sel. Top. Appl. Earth Obs. Remote. Sens. 11(12), 4932–4938 (2018).
[Crossref]

Gatti, A.

F. Ferri, D. Magatti, L. A. Lugiato, and A. Gatti, “Differential Ghost Imaging,” Phys. Rev. Lett. 104(25), 253603 (2010).
[Crossref]

Gehm, M. E.

Gibson, G. M.

Guerboukha, H.

H. Guerboukha, K. Nallappan, and M. Skorobogatiy, “Toward real-time terahertz imaging,” Adv. Opt. Photonics 10(4), 843–938 (2018).
[Crossref]

Hao, Q.

S. Rizvi, J. Cao, K. Zhang, and Q. Hao, “DeepGhost: real-time computational ghost imaging via deep learning,” Sci. Rep. 10(1), 11400 (2020).
[Crossref]

Hempler, N.

Higham, C.

C. Higham, R. Murray-Smith, M. Padgett, and M. Edgar, “Deep learning for real-time single-pixel video,” Sci. Rep. 8(1), 2369 (2018).
[Crossref]

Huang, C.

M. Wang, M.-J. Sun, and C. Huang, “Single-pixel 3D reconstruction via a high-speed LED array,” JPhys Photonics 2(2), 025006 (2020).
[Crossref]

Irles, E.

F. Soldevila, E. Irles, V. Duran, P. Clemente, M. Fernandez-Alonso, E. Tajahuerce, and J. Lancis, “Single-pixel polarimetric imaging spectrometer by compressive sensing,” Appl. Phys. B 113(4), 551–558 (2013).
[Crossref]

Johnson, S. D.

Jonathan, P.

Kang, C.

M. Zhao, J. Liu, S. Chen, C. Kang, and W. Xu, “Single-pixel imaging with deterministic complex-valued sensing matrices,” JEOS RP 10, 15041 (2015).

Ke, J.

Kelly, K. F.

M. F. Duarte, M. A. Davenport, D. Takhar, J. N. Laska, T. Sun, K. F. Kelly, and R. G. Baraniuk, “Single-pixel imaging via compressive sampling,” IEEE Sig. Proc. Mag. 25(2), 83–91 (2008).
[Crossref]

Kotynski, R.

Lam, E. Y.

Lan, R. M.

X. R. Yao, R. M. Lan, X. F. Liu, G. Zhu, F. Zheng, W. K. Yu, and G. J. Zhai, “High throughput dual-wavelength temperature distribution imaging via compressive imaging,” Opt. Commun. 410, 287–291 (2018).
[Crossref]

Lan, R.-M.

R.-M. Lan, X.-F. Liu, X.-R. Yao, W.-K. Yu, and G.-J. Zhai, “Single-pixel complementary compressive sampling spectrometer,” Opt. Commun. 366, 349–353 (2016).
[Crossref]

W.-K. Yu, X.-R. Yao, X.-F. Liu, R.-M. Lan, L.-A. Wu, G.-J. Zhai, and Q. Zhao, “Compressive microscopic imaging with “positive–negative” light modulation,” Opt. Commun. 371, 105–111 (2016).
[Crossref]

Lancis, J.

E. Salvador-Balaguer, P. Latorre-Carmona, C. Chabert, F. Pla, J. Lancis, and E. Tajahuerce, “Low-cost single-pixel 3D imaging by using an LED array,” Opt. Express 26(12), 15623–15631 (2018).
[Crossref]

F. Soldevila, P. Clemente, E. Tajahuerce, N. Uribe-Patarroyo, P. Andres, and J. Lancis, “Computational imaging with a balanced detector,” Sci. Rep. 6(1), 29181 (2016).
[Crossref]

F. Soldevila, E. Irles, V. Duran, P. Clemente, M. Fernandez-Alonso, E. Tajahuerce, and J. Lancis, “Single-pixel polarimetric imaging spectrometer by compressive sensing,” Appl. Phys. B 113(4), 551–558 (2013).
[Crossref]

V. Duran, P. Clemente, M. Fernandez-Alonso, E. Tajahuerce, and J. Lancis, “Single-pixel polarimetric imaging,” Opt. Lett. 37(5), 824–826 (2012).
[Crossref]

Laska, J. N.

M. F. Duarte, M. A. Davenport, D. Takhar, J. N. Laska, T. Sun, K. F. Kelly, and R. G. Baraniuk, “Single-pixel imaging via compressive sampling,” IEEE Sig. Proc. Mag. 25(2), 83–91 (2008).
[Crossref]

Latorre-Carmona, P.

Lavanga, L.

Li, G. N.

J. Li, Y. Fu, G. N. Li, and Z. L. Liu, “Remote sensing image compression in visible/near-infrared range using heterogeneous compressive sensing,” IEEE J. Sel. Top. Appl. Earth Obs. Remote. Sens. 11(12), 4932–4938 (2018).
[Crossref]

Li, J.

S. Chen, L. Du, K. Meng, J. Li, Z. Zhai, Q. Shi, Z. Li, and L. Zhu, “Terahertz wave near-field compressive imaging with a spatial resolution of over λ/100,” Opt. Lett. 44(1), 21–24 (2019).
[Crossref]

J. Li, Y. Fu, G. N. Li, and Z. L. Liu, “Remote sensing image compression in visible/near-infrared range using heterogeneous compressive sensing,” IEEE J. Sel. Top. Appl. Earth Obs. Remote. Sens. 11(12), 4932–4938 (2018).
[Crossref]

Li, Z.

Liu, J.

W. Zhao, H. Chen, Y. Yuan, H. Zheng, J. Liu, Z. Xu, and Y. Zhou, “Ultrahigh-Speed Color Imaging with Single-Pixel Detectors at Low Light Level,” Phys. Rev. Appl. 12(3), 034049 (2019).
[Crossref]

M. Zhao, J. Liu, S. Chen, C. Kang, and W. Xu, “Single-pixel imaging with deterministic complex-valued sensing matrices,” JEOS RP 10, 15041 (2015).

Liu, X. F.

X. R. Yao, R. M. Lan, X. F. Liu, G. Zhu, F. Zheng, W. K. Yu, and G. J. Zhai, “High throughput dual-wavelength temperature distribution imaging via compressive imaging,” Opt. Commun. 410, 287–291 (2018).
[Crossref]

W. K. Yu, X. F. Liu, X. R. Yao, C. Wang, Y. Zhai, and G. J. Zhai, “Complementary compressive imaging for the telescopic system,” Sci. Rep. 4(1), 5834 (2015).
[Crossref]

Liu, X.-F.

W.-K. Yu, X.-R. Yao, X.-F. Liu, R.-M. Lan, L.-A. Wu, G.-J. Zhai, and Q. Zhao, “Compressive microscopic imaging with “positive–negative” light modulation,” Opt. Commun. 371, 105–111 (2016).
[Crossref]

R.-M. Lan, X.-F. Liu, X.-R. Yao, W.-K. Yu, and G.-J. Zhai, “Single-pixel complementary compressive sampling spectrometer,” Opt. Commun. 366, 349–353 (2016).
[Crossref]

Liu, Z. L.

J. Li, Y. Fu, G. N. Li, and Z. L. Liu, “Remote sensing image compression in visible/near-infrared range using heterogeneous compressive sensing,” IEEE J. Sel. Top. Appl. Earth Obs. Remote. Sens. 11(12), 4932–4938 (2018).
[Crossref]

Lugiato, L. A.

F. Ferri, D. Magatti, L. A. Lugiato, and A. Gatti, “Differential Ghost Imaging,” Phys. Rev. Lett. 104(25), 253603 (2010).
[Crossref]

Ma, X.

Z. Zhang, X. Ma, and J. Zhong, “Single-pixel imaging by means of Fourier spectrum acquisition,” Nat. Commun. 6(1), 6225 (2015).
[Crossref]

Magatti, D.

F. Ferri, D. Magatti, L. A. Lugiato, and A. Gatti, “Differential Ghost Imaging,” Phys. Rev. Lett. 104(25), 253603 (2010).
[Crossref]

Mahalanobis, A.

Maker, G. T.

Malcolm, G. P. A.

Mazilu, M.

Meng, K.

Mikolajczyk, M.

Mitchell, K. J.

M. P. Edgar, G. M. Gibson, R. W. Bowman, B. Sun, N. Radwell, K. J. Mitchell, S. S. Welsh, and M. J. Padgett, “Simultaneous real-time visible and infrared video with single-pixel detectors,” Sci. Rep. 5(1), 10669 (2015).
[Crossref]

N. Radwell, K. J. Mitchell, G. M. Gibson, M. P. Edgar, R. Bowman, and M. J. Padgett, “Single-pixel infrared and visible microscope,” Optica 1(5), 285–289 (2014).
[Crossref]

Muise, R.

Murphy, R.

Murray-Smith, R.

C. Higham, R. Murray-Smith, M. Padgett, and M. Edgar, “Deep learning for real-time single-pixel video,” Sci. Rep. 8(1), 2369 (2018).
[Crossref]

Musiienko, A.

Nallappan, K.

H. Guerboukha, K. Nallappan, and M. Skorobogatiy, “Toward real-time terahertz imaging,” Adv. Opt. Photonics 10(4), 843–938 (2018).
[Crossref]

Padgett, M.

Padgett, M. J.

G. M. Gibson, S. D. Johnson, and M. J. Padgett, “Single-pixel imaging 12 years on: a review,” Opt. Express 28(19), 28190–28208 (2020).
[Crossref]

M. P. Edgar, G. M. Gibson, and M. J. Padgett, “Principles and prospects for single-pixel imaging,” Nat. Photonics 13(1), 13–20 (2019).
[Crossref]

G. M. Gibson, B. Q. Sun, M. P. Edgar, D. B. Phillips, N. Hempler, G. T. Maker, G. P. A. Malcolm, and M. J. Padgett, “Real-time imaging of methane gas leaks using a single-pixel camera,” Opt. Express 25(4), 2998–3005 (2017).
[Crossref]

M. P. Edgar, G. M. Gibson, R. W. Bowman, B. Sun, N. Radwell, K. J. Mitchell, S. S. Welsh, and M. J. Padgett, “Simultaneous real-time visible and infrared video with single-pixel detectors,” Sci. Rep. 5(1), 10669 (2015).
[Crossref]

S. S. Welsh, M. P. Edgar, R. Bowman, B. Q. Sun, and M. J. Padgett, “Near video-rate linear stokes imaging with single-pixel detectors,” J. Opt. 17(2), 025705 (2015).
[Crossref]

N. Radwell, K. J. Mitchell, G. M. Gibson, M. P. Edgar, R. Bowman, and M. J. Padgett, “Single-pixel infrared and visible microscope,” Optica 1(5), 285–289 (2014).
[Crossref]

B. Sun, S. S. Welsh, M. P. Edgar, J. H. Shapiro, and M. J. Padgett, “Normalized ghost imaging,” Opt. Express 20(15), 16892–16901 (2012).
[Crossref]

Pastuszczak, A.

Penuelas, J.

Phillips, D. B.

Pickwell-MacPherson, E.

R. Stantchev, X. Yu, T. Blu, and E. Pickwell-MacPherson, “Real-time terahertz imaging with a single-pixel detector,” Nat. Commun. 11(1), 2535 (2020).
[Crossref]

Pla, F.

Radwell, N.

M. P. Edgar, G. M. Gibson, R. W. Bowman, B. Sun, N. Radwell, K. J. Mitchell, S. S. Welsh, and M. J. Padgett, “Simultaneous real-time visible and infrared video with single-pixel detectors,” Sci. Rep. 5(1), 10669 (2015).
[Crossref]

N. Radwell, K. J. Mitchell, G. M. Gibson, M. P. Edgar, R. Bowman, and M. J. Padgett, “Single-pixel infrared and visible microscope,” Optica 1(5), 285–289 (2014).
[Crossref]

Rizvi, S.

S. Rizvi, J. Cao, K. Zhang, and Q. Hao, “DeepGhost: real-time computational ghost imaging via deep learning,” Sci. Rep. 10(1), 11400 (2020).
[Crossref]

Romberg, J. K.

E. J. Candès, J. K. Romberg, and T. Tao, “Stable signal recovery from incomplete and inaccurate measurements,” Commun. Pure and Appl. Math. 59(8), 1207–1223 (2006).
[Crossref]

Salvador-Balaguer, E.

Seow, K. L. C.

Shapiro, J. H.

Sheikh, H.

Z. Wang, A. Bovik, H. Sheikh, and E. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. on Image Process. 13(4), 600–612 (2004).
[Crossref]

Shi, Q.

Shilling, R.

Simoncelli, E.

Z. Wang, A. Bovik, H. Sheikh, and E. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. on Image Process. 13(4), 600–612 (2004).
[Crossref]

Skorobogatiy, M.

H. Guerboukha, K. Nallappan, and M. Skorobogatiy, “Toward real-time terahertz imaging,” Adv. Opt. Photonics 10(4), 843–938 (2018).
[Crossref]

Soldevila, F.

F. Soldevila, P. Clemente, E. Tajahuerce, N. Uribe-Patarroyo, P. Andres, and J. Lancis, “Computational imaging with a balanced detector,” Sci. Rep. 6(1), 29181 (2016).
[Crossref]

F. Soldevila, E. Irles, V. Duran, P. Clemente, M. Fernandez-Alonso, E. Tajahuerce, and J. Lancis, “Single-pixel polarimetric imaging spectrometer by compressive sensing,” Appl. Phys. B 113(4), 551–558 (2013).
[Crossref]

Stantchev, R.

R. Stantchev, X. Yu, T. Blu, and E. Pickwell-MacPherson, “Real-time terahertz imaging with a single-pixel detector,” Nat. Commun. 11(1), 2535 (2020).
[Crossref]

Sun, B.

Sun, B. Q.

G. M. Gibson, B. Q. Sun, M. P. Edgar, D. B. Phillips, N. Hempler, G. T. Maker, G. P. A. Malcolm, and M. J. Padgett, “Real-time imaging of methane gas leaks using a single-pixel camera,” Opt. Express 25(4), 2998–3005 (2017).
[Crossref]

S. S. Welsh, M. P. Edgar, R. Bowman, B. Q. Sun, and M. J. Padgett, “Near video-rate linear stokes imaging with single-pixel detectors,” J. Opt. 17(2), 025705 (2015).
[Crossref]

Sun, M.

Sun, M.-J.

M. Wang, M.-J. Sun, and C. Huang, “Single-pixel 3D reconstruction via a high-speed LED array,” JPhys Photonics 2(2), 025006 (2020).
[Crossref]

M.-J. Sun, Z.-H. Xu, and L.-A. Wu, “Collective noise model for focal plane modulated single-pixel imaging,” Opt. Lasers Eng. 100, 18–22 (2018).
[Crossref]

Sun, T.

M. F. Duarte, M. A. Davenport, D. Takhar, J. N. Laska, T. Sun, K. F. Kelly, and R. G. Baraniuk, “Single-pixel imaging via compressive sampling,” IEEE Sig. Proc. Mag. 25(2), 83–91 (2008).
[Crossref]

Szczygiel, B.

Tajahuerce, E.

E. Salvador-Balaguer, P. Latorre-Carmona, C. Chabert, F. Pla, J. Lancis, and E. Tajahuerce, “Low-cost single-pixel 3D imaging by using an LED array,” Opt. Express 26(12), 15623–15631 (2018).
[Crossref]

F. Soldevila, P. Clemente, E. Tajahuerce, N. Uribe-Patarroyo, P. Andres, and J. Lancis, “Computational imaging with a balanced detector,” Sci. Rep. 6(1), 29181 (2016).
[Crossref]

F. Soldevila, E. Irles, V. Duran, P. Clemente, M. Fernandez-Alonso, E. Tajahuerce, and J. Lancis, “Single-pixel polarimetric imaging spectrometer by compressive sensing,” Appl. Phys. B 113(4), 551–558 (2013).
[Crossref]

V. Duran, P. Clemente, M. Fernandez-Alonso, E. Tajahuerce, and J. Lancis, “Single-pixel polarimetric imaging,” Opt. Lett. 37(5), 824–826 (2012).
[Crossref]

Takhar, D.

M. F. Duarte, M. A. Davenport, D. Takhar, J. N. Laska, T. Sun, K. F. Kelly, and R. G. Baraniuk, “Single-pixel imaging via compressive sampling,” IEEE Sig. Proc. Mag. 25(2), 83–91 (2008).
[Crossref]

Tao, T.

E. J. Candès, J. K. Romberg, and T. Tao, “Stable signal recovery from incomplete and inaccurate measurements,” Commun. Pure and Appl. Math. 59(8), 1207–1223 (2006).
[Crossref]

Torok, P.

Uribe-Patarroyo, N.

F. Soldevila, P. Clemente, E. Tajahuerce, N. Uribe-Patarroyo, P. Andres, and J. Lancis, “Computational imaging with a balanced detector,” Sci. Rep. 6(1), 29181 (2016).
[Crossref]

Wang, C.

W. K. Yu, X. F. Liu, X. R. Yao, C. Wang, Y. Zhai, and G. J. Zhai, “Complementary compressive imaging for the telescopic system,” Sci. Rep. 4(1), 5834 (2015).
[Crossref]

Wang, M.

M. Wang, M.-J. Sun, and C. Huang, “Single-pixel 3D reconstruction via a high-speed LED array,” JPhys Photonics 2(2), 025006 (2020).
[Crossref]

Wang, X.

Z. Wu and X. Wang, “Focal plane array-based compressive imaging in medium wave infrared: modeling implementation, and challenges,” Appl. Opt. 58(31), 8433–8441 (2019).
[Crossref]

Z. Zhang, X. Wang, G. Zheng, and J. Zhong, “Fast Fourier single-pixel imaging via binary illumination,” Sci. Rep. 7(1), 12029 (2017).
[Crossref]

Wang, Z.

Z. Wang, A. Bovik, H. Sheikh, and E. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. on Image Process. 13(4), 600–612 (2004).
[Crossref]

Watson, S.

Welsh, S.

Welsh, S. S.

S. S. Welsh, M. P. Edgar, R. Bowman, B. Q. Sun, and M. J. Padgett, “Near video-rate linear stokes imaging with single-pixel detectors,” J. Opt. 17(2), 025705 (2015).
[Crossref]

M. P. Edgar, G. M. Gibson, R. W. Bowman, B. Sun, N. Radwell, K. J. Mitchell, S. S. Welsh, and M. J. Padgett, “Simultaneous real-time visible and infrared video with single-pixel detectors,” Sci. Rep. 5(1), 10669 (2015).
[Crossref]

B. Sun, S. S. Welsh, M. P. Edgar, J. H. Shapiro, and M. J. Padgett, “Normalized ghost imaging,” Opt. Express 20(15), 16892–16901 (2012).
[Crossref]

Wu, L.-A.

M.-J. Sun, Z.-H. Xu, and L.-A. Wu, “Collective noise model for focal plane modulated single-pixel imaging,” Opt. Lasers Eng. 100, 18–22 (2018).
[Crossref]

W.-K. Yu, X.-R. Yao, X.-F. Liu, R.-M. Lan, L.-A. Wu, G.-J. Zhai, and Q. Zhao, “Compressive microscopic imaging with “positive–negative” light modulation,” Opt. Commun. 371, 105–111 (2016).
[Crossref]

Wu, Z.

Xu, W.

M. Zhao, J. Liu, S. Chen, C. Kang, and W. Xu, “Single-pixel imaging with deterministic complex-valued sensing matrices,” JEOS RP 10, 15041 (2015).

Xu, Z.

W. Zhao, H. Chen, Y. Yuan, H. Zheng, J. Liu, Z. Xu, and Y. Zhou, “Ultrahigh-Speed Color Imaging with Single-Pixel Detectors at Low Light Level,” Phys. Rev. Appl. 12(3), 034049 (2019).
[Crossref]

Z. Xu, W. Chen, J. Penuelas, M. Padgett, and M. Sun, “1000 fps computational ghost imaging using LED-based structured illumination,” Opt. Express 26(3), 2427–2434 (2018).
[Crossref]

Xu, Z.-H.

M.-J. Sun, Z.-H. Xu, and L.-A. Wu, “Collective noise model for focal plane modulated single-pixel imaging,” Opt. Lasers Eng. 100, 18–22 (2018).
[Crossref]

Yao, X. R.

X. R. Yao, R. M. Lan, X. F. Liu, G. Zhu, F. Zheng, W. K. Yu, and G. J. Zhai, “High throughput dual-wavelength temperature distribution imaging via compressive imaging,” Opt. Commun. 410, 287–291 (2018).
[Crossref]

W. K. Yu, X. F. Liu, X. R. Yao, C. Wang, Y. Zhai, and G. J. Zhai, “Complementary compressive imaging for the telescopic system,” Sci. Rep. 4(1), 5834 (2015).
[Crossref]

Yao, X.-R.

W.-K. Yu, X.-R. Yao, X.-F. Liu, R.-M. Lan, L.-A. Wu, G.-J. Zhai, and Q. Zhao, “Compressive microscopic imaging with “positive–negative” light modulation,” Opt. Commun. 371, 105–111 (2016).
[Crossref]

R.-M. Lan, X.-F. Liu, X.-R. Yao, W.-K. Yu, and G.-J. Zhai, “Single-pixel complementary compressive sampling spectrometer,” Opt. Commun. 366, 349–353 (2016).
[Crossref]

Yu, W. K.

X. R. Yao, R. M. Lan, X. F. Liu, G. Zhu, F. Zheng, W. K. Yu, and G. J. Zhai, “High throughput dual-wavelength temperature distribution imaging via compressive imaging,” Opt. Commun. 410, 287–291 (2018).
[Crossref]

W. K. Yu, X. F. Liu, X. R. Yao, C. Wang, Y. Zhai, and G. J. Zhai, “Complementary compressive imaging for the telescopic system,” Sci. Rep. 4(1), 5834 (2015).
[Crossref]

Yu, W.-K.

R.-M. Lan, X.-F. Liu, X.-R. Yao, W.-K. Yu, and G.-J. Zhai, “Single-pixel complementary compressive sampling spectrometer,” Opt. Commun. 366, 349–353 (2016).
[Crossref]

W.-K. Yu, X.-R. Yao, X.-F. Liu, R.-M. Lan, L.-A. Wu, G.-J. Zhai, and Q. Zhao, “Compressive microscopic imaging with “positive–negative” light modulation,” Opt. Commun. 371, 105–111 (2016).
[Crossref]

Yu, X.

R. Stantchev, X. Yu, T. Blu, and E. Pickwell-MacPherson, “Real-time terahertz imaging with a single-pixel detector,” Nat. Commun. 11(1), 2535 (2020).
[Crossref]

Yuan, Y.

W. Zhao, H. Chen, Y. Yuan, H. Zheng, J. Liu, Z. Xu, and Y. Zhou, “Ultrahigh-Speed Color Imaging with Single-Pixel Detectors at Low Light Level,” Phys. Rev. Appl. 12(3), 034049 (2019).
[Crossref]

Zhai, G. J.

X. R. Yao, R. M. Lan, X. F. Liu, G. Zhu, F. Zheng, W. K. Yu, and G. J. Zhai, “High throughput dual-wavelength temperature distribution imaging via compressive imaging,” Opt. Commun. 410, 287–291 (2018).
[Crossref]

W. K. Yu, X. F. Liu, X. R. Yao, C. Wang, Y. Zhai, and G. J. Zhai, “Complementary compressive imaging for the telescopic system,” Sci. Rep. 4(1), 5834 (2015).
[Crossref]

Zhai, G.-J.

R.-M. Lan, X.-F. Liu, X.-R. Yao, W.-K. Yu, and G.-J. Zhai, “Single-pixel complementary compressive sampling spectrometer,” Opt. Commun. 366, 349–353 (2016).
[Crossref]

W.-K. Yu, X.-R. Yao, X.-F. Liu, R.-M. Lan, L.-A. Wu, G.-J. Zhai, and Q. Zhao, “Compressive microscopic imaging with “positive–negative” light modulation,” Opt. Commun. 371, 105–111 (2016).
[Crossref]

Zhai, Y.

W. K. Yu, X. F. Liu, X. R. Yao, C. Wang, Y. Zhai, and G. J. Zhai, “Complementary compressive imaging for the telescopic system,” Sci. Rep. 4(1), 5834 (2015).
[Crossref]

Zhai, Z.

Zhang, K.

S. Rizvi, J. Cao, K. Zhang, and Q. Hao, “DeepGhost: real-time computational ghost imaging via deep learning,” Sci. Rep. 10(1), 11400 (2020).
[Crossref]

Zhang, Z.

Z. Zhang, X. Wang, G. Zheng, and J. Zhong, “Fast Fourier single-pixel imaging via binary illumination,” Sci. Rep. 7(1), 12029 (2017).
[Crossref]

Z. Zhang, X. Ma, and J. Zhong, “Single-pixel imaging by means of Fourier spectrum acquisition,” Nat. Commun. 6(1), 6225 (2015).
[Crossref]

Zhao, M.

M. Zhao, J. Liu, S. Chen, C. Kang, and W. Xu, “Single-pixel imaging with deterministic complex-valued sensing matrices,” JEOS RP 10, 15041 (2015).

Zhao, Q.

W.-K. Yu, X.-R. Yao, X.-F. Liu, R.-M. Lan, L.-A. Wu, G.-J. Zhai, and Q. Zhao, “Compressive microscopic imaging with “positive–negative” light modulation,” Opt. Commun. 371, 105–111 (2016).
[Crossref]

Zhao, W.

W. Zhao, H. Chen, Y. Yuan, H. Zheng, J. Liu, Z. Xu, and Y. Zhou, “Ultrahigh-Speed Color Imaging with Single-Pixel Detectors at Low Light Level,” Phys. Rev. Appl. 12(3), 034049 (2019).
[Crossref]

Zheng, F.

X. R. Yao, R. M. Lan, X. F. Liu, G. Zhu, F. Zheng, W. K. Yu, and G. J. Zhai, “High throughput dual-wavelength temperature distribution imaging via compressive imaging,” Opt. Commun. 410, 287–291 (2018).
[Crossref]

Zheng, G.

Z. Zhang, X. Wang, G. Zheng, and J. Zhong, “Fast Fourier single-pixel imaging via binary illumination,” Sci. Rep. 7(1), 12029 (2017).
[Crossref]

Zheng, H.

Adv. Opt. Photonics (1)

H. Guerboukha, K. Nallappan, and M. Skorobogatiy, “Toward real-time terahertz imaging,” Adv. Opt. Photonics 10(4), 843–938 (2018).

Appl. Phys. B (1)

F. Soldevila, E. Irles, V. Duran, P. Clemente, M. Fernandez-Alonso, E. Tajahuerce, and J. Lancis, “Single-pixel polarimetric imaging spectrometer by compressive sensing,” Appl. Phys. B 113(4), 551–558 (2013).

Commun. Pure and Appl. Math. (1)

E. J. Candès, J. K. Romberg, and T. Tao, “Stable signal recovery from incomplete and inaccurate measurements,” Commun. Pure and Appl. Math. 59(8), 1207–1223 (2006).

IEEE J. Sel. Top. Appl. Earth Obs. Remote. Sens. (1)

J. Li, Y. Fu, G. N. Li, and Z. L. Liu, “Remote sensing image compression in visible/near-infrared range using heterogeneous compressive sensing,” IEEE J. Sel. Top. Appl. Earth Obs. Remote. Sens. 11(12), 4932–4938 (2018).

IEEE Sig. Proc. Mag. (1)

M. F. Duarte, M. A. Davenport, D. Takhar, J. N. Laska, T. Sun, K. F. Kelly, and R. G. Baraniuk, “Single-pixel imaging via compressive sampling,” IEEE Sig. Proc. Mag. 25(2), 83–91 (2008).

IEEE Trans. on Image Process. (1)

Z. Wang, A. Bovik, H. Sheikh, and E. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. on Image Process. 13(4), 600–612 (2004).

J. Opt. (1)

S. S. Welsh, M. P. Edgar, R. Bowman, B. Q. Sun, and M. J. Padgett, “Near video-rate linear stokes imaging with single-pixel detectors,” J. Opt. 17(2), 025705 (2015).

JEOS RP (1)

M. Zhao, J. Liu, S. Chen, C. Kang, and W. Xu, “Single-pixel imaging with deterministic complex-valued sensing matrices,” JEOS RP 10, 15041 (2015).

JPhys Photonics (1)

M. Wang, M.-J. Sun, and C. Huang, “Single-pixel 3D reconstruction via a high-speed LED array,” JPhys Photonics 2(2), 025006 (2020).

Nat. Commun. (2)

R. Stantchev, X. Yu, T. Blu, and E. Pickwell-MacPherson, “Real-time terahertz imaging with a single-pixel detector,” Nat. Commun. 11(1), 2535 (2020).

Z. Zhang, X. Ma, and J. Zhong, “Single-pixel imaging by means of Fourier spectrum acquisition,” Nat. Commun. 6(1), 6225 (2015).

Nat. Photonics (1)

M. P. Edgar, G. M. Gibson, and M. J. Padgett, “Principles and prospects for single-pixel imaging,” Nat. Photonics 13(1), 13–20 (2019).

Opt. Commun. (3)

W.-K. Yu, X.-R. Yao, X.-F. Liu, R.-M. Lan, L.-A. Wu, G.-J. Zhai, and Q. Zhao, “Compressive microscopic imaging with “positive–negative” light modulation,” Opt. Commun. 371, 105–111 (2016).

R.-M. Lan, X.-F. Liu, X.-R. Yao, W.-K. Yu, and G.-J. Zhai, “Single-pixel complementary compressive sampling spectrometer,” Opt. Commun. 366, 349–353 (2016).

X. R. Yao, R. M. Lan, X. F. Liu, G. Zhu, F. Zheng, W. K. Yu, and G. J. Zhai, “High throughput dual-wavelength temperature distribution imaging via compressive imaging,” Opt. Commun. 410, 287–291 (2018).

Opt. Express (10)

O. Denk, A. Musiienko, and K. Žídek, “Differential single-pixel camera enabling low-cost microscopy in near-infrared spectral region,” Opt. Express 27(4), 4562–4571 (2019).

S. Welsh, M. Edgar, R. Bowman, P. Jonathan, B. Sun, and M. Padgett, “Fast full-color computational imaging with single-pixel detectors,” Opt. Express 21(20), 23068–23074 (2013).

B. Sun, S. S. Welsh, M. P. Edgar, J. H. Shapiro, and M. J. Padgett, “Normalized ghost imaging,” Opt. Express 20(15), 16892–16901 (2012).

G. M. Gibson, B. Q. Sun, M. P. Edgar, D. B. Phillips, N. Hempler, G. T. Maker, G. P. A. Malcolm, and M. J. Padgett, “Real-time imaging of methane gas leaks using a single-pixel camera,” Opt. Express 25(4), 2998–3005 (2017).

E. de Tommasi, L. Lavanga, S. Watson, and M. Mazilu, “Encoding complex valued fields using intensity,” Opt. Express 24(20), 23186–23197 (2016).

K. M. Czajkowski, A. Pastuszczak, and R. Kotyński, “Real-time single-pixel video imaging with Fourier domain regularization,” Opt. Express 26(16), 20009–20022 (2018).

G. M. Gibson, S. D. Johnson, and M. J. Padgett, “Single-pixel imaging 12 years on: a review,” Opt. Express 28(19), 28190–28208 (2020).

E. Salvador-Balaguer, P. Latorre-Carmona, C. Chabert, F. Pla, J. Lancis, and E. Tajahuerce, “Low-cost single-pixel 3D imaging by using an LED array,” Opt. Express 26(12), 15623–15631 (2018).

Z. Xu, W. Chen, J. Penuelas, M. Padgett, and M. Sun, “1000 fps computational ghost imaging using LED-based structured illumination,” Opt. Express 26(3), 2427–2434 (2018).

J. Ke and E. Y. Lam, “Object reconstruction in block-based compressive imaging,” Opt. Express 20(20), 22102–22117 (2012).

Opt. Lasers Eng. (1)

M.-J. Sun, Z.-H. Xu, and L.-A. Wu, “Collective noise model for focal plane modulated single-pixel imaging,” Opt. Lasers Eng. 100, 18–22 (2018).

Phys. Rev. Appl. (1)

W. Zhao, H. Chen, Y. Yuan, H. Zheng, J. Liu, Z. Xu, and Y. Zhou, “Ultrahigh-Speed Color Imaging with Single-Pixel Detectors at Low Light Level,” Phys. Rev. Appl. 12(3), 034049 (2019).

Phys. Rev. Lett. (1)

F. Ferri, D. Magatti, L. A. Lugiato, and A. Gatti, “Differential Ghost Imaging,” Phys. Rev. Lett. 104(25), 253603 (2010).

Sci. Rep. (7)

C. Higham, R. Murray-Smith, M. Padgett, and M. Edgar, “Deep learning for real-time single-pixel video,” Sci. Rep. 8(1), 2369 (2018).

S. Rizvi, J. Cao, K. Zhang, and Q. Hao, “DeepGhost: real-time computational ghost imaging via deep learning,” Sci. Rep. 10(1), 11400 (2020).

F. Soldevila, P. Clemente, E. Tajahuerce, N. Uribe-Patarroyo, P. Andres, and J. Lancis, “Computational imaging with a balanced detector,” Sci. Rep. 6(1), 29181 (2016).

W. K. Yu, X. F. Liu, X. R. Yao, C. Wang, Y. Zhai, and G. J. Zhai, “Complementary compressive imaging for the telescopic system,” Sci. Rep. 4(1), 5834 (2015).

K. M. Czajkowski, A. Pastuszczak, and R. Kotyński, “Single-pixel imaging with Morlet wavelet correlated random patterns,” Sci. Rep. 8(1), 466 (2018).

Z. Zhang, X. Wang, G. Zheng, and J. Zhong, “Fast Fourier single-pixel imaging via binary illumination,” Sci. Rep. 7(1), 12029 (2017).

M. P. Edgar, G. M. Gibson, R. W. Bowman, B. Sun, N. Radwell, K. J. Mitchell, S. S. Welsh, and M. J. Padgett, “Simultaneous real-time visible and infrared video with single-pixel detectors,” Sci. Rep. 5(1), 10669 (2015).

SIAM J. Imaging Sciences (1)

S. Becker, J. Bobin, and E. Candes, “NESTA: A Fast and Accurate First-Order Method for Sparse Recovery,” SIAM J. Imaging Sciences 4(1), 1–39 (2011).

Other (3)

G. Griffin, A. Holub, and P. Perona, “The Caltech 256. Caltech Technical Report,” Caltech, 2006, http://www.vision.caltech.edu/Image_Datasets/Caltech256/.

A. Pastuszczak, R. Stojek, P. Wrobel, and R. Kotynski, “Differential Fourier Domain Regularized Inversion (D-FDRI source code),” Github, 2021, http://www.github.com/rkotynski/D_FDRI.

K. M. Czajkowski, A. Pastuszczak, and R. Kotyński, “Fourier Domain Regularized Inversion (FDRI source code),” Github, 2019, https://github.com/KMCzajkowski/FDRI-single-pixel-imaging.

Supplementary Material (6)

Visualization 1: Real-time differential single-pixel imaging with Fourier domain regularization. Polarization imaging at a resolution of 256×256, at 5.5 fps and a compression ratio of 6%. Orthogonal linear polarizations are shown in pseudocolor (in red and in blue).
Visualization 2: Real-time differential single-pixel imaging with Fourier domain regularization. Polarization imaging at a resolution of 256×256, at 5.5 fps and a compression ratio of 6%. Orthogonal linear polarizations are shown in pseudocolor (in red and in blue).
Visualization 3: Real-time differential single-pixel imaging with Fourier domain regularization. Combined visible and near-infrared imaging at a resolution of 256×256, at 5.5 fps and a compression ratio of 6%. The object consists of a background scene with a resolution…
Visualization 4: Real-time differential single-pixel imaging with Fourier domain regularization at a resolution of 256×256, at 5.8 fps and a compression ratio of 6%. The object contains a robotic toy constructed with Lego bricks moving at a varying speed.
Visualization 5: Real-time differential single-pixel imaging with Fourier domain regularization at a resolution of 256×256, at 11.5 fps and a compression ratio of 3%. The object contains a robotic toy constructed with Lego bricks moving at a varying speed.
Visualization 6: Real-time differential single-pixel imaging with Fourier domain regularization at a resolution of 256×256, at 17.2 fps and a compression ratio of 2%. The object contains a robotic toy constructed with Lego bricks moving at a varying speed.

Data availability

Source data will be provided by the authors upon reasonable request. Source code for calculating the measurement and reconstruction matrices will be made public at [47] after article publication. The FDRI part of the code is based on [48]. We have used the Caltech256 image database [45] in this work.

45. G. Griffin, A. Holub, and P. Perona, “The Caltech 256. Caltech Technical Report,” Caltech, 2006, http://www.vision.caltech.edu/Image_Datasets/Caltech256/.

47. A. Pastuszczak, R. Stojek, P. Wrobel, and R. Kotynski, “Differential Fourier Domain Regularized Inversion (D-FDRI source code),” Github, 2021, http://www.github.com/rkotynski/D_FDRI.

48. K. M. Czajkowski, A. Pastuszczak, and R. Kotyński, “Fourier Domain Regularized Inversion (FDRI source code),” Github, 2019, https://github.com/KMCzajkowski/FDRI-single-pixel-imaging.



Figures (12)

Fig. 1. Construction of the binary sampling patterns from the matrix $\mathbf {A}^p_m$ . We divide the DMD into $m$ classes of pixels. Every class contains approximately $n/m$ randomly chosen DMD pixels (shown with distinct colors). Rows of $\mathbf {A}^p_m$ encode $m+p$ binary patterns that enable the DC component of the image to be recovered in a differential measurement. Each element of a row governs the state of one class of pixels within a single pattern.
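The construction in this caption can be sketched in a few lines of Python. This is an illustrative sketch only: the function name `dc_patterns` and the equal-split class assignment are our own choices, not taken from the published D-FDRI source code.

```python
import numpy as np

def dc_patterns(A, n, seed=0):
    """Expand the (m+p) x m matrix A into m+p full binary DMD patterns.

    Each of the n DMD pixels is assigned to one of m classes so that every
    class holds approximately n/m randomly chosen pixels; row r of A then
    fixes the on/off state of every pixel through its class label.
    """
    m = A.shape[1]
    rng = np.random.default_rng(seed)
    classes = rng.permutation(n) % m      # near-equal random split into m classes
    return A[:, classes]                  # shape (m+p, n), entries 0/1
```

For instance, a 4×3 matrix $\mathbf {A}^1_3$ yields four binary patterns over all $n$ DMD pixels, with every pattern constant on each of the three pixel classes.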
Fig. 2. (a) Selected subset of low-frequency DCT patterns (shown in yellow), (b) Power spectral density (PSD) of the real-valued DCT sampling functions, (c) PSD of the binarized DCT sampling functions, (d) and (e) PSD of the binarized DCT sampling functions to which the first (d) or second (e) order difference operator was applied.
Fig. 3. Binarization of a real-valued basis pattern at various levels. (a) sample DCT real-valued pattern; (b)-(d) the same pattern binarized at the levels of 50% (b), 60% (c), and 40% (d). The binarization level determines the percentage of DMD pixels in the state "0" for the binarized pattern.
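The thresholding shown in panels (b)-(d) amounts to cutting the real-valued pattern at a quantile of its pixel values. The helper below is a minimal sketch under that interpretation; the name `binarize_at_level` is ours, not from the source code.

```python
import numpy as np

def binarize_at_level(pattern, level):
    """Binarize a real-valued pattern so that a fraction `level` of the
    DMD pixels ends up in state "0" (pixels at or below the threshold)."""
    threshold = np.quantile(pattern, level)
    return (pattern > threshold).astype(np.uint8)
```

With `level=0.5` this corresponds to panel (b), while `level=0.6` and `level=0.4` correspond to panels (c) and (d).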
Fig. 4. The fraction of DMD pixels in state "1" in the first 50 sampling patterns. Patterns used to measure the DC component come first (plotted in magenta), followed by the remaining patterns, e.g., binarized DCT patterns (plotted in blue). (a) Classical sampling with the DC image component measured with a single pattern with all pixels in state "1", and with half of the pixels in each state for the remaining patterns. (b) Classical differential sampling with the DC image component measured with 2 patterns with all pixels in the states "1" and "0". (c)-(f) The proposed differential sampling protocols for selected values of $p$ and $m$ . The DC component of the image is measured using $p+m$ binary patterns, and DCT patterns are binarized at a random level in the range $[(m-1)/2m, (m+1)/2m]$ .
Fig. 5. Average PSNR obtained with the proposed differential sampling technique without noise vs. average PSNR obtained in the presence of white additive Gaussian noise with $\sigma =0.004$ , where $\sigma$ is the standard deviation of the noise divided by the range of the measured SPI signal. The plot compares results obtained for compression ratios of $2\%, 3\%, 6\%, 10\%$ , for parameters $m=3,5,7$ and $p=1,2$ , with $\epsilon =10^{-7}$ or $\epsilon =0.5$ . PSNR values are averaged over the results obtained for 5120 images from the Caltech256 image database.
Fig. 6. (a) Average entropy per measurement pattern $H/k$ of the measured SPI signals as a function of the signal bit-resolution $b$ . (b) Experimental probability distributions of the occurrence of specific voltage values $v_i$ in the measured SPI signals for $b=8$ . Results obtained from simulated measurements with over 180 thousand images.
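The entropy per pattern plotted in panel (a) is the Shannon entropy of the empirical histogram of quantized detector values. A minimal sketch, assuming a signal normalized to $[0, 1)$ and using our own function name:

```python
import numpy as np

def signal_entropy(v, b):
    """Shannon entropy (in bits) of a signal v in [0, 1) quantized to b bits."""
    counts = np.bincount((v * 2**b).astype(int), minlength=2**b)
    p = counts / counts.sum()
    p = p[p > 0]                      # convention: 0*log2(0) = 0
    return float(-(p * np.log2(p)).sum())
```

A signal spread uniformly over all $2^b$ quantization levels attains the maximal value $H=b$, while a constant signal gives $H=0$; a sampling protocol that increases the entropy of the detection signal makes better use of the available bit-resolution.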
Fig. 7. Sensitivity of the proposed sampling method (red) to the experimental noise $\mathbf {n}_d$ , with direct DCT sampling (blue) shown for comparison. Average PSNR and SSIM values obtained from simulated SPI measurements and reconstructions with 5120 images. The simulation includes white additive Gaussian noise with $\sigma$ denoting the standard deviation of the noise divided by the range of the measured SPI signal. This noise model corresponds to the inherent statistical noise of the SPI camera and is independent of the A/D discretization noise. The respective subplots are obtained for 8-bit, 10-bit, 12-bit, and 16-bit discretization. Vertical dotted lines mark the level of noise present in our experimental SPI set-up.
Fig. 8. Sensitivity of the proposed sampling method (red) to the A/D discretization, with standard non-differential SPI sampling using the same set of DCT patterns (blue) shown for comparison. Average PSNR (a) and SSIM (b) values obtained from simulated SPI measurements and reconstructions with 5120 images. Before the image reconstruction, the SPI signals are converted into signals with $b$ significant bits, i.e., only $2^b$ distinct values are measured by the detector. We use $b$ between 8 and 16, which corresponds to the range of the bit-resolution of our experimental equipment. We assume that no other types of experimental noise are present.
Fig. 9. Schematics of the SPI configurations with 2 channels. (a) Polarization imaging with complementary channels measuring orthogonal linear polarizations. (b) Concurrent measurement of visible and near-infrared bands.
Fig. 10. Flow chart explaining the processing of acquired data. Data is acquired continuously at a sampling interval of 256 ns. The data gathered during the exposure of a single sampling pattern (marked in color) are selected and averaged to obtain a single measurement $y_i$ . Averaging is performed independently in each of the 2 channels. Then, monochromatic images from the 2 channels are reconstructed independently and integrated by color multiplexing to produce a single pseudocolor frame.
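The per-pattern averaging step of this flow chart can be sketched as follows; the numbers are made up for illustration, and the sketch assumes the stream has already been trimmed to a whole number of pattern exposures.

```python
import numpy as np

# One measurement y_i per displayed pattern: the continuously acquired ADC
# stream is split into equal segments (one per pattern) and each segment is
# averaged. In the 2-channel set-up this is done independently per channel.
samples_per_pattern = 4
stream = np.arange(12, dtype=float)                 # toy single-channel ADC stream
y = stream.reshape(-1, samples_per_pattern).mean(axis=1)
print(y)   # [1.5 5.5 9.5]
```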
Fig. 11. Sample pseudocolor frames obtained with the real-time 2-channel SPI. (a) Polarization imaging of a birefringent object (ruler) placed in front of a linear polarizer, (b) VIS-IR imaging of a hot object (soldering iron tip at 400 $^{\circ }$ C). The corresponding videos are Visualization 1, showing polarization imaging of a rotated linear polarizer, Visualization 2, with a moving birefringent object, and Visualization 3, with the moving soldering iron tip.
Fig. 12. Examples of video frames obtained with the real-time single-channel visible SPI configuration using the proposed sampling with different compression ratios $\textrm {CR}=k/n$ . (a) Reference image measured with $\textrm {CR} = 100\%$ (robotic toy constructed with Lego bricks). (b) Reconstructed images with $\textrm {CR} = 2, 3,$ or $6\%$ for a still scene. (c) Reconstructed video frames with $\textrm {CR} = 2, 3,$ or $6\%$ for a moving scene. More detailed video reconstructions with varying movement speeds are presented in Visualization 4, Visualization 5, and Visualization 6. (d) Average PSNR of the reconstructions obtained for each sampling ratio as a function of the movement speed of the scene. The reference image (a) is used as the ground truth for calculating PSNR, and the calculation is performed only for those frames in which the position of the rotating wheels matches that in the reference image.
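The PSNR used in panel (d) follows the standard definition, with the peak value taken from the reference image; a minimal sketch:

```python
import numpy as np

def psnr(x_rec, x_ref):
    """Peak signal-to-noise ratio in dB between a reconstruction and a reference."""
    x_rec = np.asarray(x_rec, dtype=float)
    x_ref = np.asarray(x_ref, dtype=float)
    mse = np.mean((x_ref - x_rec) ** 2)   # mean squared reconstruction error
    return 10.0 * np.log10(np.max(x_ref) ** 2 / mse)
```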

Tables (1)


Table 1. Performance of the D-FDRI reconstruction method in comparison with compressive sensing (CS) in terms of the average reconstruction time and PSNR obtained for compression ratios CR = 2%, 3%, or 6%. In both cases, the same measurement matrices have been used for each value of CR, and the measurements have been obtained using a single-channel visible SPI for the still robotic toy scene presented in Fig. 12(a). CS reconstructions have been calculated by solving the basis pursuit denoising problem with total variation regularization using the NESTA solver [34].

Equations (13)

$\mathbf{y} = \mathbf{M}\,(\mathbf{x}+\mathbf{n}_s)+\mathbf{n}_d,$ (1)

$\mathbf{y}' = \mathbf{D}^p_k\,\mathbf{y} = \left(\mathbf{D}^p_k\mathbf{M}\right)(\mathbf{x}+\mathbf{n}_s)+\mathbf{D}^p_k\,\mathbf{n}_d.$ (2)

$\tilde{\mathbf{x}} = \left(\mathbf{D}^p_k\mathbf{M}\right)^g\left(\mathbf{D}^p_k\mathbf{M}\right)\mathbf{x} = \mathbf{P}^g\,\mathbf{y},$ (3)

$\mathbf{P} = \mathbf{F}^{-1}\hat{\Gamma}\,\mathbf{F}\left(\mathbf{M}\,\mathbf{F}^{-1}\hat{\Gamma}\,\mathbf{F}\right)^{+},$ (4)

$[\hat{\Gamma}]_{i,j} = \delta_{i,j}\left((1-\mu)^2\left(\sin^2(\omega_x)+\sin^2(\omega_y)\right)+\mu^2\,\dfrac{\omega_x^2+\omega_y^2}{2\pi^2}+\epsilon\right)^{-1/2},$ (5)

$\mathbf{P}^g = \mathbf{F}^{-1}\hat{\Gamma}\,\mathbf{F}\left(\mathbf{D}^p_k\,\mathbf{M}\,\mathbf{F}^{-1}\hat{\Gamma}\,\mathbf{F}\right)^{+}\mathbf{D}^p_k.$ (6)

$\mathbf{A}^1_3 = \begin{pmatrix} 0 & 1 & 1 \\ 1 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix},\qquad \mathbf{A}^2_3 = \begin{pmatrix} 0 & 1 & 1 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 0 \\ 1 & 1 & 0 \end{pmatrix},$ (7)

$\mathbf{A}^1_7 = \begin{pmatrix} 0 & 1 & 0 & 0 & 1 & 1 & 1 \\ 1 & 1 & 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 1 & 1 & 0 & 0 \\ 1 & 0 & 1 & 1 & 0 & 0 & 1 \\ 0 & 1 & 1 & 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 1 & 1 & 0 & 1 \\ 1 & 1 & 1 & 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 & 1 & 1 & 1 \end{pmatrix},\qquad \mathbf{A}^2_7 = \begin{pmatrix} 0 & 0 & 0 & 0 & 1 & 1 & 1 \\ 1 & 0 & 0 & 1 & 1 & 0 & 0 \\ 1 & 1 & 0 & 0 & 1 & 0 & 0 \\ 1 & 1 & 1 & 1 & 0 & 0 & 0 \\ 1 & 0 & 0 & 1 & 0 & 1 & 0 \\ 0 & 1 & 0 & 1 & 1 & 0 & 1 \\ 0 & 0 & 1 & 0 & 1 & 1 & 1 \\ 0 & 0 & 1 & 0 & 0 & 1 & 1 \\ 0 & 1 & 1 & 1 & 0 & 0 & 0 \end{pmatrix}.$ (8)

$\textrm{PSNR}(\tilde{\mathbf{x}},\mathbf{x}) = 10\,\log_{10}\dfrac{\max(\mathbf{x})^2}{\frac{1}{n}\sum_{i=1}^{n}(x_i-\tilde{x}_i)^2},$ (9)

$\mathbf{P}^g\,\mathbf{y} = \mathbf{P}^g(\mathbf{y}+c).$ (10)

$\tilde{\mathbf{x}}_l = \theta\!\left(\mathbf{P}^g\,\mathbf{y}_l\,(-1)^{l-1}\right),$ (11)

$H = -\sum_{i=1}^{2^b} P(v_i)\,\log_2 P(v_i),$ (12)

$\tilde{\mathbf{x}} = \underset{\mathbf{x}}{\operatorname{argmin}}\ \lambda\,\|\mathbf{x}\|_{TV} + \frac{1}{2}\left\|\Sigma^{-1}\mathbf{U}^T\mathbf{D}^p_k\,\mathbf{y} - \mathbf{V}^T\mathbf{x}\right\|_2^2,$ (13)
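Once the generalized inverse above has been precomputed offline, real-time reconstruction reduces to a single matrix-vector product per frame, with a cost proportional to the number of measured samples. In the sketch below the matrix is a random placeholder standing in for the precomputed operator, not the actual D-FDRI matrix.

```python
import numpy as np

n, k = 4096, 164                      # image pixels, number of measurements
rng = np.random.default_rng(0)
Pg = rng.standard_normal((n, k))      # placeholder for the precomputed matrix
y = rng.standard_normal(k)            # one frame of measured SPI data
x_rec = Pg @ y                        # single matrix-vector product, O(n*k) per frame
assert x_rec.shape == (n,)
```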
