Abstract

We demonstrate an approach that allows capturing videos at very high frame-rates of over 100,000 frames per second by exploiting the fast sampling rate of the standard rolling-shutter readout mechanism, common to most conventional sensors, and a compressive-sampling acquisition scheme. Our approach is directly applied to a conventional imaging system by the simple addition of a diffuser to the pupil plane that randomly encodes the entire field-of-view to each camera row, while maintaining diffraction-limited resolution. A short video is reconstructed from a single camera frame via a compressed-sensing reconstruction algorithm, exploiting the inherent sparsity of the imaged scene.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction and principle

High-speed imaging is important for observing fast-occurring phenomena, such as high-speed tracking of single particles [1], the study of explosive materials, applications in the automotive industry, and more [2]. While high-resolution cameras are available today in affordable devices such as smartphones, very high-speed ($>$10,000 frames-per-second (fps)) cameras are still uncommon and expensive, with price tags in the thousands of dollars. The high price-tag is a result of the hardware requirements, which include high-speed readout, a high transfer bit-rate from the sensor to the memory, and high-speed memory allocated close to the sensor.

Here, we suggest a simple solution for high-speed (>100,000 fps) imaging of sparse scenes that goes beyond the camera frame-rate for the same region of interest (ROI) by two orders of magnitude. The fast frame-rate is obtained by exploiting the fast rolling-shutter readout common to most conventional sensors, and a compressive-sampling (CS) acquisition scheme, without changing the camera readout speed. Our technique can be applied to any conventional imaging system by the simple addition of a diffuser at the imaging-system pupil-plane. The goal of the diffuser is to generate a random speckle point-spread-function (PSF), which encodes the entire scene to each camera row. The rolling shutter samples the scene at 100,000 rows-per-second, allowing high-speed video reconstruction from a single camera frame via a post-processing computational reconstruction.

In recent years, various techniques that allow video reconstruction from a single frame based on compressed-sensing have been developed [3]. Among them are coded exposure photography [4], dynamic coded-aperture photography [5], dynamic PSF modulation [6], and many more [7–12]. However, these approaches rely on fast dynamic modulation and require relatively complex modifications or additions to the imaging system.

In contrast to these approaches, our approach requires only a simple, straightforward addition of a static diffuser to the imaging-system pupil-plane. Recently, a similar technique of utilizing the rolling-shutter effect for video reconstruction from a single frame was presented by Antipa et al. [13] for lensless imaging systems. The implementation of this approach, however, required access to the bare sensor, and thus is not directly adaptable to all conventional imaging systems. Here, we demonstrate that the rolling shutter can be exploited for high-speed photography in any lens-based imaging system by accessing only the pupil plane. In addition, by relying on a speckle-based encoding, rather than the caustic-based one used by Antipa et al., our implementation preserves a diffraction-limited spatial resolution, a potential advantage for microscopic imaging.

The enabling principle in all approaches of video reconstruction from a single frame is that the dynamic scene to be captured is compressible in some known domain and that each pixel in the acquired frame encodes information from a large number of video pixels, in both space and time. Following the principles of CS [14], a dynamic video can be reconstructed from such a single frame under the conditions of an appropriate encoding and a minimal number of acquired pixels. Importantly, a random encoding, i.e. an encoding where each camera pixel is a random linear superposition of video pixels in space and time, satisfies those conditions.

In a conventional camera with a rolling-shutter readout, the camera image is sampled row after row in a serial, consecutive manner [9]. Therefore, each row of pixels in the acquired frame encodes the scene information at a specific, short time (Fig. 1(a)). This effect is usually a hurdle for high-speed photography [15], causing artifacts for fast-moving objects or fast-changing scenes, as depicted in Fig. 1(a). However, one can exploit the inherently fast sampling-rate of the rolling shutter for video reconstruction by randomly encoding the entire scene information (in all spatial and temporal coordinates) to every camera row (scanline).

 

Fig. 1. Principle. (a) In a conventional rolling-shutter readout, the sensor rows are sampled consecutively at different times, yielding a sampling-rate that is orders of magnitude faster than the full frame-rate. (b) The fast sampling-rate is exploited for video capture by spreading the light from each point in the imaged scene (left) over all camera rows, by an optical diffuser placed at the camera pupil plane, generating a random speckle PSF. (b-f) An illustrated example of imaging a rapidly moving point source: (c) at each time $t_i$ the moving source projects the speckle PSF at a different position on the sensor, and a different row (in white) is sampled. Each row in the full captured camera frame (d) reflects the scene at a different time. (e) a compressed-sensing algorithm reconstructs the video (f) from the single captured camera frame.


A schematic realization of this principle using an optical diffuser is presented in Figs. 1(b)-(f). A light-scattering diffuser at the pupil plane of the imaging system optically encodes the entire scene to each row of the camera sensor by scattering (Fig. 1(b)). A camera with a rolling-shutter readout captures a single frame in which each row encodes the scene at a different time (Figs. 1(c)-(d)). The single captured image is fed into a CS reconstruction algorithm that decodes the video (Figs. 1(e)-(f)). Importantly, in common CMOS cameras, the readout speed of each row is usually several orders of magnitude faster than the full-frame acquisition rate (by a factor given by the number of rows in the image) [9].

2. Forward model and reconstruction scheme

The acquisition and reconstruction of our approach can be mathematically described as follows: for imaging a 2D spatially-incoherently illuminated scene, described by $O(x,y,t),$ the image intensity at the sensor plane at any time, $t,$ is given by:

$$I(x,y,t) = O(x/M,y/M,t) \underset{x,y} \circledast PSF(x,y)$$
where $PSF(x,y)$ is the imaging PSF, $M$ is the magnification of the imaging system, and $\underset {x,y} \circledast$ denotes a 2D spatial convolution over $(x,y)$.
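As an illustration, a minimal numerical sketch of Eq. (1) for a single time slice could look as follows (the function name `image_at_time` is ours, and we assume the scene slice is already resampled onto the sensor grid $x/M, y/M$):

```python
from scipy.signal import fftconvolve

def image_at_time(scene_t, psf):
    """Eq. (1): incoherent image intensity of one temporal slice of the scene
    (already resampled to sensor coordinates x/M, y/M), formed by a 2D spatial
    convolution with the measured speckle PSF."""
    psf = psf / psf.sum()                       # normalize the PSF energy
    return fftconvolve(scene_t, psf, mode="same")
```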

The single image acquired by a camera with a rolling-shutter readout, $I_{cam}(m,n)$, is a sampled version of $I(x,y,t)$, where the row at coordinate $n$ is sampled at the time $t_n=n/V_s,$ with $V_s$ the rolling-shutter speed in rows per second (i.e. the row sampling frequency is $f_s=V_s$). Assuming an exposure time of $T_{exp},$ the captured image is given by:

$$I_{cam}(m,n) = \int^{x_m+\Delta x /2}_{x_m-\Delta x /2} \int^{y_n+\Delta y /2}_{y_n-\Delta y /2} \int^{t_n}_{t_n-T_{exp}} \ I(x,y,t)dt dy dx$$
where ($\Delta x$, $\Delta y$) are the camera pixel dimensions. Let us consider the case where the exposure time is equal to or shorter than the time it takes the rolling shutter to pass through a single camera row: $T_{exp} \leq 1/V_s.$ This case represents the highest achievable temporal measurement bandwidth, since exposure times longer than $1/V_s$ would average out dynamics that are faster than $T_{exp}.$ If the scene dynamics are slower than this short exposure time, Eq. (2) can be approximated by:
$$I_{cam}(m,n) \approx T_{exp}\int^{x_m+\Delta x /2}_{x_m-\Delta x /2} \int^{y_n+\Delta y /2}_{y_n-\Delta y /2} O(x/M,y/M,t_n = \frac{n}{V_s}) \underset{x,y} \circledast PSF(x,y) dy dx$$
This simple approximation allows temporal discretization with a temporal resolution of $T_{exp}.$ Finer temporal discretization is also possible, but for the sake of simplicity we will not consider it here. The case where $T_{exp} > 1/V_s$ could also be considered to increase the SNR, at the price of lower temporal bandwidth [13].
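A hedged sketch of the discretized forward model of Eq. (3) follows, assuming $T_{exp}=1/V_s$, one time-bin per camera row ($n_t = n_y$), and a single (not dual) rolling shutter; the constant factor $T_{exp}$ and the pixel-area integration are absorbed into the intensity units:

```python
import numpy as np
from scipy.signal import fftconvolve

def rolling_shutter_frame(scene_xyt, psf):
    """Eq. (3): simulate the single rolling-shutter frame. Row n of the output
    is row n of the PSF-convolved scene at time-bin t_n = n / V_s."""
    n_t, n_y, n_x = scene_xyt.shape
    psf = psf / psf.sum()
    frame = np.zeros((n_y, n_x))
    for n in range(n_y):
        blurred = fftconvolve(scene_xyt[min(n, n_t - 1)], psf, mode="same")
        frame[n, :] = blurred[n, :]             # only row n is read out at time t_n
    return frame
```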

The linear forward model described in Eq. (3) can be written as a matrix-vector multiplication of a convolution-like matrix, $\textbf {A}$, that describes the PSF and the rolling-shutter readout, with a vector $\textbf {v}$ that describes the dynamic scene $O(x,y,t)$ [16,17]:

$$\textbf{b} = \textbf{A}\textbf{v}$$
where $\textbf {b}$ is a vector of dimensions $[m=n_x\cdot n_y,1]$, representing the captured $n_x \times n_y$-pixel image, $\textbf {v}$ is a vector of dimensions $[n = n_x \cdot n_y \cdot n_t,1]$ that represents the 3D dynamic scene having $n_t$ temporal bins, and $\textbf {A}$ is the forward-model (Eq. (3)) matrix of dimensions $[m,n].$ The $i$-th row of the matrix $\textbf {A}$ describes the random encoding of the spatio-temporal scene to the $i$-th sensor pixel. The $j$-th column of $\textbf {A}$ describes the spreading of the intensity of each spatio-temporal scene pixel $O(x_j,y_j,t_j)$ over all camera pixels. The columns of $\textbf {A}$ are thus shifted, sampled representations of the PSF.
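Since $\textbf {A}$ here has roughly $10^4 \times 6\cdot10^5$ entries, it is usually not stored explicitly; a matrix-free sketch of the forward operator and an approximate adjoint (exact only up to the boundary conventions of the 'same'-mode convolution), under the same assumptions as the sketches above, could look like this:

```python
import numpy as np
from scipy.signal import fftconvolve

def A_forward(v, psf, shape):
    """b = A v: convolve each temporal slice of the scene with the PSF and
    keep only the row read out by the rolling shutter during that time-bin."""
    n_t, n_y, n_x = shape
    scene = v.reshape(shape)
    frame = np.zeros((n_y, n_x))
    for n in range(n_y):
        frame[n] = fftconvolve(scene[min(n, n_t - 1)], psf, mode="same")[n]
    return frame.ravel()

def A_adjoint(b, psf, shape):
    """Approximate A^T: embed each captured row back at its position, correlate
    it with the PSF (convolution with a flipped kernel), and accumulate the
    result in the corresponding time-bin."""
    n_t, n_y, n_x = shape
    frame = b.reshape(n_y, n_x)
    scene = np.zeros(shape)
    psf_flip = psf[::-1, ::-1]
    for n in range(n_y):
        row_only = np.zeros((n_y, n_x))
        row_only[n] = frame[n]
        scene[min(n, n_t - 1)] += fftconvolve(row_only, psf_flip, mode="same")
    return scene.ravel()
```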

In the framework of CS, the scene video, $\textbf {v}$, can be reconstructed from the acquired frame, $\textbf {b}$, by e.g. finding the solution to the convex-minimization problem [14]:

$$\tilde{v} = \underset{\textbf{v} \geq 0}{\operatorname{argmin}} ||\textbf{b}-\textbf{A}\textbf{v}||^2_2 + \tau || \boldsymbol{\Psi} \textbf{v}||_1$$
where $\boldsymbol{\Psi}$ is a linear transformation matrix mapping $\textbf {v}$ to a domain where it has a sparse representation. For example, $\boldsymbol{\Psi}$ can be a spatial discrete cosine transform [18], or a spatial and/or temporal derivative aimed at minimizing the spatio-temporal total variation [19]. We used $\boldsymbol{\Psi} = I$, i.e. regularizing for scene sparsity in space and time $(x,y,t)$. $\tau$ is a regularization parameter chosen according to the scene sparsity and the measurement signal-to-noise ratio (SNR): a large $\tau$ should be chosen for a sparse scene and/or a low SNR. According to CS theory, for a scene with a $k$-sparse representation, a high-fidelity reconstruction is possible if the number of measurements, $m,$ satisfies $m \geq \mathcal {O}(k \cdot \log(n/k))$ [20]. In our case, $m$ is the number of camera pixels, and $n$ is the number of spatio-temporal pixels in the video.
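The paper solves Eq. (5) with GPSR [24]; purely for illustration, a much simpler proximal-gradient (ISTA-style) stand-in for the case $\boldsymbol{\Psi}=I$ with a non-negativity constraint, using the matrix-free operators sketched above, could look like this:

```python
import numpy as np

def reconstruct(b, A, At, shape, tau=1e-2, n_iter=500, seed=0):
    """Proximal-gradient solver for  min_{v>=0} ||b - A v||_2^2 + tau ||v||_1,
    where A and At are callables applying the forward model and its adjoint."""
    n = int(np.prod(shape))
    rng = np.random.default_rng(seed)
    # crude power-iteration estimate of the largest eigenvalue of A^T A,
    # used to pick a safe gradient step size
    x = rng.random(n)
    for _ in range(10):
        x = At(A(x))
        x /= np.linalg.norm(x)
    lam = float(x @ At(A(x)))
    step = 1.0 / (2.0 * lam)                    # data-term gradient is 2*lam-Lipschitz
    v = np.zeros(n)
    for _ in range(n_iter):
        grad = 2.0 * At(A(v) - b)               # gradient of ||b - A v||^2
        v = np.maximum(v - step * (grad + tau), 0.0)   # non-negative soft-threshold
    return v.reshape(shape)
```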

3. Numerical results

The imaging procedure requires performing three steps: PSF calibration, acquisition, and reconstruction. For calibration, a point-object is used to record the system PSF. For an ideal thin diffuser placed at the pupil plane, the system is expected to be isoplanatic and a single PSF recording is sufficient. For realistic diffusers, the isoplanatic angle is the angular "memory effect" [21] of the diffuser, which for ground-glass or holographic diffusers is a couple of degrees [22,23]. Thus, the PSF calibration can be done by recording the PSF in a few isoplanatic patches. The acquisition (encoding) step is a simple recording of a single camera frame. Finally, the reconstruction (decoding) step is performed by running a conventional CS reconstruction algorithm to solve the inverse problem [24].
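Tying the three steps together, a hypothetical end-to-end use of the sketches above (file names, array shapes, and the choice of $\tau$ are illustrative placeholders, not the values used in the paper) might read:

```python
import numpy as np

psf   = np.load("psf_calibration.npy")          # step 1: PSF recorded from a point object
frame = np.load("captured_frame.npy")           # step 2: single rolling-shutter frame
n_y, n_x = frame.shape
shape = (n_y, n_y, n_x)                         # n_t = n_y time-bins (one per camera row)

A  = lambda v: A_forward(v, psf, shape)
At = lambda r: A_adjoint(r, psf, shape)
video = reconstruct(frame.ravel().astype(float), A, At, shape, tau=1e-2)   # step 3: decoding
```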

As a first step to confirm the proposed approach, we performed a numerical simulation using a single experimentally measured PSF, generated by a $1^{\circ }$ diffuser (Newport). The results of this simulation are presented in Fig. 2.

 

Fig. 2. Numerical simulation and analysis of reconstruction fidelity as a function of sparsity and SNR: (a) raw 108x108-pixel rolling-shutter sensor image simulated using an experimentally measured speckle PSF (b), and an SNR of 50 (17dB). The dynamic scene is composed of 54 time-bins with temporally and spatially varying digits. The reconstructed short video (c) is in excellent agreement with the ground truth (d). The different colors represent frames at different times. (e-h) reconstruction of frame 11 from the video: (e) captured rows of the $11^{th}$ time bin, (f) reconstructed frame, (g) zoom-in on (f), (h) ground truth for (g). (i-l) same as (e-h) for the $48^{th}$ frame. (m) Pearson correlation between the reconstruction and the ground truth for various SNR and scene spatio-temporal sparsity values ($k$). The scene in (a-l) has a spatio-temporal sparsity value of $k=679$.


The simulated scene is composed of 54 digits changing in space and time over 54 time-bins and 108x108 spatial pixels. The captured image (Fig. 2(a)) contains 108x108 pixels. Thus, m=11,664, n=629,856, and the number of non-zero entries in the scene is k=679. Poisson noise was added to the raw image pixels to simulate a measurement SNR of 50 (17dB), where the SNR is defined as the ratio of the signal mean-intensity to the standard deviation of the noise intensity. These values were chosen since they are representative of the SNR in our experiments (Fig. 3 and Fig. 4). To appropriately simulate the dual rolling shutter of our camera (Andor Zyla 4.2), two camera rows sample the scene simultaneously at each specific time, rolling from the top and bottom of the frame at earlier times toward the center of the frame at later times.
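Under the SNR definition above (signal mean divided by noise standard deviation), Poisson noise at SNR = 50 corresponds to a mean level of about $50^2 = 2500$ detected photons per pixel; one plausible way to add such noise in a simulation (our own helper, not the paper's code) is:

```python
import numpy as np

def add_poisson_noise(img, snr=50.0, seed=0):
    """Add Poisson noise at a target mean SNR: for Poisson statistics
    SNR = mean/std = sqrt(mean counts), so the image is scaled so that its
    mean corresponds to snr**2 photons, sampled, and rescaled back."""
    rng = np.random.default_rng(seed)
    scale = (snr ** 2) / img.mean()
    return rng.poisson(img * scale) / scale
```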

Under these conditions, a high-fidelity reconstruction of the 54 video frames at a resolution of 108x108 pixels was obtained and compared to the ground-truth video (Figs. 2(c) and (d)), yielding a $\times 60$ increase in acquisition speed compared to the raw camera frame-rate. Figures 2(e)-(h) and Figs. 2(i)-(l) show a zoomed-in comparison of two of the digits, one that appears early and is located at the bottom edge of the FoV (Figs. 2(e)-(h)), and one that is located at the center of the FoV and appears at a later time (Figs. 2(i)-(l)). As a result of the finite dimensions of the speckle PSF, the signal from objects located at the bottom or top edges of the FoV does not spread evenly over the entire sensor, and thus results in a lower number of effective samples (Fig. 2(e)) than an object at the center of the FoV (Fig. 2(i)). This effect may reduce the reconstruction fidelity at the edges of the FoV. It can be mitigated by choosing a larger PSF, albeit at the price of a reduced SNR at the center of the frame.

We performed additional investigations of the reconstruction fidelity at various SNR levels and scene sparsities by simulating three different scenes with varying sparsity levels and calculating, for each scene, the Pearson correlation coefficient between the simulated video and the full video reconstruction. The results of these investigations are presented in Fig. 2(m), confirming that the complexity of our scene is close to the bounds of CS reconstruction.
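For completeness, the fidelity metric used here is simply the Pearson correlation between the two flattened videos, which could be computed as in the following sketch (helper name is ours):

```python
import numpy as np

def fidelity(recon, ground_truth):
    """Pearson correlation coefficient between the flattened reconstructed
    video and the flattened ground-truth video."""
    return np.corrcoef(recon.ravel(), ground_truth.ravel())[0, 1]
```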

4. Experimental proof-of-principle

For a proof-of-principle experimental demonstration, the setup depicted in Figs. 3(a)-(c) was constructed. The setup is a simple imaging setup consisting of two 4f imaging telescopes with a $\times 22.5$ total de-magnification. An optical diffuser (10DKIT-C1 holographic diffuser, Newport) having a 1-degree scattering angle, without a dominant ballistic component (zero-order diffraction peak), is placed at the Fourier plane (Fig. 3(b)). The criterion for choosing the optical-diffuser scattering angle is that the PSF spread should be such that significant intensity from a point object located at the bottom of the field-of-view (FoV) reaches the entire camera sensor. A larger PSF would be wasteful in terms of the photon flux reaching the sensor; a smaller PSF would not encode the information from the lower and upper edges of the FoV to all camera rows. The experimental PSF was measured by imaging a $30 \mu m$-diameter pinhole located at the center of the FoV. The measurement matrix $\textbf {A}$ was built as a convolution matrix from the single measured PSF, assuming perfect shift-invariance of the PSF (i.e. infinite memory effect).

 

Fig. 3. Experimental proof of principle. (a) the imaged scene, composed of three rapidly modulated LEDs. (b) The optical setup is an 8-f imaging system with a 1$^{\circ }$ optical diffuser at the Fourier plane (4-f shown for simplicity). (c) the raw image is captured at a pixel resolution of 108x108 and a $9.6\mu s$ exposure time. (d-f) a 54-frame video is reconstructed from the single frame at a frame rate of 104,166 fps, at the same 108x108 pixel resolution. (d) three reconstructed frames. (e) time traces for three pixels at the LED locations. (f) spatio-temporal presentation of the reconstructed video. (g-i) same as (d-f) for direct, diffraction-limited imaging (g), with (i) temporal traces recorded with a fast photodiode. The direct image is taken by a camera with the diffuser removed, and with no temporal modulation. Scale bars: 2 mm.


As a dynamic, rapidly changing scene, three LEDs at a wavelength of $625nm$ (Thorlabs M625F2, fiber-coupled to 200$\mu$m fibers) were modulated at different frequencies of up to 52 kHz with different duty-cycles, using three independent function generators. An sCMOS camera (Andor Zyla 4.2) set to a fast readout mode of $V_s=f_s=104,166 \ rows/sec$ and an exposure time of $T_{exp}=1/f_s=9.6 \mu s$ was used to capture the scene. According to the Shannon-Nyquist sampling criterion, the maximum frequency our system can record without aliasing is $f_{max} = f_s/2 = 52,083 \ Hz$. A sample video with 54 frames, reconstructed at a frame rate of 104,166 fps and a pixel resolution of 108x108 pixels, is presented in Figs. 3(d)-(f). This video is compared to measurements taken with direct diffraction-limited imaging, captured with the same optical setup without the optical diffuser and with no temporal modulation. In addition, measurements of the temporal traces were taken with a fast photodiode (Figs. 3(g)-(i)). The reconstructed frame-rate is $\times 60$ higher than the highest native frame-rate of the camera at such a small ROI (1,600 fps), and $\times 1000$ higher than the camera frame-rate at full resolution. The diffraction-limited resolution of our system was chosen such that the speckled PSF (Fig. 2(b)) is Nyquist sampled by the camera pixels. Thus, the optical resolution of the system is $\delta x \approx \Delta x/M_x$, where $M_x=1/22.5$ is the magnification of the optical system. It is worth noting that the CS reconstruction enables a sub-Nyquist super-resolution reconstruction for sparse samples [25].

 

Fig. 4. Imaging a near-diffraction-limited temporally modulated object. (a) direct image of an object composed of eight closely spaced points. (b) same as (a) but with an optical iris limiting the NA of the imaging system. (c) average of all reconstruction frames, captured with the same limited NA as in (b). (d) cross-section comparison between (a-c), demonstrating the reconstruction resolution is as good as diffraction-limited imaging. (e) the raw captured 96x80-pixel image, (f) the speckle PSF used. (g) selected frames from the 40-frame reconstructed video. (h) the full time-traces for the eight object dots over all reconstructed frames. Scale bars: 2 mm.


To validate that the proposed approach preserves a diffraction-limited spatial resolution, we performed an additional experiment with a more complex multi-point object, with spacings that are close to the diffraction-limited resolution of the optical imaging system. For this experiment, the object was composed of eight diffraction-limited dots with a diameter of $\sim 250\mu m$ at spacings of $\sim 600\mu m$, placed in front of an LED (Thorlabs M625L4) that was modulated at a frequency of 20,833 Hz. The reconstructed 40-frame video, at a frame rate of 41,666 fps and a pixel resolution of 96x80 pixels, is presented in Fig. 4. To compare the reconstruction resolution to the diffraction-limited resolution, two direct images of the object were taken without the diffuser, with and without an iris diaphragm limiting the system NA (Figs. 4(a)-(b)). Comparing the direct images (Figs. 4(a)-(b)) to the sum of the frames in our reconstruction (Fig. 4(c)) shows that the reconstruction quality is comparable to the diffraction-limited image. Moreover, a quantitative analysis of a cross-section of these images (Fig. 4(d)) reveals that our method maintains a diffraction-limited spatial resolution. Cross-sections and sample frames from the full video reconstruction are given in Figs. 4(e)-(h). The raw image (Fig. 4(e)) was taken with the same sCMOS camera set to a slow readout mode of $V_s=f_s=41,666 \ rows/sec$ and an exposure time of $T_{exp}=1/f_s=24 \mu s$. The same optical diffuser is placed at the camera pupil plane, generating a random speckle PSF (Fig. 4(f)). The illuminating LED was modulated at a frequency of 20,833 Hz, which is the maximum frequency our system can record without aliasing at this slower readout mode. Three video frames are presented in Fig. 4(g), and the full time-traces for the eight object dots over all video frames are presented in Fig. 4(h).

5. Conclusion

We presented a method for high-speed imaging that relies on the rolling shutter effect. While placing an optical diffuser at the pupil plane results in a random speckle PSF, it does not affect the resolution limit of the imaging system, since the speckle grain-size (Fig. 2(b)) is diffraction-limited [26]. While in our demonstration we used a simple diffuser that produces a speckle pattern PSF obeying Rayleigh-statistics [26], an engineered phase-mask can be used to customize more efficient encoding or different intensity statistics [27].

As speckles can be very sensitive to the optical wavelength [28], broadband scenes may result in low-contrast raw images. This can be alleviated by engineered phase masks. However, the spectral sensitivity of speckles can be an interesting advantage for hyperspectral (x,y,t,$\lambda$) imaging [29,30]. Moreover, the suggested approach could also be extended to recovering depth information (x,y,t,z) by exploiting the natural orthogonality of speckles at different axial planes, as was recently demonstrated [17]. However, both of these extensions would come at the cost of more demanding sparsity constraints, since a higher-dimensional vector must be reconstructed from the same number of measurements (camera pixels). To maximize reconstruction fidelity, the PSF can be specifically engineered, as was recently demonstrated for the goal of 3D reconstruction using deep learning [31].

As in similar approaches, an inherent drawback of the presented approach is that the intensity from each spatial position in the scene is spread over a large number of camera pixels, reducing the raw signal-to-noise ratio. The approach is thus not ideal for high pixel-count imaging of low photon-flux, non-sparse scenes. Nonetheless, Antipa et al. [13] demonstrated that spatially-complex scenes can be reconstructed from a low-contrast single frame. This was achieved with an increased exposure time and considerably lower frame-rates than those considered in our experiments (4,500 fps vs. 100,000 fps). The high frame-rate of our reconstruction limits the spatial complexity of the scenes, as the total spatio-temporal sparsity and number of measurements need to satisfy the CS theory condition $m \geq \mathcal {O}(k \cdot \log(n/k))$. The goal of our work is to demonstrate the highest frame-rate possible using conventional rolling-shutter sensors, by a simple pupil-plane manipulation. The ability to reconstruct spatially-sparse, fast-dynamics scenes may be most valuable for high-speed simultaneous tracking of multiple particles. A demonstration of such an application is beyond the scope of the current work.

The CS reconstruction algorithm used for our demonstration is gradient projection for sparse reconstruction (GPSR) [24], which takes approximately three hours to reconstruct 54 video frames at a pixel resolution of 108x108 on an Intel i7 CPU. The use of a GPU and faster reconstruction algorithms can considerably reduce the run-time. In addition, deep-learning-based reconstruction frameworks can also be used [32,33] to improve the reconstruction run-time and potentially the reconstruction fidelity. Finally, the approach can also be extended to spatially-coherent scenes, e.g. for high-speed holographic video [34] or optical coherence tomography (OCT) [35], by considering holographic detection of the fields rather than intensity-only detection.

Funding

European Research Council (ERC) Horizon 2020 research and innovation program (677909); Azrieli Foundation; Ministry of Science, Technology and Space (712845); Human Frontier Science Program (RGP0015/2016); Israel Science Foundation (1361/18).

Disclosures

The authors declare that there are no conflicts of interest related to this article.

References

1. H. Shen, L. J. Tauzin, R. Baiyasi, W. Wang, N. Moringo, B. Shuang, and C. F. Landes, “Single particle tracking: from theory to biophysical applications,” Chem. Rev. 117(11), 7331–7376 (2017). [CrossRef]  

2. H. Mikami, L. Gao, and K. Goda, “Ultrafast optical imaging technology: principles and applications of emerging methods,” Nanophotonics 5(4), 497–509 (2016). [CrossRef]  

3. J. Liang and L. V. Wang, “Single-shot ultrafast optical imaging,” Optica 5(9), 1113–1127 (2018). [CrossRef]  

4. R. Raskar, A. Agrawal, and J. Tumblin, “Coded exposure photography: motion deblurring using fluttered shutter,” in ACM transactions on graphics (TOG), (ACM, 2006), pp. 795–804.

5. P. Llull, X. Liao, X. Yuan, J. Yang, D. Kittle, L. Carin, G. Sapiro, and D. J. Brady, “Coded aperture compressive temporal imaging,” Opt. Express 21(9), 10526–10545 (2013). [CrossRef]  

6. Y. Shi, Y. Liu, W. Sheng, J. Wang, and T. Wu, “Speckle rotation decorrelation based single-shot video through scattering media,” Opt. Express 27(10), 14567–14576 (2019). [CrossRef]  

7. L. Gao, J. Liang, C. Li, and L. V. Wang, “Single-shot compressed ultrafast photography at one hundred billion frames per second,” Nature 516(7529), 74–77 (2014). [CrossRef]  

8. R. Koller, L. Schmid, N. Matsuda, T. Niederberger, L. Spinoulas, O. Cossairt, G. Schuster, and A. K. Katsaggelos, “High spatio-temporal resolution video with compressed sensing,” Opt. Express 23(12), 15992–16007 (2015). [CrossRef]  

9. J. Gu, Y. Hitomi, T. Mitsunaga, and S. Nayar, “Coded rolling shutter photography: Flexible space-time sampling,” in 2010 IEEE International Conference on Computational Photography (ICCP), (IEEE, 2010), pp. 1–8.

10. Y. Hitomi, J. Gu, M. Gupta, T. Mitsunaga, and S. K. Nayar, “Video from a single coded exposure photograph using a learned over-complete dictionary,” in 2011 International Conference on Computer Vision, (IEEE, 2011), pp. 287–294.

11. Y. Sun, X. Yuan, and S. Pang, “Compressive high-speed stereo imaging,” Opt. Express 25(15), 18182–18190 (2017). [CrossRef]  

12. X. Yuan and S. Pang, “Compressive video microscope via structured illumination,” in 2016 IEEE International Conference on Image Processing (ICIP), (IEEE, 2016), pp. 1589–1593.

13. N. Antipa, P. Oare, E. Bostan, R. Ng, and L. Waller, “Video from stills: Lensless imaging with rolling shutter,” in 2019 IEEE International Conference on Computational Photography (ICCP), (IEEE, 2019), pp. 1–8.

14. D. L. Donoho, “Compressed sensing,” IEEE Trans. Inf. Theory 52(4), 1289–1306 (2006). [CrossRef]  

15. C.-K. Liang, L.-W. Chang, and H. H. Chen, “Analysis and compensation of rolling shutter effect,” IEEE Trans. on Image Process. 17(8), 1323–1330 (2008). [CrossRef]  

16. N. Antipa, G. Kuo, R. Heckel, B. Mildenhall, E. Bostan, R. Ng, and L. Waller, “Diffusercam: lensless single-exposure 3d imaging,” Optica 5(1), 1–9 (2018). [CrossRef]  

17. M. Pascucci, S. Ganesan, A. Tripathi, O. Katz, V. Emiliani, and M. Guillon, “Compressive three-dimensional super-resolution microscopy with speckle-saturated fluorescence excitation,” Nat. Commun. 10(1), 1327 (2019). [CrossRef]  

18. O. Katz, Y. Bromberg, and Y. Silberberg, “Compressive ghost imaging,” Appl. Phys. Lett. 95(13), 131110 (2009). [CrossRef]  

19. S. H. Chan, R. Khoshabeh, K. B. Gibson, P. E. Gill, and T. Q. Nguyen, “An augmented lagrangian method for total variation video restoration,” IEEE Trans. on Image Process. 20(11), 3097–3111 (2011). [CrossRef]  

20. E. J. Candes and T. Tao, “Near-optimal signal recovery from random projections: Universal encoding strategies?” IEEE Trans. Inf. Theory 52(12), 5406–5425 (2006). [CrossRef]  

21. I. Freund, M. Rosenbluh, and S. Feng, “Memory effects in propagation of optical waves through disordered media,” Phys. Rev. Lett. 61(20), 2328–2331 (1988). [CrossRef]  

22. O. Katz, E. Small, and Y. Silberberg, “Looking around corners and through thin turbid layers in real time with scattered incoherent light,” Nat. Photonics 6(8), 549–553 (2012). [CrossRef]  

23. J. Bertolotti, E. G. Van Putten, C. Blum, A. Lagendijk, W. L. Vos, and A. P. Mosk, “Non-invasive imaging through opaque scattering layers,” Nature 491(7423), 232–234 (2012). [CrossRef]  

24. M. A. Figueiredo, R. D. Nowak, and S. J. Wright, “Gradient projection for sparse reconstruction: Application to compressed sensing and other inverse problems,” IEEE J. Sel. Top. Signal Process. 1(4), 586–597 (2007). [CrossRef]  

25. S. Gazit, A. Szameit, Y. C. Eldar, and M. Segev, “Super-resolution and reconstruction of sparse sub-wavelength images,” Opt. Express 17(26), 23920–23946 (2009). [CrossRef]  

26. J. W. Goodman, Speckle phenomena in optics: theory and applications (Roberts and Company Publishers, 2007).

27. N. Bender, H. Yılmaz, Y. Bromberg, and H. Cao, “Customizing speckle intensity statistics,” Optica 5(5), 595–600 (2018). [CrossRef]  

28. B. Redding, S. F. Liew, R. Sarma, and H. Cao, “Compact spectrometer based on a disordered photonic chip,” Nat. Photonics 7(9), 746–751 (2013). [CrossRef]  

29. M. A. Golub, A. Averbuch, M. Nathan, V. A. Zheludev, J. Hauser, S. Gurevitch, R. Malinsky, and A. Kagan, “Compressed sensing snapshot spectral imaging by a regular digital camera with an added optical diffuser,” Appl. Opt. 55(3), 432–443 (2016). [CrossRef]  

30. S. K. Sahoo, D. Tang, and C. Dang, “Single-shot multispectral imaging with a monochromatic camera,” Optica 4(10), 1209–1213 (2017). [CrossRef]  

31. E. Nehme, D. Freedman, R. Gordon, B. Ferdman, L. E. Weiss, O. Alalouf, T. Naor, R. Orange, T. Michaeli, and Y. Shechtman, “Deepstorm3d: dense 3d localization microscopy and psf design by deep learning,” Nat. Methods 17(7), 734–740 (2020). [CrossRef]  

32. M. Kellman, M. Lustig, and L. Waller, “How to do physics-based learning,” arXiv preprint arXiv:2005.13531 (2020).

33. M. Qiao, Z. Meng, J. Ma, and X. Yuan, “Deep learning for video compressive sensing,” APL Photonics 5(3), 030801 (2020). [CrossRef]  

34. Z. Wang, L. Spinoulas, K. He, L. Tian, O. Cossairt, A. K. Katsaggelos, and H. Chen, “Compressive holographic video,” Opt. Express 25(1), 250–262 (2017). [CrossRef]  

35. D. Huang, E. A. Swanson, C. P. Lin, J. S. Schuman, W. G. Stinson, W. Chang, M. R. Hee, T. Flotte, K. Gregory, C. A. Puliafito, and J. G. Fujimoto, “Optical coherence tomography,” Science 254(5035), 1178–1181 (1991). [CrossRef]  

[Crossref]

Sun, Y.

Swanson, E. A.

D. Huang, E. A. Swanson, C. P. Lin, J. S. Schuman, W. G. Stinson, W. Chang, M. R. Hee, T. Flotte, K. Gregory, C. A. Puliafito, and J. G. Fujimoto, “Optical coherence tomography,” Science 254(5035), 1178–1181 (1991).
[Crossref]

Szameit, A.

Tang, D.

Tao, T.

E. J. Candes and T. Tao, “Near-optimal signal recovery from random projections: Universal encoding strategies?” IEEE Trans. Inf. Theory 52(12), 5406–5425 (2006).
[Crossref]

Tauzin, L. J.

H. Shen, L. J. Tauzin, R. Baiyasi, W. Wang, N. Moringo, B. Shuang, and C. F. Landes, “Single particle tracking: from theory to biophysical applications,” Chem. Rev. 117(11), 7331–7376 (2017).
[Crossref]

Tian, L.

Tripathi, A.

M. Pascucci, S. Ganesan, A. Tripathi, O. Katz, V. Emiliani, and M. Guillon, “Compressive three-dimensional super-resolution microscopy with speckle-saturated fluorescence excitation,” Nat. Commun. 10(1), 1327 (2019).
[Crossref]

Tumblin, J.

R. Raskar, A. Agrawal, and J. Tumblin, “Coded exposure photography: motion deblurring using fluttered shutter,” in ACM transactions on graphics (TOG), (ACM, 2006), pp. 795–804.

Van Putten, E. G.

J. Bertolotti, E. G. Van Putten, C. Blum, A. Lagendijk, W. L. Vos, and A. P. Mosk, “Non-invasive imaging through opaque scattering layers,” Nature 491(7423), 232–234 (2012).
[Crossref]

Vos, W. L.

J. Bertolotti, E. G. Van Putten, C. Blum, A. Lagendijk, W. L. Vos, and A. P. Mosk, “Non-invasive imaging through opaque scattering layers,” Nature 491(7423), 232–234 (2012).
[Crossref]

Waller, L.

N. Antipa, G. Kuo, R. Heckel, B. Mildenhall, E. Bostan, R. Ng, and L. Waller, “Diffusercam: lensless single-exposure 3d imaging,” Optica 5(1), 1–9 (2018).
[Crossref]

N. Antipa, P. Oare, E. Bostan, R. Ng, and L. Waller, “Video from stills: Lensless imaging with rolling shutter,” in 2019 IEEE International Conference on Computational Photography (ICCP), (IEEE, 2019), pp. 1–8.

M. Kellman, M. Lustig, and L. Waller, “How to do physics-based learning,” arXiv preprint arXiv:2005.13531 (2020).

Wang, J.

Wang, L. V.

J. Liang and L. V. Wang, “Single-shot ultrafast optical imaging,” Optica 5(9), 1113–1127 (2018).
[Crossref]

L. Gao, J. Liang, C. Li, and L. V. Wang, “Single-shot compressed ultrafast photography at one hundred billion frames per second,” Nature 516(7529), 74–77 (2014).
[Crossref]

Wang, W.

H. Shen, L. J. Tauzin, R. Baiyasi, W. Wang, N. Moringo, B. Shuang, and C. F. Landes, “Single particle tracking: from theory to biophysical applications,” Chem. Rev. 117(11), 7331–7376 (2017).
[Crossref]

Wang, Z.

Weiss, L. E.

E. Nehme, D. Freedman, R. Gordon, B. Ferdman, L. E. Weiss, O. Alalouf, T. Naor, R. Orange, T. Michaeli, and Y. Shechtman, “Deepstorm3d: dense 3d localization microscopy and psf design by deep learning,” Nat. Methods 17(7), 734–740 (2020).
[Crossref]

Wright, S. J.

M. A. Figueiredo, R. D. Nowak, and S. J. Wright, “Gradient projection for sparse reconstruction: Application to compressed sensing and other inverse problems,” IEEE J. Sel. Top. Signal Process. 1(4), 586–597 (2007).
[Crossref]

Wu, T.

Yang, J.

Yilmaz, H.

Yuan, X.

M. Qiao, Z. Meng, J. Ma, and X. Yuan, “Deep learning for video compressive sensing,” APL Photonics 5(3), 030801 (2020).
[Crossref]

Y. Sun, X. Yuan, and S. Pang, “Compressive high-speed stereo imaging,” Opt. Express 25(15), 18182–18190 (2017).
[Crossref]

P. Llull, X. Liao, X. Yuan, J. Yang, D. Kittle, L. Carin, G. Sapiro, and D. J. Brady, “Coded aperture compressive temporal imaging,” Opt. Express 21(9), 10526–10545 (2013).
[Crossref]

X. Yuan and S. Pang, “Compressive video microscope via structured illumination,” in 2016 IEEE International Conference on Image Processing (ICIP), (IEEE, 2016), pp. 1589–1593.

Zheludev, V. A.

APL Photonics (1)

M. Qiao, Z. Meng, J. Ma, and X. Yuan, “Deep learning for video compressive sensing,” APL Photonics 5(3), 030801 (2020).
[Crossref]

Appl. Opt. (1)

Appl. Phys. Lett. (1)

O. Katz, Y. Bromberg, and Y. Silberberg, “Compressive ghost imaging,” Appl. Phys. Lett. 95(13), 131110 (2009).
[Crossref]

Chem. Rev. (1)

H. Shen, L. J. Tauzin, R. Baiyasi, W. Wang, N. Moringo, B. Shuang, and C. F. Landes, “Single particle tracking: from theory to biophysical applications,” Chem. Rev. 117(11), 7331–7376 (2017).
[Crossref]

IEEE J. Sel. Top. Signal Process. (1)

M. A. Figueiredo, R. D. Nowak, and S. J. Wright, “Gradient projection for sparse reconstruction: Application to compressed sensing and other inverse problems,” IEEE J. Sel. Top. Signal Process. 1(4), 586–597 (2007).
[Crossref]

IEEE Trans. Inf. Theory (2)

E. J. Candes and T. Tao, “Near-optimal signal recovery from random projections: Universal encoding strategies?” IEEE Trans. Inf. Theory 52(12), 5406–5425 (2006).
[Crossref]

D. L. Donoho, “Compressed sensing,” IEEE Trans. Inf. Theory 52(4), 1289–1306 (2006).
[Crossref]

IEEE Trans. on Image Process. (2)

C.-K. Liang, L.-W. Chang, and H. H. Chen, “Analysis and compensation of rolling shutter effect,” IEEE Trans. on Image Process. 17(8), 1323–1330 (2008).
[Crossref]

S. H. Chan, R. Khoshabeh, K. B. Gibson, P. E. Gill, and T. Q. Nguyen, “An augmented lagrangian method for total variation video restoration,” IEEE Trans. on Image Process. 20(11), 3097–3111 (2011).
[Crossref]

Nanophotonics (1)

H. Mikami, L. Gao, and K. Goda, “Ultrafast optical imaging technology: principles and applications of emerging methods,” Nanophotonics 5(4), 497–509 (2016).
[Crossref]

Nat. Commun. (1)

M. Pascucci, S. Ganesan, A. Tripathi, O. Katz, V. Emiliani, and M. Guillon, “Compressive three-dimensional super-resolution microscopy with speckle-saturated fluorescence excitation,” Nat. Commun. 10(1), 1327 (2019).
[Crossref]

Nat. Methods (1)

E. Nehme, D. Freedman, R. Gordon, B. Ferdman, L. E. Weiss, O. Alalouf, T. Naor, R. Orange, T. Michaeli, and Y. Shechtman, “Deepstorm3d: dense 3d localization microscopy and psf design by deep learning,” Nat. Methods 17(7), 734–740 (2020).
[Crossref]

Nat. Photonics (2)

B. Redding, S. F. Liew, R. Sarma, and H. Cao, “Compact spectrometer based on a disordered photonic chip,” Nat. Photonics 7(9), 746–751 (2013).
[Crossref]

O. Katz, E. Small, and Y. Silberberg, “Looking around corners and through thin turbid layers in real time with scattered incoherent light,” Nat. Photonics 6(8), 549–553 (2012).
[Crossref]

Nature (2)

J. Bertolotti, E. G. Van Putten, C. Blum, A. Lagendijk, W. L. Vos, and A. P. Mosk, “Non-invasive imaging through opaque scattering layers,” Nature 491(7423), 232–234 (2012).
[Crossref]

L. Gao, J. Liang, C. Li, and L. V. Wang, “Single-shot compressed ultrafast photography at one hundred billion frames per second,” Nature 516(7529), 74–77 (2014).
[Crossref]

Opt. Express (6)

Optica (4)

Phys. Rev. Lett. (1)

I. Freund, M. Rosenbluh, and S. Feng, “Memory effects in propagation of optical waves through disordered media,” Phys. Rev. Lett. 61(20), 2328–2331 (1988).
[Crossref]

Science (1)

D. Huang, E. A. Swanson, C. P. Lin, J. S. Schuman, W. G. Stinson, W. Chang, M. R. Hee, T. Flotte, K. Gregory, C. A. Puliafito, and J. G. Fujimoto, “Optical coherence tomography,” Science 254(5035), 1178–1181 (1991).
[Crossref]

Other (7)

M. Kellman, M. Lustig, and L. Waller, “How to do physics-based learning,” arXiv preprint arXiv:2005.13531 (2020).

J. W. Goodman, Speckle phenomena in optics: theory and applications (Roberts and Company Publishers, 2007).

R. Raskar, A. Agrawal, and J. Tumblin, “Coded exposure photography: motion deblurring using fluttered shutter,” in ACM transactions on graphics (TOG), (ACM, 2006), pp. 795–804.

J. Gu, Y. Hitomi, T. Mitsunaga, and S. Nayar, “Coded rolling shutter photography: Flexible space-time sampling,” in 2010 IEEE International Conference on Computational Photography (ICCP), (IEEE, 2010), pp. 1–8.

Y. Hitomi, J. Gu, M. Gupta, T. Mitsunaga, and S. K. Nayar, “Video from a single coded exposure photograph using a learned over-complete dictionary,” in 2011 International Conference on Computer Vision, (IEEE, 2011), pp. 287–294.

X. Yuan and S. Pang, “Compressive video microscope via structured illumination,” in 2016 IEEE International Conference on Image Processing (ICIP), (IEEE, 2016), pp. 1589–1593.

N. Antipa, P. Oare, E. Bostan, R. Ng, and L. Waller, “Video from stills: Lensless imaging with rolling shutter,” in 2019 IEEE International Conference on Computational Photography (ICCP), (IEEE, 2019), pp. 1–8.

Cited By

OSA participates in Crossref's Cited-By Linking service. Citing articles from OSA journals and other participating publishers are listed here.

Alert me when this article is cited.


Figures (4)

Fig. 1. Principle. (a) In a conventional rolling-shutter readout, the sensor rows are sampled consecutively at different times, yielding a sampling-rate that is orders of magnitude faster than the full frame-rate. (b) The fast sampling-rate is exploited for video capture by spreading the light from each point in the imaged scene (left) over all camera rows, by an optical diffuser placed at the camera pupil plane, generating a random speckle PSF. (b-f) An illustrated example of imaging a rapidly moving point source: (c) at each time $t_i$ the moving source projects the speckle PSF at a different position on the sensor, and a different row (in white) is sampled. Each row in the full captured camera frame (d) reflects the scene at a different time. (e) A compressed-sensing algorithm reconstructs the video (f) from the single captured camera frame.
Fig. 2. Numerical simulation and analysis of reconstruction fidelity as a function of sparsity and SNR. (a) Raw 108x108-pixel rolling-shutter sensor image, simulated using an experimentally measured speckle PSF (b) at an SNR of 50 (17 dB). The dynamic scene is composed of 54 time bins with temporally and spatially varying digits. The reconstructed short video (c) is in excellent agreement with the ground truth (d); the different colors represent frames at different times. (e-h) Reconstruction of frame 11 of the video: (e) captured rows of the $11^{th}$ time bin, (f) reconstructed frame, (g) zoom-in on (f), (h) ground truth corresponding to (g). (i-l) Same as (e-h) for the $48^{th}$ frame. (m) Pearson correlation between the reconstruction and the ground truth for various SNR and scene spatio-temporal sparsity values $k$. The scene in (a-l) has a spatio-temporal sparsity of $k=679$.
Fig. 3. Experimental proof of principle. (a) The imaged scene, composed of three rapidly modulated LEDs. (b) The optical setup: an 8-f imaging system with a 1$^{\circ}$ optical diffuser at the Fourier plane (a 4-f system is shown for simplicity). (c) The raw image, captured at a pixel resolution of 108x108 with a $9.6\,\mu s$ exposure time. (d-f) A 54-frame video is reconstructed from the single frame at a frame rate of 104,166 fps (see the note following the captions), at the same 108x108 pixel resolution: (d) three reconstructed frames, (e) time traces for three pixels at the LED locations, (f) spatio-temporal presentation of the reconstructed video. (g-i) Same as (d-f) for direct, diffraction-limited imaging, with the temporal traces in (i) recorded by a fast photodiode. The direct image was taken with the diffuser removed and without temporal modulation. Scale bars: 2 mm.
Fig. 4. Imaging a near-diffraction-limited, temporally modulated object. (a) Direct image of an object composed of eight closely spaced points. (b) Same as (a), but with an optical iris limiting the NA of the imaging system. (c) Average of all reconstructed frames, captured with the same limited NA as in (b). (d) Cross-section comparison of (a-c), demonstrating that the reconstruction resolution matches that of diffraction-limited imaging. (e) The raw captured 96x80-pixel image. (f) The speckle PSF used. (g) Selected frames from the 40-frame reconstructed video. (h) Full time traces of the eight object points over all reconstructed frames. Scale bars: 2 mm.
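A brief consistency check on the quoted rates (assuming, as the Fig. 3 caption suggests, that each reconstructed frame spans one $9.6\,\mu s$ row-exposure interval):

$$\text{frame rate} = \frac{1}{T_{exp}} = \frac{1}{9.6\times 10^{-6}\,\text{s}} \approx 1.04\times 10^{5}\ \text{fps},$$

in agreement with the 104,166 fps reported for the reconstructed video in Fig. 3.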

Equations (5)


$$I(x,y,t) = O(x/M,\,y/M,\,t) \ast_{x,y} \mathrm{PSF}(x,y) \tag{1}$$

$$I_{cam}(m,n) = \int_{x_m-\Delta x/2}^{x_m+\Delta x/2} \int_{y_n-\Delta y/2}^{y_n+\Delta y/2} \int_{t_n-T_{exp}}^{t_n} I(x,y,t)\, dt\, dy\, dx \tag{2}$$

$$I_{cam}(m,n) \approx T_{exp} \int_{x_m-\Delta x/2}^{x_m+\Delta x/2} \int_{y_n-\Delta y/2}^{y_n+\Delta y/2} O(x/M,\,y/M,\,t_n = nV_s) \ast_{x,y} \mathrm{PSF}(x,y)\, dy\, dx \tag{3}$$

$$b = Av \tag{4}$$

$$\tilde{v} = \underset{v\geq 0}{\operatorname{argmin}}\; \|b - Av\|_2^2 + \tau\,\|\Psi v\|_1 \tag{5}$$
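To make the model concrete, below is a minimal numerical sketch (in Python/NumPy, not the authors' code) of the discretized forward model of Eqs. (1)-(4) and of the sparse recovery of Eq. (5). The scene is represented as a stack of time bins; each bin is convolved with the speckle PSF, and only the rows read out during that bin contribute to the single measured frame. The reconstruction uses a basic projected-ISTA (proximal-gradient) iteration with $\Psi$ taken as the identity; circular boundary conditions are assumed so that the adjoint is exact, and the function names (`forward`, `adjoint`, `reconstruct`) are illustrative rather than taken from the paper.

```python
import numpy as np

def _pad_psf(psf, shape):
    """Zero-pad the PSF to the frame size and shift its centre to the origin,
    so FFT-based (circular) convolution approximates Eq. (1)."""
    padded = np.zeros(shape)
    h, w = psf.shape
    padded[:h, :w] = psf
    return np.roll(padded, (-(h // 2), -(w // 2)), axis=(0, 1))

def forward(scene, psf_fft, rows_per_bin):
    """A·v of Eq. (4): blur each time bin with the speckle PSF (Eq. (1)) and keep
    only the rows exposed during that bin (Eqs. (2)-(3)) in the single frame."""
    n_bins, H, W = scene.shape
    frame = np.zeros((H, W))
    for n in range(n_bins):
        blurred = np.real(np.fft.ifft2(np.fft.fft2(scene[n]) * psf_fft))
        r0, r1 = n * rows_per_bin, (n + 1) * rows_per_bin
        frame[r0:r1] = blurred[r0:r1]
    return frame

def adjoint(frame, psf_fft, n_bins, rows_per_bin):
    """A^T·b: scatter each block of rows back to its time bin, then correlate with the PSF."""
    H, W = frame.shape
    scene = np.zeros((n_bins, H, W))
    for n in range(n_bins):
        r0, r1 = n * rows_per_bin, (n + 1) * rows_per_bin
        masked = np.zeros((H, W))
        masked[r0:r1] = frame[r0:r1]
        scene[n] = np.real(np.fft.ifft2(np.fft.fft2(masked) * np.conj(psf_fft)))
    return scene

def reconstruct(frame, psf, n_bins, tau=1e-3, n_iter=300):
    """Projected ISTA for Eq. (5) with Psi = identity:
    minimize ||b - A v||_2^2 + tau ||v||_1 subject to v >= 0."""
    H, W = frame.shape
    rows_per_bin = H // n_bins                      # assumes the rows divide evenly into bins
    psf_fft = np.fft.fft2(_pad_psf(psf, (H, W)))
    step = 1.0 / max((np.abs(psf_fft) ** 2).max(), 1e-12)   # 1/L, with L >= ||A||^2
    v = np.zeros((n_bins, H, W))
    for _ in range(n_iter):
        residual = forward(v, psf_fft, rows_per_bin) - frame
        v = v - step * adjoint(residual, psf_fft, n_bins, rows_per_bin)  # gradient step on the data term
        v = np.maximum(v - step * tau, 0.0)          # soft-threshold + non-negativity (prox step)
    return v
```

Because the speckle PSF spreads every object point over the whole sensor, each small block of rows records a compressed, randomly encoded projection of the entire scene at its own time instant; the sparsity-promoting term in Eq. (5) is what makes recovering a full frame from those few rows feasible.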

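Continuing the sketch above, a short usage example with hypothetical numbers (a 108x108 frame split into 54 time bins, as in Fig. 2, and a random positive pattern standing in for a measured speckle PSF) simulates a point source sweeping across the field of view, forms the single rolling-shutter frame, reconstructs the video, and scores it with the Pearson correlation used as the fidelity metric in Fig. 2:

```python
rng = np.random.default_rng(0)
H = W = 108
n_bins = 54                                   # two sensor rows per reconstructed frame

# Hypothetical stand-in for a measured speckle PSF: a sparse random positive pattern.
psf = rng.random((31, 31)) ** 6
psf /= psf.sum()

# Sparse dynamic scene: a point source sweeping across the field of view.
scene = np.zeros((n_bins, H, W))
for n in range(n_bins):
    scene[n, H // 2, 20 + n] = 1.0

psf_fft = np.fft.fft2(_pad_psf(psf, (H, W)))
frame = forward(scene, psf_fft, rows_per_bin=H // n_bins)   # the single measured frame
video = reconstruct(frame, psf, n_bins)                     # (54, 108, 108) video estimate

# Fidelity metric of Fig. 2: Pearson correlation with the ground truth.
pearson = np.corrcoef(video.ravel(), scene.ravel())[0, 1]
print(f"Pearson correlation: {pearson:.3f}")
```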