Abstract

Optical phase-space functions describe spatial and angular information simultaneously; examples of optical phase-space functions include light fields in ray optics and Wigner functions in wave optics. Measurement of phase-space enables digital refocusing, aberration removal and 3D reconstruction. High-resolution capture of 4D phase-space datasets is, however, challenging. Previous scanning approaches are slow, light inefficient and do not achieve diffraction-limited resolution. Here, we propose a multiplexed method that solves these problems. We use a spatial light modulator (SLM) in the pupil plane of a microscope in order to sequentially pattern multiplexed coded apertures while capturing images in real space. Then, we reconstruct the 3D fluorescence distribution of our sample by solving an inverse problem via regularized least squares with a proximal accelerated gradient descent solver. We experimentally reconstruct a 101 Megavoxel 3D volume (1010×510×500µm with NA 0.4), demonstrating improved acquisition time, light throughput and resolution compared to scanning aperture methods. Our flexible patterning scheme further allows sparsity in the sample to be exploited for reduced data capture.

© 2017 Optical Society of America

1. Introduction

3D fluorescence microscopy is a critical tool for bioimaging, since most samples are thick and can be functionally labeled. High-resolution 3D imaging typically uses confocal [1], two-photon [2] or light sheet microscopy [3]. Because these methods all involve scanning, they are inherently limited in terms of speed or volume. Light field microscopy [4,5], on the other hand, achieves single-shot 3D capture, but sacrifices resolution. High resolution and single-shot capture are possible with coded aperture microscopy [6–9]; however, this requires an extremely sparse sample. Here, we describe a multi-shot coded aperture microscopy method for high-resolution imaging of large and dense volumes with efficient data capture.

Our work fits into the framework of phase-space optics, which is the general term for any space-angle description of light [10–13]. Light fields, for example, are phase-space functions for ray optics with incoherent light. The light field describes each ray’s position and angle at a particular plane, which can be used for digital refocusing [14], 3D imaging, aberration correction [15] or imaging in scattering [16]. In microscopy, wave-optical effects become prominent and so ray optics no longer suffices if one wishes to achieve diffraction-limited resolution. Hence, we use Wigner functions, which are the wave-optical analog to light fields [12,13].

The Wigner function describes spatial and spatial frequency (propagation angle) information for a wave-field of arbitrary coherence. It converges to the light field in the limit of incoherent ray optics [13]. Capturing Wigner functions is akin to spatial coherence imaging [17,18], since they contain the same information as mutual intensity and coherent mode decompositions [10,19]. In the case of fluorescence microscopy, where the object is a set of incoherent emitters, Wigner representations define the incoherent 3D Optical Transfer Function (OTF) [10].

The 4D nature of phase space (two spatial and two spatial-frequency dimensions) poses significant measurement challenges. Single-shot schemes (e.g. lenslet arrays [20]) must distribute the 4D function across a 2D sensor, severely restricting resolution. Scanning aperture methods [21–23] are slow (∼minutes) and light inefficient. We seek here a flexible trade-off between capture time and resolution, with the ability to exploit sparsity for further data reduction.

Our experimental setup consists of a widefield fluorescence microscope with a spatial light modulator (SLM) in Fourier space (the pupil plane). The SLM implements a series of quasi-random coded aperture patterns, while collecting real space images for each (Fig. 1) [24]. The recovered 4D phase space has very large pixel count (the product of the pixel counts of the SLM and the sensor) ∼10^12. Compared to scanning aperture methods [21,22], the new scheme has three major benefits. First, it achieves better resolution by capturing high-frequency interference effects (high-order correlations). This enables diffraction-limited resolution at the microscope’s full numerical aperture (NA). Second, we achieve higher light throughput by opening up more of the pupil in each capture; this can be traded for shorter exposure time and faster acquisition. Third, the multiplexed nature of the measurements means that we can employ compressed sensing approaches (when samples are sparse) in order to capture fewer images without sacrificing resolution. This means that the number of images required scales not with the reconstructed number of resolved voxels, but rather with the sparsity of the volume.

Fig. 1 Phase-space multiplexing for 3D fluorescence microscopy. The microscope (a 4 f system) uses a Spatial Light Modulator (SLM) in Fourier space to implement multiple coded apertures, while capturing 2D intensity images in real space for each. Our wave-optical forward model A relates the object c to the measured images ymeas for each pattern. The inverse problem recovers the object, subject to sparsity priors where applicable. [See Visualization 1]

Our method can be thought of as a multi-shot coded aperture scheme for diffraction-limited 3D fluorescence microscopy. It is analogous to coded aperture photography [25–28]; however, we use a wave-optical model to account for diffraction effects, so the intensity measurements are nonlinear in the complex field. Fluorescence imaging allows a simplification of the forward model, since each fluorophore is spatially coherent with itself but incoherent with all other fluorophores. Our reconstruction algorithm then becomes a large-scale inverse problem akin to multi-image 3D deconvolution, formulated as a convex ℓ1-regularized least-squares problem and solved by a fast iterative shrinkage-thresholding algorithm (FISTA) [29].

2. Theory and methods

We use the Wigner function (WF) and its close relative, the Mutual Intensity (MI) function, to describe our multiplexing scheme. The WF describes position and spatial frequency, where spatial frequency can be thought of as the direction of ray propagation in geometrical optics. The concept was introduced by Walther [30] to describe the connection between ray intensity and wave theory. Here we assume that the light is quasi-monochromatic, temporally stationary and ergodic, so the Wigner function is [10,11]:

$$W(u,r) \equiv \int \left\langle \tilde{E}^*\!\left(u - \tfrac{\Delta u}{2}\right) \tilde{E}\!\left(u + \tfrac{\Delta u}{2}\right) \right\rangle e^{i2\pi(\Delta u)\cdot r}\, d^2(\Delta u), \tag{1}$$
where r = (x, y) denotes transverse spatial coordinates, u = (u_x, u_y) denotes spatial frequency coordinates and 〈·〉 denotes the ensemble average. The quantity inside the angle brackets,
$$\mathrm{MI}(u,\Delta u) \equiv \left\langle \tilde{E}^*\!\left(u - \tfrac{\Delta u}{2}\right) \tilde{E}\!\left(u + \tfrac{\Delta u}{2}\right) \right\rangle, \tag{2}$$
is, apart from a coordinate transform, the Mutual Intensity. E(r) is a spatially coherent electric field (e.g. from a single fluorophore) and Ẽ(u) is its Fourier transform. The ensemble average allows representation of both coherent and partially (spatially) coherent light. Here, we assume that the object is a 3D volume of incoherent emitters with no occlusions. Thus, the phase-space description of light from the object is a linear sum of that from each fluorophore.
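To make the discretization of Eqs. (1) and (2) concrete, the following minimal Python/NumPy sketch (ours, not part of the original work) builds the Wigner function of a single 1D coherent mode from its spectrum; the test field, grid size and the integer-step approximation of Δu/2 are illustrative assumptions.

```python
import numpy as np

def wigner_1d(E_tilde):
    """Discrete Wigner function W(u, r) of a single 1D coherent mode, built from
    its spectrum E~(u) following Eq. (1) (no ensemble average). Rows index u,
    columns index r. Periodic boundaries are assumed, and Delta-u is stepped in
    whole samples instead of halves, a common shortcut that rescales the r axis."""
    N = E_tilde.size
    u = np.arange(N)
    du = np.arange(N) - N // 2
    # MI(u, du) = E~*(u - du/2) E~(u + du/2), Eq. (2), here with integer shifts.
    MI = (np.conj(E_tilde[(u[:, None] - du[None, :]) % N])
          * E_tilde[(u[:, None] + du[None, :]) % N])
    # Fourier transform along du (the e^{+i 2 pi du r} kernel of Eq. (1)).
    W = np.fft.fftshift(np.fft.ifft(np.fft.ifftshift(MI, axes=1), axis=1), axes=1)
    return np.real(W)

# Example: the spectrum of a chirped Gaussian gives a sheared phase-space distribution.
x = np.linspace(-1, 1, 256)
E = np.exp(-x**2 / 0.05) * np.exp(1j * 40 * np.pi * x**2)
E_tilde = np.fft.fftshift(np.fft.fft(np.fft.ifftshift(E)))
W = wigner_1d(E_tilde)   # (256, 256) real-valued array
```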

2.1. General forward model

Our forward model relates each coded aperture’s captured image to the 3D object’s intensity. Each image contains information from multiple frequencies and their interference terms (which are key to resolution enhancement). The MI framework facilitates our mathematical analysis, since the projection of a mask in MI space is equivalent to applying the 3D Optical Transfer Function (OTF) to an incoherent source (see Fig. 2). MI analysis further reveals the interference effects that may be obscured by looking only at a projected quantity (the OTF). For simplicity, we first describe the forward model of a point source and then generalize to multiple mutually incoherent sources.

Fig. 2 Illustration of how the spatial spectrum of an intensity measurement is related to the input complex field for a coded-aperture measurement. (a) Widefield imaging: the frequency representation of the electric field Ẽ(u1) forms an outer product with its conjugate, resulting in a diamond-shaped Mutual Intensity (MI) space, whose rotated projection gives the spectrum of the measured image Ĩ(∆u). (b) Scanning aperture imaging: the nth aperture Mn patterns the electric field and its conjugate to probe a select area of MI space. (c) Coded-aperture imaging: the quasi-random coded apertures probe different areas within a larger range of the MI space. If Ẽ is from a point source, the intensity projections as the source moves through focus make up the 3D OTF Fourier-transformed along the optical axis.

Consider a complex electric field specified by some properties α (e.g. the location and wavelength of a point source). The field Es(r1; α) acts like a unique coherent mode: it interferes coherently with itself but not with other modes. For a single coherent mode at the front focal plane of the 4 f system in Fig. 1, the complex field just before the SLM is the Fourier transform of the input complex field [31], expressed as Ẽs(u1; α), where u1 is in units of spatial frequency and relates to the lateral coordinate x1 on the SLM through the wavelength λ and the front focal length f, such that x1 = λf u1. The field Ẽs(u1; α) is then multiplied by the coded aperture, giving

$$\tilde{E}_n(u_1;\alpha) = M_n(\lambda f u_1)\,\tilde{E}_s(u_1;\alpha), \tag{3}$$
where Ẽn is the patterned complex field in Fourier space and Mn is the n’th binary coded aperture pattern. At the sensor plane (the back focal plane of the 4 f system) the intensity In is:
$$I_n(r;\alpha) = \int \tilde{E}_n(u_1;\alpha)\, e^{i2\pi u_1\cdot r}\, d^2u_1 \int \tilde{E}_n^*(u_2;\alpha)\, e^{-i2\pi u_2\cdot r}\, d^2u_2, \tag{4}$$
where u2 is a duplicate of u1. The intensity image can be alternately related to the MI or WF by a simple coordinate transform u1 = u + ∆u/2, u2 = u − ∆u/2:
$$I_n(r;\alpha) = \iint \tilde{E}_n(u+\Delta u/2;\alpha)\, \tilde{E}_n^*(u-\Delta u/2;\alpha)\, e^{i2\pi\Delta u\cdot r}\, d^2\Delta u\, d^2u \tag{5}$$
$$= \iint \mathrm{MI}_n(u,\Delta u;\alpha)\, e^{i2\pi\Delta u\cdot r}\, d^2\Delta u\, d^2u \tag{6}$$
$$= \int W_n(u,r;\alpha)\, d^2u, \tag{7}$$
where MIn and Wn are the MI and WF associated with the field modified by the n’th mask.

Describing the intensity measurement in terms of both its WF and MI gives new insights. Eq. (7) shows that the intensity image is a projection of the patterned Wigner function Wn across all spatial frequencies. Alternately, looking at the Fourier transform of In, denoted by Ĩn, we can interpret the intensity measurement as a patterning and projection of the source MI:

$$\tilde{I}_n(\Delta u;\alpha) = \int \mathrm{MI}_n(u,\Delta u;\alpha)\, d^2u = \int M_n^*\!\left(\lambda f\!\left(u-\tfrac{\Delta u}{2}\right)\right) M_n\!\left(\lambda f\!\left(u+\tfrac{\Delta u}{2}\right)\right) \mathrm{MI}_s(u,\Delta u;\alpha)\, d^2u, \tag{8}$$
where MIs is the MI of source field Es. This interpretation is illustrated for a 1D complex-field in Fig. 2. Each SLM pattern probes different parts of the input MI. By using multiple coded apertures, one may reconstruct the MI from its projections.
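As a numerical illustration of Eqs. (3)–(8), the short sketch below (our own, not the authors' code) patterns the spectrum of a single coherent mode with a binary coded aperture and computes the resulting intensity image and its spectrum; the random stand-in field, the mask fill fraction and unit sampling are assumptions made for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 128
E_s = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))  # stand-in source field
E_s_tilde = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(E_s)))       # field at the SLM plane

M_n = (rng.random((N, N)) < 0.05).astype(float)                        # binary coded aperture
E_n_tilde = M_n * E_s_tilde                                            # Eq. (3)

# Back-transform to the sensor plane and take the squared magnitude: Eqs. (4)-(7).
I_n = np.abs(np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(E_n_tilde))))**2

# Its spectrum is the masked and projected mutual intensity of Eq. (8).
I_n_tilde = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(I_n)))
```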

The extension of our forward model to multiple coherent modes is straightforward. Since each mode is mutually incoherent with others, we sum over all of them with their weights C(α):

$$I_n(r) = \sum_\alpha C(\alpha)\, I_n(r;\alpha). \tag{9}$$

2.2. Forward model for incoherent sources

From the general forward model of Eq. (9), we can now derive the case in which the object is a 3D incoherent intensity distribution (e.g. a fluorescent sample). We specify Ẽs in Eq. (3) to be the field from a single-color fluorophore located at position (rs, zs), where zs is the defocus distance. The parameter α is thus (rs, zs, λ) for a single point source, and our goal is to solve for the coefficients C(α), which represent the intensity of each. To account for off-focus point sources, the field can be propagated to −zs using angular spectrum propagation [31]:

$$\tilde{E}_s(u_1;r_s,z_s,\lambda) = \begin{cases} e^{\,i\frac{2\pi}{\lambda}(-z_s)\sqrt{1-\lambda^2|u_1|^2}\,-\,i2\pi r_s\cdot u_1}, & |u_1| < \frac{\mathrm{NA}}{\lambda}\\[4pt] 0, & \text{otherwise}. \end{cases} \tag{10}$$
Using Eqs. (2), (6) and (8), after some algebra the intensity becomes:
$$I_n(r;r_s,z_s,\lambda) = \int K_{M_n,z_s}(\Delta u;\lambda)\, e^{i2\pi(r-r_s)\cdot\Delta u}\, d^2\Delta u, \tag{11}$$
where
$$K_{M_n,z_s}(\Delta u;\lambda) = \int M_n^*\!\left(\lambda f\!\left(u-\tfrac{\Delta u}{2}\right)\right) M_n\!\left(\lambda f\!\left(u+\tfrac{\Delta u}{2}\right)\right) e^{\,i\frac{2\pi z_s}{\lambda}\left(\sqrt{1-\lambda^2\left|u-\frac{\Delta u}{2}\right|^2}-\sqrt{1-\lambda^2\left|u+\frac{\Delta u}{2}\right|^2}\right)}\, d^2u \tag{12}$$
is the kernel for mask Mn at depth zs. Plugging Eq. (11) into Eq. (9) gives the final expression of the forward model. Before doing so, we assume that the fluorescence emission spectrum S(λ) is identical and known for all fluorophores. Thus, we can decompose the mode weights in Eq. (9) into a product of the spectral weight and the 3D intensity distribution, S(λ)C(rs, zs), giving:
$$I_n(r) = \iint \left(\sum_\lambda S(\lambda) \int K_{M_n,z_s}(\Delta u;\lambda)\, e^{i2\pi(r-r_s)\cdot\Delta u}\, d^2\Delta u\right) C(r_s,z_s)\, d^2r_s\, dz_s. \tag{13}$$

Equation (13) describes the forward model for a 3D fluorescent object C(rs, zs) with no occlusions. The term in parentheses is a convolution kernel describing the 3D Point Spread Function (PSF) for mask Mn (shown in Fig. 1). For simplicity, we assume here no scattering, though incorporating the scattering forward model in [16,22] is straightforward.
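The term in parentheses in Eq. (13) can be sampled numerically as a per-depth point spread function. The sketch below is a hedged illustration (not the authors' implementation): it applies the angular-spectrum defocus phase of Eq. (10) and a binary mask directly on the spatial-frequency grid and takes the intensity of the image-plane field. The wavelength, NA, pixel pitch and the monochromatic simplification (a single λ instead of the sum over S(λ)) are assumptions.

```python
import numpy as np

def coded_aperture_psf(M_n, z_s, wavelength=0.52e-6, NA=0.4, dx=0.5e-6):
    """Intensity PSF at defocus z_s (meters) for a binary pupil mask M_n, for a
    point source at the origin (r_s = 0, so the tilt term of Eq. (10) is omitted).
    The mask is defined directly on the N x N spatial-frequency grid of an image
    with pixel size dx, rather than on physical SLM coordinates x1 = lambda*f*u1."""
    N = M_n.shape[0]
    u = np.fft.fftshift(np.fft.fftfreq(N, d=dx))          # spatial-frequency axes (cycles/m)
    ux, uy = np.meshgrid(u, u)
    u2 = ux**2 + uy**2
    pupil = np.sqrt(u2) < NA / wavelength                  # NA-limited support of Eq. (10)
    arg = np.clip(1.0 - wavelength**2 * u2, 0.0, None)     # clip the evanescent region
    defocus = np.exp(1j * (2 * np.pi / wavelength) * (-z_s) * np.sqrt(arg))
    E_tilde = pupil * defocus * M_n                        # patterned source spectrum
    psf = np.abs(np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(E_tilde))))**2
    return psf / psf.sum()                                 # normalize energy per slice

# Example: PSFs of one random mask over a small focal stack.
rng = np.random.default_rng(1)
M_n = (rng.random((256, 256)) < 0.05).astype(float)
psf_stack = np.stack([coded_aperture_psf(M_n, z) for z in (-10e-6, 0.0, 10e-6)])
```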

2.3. Inverse problem

Based on the raw data and forward model, the inverse problem is formulated as a regularized optimization. Our goal is to reconstruct the 3D intensity distribution C(rs, zs) from the measured images. To do so, we minimize the data mismatch, with an ℓ1 regularizer to mitigate the effects of noise (and promote sparsity where applicable). The mismatch is defined as the least-squares error between the measured intensity images and the intensity predicted by our forward model (Eq. (13)). The objective function has a smooth part and a non-smooth part, and is efficiently solved by an accelerated proximal gradient descent solver (FISTA [29]).

To formulate the inverse problem, we first discretize the forward model in Eq. (13) to be

$$y = Ac. \tag{14}$$
Here y ∈ ℝ^{MP×1} corresponds to the predicted images on the sensor; each chunk (∈ ℝ^{P×1}) of y is a vectorized image In(r). We discretize r into P pixels, and the number of masks is M, so n = 1 … M. Similarly, we discretize rs into P′ pixels and zs into L samples to obtain a vectorized version of C(rs, zs), c ∈ ℝ^{LP′×1}. The matrix A ∈ ℝ^{MP×LP′}, which is never materialized explicitly, represents the summation and convolution in Eq. (13) using 2D Fast Fourier Transforms (FFTs) for each subvector (∈ ℝ^{P′×1}) of c, with zero-padding to avoid periodic boundary condition errors. The convolution kernel is precomputed and stored for speed.
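For concreteness, a minimal sketch of how the operator A and its adjoint can be applied without ever forming the matrix is given below: each predicted image is a sum over depth slices of 2D linear (zero-padded) convolutions with precomputed per-depth PSFs. The array shapes, function names and the use of scipy.signal.fftconvolve are our assumptions, not the authors' code.

```python
import numpy as np
from scipy.signal import fftconvolve

def forward(c, psfs):
    """Apply A: c is (L, P', P') object slices; psfs is (M, L, K, K) per-mask,
    per-depth PSFs. Returns (M, P', P') predicted images (crop to the sensor
    afterwards when P' > P). fftconvolve performs linear (zero-padded) convolution."""
    M, L = psfs.shape[:2]
    y = np.zeros((M,) + c.shape[1:])
    for n in range(M):
        for l in range(L):
            y[n] += fftconvolve(c[l], psfs[n, l], mode='same')
    return y

def adjoint(y, psfs):
    """Apply A^T (up to boundary effects): correlate each image with the PSFs
    and accumulate into the corresponding depth slice."""
    M, L = psfs.shape[:2]
    c = np.zeros((L,) + y.shape[1:])
    for n in range(M):
        for l in range(L):
            c[l] += fftconvolve(y[n], psfs[n, l][::-1, ::-1], mode='same')
    return c
```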

The inverse problem consists of a data fidelity term plus an ℓ1 regularization term with parameter µ:

$$\arg\min_{c\geq 0}\ \tfrac{1}{2}\left\|y - y_{\mathrm{meas}}\right\|_2^2 + \mu\left\|Wc\right\|_1, \tag{15}$$
where ymeas ∈ ℝ^{MP×1} is the measured intensity and y = Ac is the prediction from Eq. (14). We also use a diagonal matrix W ∈ ℝ^{LP′×LP′} to lower the weight of point sources near the borders of the images, whose light partially falls off the sensor. Each diagonal entry of W is obtained by summing the corresponding column of A. The value of µ is set to 0.0005 × max(W^{−1}A^T ymeas); note that the solution is c = 0 whenever µ ≥ max(W^{−1}A^T ymeas). Point sources outside the field of view may also contribute to the measured intensity due to defocus; hence, we use an extended field-of-view method [32] to solve for more 2D points in c than in y (i.e. P′ > P).
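A compact FISTA iteration for Eq. (15), using forward/adjoint operators like those sketched above, is shown below as an illustration (after Beck and Teboulle [29]); the proximal step combines the non-negativity constraint with the weighted ℓ1 penalty. The step size, iteration count and variable names are assumptions; in practice the step should be bounded by the inverse Lipschitz constant of the smooth term's gradient.

```python
import numpy as np

def fista(y_meas, psfs, w, mu, step, n_iter=200):
    """FISTA for Eq. (15). y_meas: (M, P', P') measured images; w: per-voxel
    weights (diagonal of W, same shape as the volume, or a scalar); step: gradient
    step size (should satisfy step <= 1 / ||A||^2). Returns the volume c >= 0."""
    c = np.zeros((psfs.shape[1],) + y_meas.shape[1:])
    z, t = c.copy(), 1.0
    for _ in range(n_iter):
        grad = adjoint(forward(z, psfs) - y_meas, psfs)   # gradient of 0.5*||Ac - y||^2
        v = z - step * grad
        c_new = np.maximum(v - step * mu * w, 0.0)        # prox of mu*||W c||_1 with c >= 0
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t**2))
        z = c_new + ((t - 1.0) / t_new) * (c_new - c)     # Nesterov momentum
        c, t = c_new, t_new
    return c
```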

3. Design of coded apertures

In the scanning-aperture scheme [21,22], smaller apertures give finer frequency sampling of the 4D phase space, at the cost of: 1) lower resolution, 2) lower signal-to-noise ratio (SNR) and 3) larger data sets. Our multiplexing scheme alleviates all of these problems. Multiplexing achieves diffraction-limited resolution by additionally capturing interference terms, which cover the full NA-limited bandwidth. This is evident in the Fourier transform of the captured images (Fig. 3). The SNR improvement is also visible: the multiplexed image is less noisy.

Fig. 3 Multiplexed phase-space measurements contain more information than scanning-aperture measurements. (Left) A sample aperture pattern, (Middle) the corresponding measured intensity image (same exposure time), and (Right) its log-scale Fourier transform. The multiplexed measurements have better light throughput and more high-frequency content.

Our masks are chosen by quasi-random selection without replacement. We section the SLM plane into 18×18 square blocks and keep only the 240 blocks that lie inside the system NA. For each mask, we open 12 blocks, selected randomly from the blocks that have not yet been opened by previous masks in the current pass. In this scheme, the full NA can be covered by 20 masks. To allow for both diversity and redundancy, we choose to cover the entire pupil 5 times, resulting in 100 multiplexed aperture patterns, one of which is shown in Fig. 3.

Importantly, the number of multiplexed patterns can be flexibly chosen to trade off accuracy for speed of capture. For instance, by increasing the number of openings in each pattern, we can cover the entire pupil with fewer patterns. This means that we may be able to reconstruct the object from fewer measurements, if the inverse problem is solvable. By using a priori information about the object (such as sparsity in 3D) as a constraint, we can solve under-determined problems with fewer total measured pixels than voxels in the reconstruction.
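The mask-design rule described above can be sketched in a few lines of Python. The block geometry follows the text (18×18 grid, 12 open blocks per mask, 5 passes over the pupil), while the circle test used to decide which blocks lie inside the NA, and hence the exact in-NA block count, is an assumption.

```python
import numpy as np

def make_masks(n_grid=18, blocks_per_mask=12, n_covers=5, seed=0):
    """Quasi-random coded apertures: open blocks_per_mask blocks per mask, drawn
    without replacement until the pupil is fully covered, then repeat n_covers times."""
    rng = np.random.default_rng(seed)
    centers = (np.arange(n_grid) + 0.5) / n_grid * 2.0 - 1.0   # block centers on a unit pupil
    cx, cy = np.meshgrid(centers, centers)
    in_na = np.flatnonzero((cx**2 + cy**2).ravel() < 1.0)       # blocks inside the NA circle
    masks = []
    for _ in range(n_covers):
        pool = list(rng.permutation(in_na))                     # one full pass over the pupil
        while pool:
            chosen, pool = pool[:blocks_per_mask], pool[blocks_per_mask:]
            m = np.zeros(n_grid * n_grid, dtype=bool)
            m[chosen] = True                 # the last mask of a pass may open fewer blocks
            masks.append(m.reshape(n_grid, n_grid))
    return masks

masks = make_masks()   # roughly 100 masks for the defaults, i.e. 5 covers of the pupil
```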

4. Experiments

Our experimental setup consists of the 4 f system (f1 = 250 mm, f2 = 225 mm) shown in Fig. 1, with an additional 4 f system in front, made of an objective lens (20×, NA 0.4) and a tube lens (f = 200 mm), to image the sample at the input plane. The SLM (1400 × 1050 pixels of size 10.3 µm) is a liquid crystal chip from a 3-LCOS projector (Canon SX50); since it is reflective and polarization-sensitive, we fold the optical train with a polarizing beam splitter and insert linear polarizers. Our sensor (Hamamatsu ORCA-Flash4.0 V2) captures the multiplexed images and is synchronized with the SLM via computer control.

Our sample is a fixed fluorescent brine shrimp (Carolina Biological). It is relatively dense, yet does not fill the entire 3D volume. The reconstructed 3D intensity (Fig. 4(a), 4(b) and 4(f)–4(h)) is stitched from five volume reconstructions, each with 640×640×120 voxels representing a sample volume of 455×455×600 µm. The reconstruction is cropped to the central part of our extended field of view, so the final volume contains 1422×715×100 voxels corresponding to 1010×510×500 µm. The dataset size is large (9 GB): since the 3D array contains ∼5×10^7 voxels without the extended field of view and the measured data contains ∼4×10^7 pixels, the number of operations for evaluating Eq. (13) is on the order of 3×10^10. This takes 4 seconds to compute on a computer with 48-core 3.0 GHz CPUs and requires 94 GB of memory to store the kernel (Eq. (12)).

Fig. 4 3D reconstruction of a brine shrimp sample as compared to focus stack and confocal microscopy. (a) and (b) 3D renderings of the reconstructed fluorescence intensity distribution (1010×510×500µm) from different perspectives. (c)–(e) 2D widefield images at three different focus planes. (f)–(h) Slices of our reconstructed volume at the same depth planes. (i)–(k) Confocal microscopy slices at the same depth planes for comparison. [See Visualization 2]

The reconstructed 3D intensity is shown in Fig. 4, alongside images from a confocal microscope and a widefield focus stack, for comparison. Both our method and the focus stack use a 0.4 NA objective, while the confocal uses 0.25 NA; hence, the confocal results should have slightly better resolution. Our reconstructed slices appear to have slightly lower resolution than the defocus stack and confocal, possibly due to the missing information in the frequency mutual intensity illustrated in Fig. 2(c), which is undersampled. As expected, each depth slice of our reconstruction has better rejection of information from other depths, similar to the confocal ones.

To illustrate the flexible tradeoff between capture time (number of coded apertures used) and quality, we show reconstructions in Fig. 5 using different numbers of coded aperture images. The case of only 1 image corresponds to a single coded aperture and gives a poor result, since the sample is relatively dense. However, with as few as 10 images we obtain a reasonable result, despite the fact that we are solving a severely under-determined problem. This is possible because the measurements are multiplexed and the ℓ1 regularizer acts as a sparsity promoter.

Fig. 5 Image quality can be traded for capture speed (number of coded aperture images). 3D reconstructions from increasing numbers of images with different coded apertures show that this object is too dense to be accurately reconstructed from a single coded-aperture image, but a reasonable reconstruction is obtained with 10 or more images, owing to the sparsity of the sample.

5. Conclusion

We demonstrated 3D reconstruction of a large-volume, high-resolution, dense fluorescent object from multiplexed phase-space measurements. An SLM in Fourier space dynamically implements quasi-random coded apertures while intensity images are collected in real space for each coded aperture. Theory is developed in the framework of phase-space optics, relating the measurements to Mutual Intensity functions and 3D OTFs. Reconstruction is formulated as an ℓ1-regularized least-squares problem. This method enables diffraction-limited 3D imaging with high resolution across large volumes, efficient data capture and a flexible acquisition scheme for different types and sizes of samples.

Funding

The Office of Naval Research (ONR) (Grant N00014-14-1-0083) and NSF (Grant 1351896).

Acknowledgments

The authors thank Eric Jonas, Ben Recht, Jingzhao Zhang and the AMP Lab at UC Berkeley for help with computational resources.

References and links

1. I. Cox, C. Sheppard, and T. Wilson, “Super-resolution by confocal fluorescent microscopy,” Optik 60, 391–396 (1982).

2. F. Helmchen and W. Denk, “Deep tissue two-photon microscopy,” Nat. Methods 2, 932–940 (2005).

3. T. Planchon, L. Gao, D. Milkie, M. Davidson, J. Galbraith, C. Galbraith, and E. Betzig, “Rapid three-dimensional isotropic imaging of living cells using Bessel beam plane illumination,” Nat. Methods 8, 417–423 (2011).

4. M. Levoy, R. Ng, A. Adams, M. Footer, and M. Horowitz, “Light field microscopy,” ACM Trans. Graph. 25, 924–934 (2006).

5. M. Broxton, L. Grosenick, S. Yang, N. Cohen, A. Andalman, K. Deisseroth, and M. Levoy, “Wave optics theory and 3-D deconvolution for the light field microscope,” Opt. Express 21, 25418–25439 (2013).

6. Y. Y. Schechner, R. Piestun, and J. Shamir, “Wave propagation with rotating intensity distributions,” Phys. Rev. E 54, R50–R53 (1996).

7. S. R. P. Pavani, M. A. Thompson, J. S. Biteen, S. J. Lord, N. Liu, R. J. Twieg, R. Piestun, and W. Moerner, “Three-dimensional, single-molecule fluorescence imaging beyond the diffraction limit by using a double-helix point spread function,” Proc. Natl. Acad. Sci. USA 106, 2995–2999 (2009).

8. Y. Shechtman, S. J. Sahl, A. S. Backer, and W. E. Moerner, “Optimal point spread function design for 3D imaging,” Phys. Rev. Lett. 113, 133902 (2014).

9. N. Ji, D. E. Milkie, and E. Betzig, “Adaptive optics via pupil segmentation for high-resolution imaging in biological tissues,” Nat. Methods 7, 141–147 (2010).

10. M. Testorf, B. Hennelly, and J. Ojeda-Castañeda, Phase-Space Optics: Fundamentals and Applications (McGraw-Hill Professional, 2009).

11. M. Bastiaans, “The Wigner distribution function applied to optical signals and systems,” Opt. Commun. 25, 26–30 (1978).

12. A. Accardi and G. Wornell, “Quasi light fields: extending the light field to coherent radiation,” J. Opt. Soc. Am. A 26, 2055–2066 (2009).

13. Z. Zhang and M. Levoy, “Wigner distributions and how they relate to the light field,” in Proceedings of IEEE International Conference on Computational Photography (IEEE, 2009), pp. 1–10.

14. R. Ng, M. Levoy, M. Bredif, G. Duval, M. Horowitz, and P. Hanrahan, “Light field photography with a hand-held plenoptic camera,” Tech. Rep. CTSR 2005-02, Stanford University (2005).

15. R. Ng and P. Hanrahan, “Digital correction of lens aberrations in light field photography,” Proc. SPIE 6342, 63421E (2007).

16. N. C. Pégard, H.-Y. Liu, N. Antipa, M. Gerlock, H. Adesnik, and L. Waller, “Compressive light-field microscopy for 3D neural activity recording,” Optica 3, 517–524 (2016).

17. D. L. Marks, R. A. Stack, and D. J. Brady, “Three-dimensional coherence imaging in the Fresnel domain,” Appl. Opt. 38, 1332–1342 (1999).

18. J. K. Wood, K. A. Sharma, S. Cho, T. G. Brown, and M. A. Alonso, “Using shadows to measure spatial coherence,” Opt. Lett. 39, 4927–4930 (2014).

19. D. Christodoulides, E. Eugenieva, T. Coskun, M. Segev, and M. Mitchell, “Equivalence of three approaches describing partially incoherent wave propagation in inertial nonlinear media,” Phys. Rev. E 63, 35601 (2001).

20. L. Tian, Z. Zhang, J. C. Petruccelli, and G. Barbastathis, “Wigner function measurement using a lenslet array,” Opt. Express 21, 10511–10525 (2013).

21. L. Waller, G. Situ, and J. Fleischer, “Phase-space measurement and coherence synthesis of optical beams,” Nat. Photonics 6, 474–479 (2012).

22. H.-Y. Liu, E. Jonas, L. Tian, J. Zhong, B. Recht, and L. Waller, “3D imaging in volumetric scattering media using phase-space measurements,” Opt. Express 23, 14461–14471 (2015).

23. S. Cho, M. A. Alonso, and T. G. Brown, “Measurement of spatial coherence through diffraction from a transparent mask with a phase discontinuity,” Opt. Lett. 37, 2724–2726 (2012).

24. H.-Y. Liu, J. Zhong, and L. Waller, “4D phase-space multiplexing for fluorescent microscopy,” Proc. SPIE 9720, 97200A (2016).

25. A. Levin, R. Fergus, F. Durand, and W. T. Freeman, “Image and depth from a conventional camera with a coded aperture,” ACM Trans. Graph. 26, 70 (2007).

26. C. Liang, T. Lin, B. Wong, C. Liu, and H. Chen, “Programmable aperture photography: multiplexed light field acquisition,” in Proceedings of ACM SIGGRAPH (ACM, 2008), pp. 55:1–55:10.

27. P. Green, W. Sun, W. Matusik, and F. Durand, “Multi-aperture photography,” ACM Trans. Graph. 26, 68 (2007).

28. J. Chang, I. Kauvar, X. Hu, and G. Wetzstein, “Variable aperture light field photography: overcoming the diffraction-limited spatio-angular resolution tradeoff,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2016), pp. 3737–3745.

29. A. Beck and M. Teboulle, “A fast iterative shrinkage-thresholding algorithm for linear inverse problems,” SIAM J. Imaging Sci. 2, 183–202 (2009).

30. A. Walther, “Radiometry and coherence,” J. Opt. Soc. Am. 58, 1256–1259 (1968).

31. J. Goodman, Introduction to Fourier Optics (Roberts & Co., 2005).

32. M. Bertero and P. Boccacci, “A simple method for the reduction of boundary effects in the Richardson-Lucy approach to image deconvolution,” Astron. Astrophys. 437, 369–374 (2005).

Supplementary Material (2)

Visualization 1: MOV (1138 KB). Raw acquisition data and masks used on the SLM.
Visualization 2: MOV (3037 KB). 3D rotation display of the reconstructed object.
