Abstract

We demonstrate a compact, easy-to-build computational camera for single-shot three-dimensional (3D) imaging. Our lensless system consists solely of a diffuser placed in front of an image sensor. Every point within the volumetric field-of-view projects a unique pseudorandom pattern of caustics on the sensor. By using a physical approximation and simple calibration scheme, we solve the large-scale inverse problem in a computationally efficient way. The caustic patterns enable compressed sensing, which exploits sparsity in the sample to solve for more 3D voxels than pixels on the 2D sensor. Our 3D reconstruction grid is chosen to match the experimentally measured two-point optical resolution, resulting in 100 million voxels being reconstructed from a single 1.3 megapixel image. However, the effective resolution varies significantly with scene content. Because this effect is common to a wide range of computational cameras, we provide a new theory for analyzing resolution in such systems.

© 2017 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. INTRODUCTION

Because optical sensors are two dimensional (2D), imaging 3D objects requires projection to 2D in such a way that the 3D information can be recovered. Scanning and multishot methods can achieve high spatial resolution 3D imaging, but sacrifice capture speed [1,2]. In contrast, single-shot 3D methods are fast, but may have low resolution or small field-of-view (FoV) [3,4]. Often, bulky hardware and complicated setups are required. Here, we introduce a compact, inexpensive single-shot lensless optical system for 3D imaging. We show how it can reconstruct a large number of voxels by leveraging compressed sensing.

Our lensless imager, DiffuserCam, encodes the 3D intensity of volumetric objects in a single 2D image. The diffuser, a thin phase mask, is placed a few millimeters in front of an image sensor. Each point source in the 3D space creates a unique pseudorandom caustic pattern that covers a large portion of the sensor. As a result, compressed sensing algorithms can be used to reconstruct more voxels than pixels captured, provided that the 3D sample is sparse in some domain. We solve the inverse problem via a sparsity-constrained optimization procedure, using a physical model and simple calibration scheme to make the computation scalable. This approach allows us to reconstruct several orders of magnitude more voxels than related previous work [5,6].

We demonstrate a prototype DiffuserCam system built entirely from commodity hardware. It is efficient to calibrate, does not require precise alignment, and is light efficient (as compared to amplitude masks). We reconstruct 3D objects on a grid of 100 million voxels (nonuniformly spaced) from a single 1.3 megapixel image. Our reconstructions show true depth sectioning, allowing us to generate 3D renderings of the sample.

Our system, like many computational cameras, uses a nonlinear reconstruction algorithm, resulting in object-dependent performance. To quantify this, we experimentally measure the resolution of our prototype with different objects. We show that the standard two-point resolution criterion is misleading and should be considered a best-case scenario. To better explain the variable resolving power of our system, we propose a new local condition number analysis that is consistent with our experiments.

DiffuserCam uses concepts from lensless camera technology and imaging through complex media, integrated together via computational imaging design principles. Our proposed architecture and algorithm could enable high-resolution, light-efficient lensless 3D imaging of large and dynamic 3D samples in an extremely compact package. We believe these cameras will open up new applications in remote diagnostics, mobile photography, and in vivo microscopy.

A. Previous Work

Lensless cameras for 2D photography have shown great promise because of their small form factors. Unlike traditional cameras, in which a point in the scene maps to a pixel on the sensor, lensless cameras map a point in the scene to many points on the sensor, requiring computational reconstruction. A typical lensless architecture replaces the lens with an encoding element placed directly in front of the sensor. These 2D lensless cameras have demonstrated passive incoherent imaging using amplitude masks [7], diffractive masks [8,9], random reflective surfaces [10,11], and modified microlens arrays [12]. Our system uses a similar architecture with a diffuser as the encoding element, and also extends the design and image reconstruction to enable 3D capture.

Light field cameras (integral imagers) passively capture 4D space-angle information in a single-shot [13], which can be used for 3D reconstructions. This concept can be built into a thin form factor with microlens arrays [14] or Fresnel zone plates [15]. Lenslet array-based 3D capture schemes have also been used in microscopy [16], where wave-optical [3,17] or scattering [4,17] effects can be included. All of these systems, however, must trade resolution (or field-of-view) for single-shot capture, limiting the number of useful voxels. DiffuserCam improves upon this tradeoff, capturing large 3D volumes with high voxel counts in a single exposure.

Lensless imaging has also been demonstrated with coherent systems in both 2D [18–21] and 3D [22–26], but these methods require active (coherent) illumination, which limits applications. Further, many coherent methods do not generate unambiguous 3D reconstructions, but rather use digital refocusing to estimate depth. DiffuserCam, on the other hand, exhibits actual depth sectioning (in the absence of occlusions) for “true 3D.”

Since methods for imaging through scattering often use diffusers as a proxy for general scattering media [27–29], our mathematical models will be similar. However, instead of trying to mitigate the effects of unwanted scattering, here we use the diffuser as an optical element in our system design. We choose a thin, optically smooth diffuser that refracts pseudorandomly, producing high-contrast patterns under incoherent illumination. Such diffusers have been used in light field imaging [30] and coherent holography [23,31]. Coherent multiple scattering has been demonstrated as an encoding mechanism for 2D compressed sensing [6], but necessitates a transmission matrix approach that does not scale well past a few thousand pixels. We achieve similar benefits without needing coherent illumination, and we reconstruct 3D objects, rather than 2D. Finally, an important benefit of our system over previous work is the simple calibration and efficient computation that allow for 3D reconstruction at megavoxel scales with superior image quality.

B. System Overview

DiffuserCam is part of the class of mask-based passive lensless imagers in which a phase or amplitude mask is placed a small distance in front of a sensor, with no main lens. Our mask (the diffuser) is a thin transparent phase object with smoothly varying thickness (see Fig. 1). When a temporally incoherent point source is placed in the scene, we observe a high-frequency pseudorandom caustic pattern at the sensor. The caustic patterns, termed point spread functions (PSFs), vary with the 3D position of the source, thereby encoding 3D information.


Fig. 1. DiffuserCam setup and reconstruction pipeline. Our lensless system consists of a diffuser placed in front of a sensor (bumps on the diffuser are exaggerated for illustration). The system encodes a 3D scene into a 2D image on the sensor. A one-time calibration consists of scanning a point source axially while capturing images. Images are reconstructed computationally by solving a nonlinear inverse problem with a sparsity prior. The result is a 3D image reconstructed from a single 2D measurement.


To illustrate how the caustics capture 3D information, Fig. 2 shows simulations of the PSFs for a point source at different locations in the object space. A lateral shift of the point source causes a lateral translation of the PSF [32]. An axial shift of the point source causes (approximately) a scaling of the PSF. Hence, each 3D position in the volume generates a unique caustic pattern. The structure and spatial frequencies present in the PSFs determine our reconstruction resolution. By using a phase mask (which concentrates light better than an amplitude mask) and designing the system to retain high spatial frequencies over a large range of depths, DiffuserCam attains good lateral resolution across the volumetric field-of-view.


Fig. 2. Caustic pattern shifts with lateral shifts of a point source in the scene and scales with axial shifts. (a) Ray-traced renderings of caustics as a point source moves laterally. For large shifts, part of the pattern is clipped by the sensor. (b) The caustics magnify as the source is brought closer.


By assuming that all points in the scene are incoherent with each other, the measurement can be modeled as a linear combination of PSFs from different 3D positions. We represent this as the matrix–vector multiplication,

b = Hv,   (1)
where b is a vector containing the 2D sensor measurement and v is a vector representing the intensity of the object at every point in the 3D FoV, sampled on a user-chosen grid. H is the forward model matrix whose columns consist of each of the caustic patterns created by the corresponding 3D points on the object grid. The number of entries in b and the number of rows of H are equal to the number of pixels on the image sensor, but the number of columns in H is set by the choice of reconstruction grid (discussed in Section 3). Note that this model does not account for partial occlusion of sources.

To reconstruct the 3D object, v, from the measured 2D image, b, we must solve Eq. (1) for v. However, if we solve it on a 3D reconstruction grid that corresponds to the full optical resolution of our system (measured in Section 3.B), v will contain more voxels than there are sensor pixels. In this case, H has more columns than rows, so the problem is underdetermined and we cannot uniquely recover v simply by inverting Eq. (1). To remedy this issue, we rely on sparsity-based principles [33]. We exploit the fact that many 3D objects are sparse in some domain, meaning that the majority of coefficients are zero after a linear transformation. We enforce this sparsity as a prior and solve the ℓ1-regularized, nonnegativity-constrained inverse problem:

v̂ = argmin_{v ≥ 0} ½‖b − Hv‖₂² + τ‖Ψv‖₁.   (2)

Here, Ψ maps v into a domain in which it is sparse (Ψv is mostly zeros), and τ is a tuning parameter that adjusts the degree of sparsity. For objects that are sparse in voxels, such as fluorescent particles in a volume, Ψ is the identity matrix. In our results, we show the reconstruction of objects that are not sparse in voxels but are sparse in the gradient domain. Hence, we choose Ψ to be the finite difference operator and ‖Ψv‖₁ to be the 3D total variation (TV) semi-norm [34]. In general, any linear sparsity transformation may be used (e.g., wavelets), but we use only identity and gradient representations in this work.

Equation (2) is the basis pursuit problem in compressed sensing [33]. For this optimization procedure to succeed, H must have distributed, uncorrelated columns. Since our diffuser creates high spatial frequency caustics that spread across many pixels in a pseudorandom fashion, any shift or magnification of the caustics leads to a new pattern that is uncorrelated with the original one (quantified in Supplement 1 Fig. S4). As discussed in Sections 2.B and 2.C, these properties allow us to reconstruct 3D images via compressed sensing.
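
To make the structure of Eq. (2) concrete, the following Python sketch sets up a toy version of the problem with Ψ chosen as the identity (native sparsity) and solves it with a basic proximal-gradient iteration. The matrix sizes, noise level, sparsity weight, and step-size rule are illustrative assumptions; the paper's actual solver is the ADMM scheme of Section 2.C, and H is never formed explicitly there.

```python
import numpy as np

# Toy illustration of Eq. (2) with Psi = identity, solved by an ISTA-style
# proximal-gradient iteration with a nonnegativity constraint. Sizes and
# parameters are illustrative only.
rng = np.random.default_rng(0)
n_pixels, n_voxels = 200, 500                      # more voxels than pixels (underdetermined)
H = rng.standard_normal((n_pixels, n_voxels)) / np.sqrt(n_pixels)

v_true = np.zeros(n_voxels)
v_true[rng.choice(n_voxels, 10, replace=False)] = rng.uniform(1.0, 2.0, 10)  # sparse scene
b = H @ v_true + 0.01 * rng.standard_normal(n_pixels)                        # noisy measurement

tau = 0.05                                         # sparsity weight
step = 1.0 / np.linalg.norm(H, 2) ** 2             # 1 / Lipschitz constant of the gradient
v = np.zeros(n_voxels)
for _ in range(500):
    v = v - step * (H.T @ (H @ v - b))             # gradient step on the data term
    v = np.maximum(v - step * tau, 0.0)            # prox of tau*||v||_1 with v >= 0
print("nonzeros recovered:", np.count_nonzero(v > 1e-3), "of", np.count_nonzero(v_true))
```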

2. METHODS

A. System Architecture

The hardware setup for our prototype DiffuserCam [Fig. 3(a)] consists of an off-the-shelf diffuser (Luminit 0.5°) placed at a fixed distance in front of a sensor (PCO.edge 5.5 color camera, 6.5 μm pixels). The diffuser has a flat input surface and an output surface described statistically as Gaussian low-pass-filtered white noise with an average spatial feature size of 140 μm and an average slope magnitude of 0.7° (see Supplement 1 Fig. S1). The convex bumps on the diffuser surface can be thought of as randomly spaced microlenses that have statistically varying focal lengths and f-numbers. The average focal length determines the distance at which the caustics have highest contrast (the caustic plane), which is where we place the sensor [30]. This distance, measured experimentally, is 8 mm for our diffuser. However, the high average f-number of the bumps (8 mm / 140 μm ≈ 57) means that the caustics maintain high contrast over a large range of propagation distances. Therefore, the sensor need not be placed precisely at the caustic plane (in our prototype, d = 8.9 mm). We also affix a 5.5 × 7.5 mm aperture on the textured side of the diffuser to limit the support of the caustics.


Fig. 3. Experimentally determined field-of-view (FoV) and resolution. (a) System architecture with design parameters. (b) Angular pixel response of our sensor. We define the angular cutoff (αc) as the angle at which the response falls to 20%. (c) Reconstructed images of two points (captured separately) at varying separations laterally and axially, near the z=20mm depth plane. Points are considered resolved if they are separated by a dip of at least 20%. (d) To-scale nonuniform voxel grid for 3D reconstruction. The chosen voxel grid is based on the system geometry and Nyquist-sampled two-point resolution over the entire FoV. For visualization purposes, each box represents 20×20 voxels, as shown in red.


Similar to a traditional camera, the sensor’s pixel pitch should Nyquist sample the minimum features of the PSF. Since the f-number of the smallest bumps on the diffuser determines the minimum feature size of the caustics, it will also set the lateral optical resolution. In our case, the smallest features generated by the caustic patterns are roughly twice the pixel pitch of our sensor, so we perform 2×2 binning on the data, yielding 1.3 megapixel images, before applying our reconstruction algorithm.
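
As a minimal illustration of this preprocessing step, the sketch below performs 2×2 binning in software. The raw frame size (2560 × 2160 pixels, inferred from the 6.5 μm pixel pitch and the roughly 5 MP sensor) and the use of block sums rather than means are assumptions, not details specified in the text.

```python
import numpy as np

def bin2x2(raw):
    """Sum each 2x2 block of sensor pixels into one super-pixel.
    Assumes even image dimensions; whether the binning sums or averages
    is an implementation detail not stated in the text."""
    h, w = raw.shape
    return raw.reshape(h // 2, 2, w // 2, 2).sum(axis=(1, 3))

raw = np.random.rand(2160, 2560)   # stand-in for a raw frame (2160 x 2560 assumed)
print(bin2x2(raw).shape)           # (1080, 1280) -> ~1.3 MP, as used for reconstruction
```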

B. Convolutional Forward Model

Recovering a 3D image requires knowing the system matrix, H, which is extremely large. Measuring or storing the full H would be impractical, requiring millions of calibration images and operating on multi-terabyte matrices. Instead, we use the convolution model outlined below to drastically reduce the complexity of both the calibration and computation.

We describe the object, v, as a set of point sources located at (x′, y′, z) on a non-Cartesian 3D grid. The relative radiant power collected by the aperture from each source is v(x′, y′, z). The caustic pattern at sensor pixel (x, y) due to a unit-powered point source at (x′, y′, z) is the PSF, h(x, y; x′, y′, z). The measurement b(x, y) is then the sum of the contributions from every nonzero point in v after propagating through the diffuser onto the sensor. This lets us explicitly write the matrix–vector multiplication Hv by summing over all voxels in the FoV, so

b(x, y) = Σ_{(x′, y′, z)} v(x′, y′, z) h(x, y; x′, y′, z).   (3)

Our convolution model amounts to a shift invariance (or infinite memory effect [27,28]) assumption, which greatly simplifies the evaluation of Eq. (3). Consider the caustics created by point sources at a fixed distance, z, from the diffuser. Because the diffuser surface is slowly varying and smooth, the paraxial approximation holds. This implies that a lateral translation of the source by (Δx′, Δy′) leads to a lateral shift of the caustics on the sensor by (Δx, Δy) = (mΔx′, mΔy′), where m is the paraxial magnification. We validate this behavior in both simulations (see Fig. 2) and experiments (see Section 3.D). For notational convenience, we define the on-axis caustic pattern at depth z as h(x, y; z) ≜ h(x, y; 0, 0, z). Thus, the off-axis caustic pattern is given by h(x, y; x′, y′, z) = h(x + mx′, y + my′; z). Plugging into Eq. (3), the sensor measurement is then given by

b(x, y) = Σ_z Σ_{(x′, y′)} v(x′, y′, z) h(x + mx′, y + my′; z) = C Σ_z [v(x/m, y/m, z) ∗ h(x, y; z)].   (4)

Here, * represents the 2D discrete convolution over (x,y), which returns arrays that are larger than the originals. Hence, we crop to the original sensor size, denoted by the linear operator C (see Supplement 1 Fig. S5 for more details). For an object discretized into Nz depth slices, the number of columns of H is Nz times larger than the number of elements in b (i.e., the number of sensor pixels), so our system is underdetermined.
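
A direct way to evaluate Eq. (4) is to convolve each depth slice with its calibrated PSF and crop the summed result back to the sensor; the Python sketch below does exactly that with zero-padded 2D FFTs. It assumes the object slices and PSFs are stored with their origin at the array center, so the crop operator C keeps the central region; the paper's implementation instead folds the sum over depths into a single circular 3D convolution, as described next.

```python
import numpy as np
from numpy.fft import fft2, ifft2

def forward(v, h_stack):
    """Cropped-convolution forward model of Eq. (4): b = C * sum_z (v_z * h_z).

    v:       (Nz, Ny, Nx) object volume on the reconstruction grid
    h_stack: (Nz, Ny, Nx) calibrated on-axis PSF for each depth
    Returns the (Ny, Nx) sensor image. A sketch only: each depth slice is
    convolved separately with zero-padded 2D FFTs, and C keeps the central
    sensor-sized region (assuming center-origin arrays).
    """
    Nz, Ny, Nx = v.shape
    pad = (2 * Ny, 2 * Nx)                         # pad so circular conv equals linear conv
    B = np.zeros(pad)
    for z in range(Nz):
        B += np.real(ifft2(fft2(v[z], pad) * fft2(h_stack[z], pad)))
    return B[Ny // 2:Ny // 2 + Ny, Nx // 2:Nx // 2 + Nx]   # crop operator C

# toy usage: one point source, a stand-in PSF stack
v = np.zeros((4, 64, 64)); v[2, 32, 40] = 1.0
h = np.random.rand(4, 64, 64)
b = forward(v, h)                                  # (64, 64) simulated sensor image
```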

The cropped convolution model provides three benefits. First, it allows us to compute Hv as a linear operator in terms of Nz images, rather than instantiating H explicitly (which would require petabytes of memory to store). In practice, we evaluate the sum of 2D cropped convolutions using a single circular 3D convolution, implemented with 3D FFTs, which scale well to large arrays (see Supplement 1, Section 2.C). Second, it provides a theoretical justification of our system’s capability for compressed sensing; derivations in [35] show that translated copies of a random pattern provide close-to-optimal performance.

The third benefit of our convolution model is that it enables simple calibration. Rather than measuring the system response for every voxel (hundreds of millions of images), we only need to capture a single calibration image of the caustic pattern from an on-axis point source. Though the scaling effect described in Section 1.B suggests that we could use only one image to calibrate the entire 3D space (by scaling it to predict PSFs at different depths), we obtain better results when we calibrate the PSF at each depth. A typical calibration thus consists of capturing images as a point source is moved axially. This takes minutes, but must only be performed once. The added aperture at the diffuser ensures that a point source at the minimum z distance generates caustics that just fill the sensor, so that the entire PSF is captured in each image (see Supplement 1 Fig. S2).
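
The depth-scaling shortcut mentioned above could be sketched as follows: rescale the single calibrated PSF by a geometric magnification factor to predict PSFs at other depths. The (z + d)/z magnification used here is a simple similar-triangles assumption, not a formula from the paper, and, as noted in the text, per-depth calibration gives better results in practice.

```python
import numpy as np
from scipy.ndimage import zoom

def rescale_psf(h_ref, scale):
    """Rescale a PSF about the array center and return an array of the same
    size, center-cropping or zero-padding as needed."""
    out = zoom(h_ref, scale, order=1)
    Ny, Nx = h_ref.shape
    ny, nx = out.shape
    result = np.zeros_like(h_ref)
    ys, xs = max((ny - Ny) // 2, 0), max((nx - Nx) // 2, 0)
    yd, xd = max((Ny - ny) // 2, 0), max((Nx - nx) // 2, 0)
    hh, ww = min(Ny, ny), min(Nx, nx)
    result[yd:yd + hh, xd:xd + ww] = out[ys:ys + hh, xs:xs + ww]
    return result

def predict_psf(h_ref, z_ref, z, d=8.9):
    """Predict the PSF at depth z (mm) from the PSF calibrated at z_ref by
    scaling. Assumes the caustic pattern magnifies roughly as (z + d)/z,
    a pinhole-style geometric assumption rather than the paper's calibration."""
    scale = ((z + d) / z) / ((z_ref + d) / z_ref)
    return rescale_psf(h_ref, scale)

# usage: h_pred = predict_psf(h_on_axis, z_ref=20.0, z=15.0)
```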

C. Inverse Algorithm

Our inverse problem is extremely large in scale, with millions of inputs and outputs. Even with the convolution model described above, using projected gradient techniques is extremely slow due to the time required to compute the proximal operator of 3D TV [36]. To alleviate this issue, we use the alternating direction method of multipliers (ADMM) [37] and derive a variable splitting that leverages the specific structure of our problem.

Our algorithm uses the fact that Ψ can be written as a circular convolution for both the 3D TV and native sparsity cases. Additionally, we factor the forward model in Eq. (4) into a diagonal component, D, and a 3D convolution matrix, M, such that H = DM (details in Supplement 1). Thus, both the forward operator and the regularizer can be computed in 3D Fourier space. This enables us to use variable splitting [38–40] to formulate the constrained counterpart of Eq. (2) as

v̂ = argmin_{w ≥ 0, u, ν} ½‖b − Dν‖₂² + τ‖u‖₁   s.t.   ν = Mv,  u = Ψv,  w = v,   (5)
where ν, u, and w are auxiliary variables. We solve Eq. (5) by following standard augmented Lagrangian arguments [41]. Using ADMM, this results in the following update scheme at iteration k:
u^{k+1} ← T_{τ/μ₂}(Ψv^k + η^k/μ₂)
ν^{k+1} ← (DᵀD + μ₁I)⁻¹(ξ^k + μ₁Mv^k + Dᵀb)
w^{k+1} ← max(ρ^k/μ₃ + v^k, 0)
v^{k+1} ← (μ₁MᵀM + μ₂ΨᵀΨ + μ₃I)⁻¹ r^k
ξ^{k+1} ← ξ^k + μ₁(Mv^{k+1} − ν^{k+1})
η^{k+1} ← η^k + μ₂(Ψv^{k+1} − u^{k+1})
ρ^{k+1} ← ρ^k + μ₃(v^{k+1} − w^{k+1}),
where
r^k = (μ₃w^{k+1} − ρ^k) + Ψᵀ(μ₂u^{k+1} − η^k) + Mᵀ(μ₁ν^{k+1} − ξ^k).

Note that T_{τ/μ₂} is a vectorial soft-thresholding operator with threshold value τ/μ₂ [42], and ξ, η, and ρ are the Lagrange multipliers associated with ν, u, and w, respectively. The scalars μ₁, μ₂, and μ₃ are penalty parameters that we compute automatically using the tuning strategy in [37]. A MATLAB implementation of our algorithm is available at [43].
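
For reference, the vectorial soft-thresholding operator used in the u-update can be sketched as below, under the convention that Ψ stacks the finite differences along the three axes (isotropic 3D TV). This is only an illustration of the operator from [42], not the released MATLAB implementation.

```python
import numpy as np

def soft_threshold_vec(g, thresh):
    """Vectorial soft-thresholding T_thresh applied to a stack of gradients.

    g: array of shape (3, Nz, Ny, Nx) holding the z, y, x finite differences
    at each voxel (the output of Psi for 3D TV). Each per-voxel gradient
    vector is shrunk toward zero by `thresh` in magnitude; vectors shorter
    than `thresh` become exactly zero.
    """
    mag = np.sqrt(np.sum(g ** 2, axis=0, keepdims=True))
    shrink = np.maximum(1.0 - thresh / np.maximum(mag, 1e-12), 0.0)
    return g * shrink

# usage inside the u-update: u = soft_threshold_vec(Psi_v + eta / mu2, tau / mu2)
```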

Although our algorithm involves two large-scale matrix inversions, both can be computed efficiently and in closed form. Since D is diagonal, (DᵀD + μ₁I) is itself diagonal, requiring only O(n) complexity to invert using point-wise multiplication. Additionally, all three matrices in (μ₁MᵀM + μ₂ΨᵀΨ + μ₃I) are diagonalized by the 3D discrete Fourier transform (DFT) matrix, so inversion of the entire term can be done using point-wise division in 3D frequency space. Therefore, its inversion has good computational complexity, O(n³ log n), since it is dominated by two 3D FFTs applied to n³ total voxels. We parallelize our algorithm on the CPU using C++ and Halide [44], a high-performance programming language for image processing (see Supplement 1 Fig. S6 for runtime performance).
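
A sketch of the second closed-form inversion (the v-update) is shown below: because M and Ψ are circular convolutions, the term (μ₁MᵀM + μ₂ΨᵀΨ + μ₃I) becomes a point-wise denominator in 3D frequency space. The forward-difference kernels and circular boundary handling are assumptions consistent with a padded FFT implementation, not code from the paper.

```python
import numpy as np
from numpy.fft import fftn, ifftn

def make_v_update_denominator(psf, mu1, mu2, mu3):
    """Precompute the 3D frequency-space denominator for the v-update.

    M (circular 3D convolution with the PSF stack `psf`) and Psi (circular
    finite differences) are diagonalized by the 3D DFT, so the inversion of
    (mu1*M^T M + mu2*Psi^T Psi + mu3*I) reduces to point-wise division."""
    M_f = fftn(psf)                                         # transfer function of M
    grads = [np.array([1.0, -1.0]).reshape(                 # forward differences
                 [-1 if i == ax else 1 for i in range(3)]) for ax in range(3)]
    Psi_f = [fftn(g, s=psf.shape) for g in grads]           # transfer functions of Psi
    return (mu1 * np.abs(M_f) ** 2
            + mu2 * sum(np.abs(P) ** 2 for P in Psi_f)
            + mu3)

def v_update(r, denom):
    """Closed-form solve of (mu1 M^T M + mu2 Psi^T Psi + mu3 I) v = r."""
    return np.real(ifftn(fftn(r) / denom))
```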

A typical reconstruction requires at least 200 iterations. Solving for 2048×2048×128=537 million voxels takes 26 min (8 s per iteration) on a 144-core workstation and requires 85 gigabytes of RAM. A smaller reconstruction (512×512×128=33.5 million voxels) takes 3 min (1 s per iteration) on a four-core laptop with 16 gigabytes of RAM.

3. SYSTEM ANALYSIS

Unlike traditional cameras, the performance of computational cameras depends on properties of the scene being imaged (e.g., the number of sources). As a consequence, standard two-point resolution metrics may be misleading, as they do not predict resolving power for complex objects. To address this issue, we propose a new local condition number metric that we believe better predicts performance. We analyze resolution, FoV, and the validity of the convolution model, then combine these analyses to determine the appropriate sampling grid for our experiments.

A. Field-of-View

At every depth in the volume, the angular half-FoV is determined by the most extreme lateral position that contributes to the measurement. There are two possible limiting factors. The first is the geometric angular cutoff, α, set by the aperture size, w, the sensor size, l, and the distance from the diffuser to the sensor, d [see Fig. 3(a)]. Since the diffuser bends light, we also take into account the diffuser’s maximum deflection angle, β. This gives a geometric angular half-FoV at every depth defined by l + w = 2d tan(α − β). The second limiting factor is the angular response of the sensor pixels. Real-world sensor pixels may not accept light at the high angles of incidence that our lensless camera accepts, so the sensor angular response [shown in Fig. 3(b)] may limit the FoV. Defining the angular cutoff of the sensor, αc, as the angle at which the camera response falls to 20% of its on-axis value, we can write the overall FoV equation as

FoV = β + min[αc, tan⁻¹((l + w)/(2d))].

Since we image in 3D, we must also consider the axial FoV. In practice, the axial FoV is limited by the range of calibrated depths. However, the system geometry creates bounds on possible calibration locations. Point sources arbitrarily close to the sensor would produce caustic patterns that exceed the sensor size. To avoid this complication, we impose a minimum object distance at which an on-axis point source creates caustics that just fill the sensor. Point sources arbitrarily far from the sensor can theoretically be captured, but axial resolution degrades with depth. The hyperfocal plane represents the axial distance beyond which no depth discrimination is available, establishing an upper bound. Objects beyond the hyperfocal plane can still be reconstructed to create 2D images for photographic applications [45], without any hardware modifications.

In our prototype, the axial FoV ranges from the minimum calibration distance (7.3 mm) to the hyperfocal plane (2.3 m). The angular FoV is limited by the pixel angular acceptance (αc = 41.5° in x, αc = 30° in y). Combined with our diffuser’s maximum deflection angle (β = 0.5°), this yields an angular FoV of ±42° in x and ±30.5° in y. We validate the lateral FoV experimentally by capturing a scene at optical infinity and measuring the angular extent of the result (see Supplement 1 Fig. S3).
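
Plugging the prototype numbers into the FoV equation reproduces these values. The sensor dimensions below (16.6 × 14.0 mm) are inferred from the 6.5 μm pixel pitch and an assumed 2560 × 2160 pixel count, so treat them as approximate.

```python
import numpy as np

def half_fov_deg(alpha_c_deg, l_mm, w_mm, d_mm, beta_deg=0.5):
    """Angular half-FoV: beta + min[alpha_c, arctan((l + w) / (2d))]."""
    geometric = np.degrees(np.arctan((l_mm + w_mm) / (2 * d_mm)))
    return beta_deg + min(alpha_c_deg, geometric)

# prototype numbers from Section 3.A (sensor size assumed 16.6 x 14.0 mm)
print(half_fov_deg(41.5, 16.6, 7.5, 8.9))   # ~42.0 deg half-FoV in x
print(half_fov_deg(30.0, 14.0, 5.5, 8.9))   # ~30.5 deg half-FoV in y
```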

B. Resolution

Investigating optical resolution is critical for quantifying system performance and choosing our reconstruction grid. Although the raw data is collected on a fixed sensor grid, we can choose the nonuniform 3D reconstruction grid arbitrarily. This choice of reconstruction grid is important. When the grid is chosen with voxels that are too large, resolution is lost. When the voxels are too small, extra computation is performed without resolution gain. In this section we explain how to choose the grid of voxels for our reconstructions, with the aim of Nyquist sampling the two-point optical resolution limit.

1. Two-Point Resolution

A common metric for resolution analysis in traditional cameras is two-point distinguishability. We measure our system’s two-point resolution by imaging scenes containing two point sources at different separation distances, built by summing together images of a single point source (1 μm pinhole, wavelength 532 nm) at two different locations. We reconstruct the scene using our algorithm, with τ = 0 to remove the influence of the regularizer. To ensure best-case resolution, we use the full 5 MP sensor data (no binning). The point sources are considered distinguishable if the reconstruction has a dip of at least 20% between the sources, as in the Rayleigh criterion. Figure 3(c) shows reconstructions with point sources separated both laterally and axially.
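
The dip criterion itself is easy to state in code; the sketch below applies it to a 1D intensity cutline through the two reconstructed points (peak finding for real reconstructions would need more care).

```python
import numpy as np

def two_points_resolved(profile, dip_fraction=0.2):
    """Rayleigh-style criterion used in Fig. 3(c): two peaks in a 1D cutline
    are called resolved if the valley between them falls at least
    `dip_fraction` below the smaller peak."""
    peaks = [i for i in range(1, len(profile) - 1)
             if profile[i] >= profile[i - 1] and profile[i] >= profile[i + 1]]
    if len(peaks) < 2:
        return False
    i, j = peaks[0], peaks[-1]                     # outermost peaks
    valley = profile[i:j + 1].min()
    return valley <= (1 - dip_fraction) * min(profile[i], profile[j])

print(two_points_resolved(np.array([0.0, 1.0, 0.7, 1.0, 0.0])))   # True: 30% dip
```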

Our system has highly non-isotropic resolution [Fig. 3(d)], but we can use our model to predict the two-point distinguishability over the entire volume from localized experiments. Due to the shift invariance assumption, the lateral resolution is constant within a single depth plane and the paraxial magnification causes the lateral resolution to vary linearly with depth. For axial resolution, the main difference between the two point sources is the size of their PSF supports. We find pairs of depths such that the difference in their support widths is constant,

c = 1/z₁ − 1/z₂.
Here, z₁ and z₂ are neighboring depths and c is a constant determined experimentally.

Based on this model, we set the voxel spacing in our grid to Nyquist sample the 3D two-point resolution. Figure 3(d) shows a to-scale map of the resulting voxel grid. Axial resolution degrades with distance until it reaches the hyperfocal plane (2.3 m from the camera), beyond which no depth information is recoverable. Due to the non-telecentric nature of the system, the voxel sizes are a function of depth, with the densest sampling occurring close to the camera. Objects within 5 cm of the camera can be reconstructed with somewhat isotropic resolution. In practice, this is where we place objects.
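
One way to realize this axial spacing rule, assuming the sample planes are placed with constant reciprocal-depth increments between the chosen end points, is sketched below.

```python
import numpy as np

def depth_grid(z_min, z_max, n_planes):
    """Axial sample planes with constant spacing in 1/z, so neighboring planes
    differ by a fixed reciprocal-depth increment (c = 1/z1 - 1/z2). Sampling is
    densest close to the camera and coarsest near the hyperfocal plane."""
    inv = np.linspace(1.0 / z_min, 1.0 / z_max, n_planes)
    return 1.0 / inv

# e.g., 128 planes between 10.86 mm and 36.26 mm, as used for calibration in Section 4
z = depth_grid(10.86, 36.26, 128)
print(z[:3], z[-3:])
```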

2. Multi-Point Resolution

In a traditional camera, resolution is a function of the system and is independent of the scene. In contrast, computational cameras that use nonlinear reconstruction algorithms may incur degradation of the effective resolution as the scene complexity increases. To demonstrate this in our system, we consider a more complex scene consisting of 16 point sources. Figure 4 shows experiments using 16 point sources arranged in a 4×4 grid in the (x,z) plane at two different spacings. The first spacing is set to match the measured two-point resolution limit (Δx = 45 μm, Δz = 336 μm). Despite being able to separate two points at this spacing, we cannot resolve all 16 sources. However, if we increase the source separation to (Δx = 75 μm, Δz = 448 μm), all 16 points are distinguishable [Fig. 4(d)]. In this example, the usable lateral resolution of the system degrades by approximately 1.7× due to the increased scene complexity. As we show in Section 3.C, the resolution loss does not become arbitrarily worse as the scene complexity increases.


Fig. 4. Our computational camera has object-dependent performance, such that the resolution depends on the number of points. (a) To illustrate, we show here a situation with two points successfully resolved at the two-point resolution limit (Δx, Δz) = (45 μm, 336 μm) at a depth of approximately 20 mm. (c) When the object consists of more points (16 points in a 4×4 grid in the x-z plane) at the same spacing, however, the reconstruction fails. (b) and (d) Increasing the separation to (Δx, Δz) = (75 μm, 448 μm) gives successful reconstructions. (e) and (f) A close-up of the raw data shows noticeable splitting of the caustic lines for the 16-point case, making the points distinguishable. Heuristically, the 16-point resolution cutoff is a good indicator of resolution for real-world objects.


This experiment demonstrates that existing resolution metrics cannot be blindly used to determine the performance of computational cameras like ours. How can we then analyze resolution if it depends on object properties? In the next section, we introduce a general theoretical framework to assess resolution in computational cameras like ours.

C. Local Condition Number Theory

Our goal is to provide a new theory that describes how the effective reconstruction resolution of computational cameras changes with object complexity. To do so, we introduce a numerical analysis of how well our forward model can be inverted.

First, note that recovering the image v from the measurement b=Hv entails simultaneous estimation of the locations of all nonzeros within our image reconstruction, v, as well as the values at each nonzero location. To simplify the problem, suppose an oracle tells us the exact location of every source within the 3D scene. This corresponds to knowing a priori the support of v, so we then need only determine the values of the nonzero elements in v. This can be done by solving a least-squares problem using a sub-matrix consisting of only the columns of H that correspond to the indices of the nonzero voxels. If this problem fails, then the more difficult problem of simultaneously determining the nonzero locations and their values will certainly fail.

In practice, the measurement is corrupted by noise. The maximal effect this noise can have on the least-squares estimate of the nonzero values is determined by the condition number of the sub-matrix described above. We therefore say that the reconstruction problem is ill-posed if any sub-matrices of H are very ill-conditioned. In practice, ill-conditioned matrices result in increased noise sensitivity and longer reconstruction times, as more iterations are needed to converge to a solution.

In general, finding the worst-case sub-matrix is difficult. However, because our system measurements vary smoothly for inputs within a small neighborhood, the worst-case scenario is when multiple sources are in a contiguous block (i.e., nearby measurements are most similar, either by shift or scaling). Therefore, we compute the condition number of sub-matrices of H corresponding to a group of point sources with the separation varying by integer numbers of voxels. We repeat this calculation for different numbers of sources. The results are shown in Fig. 5. As expected, the conditioning is worse when sources are closer together. In this case, increased noise sensitivity means that even small amounts of noise could prevent us from resolving the sources. This trend matches experiments in Figs. 3 and 4.
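
A sketch of this computation for a single depth plane is shown below. Under the convolution model, the column of H for each source is a laterally shifted copy of the calibrated PSF; np.roll is used here for brevity (it wraps around rather than shifting in zeros), so this illustrates the analysis rather than reproducing the exact sub-matrices.

```python
import numpy as np

def local_condition_number(h_z, n_points, sep):
    """Condition number of the sub-matrix of H for a row of `n_points` sources
    at one depth, separated laterally by `sep` reconstruction-grid voxels.

    h_z is the calibrated PSF at that depth; each column is a laterally
    shifted, vectorized copy of it (convolution-model approximation)."""
    cols = [np.roll(h_z, k * sep, axis=1).ravel() for k in range(n_points)]
    A = np.stack(cols, axis=1)              # sensor pixels x number of sources
    return np.linalg.cond(A)

# usage with a measured caustic PSF h_z at one depth:
#   local_condition_number(h_z, n_points=16, sep=1)   # closely spaced sources
#   local_condition_number(h_z, n_points=16, sep=8)   # widely spaced sources
# for real (smooth, structured) caustics, small separations give larger condition numbers
```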


Fig. 5. Our local condition number theory shows how the resolution varies with the object complexity. (a) Virtual point sources are simulated on a fixed grid and moved by integer numbers of voxels to change the separation distance. (b) Local condition numbers are plotted for sub-matrices corresponding to grids of neighboring point sources with varying separation (at a depth 20 mm from the sensor). As the number of sources increases, the condition number approaches a limit, indicating that resolution for complex objects can be approximated by a limited number (but more than two) sources.


Figure 5 also shows that the local condition number increases with the number of sources in the scene, as expected. This means that the resolution will degrade as more and more sources are added. We see in Fig. 5, however, that as the number of sources increases, the conditioning approaches a limiting case. Hence, the resolution does not become arbitrarily worse with an increased number of sources. Therefore, we can estimate the system resolution for complex objects from distinguishability measurements with a limited number of point sources. This is experimentally validated in Section 4, where we find that the experimental 16-point resolution is a good predictor of the resolution for a USAF target.

Unlike the traditional two-point resolution metric, our new local condition number theory explains the resolution loss we observe experimentally. Since many optical systems are locally shift invariant, we believe that it is sufficiently general to be applicable to other computational cameras that use nonlinear algorithms, which likely exhibit similar performance loss.

D. Validity of the Convolution Model

In Section 2.B, we modeled the caustic pattern as shift invariant at every depth, leading to a simple calibration and efficient computation. Since our convolution model is an approximation, we should quantify its validity. Figures 6(a)–6(c) show registered close-ups of experimentally measured PSFs from plane waves incident at 0°, 15°, and 30°. The convolution model assumes that these are all exactly the same, although they actually have subtle differences. To quantify the similarity across the FoV, we plot the inner product between each off-axis PSF and the on-axis PSF [see Fig. 6(d)]. The inner product is greater than 75% across the entire FoV and is particularly good within ±15° of the optical axis, indicating that the convolution model holds relatively well.
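
The similarity metric plotted in Fig. 6(d) is simply a normalized inner product between registered PSFs, which could be computed as follows (registration and background subtraction are assumed to have been done beforehand).

```python
import numpy as np

def psf_similarity(h_on_axis, h_off_axis):
    """Normalized inner product between the on-axis PSF and a registered
    off-axis PSF; values near 1 indicate the shift-invariance assumption
    (convolution model) holds well at that field angle."""
    a = h_on_axis.ravel().astype(float)
    b = h_off_axis.ravel().astype(float)
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# usage: psf_similarity(h_0deg, h_15deg_registered)
```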


Fig. 6. Experimental validation of the convolution model. (a)–(c) Close-ups of registered experimental PSFs for sources at 0°, 15°, and 30°. The PSF at 15° is visually similar to that on-axis, while the PSF at 30° has subtle differences. (d) Inner product between the on-axis PSF and registered off-axis PSFs as a function of source position. (e) Resulting spot size (normalized by on-axis spot). The convolution model holds well up to ±15°, beyond which resolution degrades (solid). Exhaustive calibration would improve the resolution (dashed), at the expense of complexity in computation and calibration.


To investigate how the spatial variance of the PSF impacts the system performance, we use the peak width of the cross-correlation between the on-axis and off-axis PSFs to approximate the spot size off-axis. Figure 6(e) (solid) shows that we retain the on-axis resolution up to ±15°. Beyond that, the resolution gradually degrades. To avoid model mismatch, one could replace the convolution model with exhaustive calibration over all positions in the FoV. This procedure would yield higher resolution at the edges of the FoV, as shown by the dashed line in Fig. 6(e). The gap between these lines is what we sacrifice in resolution by using the convolution model. However, in return, we gain simplified calibration and efficient computation, which makes the large-scale problem feasible.

4. EXPERIMENTAL RESULTS

Images of two objects are shown in Fig. 7, both illuminated using broadband white light and reconstructed with a 3D TV regularizer. We choose a reconstruction grid that approximately Nyquist samples the two-point resolution (by 2×2 binning the sensor pixels to yield a 1.3 megapixel measurement). Calibration images are taken at 128 different z-planes, ranging from z = 10.86 mm to z = 36.26 mm (from the diffuser), with spacing set according to conditions outlined in Section 3.B. The 3D images are reconstructed on a 2048×2048×128 grid, but the angular FoV restricts the usable portion of this grid to the center 100 million voxels. Note that the resolvable feature size on this reconstruction grid varies based on the object complexity.


Fig. 7. Experimental 3D reconstructions. (a) Tilted resolution target, which was reconstructed on a 4.2 MP lateral grid with 128 z-planes and cropped to 640×640×50 voxels. The large panel shows the max projection over z. Note that the spatial scale is not isotropic. Inset is a magnification of group 2 with an intensity cutline, showing that we resolve element 5 at a distance of 24 mm, which corresponds to a feature size of 79 μm (approximately twice the lateral voxel size of 35 μm at this depth). The degraded resolution matches our 16-point distinguishability (75 μm at 20 mm depth). Lower panels show depth slices from the recovered volume. (b) Reconstruction of a small plant, cropped to 480×320×128 voxels, rendered from multiple angles.


The first object is a negative USAF 1951 fluorescence test target, tilted 45° about the y-axis [Fig. 7(a)]. Slices of the reconstructed volume at different z planes are shown to highlight the system’s depth sectioning capabilities. As described in Section 3.B, the spatial scale changes with depth. Analyzing the resolution in the vertical direction [Fig. 7(a) inset], we can easily resolve group 2/element 4 and barely resolve group 2/element 5 at z=24mm. This corresponds to resolving features 79 μm apart on the resolution target. This resolution is significantly worse than the two-point resolution at this depth (50 μm), but is similar to the 16-point resolution (75 μm). Hence, we reinforce our claim that two-point resolution is a misleading metric for computational cameras, but multipoint distinguishability can be extended to more complex objects.

Finally, we demonstrate the ability of DiffuserCam to image natural objects by reconstructing a small plant [Fig. 7(b)]. Multiple perspectives of the 3D reconstruction are rendered to demonstrate the ability to capture the 3D structure of the leaves.

5. CONCLUSION

We demonstrated a simple optical system, with only a diffuser in front of a sensor, that is capable of single-shot 3D imaging. The diffuser encodes the 3D location of point sources in caustic patterns, which allow us to apply compressed sensing to reconstruct more voxels than we have measurements. By using a convolution model that assumes the caustic pattern is shift invariant at every depth, we developed an efficient ADMM algorithm for image recovery and a simple calibration scheme. We characterized the FoV and two-point resolution of our system, and showed how resolution varies with object complexity. This motivated the introduction of a new condition number analysis, which we used to analyze how inverse problem conditioning changes with object complexity.

Funding

Hertz Foundation; U.S. Department of Defense (DOD); Schweizerischer Nationalfonds zur Förderung der Wissenschaftlichen Forschung (SNF) (P2ELP2 172278, P2EZP2 159065); Defense Advanced Research Projects Agency (DARPA) (N66001-17-C-4015); Gordon and Betty Moore Foundation (GBMF4562); National Science Foundation (NSF) CAREER award.

Acknowledgment

Laura Waller is a Chan Zuckerberg Biohub Investigator. Ren Ng acknowledges support from the Alfred P. Sloan Foundation. Ben Mildenhall acknowledges funding from the Hertz Foundation and Grace Kuo is a National Defense Science and Engineering Graduate Fellow. Reinhard Heckel and Emrah Bostan acknowledge funding from the Swiss NSF. The views, opinions, and/or findings expressed are those of the authors and should not be interpreted as representing the official views or policies of the Department of Defense or the U.S. Government. The authors thank Dr. Eric Jonas and the Rice FlatCam team for helpful discussions.

 

See Supplement 1 for supporting content.

REFERENCES

1. W. Denk, J. Strickler, and W. Webb, “Two-photon laser scanning fluorescence microscopy,” Science 248, 73–76 (1990). [CrossRef]  

2. T. F. Holekamp, D. Turaga, and T. E. Holy, “Fast three-dimensional fluorescence imaging of activity in neural populations by objective-coupled planar illumination microscopy,” Neuron 57, 661–672 (2008). [CrossRef]  

3. M. Broxton, L. Grosenick, S. Yang, N. Cohen, A. Andalman, K. Deisseroth, and M. Levoy, “Wave optics theory and 3-D deconvolution for the light field microscope,” Opt. Express 21, 25418–25439 (2013). [CrossRef]  

4. N. C. Pégard, H.-Y. Liu, N. Antipa, M. Gerlock, H. Adesnik, and L. Waller, “Compressive light-field microscopy for 3D neural activity recording,” Optica 3, 517–524 (2016). [CrossRef]  

5. M. F. Duarte, M. A. Davenport, D. Takbar, J. N. Laska, T. Sun, K. F. Kelly, and R. G. Baraniuk, “Single-pixel imaging via compressive sampling,” IEEE Signal Process. Mag. 25(2), 83–91 (2008). [CrossRef]  

6. A. Liutkus, D. Martina, S. Popoff, G. Chardon, O. Katz, G. Lerosey, S. Gigan, L. Daudet, and I. Carron, “Imaging with nature: compressive imaging using a multiply scattering medium,” Sci. Rep. 4, 5552 (2014). [CrossRef]  

7. M. S. Asif, A. Ayremlou, A. Veeraraghavan, R. Baraniuk, and A. Sankaranarayanan, “Flatcam: replacing lenses with masks and computation,” in IEEE International Conference on Computer Vision Workshop (ICCVW) (IEEE, 2015), pp. 663–666.

8. D. G. Stork and P. R. Gill, “Optical, mathematical, and computational foundations of lensless ultra-miniature diffractive imagers and sensors,” Int. J. Adv. Syst. Meas. 7, 201–208 (2014).

9. P. R. Gill, J. Tringali, A. Schneider, S. Kabir, D. G. Stork, E. Erickson, and M. Kellam, “Thermal escher sensors: pixel-efficient lensless imagers based on tiled optics,” in Computational Optical Sensing and Imaging (Optical Society of America, 2017), paper CTu3B–3.

10. R. Fergus, A. Torralba, and W. T. Freeman, “Random lens imaging,” Technical Report MIT-CSAIL-TR-2006-058 (Massachusetts Institute of Technology, 2006).

11. A. Stylianou and R. Pless, “Sparklegeometry: glitter imaging for 3D point tracking,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops (2016), pp. 10–17.

12. J. Tanida, T. Kumagai, K. Yamada, S. Miyatake, K. Ishida, T. Morimoto, N. Kondou, D. Miyazaki, and Y. Ichioka, “Thin observation module by bound optics: concept and experimental verification,” Appl. Opt. 40, 1806–1813 (2001). [CrossRef]  

13. R. Ng, M. Levoy, M. Bredif, G. Duval, M. Horowitz, and P. Hanrahan, “Light field photography with a hand-held plenoptic camera,” Computer Science Technical Report CSTR 2005-02 (Stanford University, 2005).

14. R. Horisaki, S. Irie, Y. Ogura, and J. Tanida, “Three-dimensional information acquisition using a compound imaging system,” Opt. Rev. 14, 347–350 (2007). [CrossRef]  

15. K. Tajima, T. Shimano, Y. Nakamura, M. Sao, and T. Hoshizawa, “Lensless light-field imaging with multi-phased Fresnel zone aperture,” in IEEE International Conference on Computational Photography (ICCP) (2017), pp. 76–82.

16. M. Levoy, R. Ng, A. Adams, M. Footer, and M. Horowitz, “Light field microscopy,” in ACM Trans. Graph. (Proc. SIGGRAPH) (2006), Vol. 25.

17. H.-Y. Liu, E. Jonas, L. Tian, J. Zhong, B. Recht, and L. Waller, “3D imaging in volumetric scattering media using phase-space measurements,” Opt. Express 23, 14461–14471 (2015). [CrossRef]  

18. W. Harm, C. Roider, A. Jesacher, S. Bernet, and M. Ritsch-Marte, “Lensless imaging through thin diffusive media,” Opt. Express 22, 22146–22156 (2014). [CrossRef]  

19. W. Chi and N. George, “Optical imaging with phase-coded aperture,” Opt. Express 19, 4294–4300 (2011). [CrossRef]  

20. A. Singh, G. Pedrini, M. Takeda, and W. Osten, “Scatter-plate microscope for lensless microscopy with diffraction limited resolution,” Sci. Rep. 7, 10687 (2017). [CrossRef]  

21. A. Sinha, J. Lee, S. Li, and G. Barbastathis, “Lensless computational imaging through deep learning,” Optica 4, 1117–1125 (2017). [CrossRef]  

22. D. Brady, K. Choi, D. Marks, R. Horisaki, and S. Lim, “Compressive holography,” Opt. Express 17, 13040–13049 (2009). [CrossRef]  

23. K. Lee and Y. Park, “Exploiting the speckle-correlation scattering matrix for a compact reference-free holographic image sensor,” Nat. Commun. 7, 13359 (2016). [CrossRef]  

24. W. Bishara, T.-W. Su, A. F. Coskun, and A. Ozcan, “Lensfree on-chip microscopy over a wide field-of-view using pixel super-resolution,” Opt. Express 18, 11181–11191 (2010). [CrossRef]  

25. H. Faulkner and J. Rodenburg, “Movable aperture lensless transmission microscopy: a novel phase retrieval algorithm,” Phys. Rev. Lett. 93, 023903 (2004). [CrossRef]  

26. A. Singh, D. Naik, G. Pedrini, M. Takeda, and W. Osten, “Exploiting scattering media for exploring 3D objects,” Light Sci. Appl. 6, e16219 (2017). [CrossRef]  

27. O. Katz, P. Heidmann, M. Fink, and S. Gigan, “Non-invasive single-shot imaging through scattering layers and around corners via speckle correlations,” Nat. Photonics 8, 784–790 (2014). [CrossRef]  

28. E. Edrei and G. Scarcelli, “Memory-effect based deconvolution microscopy for super-resolution imaging through scattering media,” Sci. Rep. 6, 33558 (2016). [CrossRef]  

29. A. Singh, D. Naik, G. Pedrini, M. Takeda, and W. Osten, “Looking through a diffuser and around an opaque surface: a holographic approach,” Opt. Express 22, 7694–7701 (2014). [CrossRef]  

30. N. Antipa, S. Necula, R. Ng, and L. Waller, “Single-shot diffuser-encoded light field imaging,” in IEEE International Conference on Computational Photography (ICCP) (2016), pp. 1–11.

31. Y. Kashter, A. Vijayakumar, and J. Rosen, “Resolving images by blurring: superresolution method with a scattering mask between the observed objects and the hologram recorder,” Optica 4, 932–939 (2017). [CrossRef]  

32. S. Feng, C. Kane, P. A. Lee, and A. D. Stone, “Correlations and fluctuations of coherent wave transmission through disordered media,” Phys. Rev. Lett. 61, 834–837 (1988). [CrossRef]  

33. E. J. Candès and M. B. Wakin, “An introduction to compressive sampling,” IEEE Signal Process. Mag. 25(2), 21–30 (2008). [CrossRef]  

34. L. I. Rudin, S. Osher, and E. Fatemi, “Nonlinear total variation based noise removal algorithms,” Physica D 60, 259–268 (1992). [CrossRef]  

35. F. Krahmer, S. Mendelson, and H. Rauhut, “Suprema of chaos processes and the restricted isometry property,” Commun. Pur. Appl. Math. 67, 1877–1904 (2014). [CrossRef]  

36. A. Beck and M. Teboulle, “Fast gradient-based algorithms for constrained total variation image denoising and deblurring problems,” IEEE Trans. Image Process. 18, 2419–2434 (2009). [CrossRef]  

37. S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein, “Distributed optimization and statistical learning via the alternating direction method of multipliers,” Found. Trends Mach. Learn. 3, 1–122 (2010). [CrossRef]  

38. M. S. C. Almeida and M. Figueiredo, “Deconvolving images with unknown boundaries using the alternating direction method of multipliers,” IEEE Trans. Image Process. 22, 3074–3086 (2013). [CrossRef]  

39. A. Matakos, S. Ramani, and J. A. Fessler, “Accelerated edge-preserving image restoration without boundary artifacts,” IEEE Trans. Image Process. 22, 2019–2029 (2013). [CrossRef]  

40. M. V. Afonso, J. M. Bioucas-Dias, and M. A. T. Figueiredo, “Fast image recovery using variable splitting and constrained optimization,” IEEE Trans. Image Process. 19, 2345–2356 (2010). [CrossRef]  

41. J. Nocedal and S. J. Wright, Numerical Optimization (Springer, 2006).

42. Y. Wang, J. Yang, W. Yin, and Y. Zhang, “A new alternating minimization algorithm for total variation image reconstruction,” SIAM J. Imag. Sci. 1, 248–272 (2008). [CrossRef]  

43. N. Antipa, G. Kuo, R. Heckel, B. Mildenhall, E. Bostan, R. Ng, and L. Waller, “DiffuserCam,” http://www.laurawaller.com/research/diffusercam/ (2017). Accessed: 2017-11-17.

44. J. Ragan-Kelley, C. Barnes, A. Adams, S. Paris, F. Durand, and S. Amarasinghe, “Halide: a language and compiler for optimizing parallelism, locality, and recomputation in image processing pipelines,” in ACM SIGPLAN Notices (2013), Vol. 48, pp. 519–530.

45. G. Kuo, N. Antipa, R. Ng, and L. Waller, “DiffuserCam: diffuser-based lensless cameras,” in Computational Optical Sensing and Imaging (Optical Society of America, 2017), paper CTu3B–2.

References

  • View by:
  • |
  • |
  • |

  1. W. Denk, J. Strickler, and W. Webb, “Two-photon laser scanning fluorescence microscopy,” Science 248, 73–76 (1990).
    [Crossref]
  2. T. F. Holekamp, D. Turaga, and T. E. Holy, “Fast three-dimensional fluorescence imaging of activity in neural populations by objective-coupled planar illumination microscopy,” Neuron 57, 661–672 (2008).
    [Crossref]
  3. M. Broxton, L. Grosenick, S. Yang, N. Cohen, A. Andalman, K. Deisseroth, and M. Levoy, “Wave optics theory and 3-D deconvolution for the light field microscope,” Opt. Express 21, 25418–25439 (2013).
    [Crossref]
  4. N. C. Pégard, H.-Y. Liu, N. Antipa, M. Gerlock, H. Adesnik, and L. Waller, “Compressive light-field microscopy for 3D neural activity recording,” Optica 3, 517–524 (2016).
    [Crossref]
  5. M. F. Duarte, M. A. Davenport, D. Takbar, J. N. Laska, T. Sun, K. F. Kelly, and R. G. Baraniuk, “Single-pixel imaging via compressive sampling,” IEEE Signal Process. Mag. 25(2), 83–91 (2008).
    [Crossref]
  6. A. Liutkus, D. Martina, S. Popoff, G. Chardon, O. Katz, G. Lerosey, S. Gigan, L. Daudet, and I. Carron, “Imaging with nature: compressive imaging using a multiply scattering medium,” Sci. Rep. 4, 5552 (2014).
    [Crossref]
  7. M. S. Asif, A. Ayremlou, A. Veeraraghavan, R. Baraniuk, and A. Sankaranarayanan, “Flatcam: replacing lenses with masks and computation,” in IEEE International Conference on Computer Vision Workshop (ICCVW) (IEEE, 2015), pp. 663–666.
  8. D. G. Stork and P. R. Gill, “Optical, mathematical, and computational foundations of lensless ultra-miniature diffractive imagers and sensors,” Int. J. Adv. Syst. Meas. 7, 201–208 (2014).
  9. P. R. Gill, J. Tringali, A. Schneider, S. Kabir, D. G. Stork, E. Erickson, and M. Kellam, “Thermal escher sensors: pixel-efficient lensless imagers based on tiled optics,” in Computational Optical Sensing and Imaging (Optical Society of America, 2017), paper CTu3B–3.
  10. R. Fergus, A. Torralba, and W. T. Freeman, “Random lens imaging,” (Massachusetts Institute of Technology, 2006).
  11. A. Stylianou and R. Pless, “Sparklegeometry: glitter imaging for 3D point tracking,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops (2016), pp. 10–17.
  12. J. Tanida, T. Kumagai, K. Yamada, S. Miyatake, K. Ishida, T. Morimoto, N. Kondou, D. Miyazaki, and Y. Ichioka, “Thin observation module by bound optics: concept and experimental verification,” Appl. Opt. 40, 1806–1813 (2001).
    [Crossref]
  13. R. Ng, M. Levoy, M. Bredif, G. Duval, M. Horowitz, and P. Hanrahan, “Light field photography with a hand-held plenoptic camera,” , Stanford University, 2005), pp. 3418–3421.
  14. R. Horisaki, S. Irie, Y. Ogura, and J. Tanida, “Three-dimensional information acquisition using a compound imaging system,” Opt. Rev. 14, 347–350 (2007).
    [Crossref]
  15. K. Tajima, T. Shimano, Y. Nakamura, M. Sao, and T. Hoshizawa, “Lensless light-field imaging with multi-phased Fresnel zone aperture,” in IEEE International Conference on Computational Photography (ICCP) (2017), pp. 76–82.
  16. M. Levoy, R. Ng, A. Adams, M. Footer, and M. Horowitz, “Light field microscopy,” in ACM Trans. Graph. (Proc. SIGGRAPH) (2006), Vol. 25.
  17. H.-Y. Liu, E. Jonas, L. Tian, J. Zhong, B. Recht, and L. Waller, “3D imaging in volumetric scattering media using phase-space measurements,” Opt. Express 23, 14461–14471 (2015).
    [Crossref]
  18. W. Harm, C. Roider, A. Jesacher, S. Bernet, and M. Ritsch-Marte, “Lensless imaging through thin diffusive media,” Opt. Express 22, 22146–22156 (2014).
    [Crossref]
  19. W. Chi and N. George, “Optical imaging with phase-coded aperture,” Opt. Express 19, 4294–4300 (2011).
    [Crossref]
  20. A. Singh, G. Pedrini, M. Takeda, and W. Osten, “Scatter-plate microscope for lensless microscopy with diffraction limited resolution,” Sci. Rep. 7, 10687 (2017).
    [Crossref]
  21. A. Sinha, J. Lee, S. Li, and G. Barbastathis, “Lensless computational imaging through deep learning,” Optica 4, 1117–1125 (2017).
    [Crossref]
  22. D. Brady, K. Choi, D. Marks, R. Horisaki, and S. Lim, “Compressive holography,” Opt. Express 17, 13040–13049 (2009).
    [Crossref]
  23. K. Lee and Y. Park, “Exploiting the speckle-correlation scattering matrix for a compact reference-free holographic image sensor,” Nat. Commun. 7, 13359 (2016).
    [Crossref]
  24. W. Bishara, T.-W. Su, A. F. Coskun, and A. Ozcan, “Lensfree on-chip microscopy over a wide field-of-view using pixel super-resolution,” Opt. Express 18, 11181–11191 (2010).
    [Crossref]
  25. H. Faulkner and J. Rodenburg, “Movable aperture lensless transmission microscopy: a novel phase retrieval algorithm,” Phys. Rev. Lett. 93, 023903 (2004).
    [Crossref]
  26. A. Singh, D. Naik, G. Pedrini, M. Takeda, and W. Osten, “Exploiting scattering media for exploring 3D objects,” Light Sci. Appl. 6, e16219 (2017).
    [Crossref]
  27. O. Katz, P. Heidmann, M. Fink, and S. Gigan, “Non-invasive single-shot imaging through scattering layers and around corners via speckle correlations,” Nat. Photonics 8, 784–790 (2014).
    [Crossref]
  28. E. Edrei and G. Scarcelli, “Memory-effect based deconvolution microscopy for super-resolution imaging through scattering media,” Sci. Rep. 6, 33558 (2016).
    [Crossref]
  29. A. Singh, D. Naik, G. Pedrini, M. Takeda, and W. Osten, “Looking through a diffuser and around an opaque surface: a holographic approach,” Opt. Express 22, 7694–7701 (2014).
    [Crossref]
  30. N. Antipa, S. Necula, R. Ng, and L. Waller, “Single-shot diffuser-encoded light field imaging,” in IEEE International Conference on Computational Photography (ICCP) (2016), pp. 1–11.
  31. Y. Kashter, A. Vijayakumar, and J. Rosen, “Resolving images by blurring: superresolution method with a scattering mask between the observed objects and the hologram recorder,” Optica 4, 932–939 (2017).
    [Crossref]
  32. S. Feng, C. Kane, P. A. Lee, and A. D. Stone, “Correlations and fluctuations of coherent wave transmission through disordered media,” Phys. Rev. Lett. 61, 834–837 (1988).
    [Crossref]
  33. E. J. Candès and M. B. Wakin, “An introduction to compressive sampling,” IEEE Signal Process. Mag. 25(2), 21–30 (2008).
    [Crossref]
  34. L. I. Rudin, S. Osher, and E. Fatemi, “Nonlinear total variation based noise removal algorithms,” Physica D 60, 259–268 (1992).
    [Crossref]
  35. F. Krahmer, S. Mendelson, and H. Rauhut, “Suprema of chaos processes and the restricted isometry property,” Commun. Pur. Appl. Math. 67, 1877–1904 (2014).
    [Crossref]
  36. A. Beck and M. Teboulle, “Fast gradient-based algorithms for constrained total variation image denoising and deblurring problems,” IEEE Trans. Image Process. 18, 2419–2434 (2009).
    [Crossref]
  37. S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein, “Distributed optimization and statistical learning via the alternating direction method of multipliers,” Found. Trends Mach. Learn. 3, 1–122 (2010).
    [Crossref]
  38. M. S. C. Almeida and M. Figueiredo, “Deconvolving images with unknown boundaries using the alternating direction method of multipliers,” IEEE Trans. Image Process. 22, 3074–3086 (2013).
    [Crossref]
  39. A. Matakos, S. Ramani, and J. A. Fessler, “Accelerated edge-preserving image restoration without boundary artifacts,” IEEE Trans. Image Process. 22, 2019–2029 (2013).
    [Crossref]
  40. M. V. Afonso, J. M. Bioucas-Dias, and M. A. T. Figueiredo, “Fast image recovery using variable splitting and constrained optimization,” IEEE Trans. Image Process. 19, 2345–2356 (2010).
    [Crossref]
  41. J. Nocedal and S. J. Wright, Numerical Optimization (Springer, 2006).
  42. Y. Wang, J. Yang, W. Yin, and Y. Zhang, “A new alternating minimization algorithm for total variation image reconstruction,” SIAM J. Imag. Sci. 1, 248–272 (2008).
    [Crossref]
  43. N. Antipa, G. Kuo, R. Heckel, B. Mildenhall, E. Bostan, R. Ng, and L. Waller, “DiffuserCam,” http://www.laurawaller.com/research/diffusercam/ (2017). Accessed: 2017-11-17.
  44. J. Ragan-Kelley, C. Barnes, A. Adams, S. Paris, F. Durand, and S. Amarasinghe, “Halide: a language and compiler for optimizing parallelism, locality, and recomputation in image processing pipelines,” in ACM SIGPLAN Notices (2013), Vol. 48, pp. 519–530.
  45. G. Kuo, N. Antipa, R. Ng, and L. Waller, “DiffuserCam: diffuser-based lensless cameras,” in Computational Optical Sensing and Imaging (Optical Society of America, 2017), paper CTu3B–2.

Supplementary Material (1)

Supplement 1: Supplementary document

Figures (7)

Fig. 1. DiffuserCam setup and reconstruction pipeline. Our lensless system consists of a diffuser placed in front of a sensor (bumps on the diffuser are exaggerated for illustration). The system encodes a 3D scene into a 2D image on the sensor. A one-time calibration consists of scanning a point source axially while capturing images. Images are reconstructed computationally by solving a nonlinear inverse problem with a sparsity prior. The result is a 3D image reconstructed from a single 2D measurement.
Fig. 2. Caustic pattern shifts with lateral shifts of a point source in the scene and scales with axial shifts. (a) Ray-traced renderings of caustics as a point source moves laterally. For large shifts, part of the pattern is clipped by the sensor. (b) The caustics magnify as the source is brought closer.
Fig. 3. Experimentally determined field-of-view (FoV) and resolution. (a) System architecture with design parameters. (b) Angular pixel response of our sensor. We define the angular cutoff (α_c) as the angle at which the response falls to 20%. (c) Reconstructed images of two points (captured separately) at varying separations laterally and axially, near the z = 20 mm depth plane. Points are considered resolved if they are separated by a dip of at least 20%. (d) To-scale nonuniform voxel grid for 3D reconstruction. The chosen voxel grid is based on the system geometry and Nyquist-sampled two-point resolution over the entire FoV. For visualization purposes, each box represents 20×20 voxels, as shown in red.
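The 20% dip criterion in (c) can be checked directly on an intensity cutline drawn through the two reconstructed points. Below is a minimal NumPy sketch of that check; the function name, peak-finding details, and default threshold handling are our own illustrative choices, not the authors' analysis code.

```python
import numpy as np

def resolved(cutline, dip_fraction=0.2):
    """True if the two strongest local maxima of `cutline` are separated by a
    valley that drops at least `dip_fraction` below the weaker of the two peaks."""
    c = np.asarray(cutline, dtype=float)
    # simple local-maximum detection on the 1D intensity profile
    peaks = [i for i in range(1, len(c) - 1) if c[i] >= c[i - 1] and c[i] >= c[i + 1]]
    if len(peaks) < 2:
        return False
    # two strongest peaks, ordered by position along the cutline
    p1, p2 = sorted(sorted(peaks, key=lambda i: c[i])[-2:])
    valley = c[p1:p2 + 1].min()
    return valley <= (1.0 - dip_fraction) * min(c[p1], c[p2])
```

For example, with `dip_fraction=0.2` the two points are counted as resolved only when the intensity between them falls to 80% of the weaker peak or below, matching the criterion stated in the caption.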
Fig. 4. Our computational camera has object-dependent performance, such that the resolution depends on the number of points. (a) To illustrate, we show here a situation with two points successfully resolved at the two-point resolution limit (Δx, Δz) = (45 μm, 336 μm) at a depth of approximately 20 mm. (c) When the object consists of more points (16 points in a 4×4 grid in the x-z plane) at the same spacing, however, the reconstruction fails. (b) and (d) Increasing the separation to (Δx, Δz) = (75 μm, 448 μm) gives successful reconstructions. (e) and (f) A close-up of the raw data shows noticeable splitting of the caustic lines for the 16-point case, making the points distinguishable. Heuristically, the 16-point resolution cutoff is a good indicator of resolution for real-world objects.
Fig. 5. Our local condition number theory shows how the resolution varies with the object complexity. (a) Virtual point sources are simulated on a fixed grid and moved by integer numbers of voxels to change the separation distance. (b) Local condition numbers are plotted for sub-matrices corresponding to grids of neighboring point sources with varying separation (at a depth of 20 mm from the sensor). As the number of sources increases, the condition number approaches a limit, indicating that resolution for complex objects can be approximated by a limited number (but more than two) of sources.
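The analysis in (b) amounts to gathering, for each candidate group of sources, the corresponding columns of the forward model (one flattened caustic image per source) and taking the ratio of the largest to smallest singular value of that sub-matrix. The sketch below illustrates this under our own naming; `psf_for_voxel` is a placeholder for the calibrated shift-and-scale PSF model and is not the paper's code.

```python
import numpy as np

def local_condition_number(psf_for_voxel, locations):
    """Condition number of the sub-matrix A whose columns are the (flattened)
    sensor images produced by unit point sources at the given voxel locations."""
    A = np.stack([np.ravel(psf_for_voxel(loc)) for loc in locations], axis=1)
    s = np.linalg.svd(A, compute_uv=False)   # singular values, descending
    return s[0] / s[-1]

def xz_grid(n, sep, z0=0):
    """n x n grid of candidate source locations in the x-z plane, separated by
    `sep` voxels at base depth index z0 (as in the plotted separation sweep)."""
    return [(i * sep, 0, z0 + j * sep) for i in range(n) for j in range(n)]
```

Sweeping `sep` for increasing `n` and plotting the resulting condition numbers reproduces the qualitative behavior described in the caption: larger groups of sources are harder to separate at a given spacing, but the curves converge as the group grows.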
Fig. 6. Experimental validation of the convolution model. (a)–(c) Close-ups of registered experimental PSFs for sources at 0°, 15°, and 30°. The PSF at 15° is visually similar to that on-axis, while the PSF at 30° has subtle differences. (d) Inner product between the on-axis PSF and registered off-axis PSFs as a function of source position. (e) Resulting spot size (normalized by on-axis spot). The convolution model holds well up to ±15°, beyond which resolution degrades (solid). Exhaustive calibration would improve the resolution (dashed), at the expense of complexity in computation and calibration.
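The metric plotted in (d) is a normalized inner product between the on-axis PSF and each registered (shifted-to-align) off-axis PSF; a value near 1 indicates the shift-invariant convolution model is accurate at that field angle. A minimal sketch, with registration assumed to be done beforehand and names of our own choosing:

```python
import numpy as np

def psf_similarity(psf_on_axis, psf_off_axis_registered):
    """Normalized inner product between the on-axis PSF and a registered
    off-axis PSF; 1.0 means the two caustic patterns are identical up to scale."""
    a = np.ravel(psf_on_axis).astype(float)
    b = np.ravel(psf_off_axis_registered).astype(float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
```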
Fig. 7. Experimental 3D reconstructions. (a) Tilted resolution target, which was reconstructed on a 4.2 MP lateral grid with 128 z-planes and cropped to 640×640×50 voxels. The large panel shows the max projection over z. Note that the spatial scale is not isotropic. Inset is a magnification of group 2 with an intensity cutline, showing that we resolve element 5 at a distance of 24 mm, which corresponds to a feature size of 79 μm (approximately twice the lateral voxel size of 35 μm at this depth). The degraded resolution matches our 16-point distinguishability (75 μm at 20 mm depth). Lower panels show depth slices from the recovered volume. (b) Reconstruction of a small plant, cropped to 480×320×128 voxels, rendered from multiple angles.

Equations (9)


(1)   $b = Hv$

(2)   $\hat{v} = \underset{v \ge 0}{\arg\min}\ \tfrac{1}{2}\lVert b - Hv \rVert_2^2 + \tau \lVert \Psi v \rVert_1$

(3)   $b(x,y) = \sum_{(x',y',z')} v(x',y',z')\, h(x,y;\,x',y',z')$

(4)   $b(x,y) = \sum_{z} \sum_{(x',y')} v(x',y',z)\, h(x+mx',\, y+my';\, z) = \mathcal{C} \sum_{z} \left[ v\!\left(-\tfrac{x}{m}, -\tfrac{y}{m}, z\right) * h(x,y;z) \right]$

(5)   $\hat{v} = \underset{w \ge 0,\, u,\, \nu}{\arg\min}\ \tfrac{1}{2}\lVert b - D\nu \rVert_2^2 + \tau \lVert u \rVert_1 \quad \text{s.t.} \quad \nu = Mv,\ u = \Psi v,\ w = v$

(6)   $\begin{aligned}
u_{k+1} &\leftarrow \mathcal{T}_{\tau/\mu_2}\!\left(\Psi v_k + \eta_k/\mu_2\right)\\
\nu_{k+1} &\leftarrow \left(D^{\mathsf T} D + \mu_1 I\right)^{-1}\left(\xi_k + \mu_1 M v_k + D^{\mathsf T} b\right)\\
w_{k+1} &\leftarrow \max\!\left(\rho_k/\mu_3 + v_k,\, 0\right)\\
v_{k+1} &\leftarrow \left(\mu_1 M^{\mathsf T} M + \mu_2 \Psi^{\mathsf T} \Psi + \mu_3 I\right)^{-1} r_k\\
\xi_{k+1} &\leftarrow \xi_k + \mu_1\left(M v_{k+1} - \nu_{k+1}\right)\\
\eta_{k+1} &\leftarrow \eta_k + \mu_2\left(\Psi v_{k+1} - u_{k+1}\right)\\
\rho_{k+1} &\leftarrow \rho_k + \mu_3\left(v_{k+1} - w_{k+1}\right)
\end{aligned}$

(7)   $r_k = \left(\mu_3 w_{k+1} - \rho_k\right) + \Psi^{\mathsf T}\left(\mu_2 u_{k+1} - \eta_k\right) + M^{\mathsf T}\left(\mu_1 \nu_{k+1} - \xi_k\right)$

(8)   $\mathrm{FoV} = \beta + \min\!\left[\alpha_c,\ \tan^{-1}\!\left(\frac{l+w}{2d}\right)\right]$

(9)   $c = \frac{1}{z_1} - \frac{1}{z_2}$
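To make the cropped-convolution forward model of Eqs. (3)–(4) and the proximal steps of Eq. (6) concrete, here is a minimal NumPy sketch of the crop-of-summed-convolutions operator, its adjoint, and the soft-thresholding operator T. The padding scheme, array shapes, and function names are our own assumptions for illustration, not the authors' released implementation.

```python
import numpy as np
from numpy.fft import fft2, ifft2

def forward(v, h):
    """b = C( sum_z v_z * h_z ): per-depth 2D convolution (computed circularly
    on a 2x padded grid), summed over depth, then cropped to the sensor size.

    v : (Z, 2*Ny, 2*Nx) volume on the padded grid
    h : (Z, 2*Ny, 2*Nx) calibrated caustic PSF stack on the same padded grid
    returns : (Ny, Nx) simulated sensor image
    """
    full = np.real(ifft2(np.sum(fft2(v) * fft2(h), axis=0)))
    Ny, Nx = h.shape[1] // 2, h.shape[2] // 2
    return full[Ny // 2:Ny // 2 + Ny, Nx // 2:Nx // 2 + Nx]   # crop operator C

def adjoint(b, h):
    """H^T b: zero-pad the sensor image back to the padded grid (adjoint of the
    crop), then correlate with each depth's PSF (adjoint of the convolution)."""
    Z, H2, W2 = h.shape
    Ny, Nx = H2 // 2, W2 // 2
    padded = np.zeros((H2, W2))
    padded[Ny // 2:Ny // 2 + Ny, Nx // 2:Nx // 2 + Nx] = b
    return np.real(ifft2(fft2(padded)[None] * np.conj(fft2(h))))

def soft_threshold(x, t):
    """Elementwise soft-thresholding T_t(x), the proximal operator of t*||.||_1
    used in the u-update of Eq. (6)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)
```

Because the crop operator makes D^T D diagonal and the convolutions diagonalize in the Fourier domain, each update in Eq. (6) reduces to elementwise operations and FFTs, which is what keeps the reconstruction tractable at the 100-megavoxel scale.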
